I work on AI safety, applying my background in computational cognitive neuroscience.
I did my PhD in cognitive psychology and neuroscience at CU Boulder from 1999 to 2006, and remained there as a researcher until 2014, when I became CEO of eCortex, Inc., a small neuroscience research company.
I used neural network models as tools for theorizing about how the brain works as an information-processing system, building models of skill and knowledge learning, episodic memory, vision, and working memory for executive function. My research goal was to understand how these systems work together to accomplish complex cognition.
But I grew increasingly concerned that this knowledge might be applied to build strong artificial intelligence before we were prepared to do so safely. So in 2022 I shut down eCortex and transitioned to working on the problem of alignment.
I'm now a research fellow at the Astera Institute. I study how we can align a general intelligence so that it wants to do what we want, even when it acts independently and is more intelligent than we are. The problem is complex: solutions to each aspect of it constrain solutions to the others. I have therefore written primarily about technical alignment, but I have also ranged into the societal and political pressures surrounding the creation of strong AI, and into how to communicate the situation to the public.
My work on AI risk, organized by topic
Curriculum Vitae: my academic neuroscience career.