I study machine learning and the brain.
The computational principles of the brain have been partially illuminated by studying how it keeps track of physical space. When a mammal performs a spatial task, large portions of the brain are devoted to tracking spatial variables like locations and orientations, and they do so in ways that are surprisingly simple and clever. When the animal performs a non-spatial task, those same neural circuits devote themselves to tracking the task's relevant variables, but in ways that are less understood. This raises a natural question: what is the "base algorithm" here? Is this neural circuitry essentially a space processor, handling many different scenarios by treating them as spatial tasks? Or, at its core, is it some other type of processor that happens to be naturally capable of handling physical space? By exploring the set of possible neural algorithms that fit the past few decades' experimental results, we may gain insight into how today's AI can be improved. And by evaluating how these possible neural algorithms can be used by today's AI, we may gain insight into how the brain makes sense of the world.
This is my main project at Numenta. (Fun fact: We live-stream most of our research meetings.)