Hey, I'm Marcus.

I study how the brain might work.


How do you describe the novel in terms of the familiar, in a way that proves useful?

This question comes up in a lot of AI domains. If a vision system can describe a novel object as an arrangement of familiar parts, then it can predict what that object will look like from other viewpoints, and hence can comprehend new objects from a few quick observations. If a robot can describe a novel environment using elements of other environments, then its behaviors can be informed by previous experiences from those other environments.

Long-term, I want to understand how the brain solves this problem. In the shorter term, I want to understand the bag of tricks that the brain plausibly might use.

One essential tool from machine learning is the distributed representation. By learning to map inputs to high-dimensional vectors, artificial neural networks learn a function for describing novelty, and these output vectors can prove useful for classification, prediction over time, or behavior generation. Another potentially essential tool is describing input as a variable-sized graph in which each node signifies a part, and each node and edge has its own distributed representation. This latter approach is powerful but requires more engineering. What is possible with these tools? Are other tools needed?
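As a toy illustration of the graph-of-parts idea (the object, part names, and random vectors here are invented for the sketch, not drawn from any actual model), each node and edge can carry its own distributed representation, and the whole graph can be pooled into a single vector:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 64  # dimensionality of each distributed representation


class PartGraph:
    """A variable-sized graph where every node (a 'part') and every
    edge (a relation between parts) carries its own vector."""

    def __init__(self):
        self.nodes = {}   # part name -> vector
        self.edges = {}   # (part a, part b) -> vector

    def add_part(self, name, vec):
        self.nodes[name] = vec

    def relate(self, a, b, vec):
        self.edges[(a, b)] = vec

    def pooled(self):
        # A crude whole-object embedding: average all node and edge vectors.
        return np.mean(list(self.nodes.values()) + list(self.edges.values()),
                       axis=0)


# Describe a novel object ("mug") as an arrangement of familiar parts.
mug = PartGraph()
mug.add_part("cylinder", rng.standard_normal(DIM))
mug.add_part("handle", rng.standard_normal(DIM))
mug.relate("cylinder", "handle", rng.standard_normal(DIM))

embedding = mug.pooled()
print(embedding.shape)  # (64,)
```

Mean-pooling is the simplest possible readout; the point is only that a variable number of parts and relations still yields a fixed-size vector usable downstream.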

This set of questions is my playground.

Research artifacts

Hippocampal Spatial Mapping As Fast Graph Learning
Marcus Lewis
Poster at the 30th Annual Computational Neuroscience Meeting (2021)
Efficient and flexible representation of higher-dimensional cognitive variables with grid cells
Mirko Klukas, Marcus Lewis, Ila Fiete
PLOS Computational Biology (2020)
A Framework for Intelligence and Cortical Function Based on Grid Cells in the Neocortex
Jeff Hawkins, Marcus Lewis, Mirko Klukas, Scott Purdy, Subutai Ahmad
Front. Neural Circuits (2019)
Locations in the Neocortex: A Theory of Sensorimotor Object Recognition Using Cortical Grid Cells
Marcus Lewis, Scott Purdy, Subutai Ahmad, Jeff Hawkins
Front. Neural Circuits (2019)

Other projects, big and small

Using Grid Cells for Coordinate Transforms
Marcus Lewis
Poster, Grid Cell Meeting 2018, UCL, London, England
Grid cells: Visualizing the CAN model
A weekend in April 2017
See your HTM run: Stacks of time series
Written while living in hostels. February 2016
A visual running environment for HTM
Collaboration with Felix Andrews. November 2015

All blog posts

Some "Causal Inference" intuition

Grid cells: Visualizing the CAN model

Appendix: The classic TP

The Life and Times of a Dendrite Segment

The column SDR that wasn't random enough

HTM time series: Column overlaps and boosting

See your HTM run: Stacks of time series

HTM time series: Now add synapse learning

HTM time series: Two charts, one scale

HTM time series: Number of dendrite segments

HTM time series: Frequencies of column states

How many coin-flips till heads?

Three snapshots of Temporal Pooling

Data flow: Visualizing big remote things, Part 2

Data flow: Visualizing big remote things

Using Bézier curves as easing functions

Om Internals: Instances are getting reused. How?


All posts | Twitter | Google Scholar | GitHub | LinkedIn | Numenta Research Meeting Videos