Hey, I'm Marcus.

I study how the brain might work.

Research

How do you describe the novel in terms of the familiar, in a way that proves useful?

This question comes up in a lot of AI domains. If a vision system can describe a novel object as an arrangement of familiar parts, then it can predict what that object will look like from other viewpoints, and hence can comprehend new objects from a few quick observations. If a robot can describe a novel environment using elements of other environments, then its behavior can draw on its experience in those environments.

Long-term, I want to understand how the brain solves this problem. In the shorter term, I want to understand the bag of tricks the brain might plausibly use.

One essential tool from machine learning is the distributed representation. By learning to map inputs to high-dimensional vectors, artificial neural networks learn a function for describing novelty, and these output vectors can prove useful for classification, prediction over time, or behavior generation. Another potentially essential tool is describing an input as a variable-sized graph, where each node signifies a part, and each node and edge has its own distributed representation. This latter approach is powerful but requires more engineering. What is possible with these tools? Are other tools needed?
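
To make that pair of tools concrete, here is a minimal sketch in Python. It is a toy illustration, not code from any of the work below: the fixed random projection stands in for a learned encoder, and `encode`, `PartGraph`, and the dimensions are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Stand-in for a learned encoder: a fixed random projection mapping any
# 3-feature input to a point in a 256-dimensional space, so that similar
# inputs land near each other. A real system would learn this mapping.
PROJECTION = rng.normal(size=(256, 3))

def encode(features: np.ndarray) -> np.ndarray:
    """Distributed representation: input features -> high-dimensional unit vector."""
    v = PROJECTION @ features
    return v / np.linalg.norm(v)

class PartGraph:
    """A variable-sized graph description of an object or environment.

    Each node signifies a familiar part and holds its own distributed
    representation; each edge holds a vector describing the relationship
    (e.g. relative pose) between two parts.
    """
    def __init__(self) -> None:
        self.nodes: list[np.ndarray] = []
        self.edges: dict[tuple[int, int], np.ndarray] = {}

    def add_part(self, part_vector: np.ndarray) -> int:
        self.nodes.append(part_vector)
        return len(self.nodes) - 1

    def relate(self, a: int, b: int, relation_vector: np.ndarray) -> None:
        self.edges[(a, b)] = relation_vector

# Describe a novel object as an arrangement of familiar parts.
mug = PartGraph()
body = mug.add_part(encode(np.array([1.0, 0.0, 0.0])))
handle = mug.add_part(encode(np.array([0.0, 1.0, 0.0])))
mug.relate(body, handle, encode(np.array([0.0, 0.5, 0.5])))
```

The appeal of the graph form, at least in principle, is that a new viewpoint or rearrangement changes the edge vectors while the familiar part representations stay put.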

This set of questions is my playground.

Research artifacts

Hippocampal Spatial Mapping As Fast Graph Learning
Marcus Lewis
Poster at the 30th Annual Computational Neuroscience Meeting (2021)
Efficient and flexible representation of higher-dimensional cognitive variables with grid cells
Mirko Klukas, Marcus Lewis, Ila Fiete
PLOS Computational Biology (2020)
A Framework for Intelligence and Cortical Function Based on Grid Cells in the Neocortex
Jeff Hawkins, Marcus Lewis, Mirko Klukas, Scott Purdy, Subutai Ahmad
Frontiers in Neural Circuits (2019)
Locations in the Neocortex: A Theory of Sensorimotor Object Recognition Using Cortical Grid Cells
Marcus Lewis, Scott Purdy, Subutai Ahmad, Jeff Hawkins
Frontiers in Neural Circuits (2019)

Other projects, big and small

Using Grid Cells for Coordinate Transforms
Marcus Lewis
Poster, Grid Cell Meeting 2018, UCL, London, England
Grid cells: Visualizing the CAN model
A weekend in April 2017
See your HTM run: Stacks of time series
Written while living in hostels. February 2016
A visual running environment for HTM
Collaboration with Felix Andrews. November 2015

All blog posts

Some "Causal Inference" intuition
2021-11-04

Grid cells: Visualizing the CAN model
2017-04-16

Appendix: The classic TP
2016-05-05

The Life and Times of a Dendrite Segment
2016-04-28

The column SDR that wasn't random enough
2016-03-31

HTM time series: Column overlaps and boosting
2016-03-13

See your HTM run: Stacks of time series
2016-02-15

HTM time series: Now add synapse learning
2016-02-14

HTM time series: Two charts, one scale
2016-02-13

HTM time series: Number of dendrite segments
2016-01-31

HTM time series: Frequencies of column states
2016-01-16

How many coin-flips till heads?
2015-12-03

Three snapshots of Temporal Pooling
2015-10-12

Data flow: Visualizing big remote things, Part 2
2015-07-30

Data flow: Visualizing big remote things
2015-07-06

Using Bézier curves as easing functions
2015-02-26

Om Internals: Instances are getting reused. How?
2015-01-31

More

All posts | Twitter | Google Scholar | GitHub | LinkedIn | Numenta Research Meeting Videos