I recently stumbled on Jeff Hawkins's theory on the Web and read his book, published in 2004. The theory is about how brains work, and he argues that it would be valuable for building intelligent machines. I came across it belatedly, perhaps because until recently I had been taking a more symbolic approach. The book interests me partly because, like the author, I was enthusiastic about the brain around 1980 when I was a student.
The following is the gist of the theory as presented in the book:
- Vernon Mountcastle's principle: all regions of the neocortex perform the same basic operation.
- The neocortex uses stored memories to solve problems.
- The four attributes of the neocortical memory:
- The neocortex stores sequences of patterns.
- The neocortex recalls patterns auto-associatively.
- The neocortex stores patterns in an invariant form.
- That is to say, it recognizes patterns through learning.
- The neocortex stores patterns in a hierarchy.
- The primary function of brains is to make predictions.
- Prediction is the key to understanding intelligence.
- Each layer of the cortical hierarchy produces invariance (by learning).
- Understanding the function of the micro-layer structure of the cortex is important. For example, learning temporal sequences can be explained by this microstructure.
- Even sensory cortices have something to do with action.
- Thalamic feedback can be regarded as delayed feedback for an auto-associative memory.
- The feedback from a higher layer produces predictions of detailed patterns in a lower layer.
- The hippocampus sits at the top of the hierarchy, as the memory for novelty.
- The alternate upward pathway via the thalamus is for examining unusual patterns.
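The auto-associative recall mentioned above can be illustrated with a classic Hopfield-style memory. This is my own toy sketch, not a model from the book: binary patterns are stored in a weight matrix by a Hebbian outer-product rule, and a corrupted cue is completed back toward a stored pattern.

```python
import numpy as np

# Minimal Hopfield-style auto-associative memory (illustrative only).
# Patterns are vectors of +1/-1; storage is the Hebbian outer-product
# rule; recall repeatedly applies sign(W @ state) until it settles.

def train(patterns):
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)  # no self-connections
    return w / patterns.shape[0]

def recall(w, cue, steps=10):
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(w @ s)
        s[s == 0] = 1  # break ties deterministically
    return s

patterns = np.array([[ 1, 1, -1, -1,  1, -1,  1, -1],
                     [-1, 1,  1, -1, -1,  1, -1,  1]])
w = train(patterns)

noisy = patterns[0].copy()
noisy[0] = -noisy[0]  # corrupt the cue by flipping one bit
print(recall(w, noisy))  # recovers the first stored pattern
```

With only two stored patterns, one flipped bit is corrected in a single update step; capacity degrades quickly as more patterns are stored, which is part of why the book's hierarchical scheme is more interesting than a flat memory.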
Though the book gives an outline of the theory, it does not say much about implementation. As the work is said to be continuing at the Redwood Center for Theoretical Neuroscience, I should look into it.
One direction I might pursue would be to generalize the idea. For example, instead of implementing intelligence-as-prediction in an artificial neural network, one might implement it with mathematical prediction models such as HMMs.
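As a rough sketch of what prediction with an HMM looks like (my own toy example, with made-up parameters): filter the hidden-state belief over an observation sequence with the forward algorithm, then push the belief one step ahead to get a distribution over the next observation.

```python
import numpy as np

# Toy HMM prediction sketch (illustrative parameters, not fitted).
A = np.array([[0.9, 0.1],   # state transition probabilities
              [0.2, 0.8]])
B = np.array([[0.8, 0.2],   # emission probabilities per state
              [0.3, 0.7]])
pi = np.array([0.5, 0.5])   # initial state distribution

def filter_belief(obs):
    """Normalized forward algorithm: P(state_t | obs_1..t)."""
    belief = pi * B[:, obs[0]]
    belief /= belief.sum()
    for o in obs[1:]:
        belief = (belief @ A) * B[:, o]
        belief /= belief.sum()
    return belief

def predict_next_obs(obs):
    """P(obs_{t+1} | obs_1..t) = belief @ A @ B."""
    return filter_belief(obs) @ A @ B

print(predict_next_obs([0, 0, 1]))  # distribution over the next symbol
```

The same filter-then-predict loop is, loosely, what "intelligence as prediction" asks each cortical region to do, though the book's hierarchy would stack many such predictors and feed predictions downward.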
The author is more interested in creating kinds of intelligence we have never seen than in merely reproducing human functions. That's fine, but building a human-like or animal-like machine would still be a good testbed for building intelligence. Besides, cortical intelligence without human functions might look like yet another implementation of data mining. And without human functions, especially linguistic ability, it might be difficult to communicate with such intelligent machines. So I may opt for experimenting with perceptually 'anthropomorphic' machines.