Thursday, April 14, 2011

Hierarchical temporal memory is the core of our minds

This new technology is as important as the transistor.
Starting Nov 23, 2010, I dedicated myself full-time to understanding and optimizing HTM algorithms.
Other interests include cortical loops, covert and overt attention, goal selection, and motor control.
HTM gives scientists, philosophers, and biologists alike a modern method of practical research on their own table tops.

Formatting data, setting up HTM regions, and building additional algorithms around HTM all require an extremely clear and visual understanding of how patterns in space and time are represented by cortical algorithms.

The founders of Numenta believe biological principles will drive the next generation of intelligent computing. Numenta aims to be a catalyst for this next generation of computing by creating a new technology along with a flexible application development platform.

At the heart of Numenta's software are learning algorithms that discover the temporal structure in a stream of data. HTMs learn about objects in the world in the same way that people do, through a stream of sensory data. By modeling the pattern discovery mechanisms of the human brain, HTMs offer a means to solve pattern recognition and prediction problems in messy, large, real-world data sets. Example application areas include image and audio recognition, failure prediction in complex systems, web click prediction, fraud detection, and semantic analysis of text. The underlying learning algorithms used in HTMs are not specific to particular sensory domains and thus can be applied to a broad set of problems that involve modeling complex sensory data.

The HTM cortical learning algorithms and the corresponding biological theory perform the following functions:
  1. Convert input patterns into sparse distributed representations.
    The brain represents patterns through the activation of sets of cells, in a way that is described mathematically as a "Sparse Distributed Representation." Sparse distributed representations have many desirable qualities, including robustness to noise, high capacity, and the ability to simultaneously encode multiple meanings. The HTM cortical learning algorithms take advantage of these properties.
  2. Learn common transitions between sparse distributed representations.
    The neocortex learns by observing streams of temporal data, i.e. "movies" as opposed to "snapshots". When exposed to streams of sensory data the cortical learning algorithms remember transitions between patterns in the input stream. The transitions that occur again and again are reinforced; the transitions that do not occur again are forgotten. In the neocortex, this memory of transitions corresponds to the lateral connections between cells in a layer of a region.
  3. Predict likely future events.
    As described in On Intelligence, prediction is a key feature of human intelligence, and as we observe our environment we continuously predict what will happen next. When exposed to a sensory input, the HTM cortical learning algorithms use the previously learned transitions to make a prediction of likely future inputs. Depending on the learned transitions, the prediction can be massively parallel or highly specific. (A toy sketch of this transition learning and prediction follows this list.)
  4. Send predictions to the next level in the hierarchy.
    The neocortex is organized in a hierarchy of levels, where information (e.g. signals from the retina) comes into the lowest level, propagates to a higher level, etc. The HTM cortical learning algorithms operate at each level in the hierarchy. At each level the predicted patterns are combined, and this union of predictions becomes the output of a level in the HTM. The next level takes this input and turns it back into a sparse distributed representation. Forming a union of predictions is equivalent to a many-to-one mapping, and it leads to increased stability as you ascend the hierarchy. Both of these properties are required in hierarchical models.
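
To make functions 1 through 3 concrete, here is a minimal Python sketch. This is my own toy, not Numenta's implementation: the real cortical learning algorithms learn transitions on dendrite segments of individual cells using permanence-weighted synapses, while this sketch only counts first-order transitions between whole SDRs. The sizes and names (N_BITS, N_ACTIVE, random_sdr) are illustrative assumptions.

import random

random.seed(42)

N_BITS = 256   # bits per SDR (tiny next to a real HTM region)
N_ACTIVE = 8   # active bits per SDR, roughly 3% sparsity

def random_sdr():
    """A sparse distributed representation: a frozen set of active bit indices."""
    return frozenset(random.sample(range(N_BITS), N_ACTIVE))

# A toy alphabet of patterns and a repeating "movie" (stream) over them.
patterns = {name: random_sdr() for name in "ABCD"}
stream = ["A", "B", "C", "D"] * 10

# Function 2: learn transitions by counting how often one SDR follows another.
transitions = {}  # sdr -> {next_sdr: count}
prev = None
for name in stream:
    cur = patterns[name]
    if prev is not None:
        counts = transitions.setdefault(prev, {})
        counts[cur] = counts.get(cur, 0) + 1
    prev = cur

# Function 3: predict; the most reinforced transition from the current input wins.
current = patterns["B"]
predicted = max(transitions[current], key=transitions[current].get)
print(predicted == patterns["C"])  # True: after "B" the model expects "C"

Transitions that repeat accumulate counts (reinforcement); in a fuller sketch, transitions that never recur would decay and be forgotten.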

The HTM cortical learning algorithms model the behavior of a layer of cells in the neocortex, but they also exhibit a number of mathematical properties that are recognized as being important for machine learning:
  • High capacity
    Sparse distributed representations composed of just a few thousand bits can represent a very large number of distinct entities; the quick arithmetic after this list shows the scale.
  • Robustness to noise
    Sparse distributed representations and the HTM cortical learning algorithms are highly resistant to noise and occlusions. Performance degrades slowly with noise.
  • On-line learning
    The HTM cortical learning algorithms can learn on-line, meaning they can learn while doing inference. Brains can learn all the time. On-line learning is important for applications where the statistics can change over time.
  • Variable order sequence memory and prediction
    "Variable order" means that sequences can be of varying lengths. Sometimes you need to go back a long way in time to make a prediction and sometimes you only need to go back a tiny bit in time. The HTM cortical learning algorithms automatically learn the variable order statistics in the data and will adapt if those statistics change. The cortical learning algorithms achieve variable order memory by modeling the columnar nature of cells in a layer of the neocortex. Cells in a column have similar feed forward properties but vary in their response in the context of different sequences.
  • Sub-sampling
    An important property of sparse distributed representations is that knowing only a few active bits of a representation is almost as good as knowing all of them. Nowhere in the HTM cortical learning algorithms do we store copies of entire patterns. Learning is based on small sub-samples of patterns that, among other things, enable new means of generalization. These sub-samples correspond to the sets of synapses that form within an integrative region of a dendrite on a neuron. (The arithmetic after this list quantifies the safety of such sub-sampling.)
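
The "high capacity" and "sub-sampling" claims above can be checked with a little combinatorics. The sketch below is back-of-the-envelope arithmetic of my own; the sizes n = 2048 and w = 40 are typical of figures quoted in HTM material, and the sub-sample size s = 10 is an assumption for illustration.

from math import comb

n, w = 2048, 40  # assumed region size and number of active bits per SDR

# High capacity: the number of distinct SDRs with exactly w of n bits active.
print(f"unique SDRs: {comb(n, w):.2e}")  # on the order of 10^84

# Sub-sampling: suppose a dendrite remembers only s = 10 of the 40 active
# bits. The chance that an unrelated random SDR happens to contain all s of
# them (a false match) is C(n - s, w - s) / C(n, w).
s = 10
false_match = comb(n - s, w - s) / comb(n, w)
print(f"false-match probability: {false_match:.2e}")  # vanishingly small

The same arithmetic explains the graceful degradation under noise: even with several of the w active bits flipped, the overlap with a stored pattern remains far above anything chance would produce.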
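
For the variable order property, the biological picture is that every cell in a column responds to the same feed-forward input, but which cell actually fires depends on what came before. The following is a self-invented miniature of that idea; the cell-allocation scheme is a stand-in, not the way the real algorithm grows dendritic segments.

CELLS_PER_COLUMN = 4  # real HTM layers use many more cells per column

cell_of = {}  # (symbol, previous symbol) -> cell index within symbol's column

def active_cell(symbol, prev_symbol):
    """Same column for the same input, but a different cell per context."""
    key = (symbol, prev_symbol)
    if key not in cell_of:
        # Allocate the next unused cell in this symbol's column.
        cell_of[key] = sum(1 for k in cell_of if k[0] == symbol) % CELLS_PER_COLUMN
    return (symbol, cell_of[key])

transitions = {}  # active cell -> the column it predicts next

for seq in (["A", "B", "C"], ["X", "B", "Y"]):
    prev_sym = None
    prev_cell = None
    for sym in seq:
        cell = active_cell(sym, prev_sym)
        if prev_cell is not None:
            transitions[prev_cell] = sym
        prev_sym, prev_cell = sym, cell

# The same feed-forward input "B" yields different predictions because a
# different cell represents it in each context:
print(transitions[active_cell("B", "A")])  # C
print(transitions[active_cell("B", "X")])  # Y

Because "B after A" and "B after X" activate different cells in the same column, a prediction made from "B" carries exactly as much history as was needed to disambiguate it; that is what "variable order" means in practice.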

http://numenta.com/htm-overview/htm-algorithms.php

Numenta PDFs and videos:
http://numenta.com/htm-overview/education.php


Teddybots training videos:
http://www.youtube.com/user/htmtutor

1 comment:

  1. Hi Teddy,

    Have you seen the new open-source initiative regarding HTM theory and algorithms?

    http://forum.numenta.com/viewtopic.php?f=6&t=1550&sid=7656a0a0555d6763b35c40b7a35d9624

    Please contact us if you are interested.

    Thanks, David
    PS: My email is david_ragazzi at hot mail
