Sunday, April 17, 2011

Teddybots Botbattle

Botbattle"Botbattle is a worldwide arena for building your own bots and watching them destory other bots online. The goal at Botbattle.com is to advance bot programming technology and teach programming methods using a fun mutiplayer game."



Program a robot to play against other robots.
Watch it play a few rounds.
Update your code to better handle the battles.

The computer updates its robots' code to better handle your bot.
How is this done?
From your perspective it is irrelevant whether the computer is making opponent bots smarter or whether other people are.
Other users are doing the same as you.
To each user it appears that the bots become more intelligent and adapt.


Thursday, April 14, 2011

Speed optimization of HTM Pseudocode

The sample code as published loops through every cell twice in order to determine whether it will be predictive or active.

Each cell stores small lists of other cells which, if they were active, may put this cell into predict mode.
After activating a subset of cells, we must go through all cells to recalculate the values in these small lists.
The small lists designate which cells a prediction is based on.

The following example is highly visual and assumes you are very familiar with how activations and predictions are carried out across several cells.

The current algorithm calculates prediction in a way that is analogous to following an explosion's fallout backwards. One must look at each fragment on the ground to determine whether it came from a larger chunk that landed somewhere else. The problem with this is that every point on the floor must be scanned for pieces.
As the area grows to billions of square inches, we run into speed issues on non-parallel processors.
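
To make the contrast concrete, here is a rough Python sketch of this backward-looking style. It is only an illustration of the idea, not the published sample code; the names (Cell, segments, compute_predictive_slow) and the threshold value are assumptions.

# Illustrative sketch: every cell scans its stored lists ("segments")
# against the currently active cells, so the whole population is visited
# on every timestep no matter how sparse the activity is.

class Cell:
    def __init__(self, segments=None):
        # each segment is a small list of source cells whose activity
        # can put this cell into predict mode
        self.segments = segments if segments is not None else []
        self.predictive = False

def compute_predictive_slow(all_cells, active_cells, threshold=2):
    """Backward-looking pass: visit every cell and every segment."""
    active = set(active_cells)
    for cell in all_cells:                      # scan the entire population
        cell.predictive = False
        for segment in cell.segments:           # check each stored list
            overlap = sum(1 for src in segment if src in active)
            if overlap >= threshold:
                cell.predictive = True
                break

With N cells the cost per step is proportional to N times the number of segments per cell, regardless of how few cells are actually active; that is the "scan every point on the floor" problem.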

Under development is an algorithm that instead follows a large chunk hitting the ground and breaking into smaller pieces. Where the smaller pieces land is recorded at the spot where the large chunk hit. This gives us a list of positions at each landing point. This list tells us where the smaller fragments will go. At runtime the computer recursively follows these points and increments a simpler version of the individual segment update lists.

Pseudocode for the fast algorithm is in the works and is as follows:
For each cell on plist: set to predict mode if past threshold.

For each column on alist: set all cells active, or set only the cells in predict mode active.
- when each cell is set active
-- point cells on the wasactive list to this one
-- jump to its connections, increment their predict counts, and add them to plist2
-- add the active cell to the active list (this is also the learn cell)

For each cell on plist: inc if over threshold indexes.

Copy plist2 to plist, clear plist2.
Copy alist to wasactive.
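
Below is a rough Python sketch of how I read the list-driven update above. All names (Cell, step, PREDICT_THRESHOLD, and plist, alist, wasactive as function arguments) are illustrative assumptions, not part of any published implementation.

# Illustrative sketch: work is driven forward from the active cells,
# so only cells reachable from the current activity are touched each step.

PREDICT_THRESHOLD = 2   # assumed number of active inputs needed to predict

class Cell:
    def __init__(self):
        self.outgoing = []       # cells this cell connects forward to
        self.predict_count = 0   # incremented by active presynaptic cells

def step(active_columns, plist, wasactive):
    """One timestep; returns (alist, plist2) to feed into the next step."""
    alist, plist2 = [], []

    # 1. Cells on plist that accumulated enough support are predictive now.
    predictive = {c for c in plist if c.predict_count >= PREDICT_THRESHOLD}
    for cell in plist:
        cell.predict_count = 0

    # 2. For each active column, activate its predictive cells if it has any,
    #    otherwise activate (burst) every cell in the column.
    for column in active_columns:
        predicted = [c for c in column if c in predictive]
        for cell in (predicted or column):
            # learning: point the previously active cells at this one
            for prev in wasactive:
                if cell not in prev.outgoing:
                    prev.outgoing.append(cell)
            # push prediction forward along this cell's existing connections
            for target in cell.outgoing:
                target.predict_count += 1
                if target not in plist2:
                    plist2.append(target)
            alist.append(cell)   # the active cell is also the learn cell

    # 3. The caller then copies plist2 into plist and alist into wasactive.
    return alist, plist2

The per-step cost now scales with the number of active cells and their connections rather than with the total number of cells, which is the point of following the chunk forward instead of scanning the whole floor.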

Update: it evolved into something with nodes having two segments.

Hierarchical temporal memory is the core of our minds

This new technology is as important as the transistor.
Starting Nov 23, 2010, I dedicated myself full time to understanding and optimizing HTM algorithms.
Other interests include cortical loops, covert and overt attention, goal selection, and motor control.
HTM gives scientists, philosophers, and biologists a modern method of practical research on their table top.

Formatting data, setting up HTM regions, and building additional algorithms around HTM requires an extremely clear, visual understanding of how patterns in space and time are represented by cortical algorithms.

The founders of Numenta believe biological principles will drive the next generation of intelligent computing. Numenta aims to be a catalyst for this next generation of computing by creating a new technology along with a flexible application development platform.

At the heart of Numenta's software are learning algorithms that discover the temporal structure in a stream of data.  HTMs learn about objects in the world in the same way that people do, through a stream of sensory data. By modeling the pattern discovery mechanisms of the human brain, HTMs offer a means to solve pattern recognition and prediction problems in messy, large, real world data sets. Example application areas include image and audio recognition, failure prediction in complex systems, web click prediction, fraud detection, and semantic analysis of text. The underlying learning algorithms used in HTMs are not specific to particular sensory domains and thus can be applied to a broad set of problems that involve modeling complex sensory data.

The HTM cortical learning algorithms and the corresponding biological theory perform the following functions.
  1. Convert input patterns into sparse distributed representations.
    The brain represents patterns through the activation of sets of cells, in a way that is described mathematically as a "Sparse Distributed Representation." Sparse distributed representations have many desirable qualities, including robustness to noise, high capacity, and the ability to simultaneously encode multiple meanings. The HTM cortical learning algorithms take advantage of these properties.
  2. Learn common transitions between sparse distributed representations.
    The neocortex learns by observing streams of temporal data, i.e. "movies" as opposed to "snapshots". When exposed to streams of sensory data the cortical learning algorithms remember transitions between patterns in the input stream. The transitions that occur again and again are reinforced; the transitions that do not occur again are forgotten. In the neocortex, this memory of transitions corresponds to the lateral connections between cells in a layer of a region.
  3. Predict likely future events.
    As described in On Intelligence, prediction is a key feature of human intelligence, and as we observe our environment we continuously predict what will happen next. When exposed to a sensory input, the HTM cortical learning algorithms use the previously learned transitions to make a prediction of likely future inputs. The prediction can be massively parallel or highly specific based on the learned transitions.
  4. Send predictions to the next level in the hierarchy.
    The neocortex is organized in a hierarchy of levels, where information (e.g. signals from the retina) comes into the lowest level, propagates to a higher level, etc. The HTM cortical learning algorithms operate at each level in the hierarchy. At each level the predicted patterns are combined, and this union of predictions becomes the output of a level in the HTM. The next level takes this input and turns it back into a sparse distributed representation. Forming a union of predictions is equivalent to a many-to-one mapping, and it leads to increased stability as you ascend the hierarchy. Both of these properties are required in hierarchical models.
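
As a small illustration of point 4, the sketch below treats each predicted pattern as a set of active bit positions and forms the level's output as their union; the specific sets are made-up values, not anything from the text.

# Illustrative sketch: a union of sparse predictions is a many-to-one
# mapping, so the output changes more slowly than the inputs below it.

def level_output(predicted_patterns):
    """Combine the patterns predicted at this level into one output set."""
    output = set()
    for pattern in predicted_patterns:
        output |= pattern
    return output

# Two successive inputs that predict overlapping pattern sets...
t1 = [{3, 97, 410}, {3, 97, 511}, {12, 97, 803}]
t2 = [{3, 97, 511}, {12, 97, 803}, {12, 97, 950}]

# ...yield outputs that share most of their bits, i.e. the representation
# becomes more stable as you ascend the hierarchy.
print(level_output(t1) & level_output(t2))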

The HTM cortical learning algorithms model the behavior of a layer of cells in the neocortex, but they also exhibit a number of mathematical properties that are recognized as being important for machine learning.
  • High capacity
    Sparse distributed representations comprised of just a few thousand bits can represent a very large number of distinct entities (a rough calculation follows this list).
  • Robustness to noise
    Sparse distributed representations and the HTM cortical learning algorithms are highly resistant to noise and occlusions. Performance degrades slowly with noise.
  • On-line learning
    The HTM cortical learning algorithms can learn on-line, meaning they can learn while doing inference. Brains can learn all the time. On-line learning is important for applications where the statistics can change over time.
  • Variable order sequence memory and prediction
    "Variable order" means that sequences can be of varying lengths. Sometimes you need to go back a long way in time to make a prediction and sometimes you only need to go back a tiny bit in time. The HTM cortical learning algorithms automatically learn the variable order statistics in the data and will adapt if those statistics change. The cortical learning algorithms achieve variable order memory by modeling the columnar nature of cells in a layer of the neocortex. Cells in a column have similar feed forward properties but vary in their response in the context of different sequences.
  • Sub-sampling
    An important property of sparse distributed representations is that knowing only a few active bits of a representation is almost as good as knowing all of them. Nowhere in the HTM cortical learning algorithms do we store copies of entire patterns. Learning is based on small subsamples of patterns that, among other things, enable new means of generalization. These sub-samples of patterns correspond to the sets of synapses that form within an integrative region of a dendrite on a neuron.
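
The high-capacity and sub-sampling points can be made concrete with a little arithmetic. The numbers below (2048 bits, 40 active, a 10-bit sample) are common illustrative choices, not parameters taken from the text.

import math
import random

n, w = 2048, 40          # assumed SDR size and number of active bits

# High capacity: distinct SDRs with w active bits out of n.
print(math.comb(n, w))   # an astronomically large number, on the order of 10^84

# Sub-sampling: remembering only a few of the active bits still identifies
# the pattern, because an unrelated SDR is very unlikely to contain them all.
stored = random.sample(range(n), w)          # the "learned" pattern
sample = set(random.sample(stored, 10))      # keep only 10 of its active bits

other = set(random.sample(range(n), w))      # an unrelated random SDR
print(sample.issubset(set(stored)))          # True: still matches the original
print(sample.issubset(other))                # almost certainly False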

http://numenta.com/htm-overview/htm-algorithms.php

NUMENTA PDF and VIDEOS:
http://numenta.com/htm-overview/education.php


Teddybots training videos:
http://www.youtube.com/user/htmtutor