Hierarchical temporal memory

From Wikipedia, the free encyclopedia

Hierarchical temporal memory (HTM) is a machine learning model developed by Jeff Hawkins and Dileep George of Numenta, Inc. that models some of the structural and algorithmic properties of the neocortex. HTM is a biomimetic model based on the memory-prediction theory of brain function described by Jeff Hawkins in his book On Intelligence. HTM is a method for discovering and inferring the high-level causes of observed input patterns and sequences, thus building an increasingly complex model of the world.

Jeff Hawkins states that HTM does not present any new idea or theory, but combines existing ideas to mimic the neocortex with a simple design that provides a large range of capabilities. HTM combines and extends approaches used in Bayesian networks and in spatial and temporal clustering algorithms, while using a tree-shaped hierarchy of nodes that is common in neural networks.

HTM structure and algorithms

An example tree-shaped HTM hierarchy with three levels, used for image recognition

A typical HTM network is a tree-shaped hierarchy of levels that are composed of smaller elements called nodes or columns. A single level in the hierarchy is also called a region. Higher hierarchy levels often have fewer nodes and therefore lower spatial resolution. Higher hierarchy levels can reuse patterns learned at the lower levels by combining them to memorize more complex patterns.

Each HTM node has the same basic functionality. In learning and inference modes, sensory data comes into the bottom level nodes. In generation mode, the bottom level nodes output the generated pattern of a given category. The top level usually has a single node that stores the most general categories (concepts) which determine, or are determined by, smaller concepts in the lower levels which are more restricted in time and space. When in inference mode, a node in each level interprets information coming in from its child nodes in the lower level as probabilities of the categories it has in memory.
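
The following sketch shows, under illustrative assumptions, how such a hierarchy might be organized as a data structure; the Node class, its methods, and the placeholder belief computation are not Numenta's API, only a way to visualize how beliefs flow from child nodes to their parents.

  # A minimal sketch of a tree-shaped HTM hierarchy; names and structure are
  # illustrative assumptions, not Numenta's actual implementation.

  class Node:
      def __init__(self, children=None):
          self.children = children or []   # child nodes in the level below
          self.memory = []                 # learned categories / temporal groups

      def infer(self, inputs):
          # For a leaf node, `inputs` is its patch of sensory data; for an
          # interior node, it is a list with one entry per child node.
          if not self.children:
              evidence = inputs
          else:
              evidence = [child.infer(x) for child, x in zip(self.children, inputs)]
          return self.belief(evidence)

      def belief(self, evidence):
          # Placeholder: a real node would return probabilities over the
          # categories stored in self.memory.
          return evidence

  # A three-level hierarchy: four leaf nodes, two mid-level nodes, one top node.
  leaves = [Node() for _ in range(4)]
  middle = [Node(leaves[:2]), Node(leaves[2:])]
  top = Node(middle)
  top_belief = top.infer([["patch0", "patch1"], ["patch2", "patch3"]])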

Each HTM region learns by identifying and memorizing spatial patterns - combinations of input bits that often occur at the same time. It then identifies temporal sequences of spatial patterns that are likely to occur one after another.

Zeta 1: first generation node algorithms

During training, a node receives a temporal sequence of spatial patterns as its input. The learning process consists of two stages:

  1. Spatial pooling identifies frequently observed patterns and memorizes them as coincidences. Patterns that are significantly similar to each other are treated as the same coincidence. A large number of possible input patterns are reduced to a manageable number of known coincidences.
  2. Temporal pooling partitions coincidences that are likely to follow each other in the training sequence into temporal groups. Each group of patterns represents a "cause" of the input pattern (or "name" in On Intelligence).
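
As a rough illustration of these two stages, the sketch below clusters input patterns into coincidences with a simple distance threshold and then groups coincidences by transition counts; the threshold, the grouping heuristic, and all parameter values are assumptions made for exposition, not the actual Zeta 1 procedure.

  import numpy as np

  def spatial_pooling(patterns, threshold=0.1):
      """Reduce raw input patterns to a small set of stored 'coincidences'."""
      coincidences, labels = [], []
      for p in map(np.asarray, patterns):
          # A pattern close enough to a stored coincidence is treated as that
          # coincidence; otherwise it becomes a new one.
          match = next((i for i, c in enumerate(coincidences)
                        if np.linalg.norm(p - c) < threshold), None)
          if match is None:
              coincidences.append(p)
              match = len(coincidences) - 1
          labels.append(match)
      return coincidences, labels

  def temporal_pooling(labels, min_count=2):
      """Partition coincidences into temporal groups: coincidences that follow
      one another at least min_count times in the sequence share a group."""
      n = max(labels) + 1
      transitions = np.zeros((n, n), dtype=int)
      for a, b in zip(labels, labels[1:]):
          transitions[a, b] += 1
      follows = (transitions + transitions.T) >= min_count
      group, next_group = [-1] * n, 0
      for start in range(n):                  # connected components over "follows"
          if group[start] != -1:
              continue
          stack = [start]
          while stack:
              i = stack.pop()
              if group[i] != -1:
                  continue
              group[i] = next_group
              stack.extend(j for j in range(n) if follows[i, j] and group[j] == -1)
          next_group += 1
      return group                            # group[i] = temporal group of coincidence i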

During inference (recognition), the node calculates the probabilities that the input pattern belongs to each known coincidence. Then it calculates the probabilities that the input represents each temporal group. The set of probabilities assigned to the groups is called a node's "belief" about the input pattern. (In a simplified implementation, the node's belief consists of only one winning group.) This belief is the result of the inference and is passed to one or more "parent" nodes in the next higher level of the hierarchy.
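
Continuing the sketch above, the belief over temporal groups could be computed as follows; the Gaussian similarity used to score coincidences is an assumption standing in for the node's actual matching rule.

  import numpy as np

  def node_belief(pattern, coincidences, group, sigma=1.0):
      pattern = np.asarray(pattern)
      # Probability that the input pattern matches each stored coincidence.
      dists = np.array([np.linalg.norm(pattern - c) for c in coincidences])
      p_coincidence = np.exp(-(dists ** 2) / (2 * sigma ** 2))
      p_coincidence /= p_coincidence.sum()
      # Belief over temporal groups: sum the probabilities of member coincidences.
      belief = np.zeros(max(group) + 1)
      for i, g in enumerate(group):
          belief[g] += p_coincidence[i]
      return belief   # this vector is what gets passed to the parent node(s)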

"Unexpected" patterns to the node do not have a dominant probability of belonging to any one temporal group, but have nearly equal probabilities of belonging to several of the groups. If sequences of patterns are similar to the training sequences, then the assigned probabilities to the groups will not change as often as patterns are received. The output of the node will not change as much, and a resolution in time is lost.

In a more general scheme, the node's belief can be sent to the input of any node(s) in any level(s), but the connections between the nodes are still fixed. The higher-level node combines this output with the output from other child nodes thus forming its own input pattern.

Since resolution in space and time is lost in each node as described above, beliefs formed by higher-level nodes represent an even larger range of space and time. This is meant to reflect the organization of the physical world as it is perceived by the human brain. Larger concepts (e.g. causes, actions and objects) are perceived to change more slowly and consist of smaller concepts that change more quickly. Jeff Hawkins postulates that brains evolved this type of hierarchy to match, predict, and affect the organization of the external world.

More details about the functioning of Zeta 1 HTM can be found in Numenta's old documentation.[1]

Cortical learning algorithms

The new generation of HTM learning algorithms relies on fixed-sparsity distributed representations.[2][3] It models cortical columns that tend to inhibit neighboring columns in the neocortex thus creating a sparse activation of columns. A region creates a sparse representation from its input, so that a fixed percentage of columns are active at any one time.
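
A minimal numeric illustration of such a fixed-sparsity representation is sketched below; the 2048-column size is an assumption, the 2% sparsity follows the figure quoted later in this article, and the per-column scores are random placeholders for real input overlaps.

  import numpy as np

  n_columns = 2048
  sparsity = 0.02
  n_active = int(n_columns * sparsity)        # 40 active columns

  scores = np.random.rand(n_columns)          # stand-in for per-column input overlap
  active = np.argsort(scores)[-n_active:]     # indices of the winning columns

  sdr = np.zeros(n_columns, dtype=bool)
  sdr[active] = True                          # the region's sparse output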

Each HTM region consists of a number of highly interconnected cortical columns. A region is similar to layer III of the neocortex. A cortical column is understood as a group of cells that have the same receptive field. Each column has a number of cells that are able to remember several previous states. A cell can be in one of three states: active, inactive, or predictive.

Spatial pooling: The receptive field of each column is a fixed number of inputs that are randomly selected from a much larger number of node inputs. Based on the input pattern, some columns will receive more active input values. Spatial pooling selects a relatively constant number of the most active columns and inactivates (inhibits) other columns in the vicinity of the active ones. Similar input patterns tend to activate a stable set of columns. The amount of memory used by each region can be increased to learn more complex spatial patterns or decreased to learn simpler patterns.
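
The following sketch illustrates spatial pooling under simplifying assumptions: each column samples a fixed random subset of the input bits, and inhibition is applied globally (keeping the most active columns overall) rather than within local neighborhoods as described above; all sizes and the example input are illustrative.

  import numpy as np

  rng = np.random.default_rng(0)
  n_inputs, n_columns, receptive_size, n_active = 1024, 256, 64, 10

  # Fixed random receptive field for every column.
  receptive_fields = np.array(
      [rng.choice(n_inputs, size=receptive_size, replace=False)
       for _ in range(n_columns)]
  )

  def spatial_pool(input_bits):
      """input_bits: boolean array of length n_inputs -> indices of active columns."""
      overlap = input_bits[receptive_fields].sum(axis=1)   # active inputs seen by each column
      return np.argsort(overlap)[-n_active:]               # the most active columns win

  example_input = rng.random(n_inputs) < 0.1    # a sparse binary input, ~10% bits on
  active_columns = spatial_pool(example_input)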

Representing the input in the context of previous inputs: If one or more cells in the active column are in the predictive state (see below), they will be the only cells to become active in the current time step. If none of the cells in the active column are in the predictive state (during the initial time step or when the activation of this column was not expected), all cells are made active.
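
A sketch of this cell-selection rule might look like the following; the array shapes and the cells-per-column figure are illustrative assumptions.

  import numpy as np

  n_columns, cells_per_column = 256, 32

  def activate_cells(active_columns, predictive):
      """predictive: bool array (n_columns, cells_per_column) from the previous step."""
      active = np.zeros((n_columns, cells_per_column), dtype=bool)
      for col in active_columns:
          if predictive[col].any():
              active[col] = predictive[col]   # only the predicted cells fire
          else:
              active[col] = True              # unexpected column: every cell fires
      return active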

Predicting future inputs and temporal pooling: When a cell becomes active, it gradually forms connections to nearby cells that tend to be active during several previous time steps. Thus a cell learns to recognize a known sequence by checking whether the connected cells are active. If a large number of connected cells are active, this cell switches to the predictive state in anticipation of one of the next few inputs of the sequence. The output of a region includes columns in both active and predictive states. Thus columns are active over longer periods of time, which leads to greater temporal stability seen by the parent region.
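
The learning and prediction steps could be sketched as follows; the connection store, the activation threshold, and the flat cell indexing are assumptions made for brevity, not how the cortical learning algorithm represents dendritic connections.

  n_cells = 256 * 32                 # total cells (columns x cells per column)
  threshold = 10                     # connected active cells needed to predict

  # connections[i] = set of cell indices that cell i has learned to listen to
  connections = [set() for _ in range(n_cells)]

  def learn(active_now, active_previously):
      """Grow connections from currently active cells back to previously active cells."""
      for cell in active_now:
          connections[cell].update(active_previously)

  def predict(active_now):
      """Return the cells that expect to become active in one of the next steps."""
      active_set = set(active_now)
      return {cell for cell in range(n_cells)
              if len(connections[cell] & active_set) >= threshold}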

Cortical learning algorithms are able to learn continuously from each new input pattern, therefore no separate inference mode is necessary. During inference, HTM tries to match the stream of inputs to fragments of previously learned sequences. This allows each HTM region to be constantly predicting the likely continuation of the recognized sequences. The index of the predicted sequence is the output of the region. Since predictions tend to change less frequently than the input patterns, this leads to increasing temporal stability of the output in higher hierarchy levels. Prediction also helps to fill in missing patterns in the sequence and to interpret ambiguous data by biasing the system to infer what it predicted.

Cortical learning algorithms are currently being offered as a service in private beta called Grok by Numenta.[4]

In September 2011, the following question was posed to Jeff Hawkins with regard to the cortical learning algorithms: "How do you know if the changes you are making to the model are good or not?" Hawkins responded: "There are two categories for the answer: one is to look at neuroscience, and the other is methods for machine intelligence. In the neuroscience realm there are many predictions that we can make, and those can be tested. If our theories explain a vast array of neuroscience observations then it tells us that we’re on the right track. In the machine learning world they don’t care about that, only how well it works on practical problems. In our case that remains to be seen. To the extent you can solve a problem that no one was able to solve before, people will take notice."[5]

Comparing HTM and neocortex

HTM is best compared with the neocortex at the level of high-level structure and functionality. HTM attempts to implement the functionality that is characteristic of a hierarchically related group of cortical regions in the neocortex. A region of the neocortex corresponds to one or more levels in the HTM hierarchy, while the hippocampus is remotely similar to the highest HTM level. A single HTM node may represent a group of cortical columns within a certain region.

Although it is primarily a functional model, several attempts have been made to relate the algorithms of the HTM with the structure of neuronal connections in the layers of neocortex.[6][7] The neocortex is organized in vertical columns of 6 horizontal layers. The 6 layers of cells in the neocortex should not be confused with levels in an HTM hierarchy.

HTM nodes attempt to model a portion of cortical columns (80 to 100 neurons) with approximately 20 HTM "cells" per column. HTMs model only layers 3 and 4 to detect spatial and temporal features of the input, with 1 cell per column in layer 4 for spatial "pooling" and 1 to 2 dozen cells per column in layer 3 for temporal pooling. A key capability of both HTMs and the cortex is the ability to deal with noise and variation in the input, which results from using a "sparse distributed representation" in which only about 2% of the columns are active at any given time.

An HTM attempts to model a portion of the cortex's learning and plasticity as described above. Differences between HTMs and neurons include:[8]

  • strictly binary signals and synapses
  • no direct inhibition of synapses or dendrites (but simulated indirectly)
  • only models layers 3 and 4 (no 1, 5, or 6)
  • no "motor" control (layer 5)
  • no feedback from a higher level's layer 6 that goes back to a lower level's layer 1
  • only 1 cell for layer 4
  • no lateral connections in layer 4
  • layer 2 is assumed to be included in layer 3.

Similarity to other models

Bayesian networks

Like a Bayesian network, an HTM comprises a collection of nodes that are arranged in a tree-shaped hierarchy. Each node in the hierarchy discovers an array of causes in the input patterns and temporal sequences it receives. A Bayesian belief revision algorithm is used to propagate feed-forward and feedback beliefs from child to parent nodes and vice versa. However, the analogy to Bayesian networks is limited, because HTMs can be self-trained (such that each node has an unambiguous family relationship), cope with time-sensitive data, and provide mechanisms for covert attention.[9]

A theory of hierarchical cortical computation based on Bayesian belief propagation was proposed earlier by Tai Sing Lee and David Mumford.[10] While HTM is mostly consistent with these ideas, it adds details about handling invariant representations in the visual cortex.[11]

Neural networks

Like any system that models details of the neocortex, HTM can be viewed as an artificial neural network. The tree-shaped hierarchy commonly used in HTMs resembles the usual topology of traditional neural networks. HTMs attempt to model cortical columns (80 to 100 neurons) and their interactions with fewer HTM "neurons". The goal of current HTMs is to capture as much of the functionality of neurons and the network (as they are currently understood) as possible within the capability of typical computers and in areas that can be made readily useful, such as image processing. For example, feedback from higher levels and motor control are not attempted because it is not yet understood how to incorporate them, and binary synapses are used instead of variable ones because they were determined to be sufficient for current HTM capabilities.

LAMINART and similar neural networks researched by Stephen Grossberg attempt to model both the infrastructure of the cortex and the behavior of neurons in a temporal framework to explain neurophysiological and psychophysical data. However, these networks are, at present, too complex for realistic application.[12]

HTM is also related to work by Tomaso Poggio, including an approach for modeling the ventral stream of the visual cortex known as HMAX. Similarities of HTM to various AI ideas are described in the December 2005 issue of the Artificial Intelligence journal.[13]

Neocognitron

The neocognitron, a hierarchical multilayered neural network proposed by Kunihiko Fukushima in 1980, is one of the first deep learning neural network models.[14]

Deep Learning

Recent connectionist "deep learning" models are very similar to HTM, and these have likewise been linked to developmental theories of the human neocortex developed by neuroscientists in the early 1990s.[5][15]

NuPIC platform and development tools

The HTM model has been implemented in a research release of a software API called "Numenta Platform for Intelligent Computing" (NuPIC). Currently, the software is available as a free download and can be licensed for general or academic research as well as for developing commercial applications. NuPIC is written in C++ and Python.

A number of HTM software development tools have been implemented using NuPIC:

  • Numenta Vision Toolkit - allows the user to create a customized image recognition system. It assists in collecting and preparing images for training, trains the HTM network, and recognizes new images. The Vision Toolkit can also optimize network training parameters by selecting one of the predefined network configurations that were found to work well with certain image types.
  • Vitamin D Toolkit (discontinued) - provides a set of visual tools to inspect network configuration, find recognition problems and fine-tune network parameters.
  • Numenta Prediction Toolkit (future) - is planned to include tools for simple development of general-purpose HTM networks.

Applications

The following commercial applications have been developed using NuPIC:

  • Vitamin D Video - a video surveillance application that uses HTM to detect people in video by differentiating them from other moving objects.
  • EDSA power analytics system [16] - an electrical power analytics, supervision and diagnostic system scheduled to be deployed in an oil field in the North Sea. It uses HTM to learn and distinguish between “routine” and “non-routine” events in an electrical power network. The system alerts an operator when a situation is not normal.
  • Lockheed Martin has been using and modifying HTM technology for several applications such as integrating multiple types of sensory inputs and object recognition from geospatial imagery of an urban environment.[17]
  • iResemble [18] - an iPhone application implemented using the Vision Toolkit. It has a trained HTM network that classifies a submitted photo and outputs a belief of what type of person the photo resembles.

References

  1. ^ http://web.archive.org/web/20090527174304/http://numenta.com/for-developers/education/general-overview-htm.php
  2. ^ Jeff Hawkins lecture describing cortical learning algorithms
  3. ^ "New Insights from Neuroscience" (PDF). Retrieved 26 November 2012.
  4. ^ http://www.numenta.com/grok_info.html
  5. ^ a b From Neural Networks to Deep Learning: Zeroing in on the Human Brain
  6. ^ Jeff Hawkins, Sandra Blakeslee "On Intelligence"
  7. ^ Towards a Mathematical Theory of Cortical Micro-circuits. Dileep George and Jeff Hawkins. PLoS Computational Biology 5(10)
  8. ^ https://www.groksolutions.com/htm-overview/education/HTM_CorticalLearningAlgorithms.pdf
  9. ^ Hawkins' Blog
  10. ^ Tai Sing Lee, David Mumford "Hierarchical Bayesian Inference in the Visual Cortex", 2002
  11. ^ http://dileepgeorge.com/blog/?p=5
  12. ^ Grossberg, S. (2007). Towards a unified theory of neocortex: Laminar cortical circuits for vision and cognition. Technical Report CAS/CNS-TR-2006-008. For Computational Neuroscience: From Neurons to Theory and Back Again, eds: Paul Cisek, Trevor Drew, John Kalaska; Elsevier, Amsterdam, pp. 79-104. http://cns.bu.edu/Profiles/Grossberg/GroCisek2007.pdf
  13. ^ ScienceDirect - Artificial Intelligence, Volume 169, Issue 2, Page 103-212 (December 2005)
  14. ^ Neocognitron at Scholarpedia
  15. ^ J. Elman, et al. (1996) Rethinking Innateness. MIT Press.
  16. ^ http://www.edsa.com/pa_articles/self_learning.php
  17. ^ http://www.atl.external.lmco.com/papers/1597.pdf
  18. ^ https://www.appstorehq.com/iresemble-iphone-140379/app
