
IBM Tests Mobile Computing Pioneer’s Controversial Brain Algorithms

For more than a decade, Jeff Hawkins, founder of the mobile computing company Palm, has dedicated his time and fortune to a theory meant to explain the workings of the human brain and to provide a blueprint for a powerful new kind of artificial intelligence software.

The algorithms are being tested at IBM on tasks including interpreting satellite imagery, and the research group is working on designs for computers that would implement Hawkins’s ideas in hardware.

Many researchers have come to focus on a technique called deep learning, which trains multi-layered networks of artificial neurons to find patterns in data (see “10 Breakthrough Technologies 2013: Deep Learning”).

Numenta’s algorithms also operate in a network, but they are aimed at faithfully recreating the behavior of repeating circuits of roughly 100 neurons found in the outer layer of the brain called the neocortex.

“It’s not oversimplified, and not so complicated that there is little chance to ever build a large-scale model,” says IBM Research’s Winfried Wilcke of Numenta’s theory. The IBM group is working on using Numenta’s algorithms to analyze satellite imagery of crops, and to spot early warning signs of mechanical failures in data from pumps or other machinery.

“So far I have not seen a knock-down argument that they yield better performance in any major challenge area,” says the cognitive scientist Gary Marcus, who argues that Hawkins’s algorithms mimic only some of the known mechanisms at work in the brain, and that the majority of the brain’s function still remains a mystery.

Hawkins has retreated from an earlier plan to make money by marketing Numenta’s first product: Grok, software launched in late 2013 that looks for anomalies in logs produced by software hosted in the cloud.

IBM peers into Numenta's machine intelligence approach

For developers, the company's machine intelligence algorithms, encoders, and application code are available in an open source project called NuPIC, which stands for Numenta Platform for Intelligent Computing.
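For a flavor of how those pieces fit together, here is a minimal sketch of the encoder -> spatial pooler -> temporal memory pipeline, assuming the open source nupic package's Python API; class names follow published NuPIC releases, but exact signatures and default parameters may differ between versions.

    # Minimal NuPIC pipeline sketch: encoder -> spatial pooler -> temporal memory.
    # Assumes the open source nupic package; signatures may vary by release.
    from __future__ import print_function   # nupic historically targets Python 2
    import numpy as np
    from nupic.encoders.scalar import ScalarEncoder
    from nupic.algorithms.spatial_pooler import SpatialPooler
    from nupic.algorithms.temporal_memory import TemporalMemory

    encoder = ScalarEncoder(w=21, minval=0, maxval=100, n=400, forced=True)
    sp = SpatialPooler(inputDimensions=(400,), columnDimensions=(1024,))
    tm = TemporalMemory(columnDimensions=(1024,))

    for value in [10, 20, 30, 10, 20, 30]:       # a simple repeating sequence
        input_bits = encoder.encode(value)       # scalar -> sparse binary vector
        active = np.zeros(1024, dtype="uint32")
        sp.compute(input_bits, True, active)     # marks the winning columns with 1s
        tm.compute(sorted(np.flatnonzero(active)), learn=True)
        print(value, "predictive cells:", len(tm.getPredictiveCells()))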

Simonite recalled comments made earlier this year by IBM Research's Winfried Wilcke, who said that experts usually must train machine learning software with example data before it can go to work, while Numenta's algorithms might make it possible to apply machine learning to many more problems.

The Numenta algorithms are aimed at recreating the behavior of repeating circuits of neurons found in the neocortex, the brain's outer surface and the seat of most of its higher functions.

Leading the New Era of Machine Intelligence

We work to understand the principles of intelligence and build machines that work on the same principles. We believe we are on the path to machine intelligence, and that creating intelligent machines is important. We are at the beginning of a thrilling new era of computing that will unfold over the coming decades, and we invite you to learn about how our approach is helping to lead it.

Hierarchical temporal memory

Hierarchical temporal memory (HTM) is a biologically constrained theory of machine intelligence originally described in the 2004 book On Intelligence[1] by Jeff Hawkins with Sandra Blakeslee.

The top level usually has a single node that stores the most general categories (concepts), which determine, or are determined by, smaller concepts in the lower levels that are more restricted in time and space.

Since resolution in space and time is lost in each node as information ascends the hierarchy, beliefs formed by higher-level nodes represent an even larger range of space and time.
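To make the widening-coverage idea concrete, here is a toy sketch, illustrative only and not Numenta's code, in which each node pools a fixed number of children, so that nodes higher in the hierarchy cover a wider span of the input:

    # Toy HTM-style hierarchy: each node pools fan_in children, so higher
    # levels cover a wider span of the input in space (and, in a real HTM,
    # in time as well). Illustrative only.

    def build_levels(inputs, fan_in=2):
        """Group a flat list of inputs into successively smaller levels."""
        levels = [inputs]
        while len(levels[-1]) > 1:
            below = levels[-1]
            levels.append([tuple(below[i:i + fan_in])
                           for i in range(0, len(below), fan_in)])
        return levels

    for depth, nodes in enumerate(build_levels(["a", "b", "c", "d"])):
        print("level", depth, "->", nodes)
    # level 0 -> ['a', 'b', 'c', 'd']
    # level 1 -> [('a', 'b'), ('c', 'd')]
    # level 2 -> [(('a', 'b'), ('c', 'd'))]   <- single top node, widest coverage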

It relies on sparse distributed representations and a more biologically realistic neuron model.[8] There are two core components: a spatial pooling algorithm[9] that creates sparse representations, and a sequence memory algorithm[5] that learns to represent and predict complex sequences.
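The spatial pooling step can be illustrated with a deliberately simplified sketch: each column scores its overlap with the input through a fixed set of random connections, and the top ~2% of columns by overlap become active. The real algorithm also learns connection permanences and applies boosting, both omitted here:

    # Toy spatial pooler (not Numenta's implementation): fixed random
    # connections, overlap scoring, top-k winner selection.
    import numpy as np

    rng = np.random.default_rng(0)
    n_inputs, n_columns = 256, 1024
    n_active = int(n_columns * 0.02)      # keep ~2% of columns active

    # each column has a fixed random subset of potential connections
    connections = (rng.random((n_columns, n_inputs)) < 0.3).astype(int)

    def spatial_pool(input_bits):
        """Return indices of the columns with the highest input overlap."""
        overlap = connections @ input_bits        # active inputs each column sees
        return np.sort(np.argsort(overlap)[-n_active:])

    x = (rng.random(n_inputs) < 0.1).astype(int)  # a random binary input
    print(spatial_pool(x))                        # a sparse set of column indices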

Predicting future inputs and temporal pooling: When a cell becomes active, it gradually forms connections to nearby cells that tend to be active during several previous time steps.
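Below is a deliberately simplified sketch of that idea, not Numenta's sequence memory algorithm: cells strengthen connections to cells that were active on recent time steps, and a cell becomes predictive when enough of its learned predecessors are currently active.

    # Toy sequence memory: Hebbian-style strengthening of connections from
    # recently active cells to currently active ones. Illustrative only.
    from collections import defaultdict

    strength = defaultdict(float)   # (from_cell, to_cell) -> connection strength
    history = []                    # recently active cell sets

    def learn(active_cells, window=3, increment=0.1):
        """Strengthen connections from recently active cells to the new ones."""
        for prev_cells in history[-window:]:
            for prev in prev_cells:
                for cell in active_cells:
                    strength[(prev, cell)] += increment
        history.append(set(active_cells))

    def predict(active_cells, threshold=0.3):
        """Cells whose learned predecessors are active enough become predictive."""
        votes = defaultdict(float)
        for (src, dst), s in strength.items():
            if src in active_cells:
                votes[dst] += s
        return {cell for cell, v in votes.items() if v >= threshold}

    for step in [{1, 2}, {3, 4}, {1, 2}, {3, 4}]:   # a repeating toy sequence
        learn(step)
    print(predict({1, 2}))                          # {3, 4} once the pattern is learned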

'To the extent you can solve a problem that no one was able to solve before, people will take notice.'[11] The third generation builds on the second generation and adds a theory of sensorimotor inference in the neocortex.[12][13] This theory proposes that cortical columns at every level of the hierarchy can learn complete models of objects over time, and that features are learned at specific locations on the objects.

Although it is primarily a functional model, several attempts have been made to relate the algorithms of HTM to the structure of neuronal connections in the layers of the neocortex.[14][15] The neocortex is organized in vertical columns of 6 horizontal layers.

HTMs model only layers 2 and 3 to detect spatial and temporal features of the input, with 1 cell per column in layer 2 for spatial 'pooling' and 1 to 2 dozen cells per column in layer 3 for temporal pooling.

A key property of HTMs, and of the cortex, is their ability to deal with noise and variation in the input, which results from using a 'sparse distributed representation' in which only about 2% of the columns are active at any given time.
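That noise tolerance is easy to demonstrate with hypothetical SDR sizes (2,048 bits, about 2% active): moving a quarter of the active bits still leaves a large overlap with the original, while two unrelated random SDRs share almost no bits.

    # SDR noise-tolerance demo with assumed sizes (2,048 bits, 40 active).
    import numpy as np

    rng = np.random.default_rng(0)
    n_bits, n_active = 2048, 40        # ~2% of bits active, as in the text

    def random_sdr():
        sdr = np.zeros(n_bits, dtype=bool)
        sdr[rng.choice(n_bits, n_active, replace=False)] = True
        return sdr

    def add_noise(sdr, n_flips=10):
        """Move some active bits to random inactive positions."""
        noisy = sdr.copy()
        noisy[rng.choice(np.flatnonzero(noisy), n_flips, replace=False)] = False
        noisy[rng.choice(np.flatnonzero(~noisy), n_flips, replace=False)] = True
        return noisy

    a = random_sdr()
    print("overlap with noisy copy:", np.sum(a & add_noise(a)))   # ~30 of 40 bits
    print("overlap with random SDR:", np.sum(a & random_sdr()))   # ~0-2 bits by chance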

There are, however, differences between HTM cells and biological neurons.[16] Integrating a memory component with neural networks has a long history, dating back to early research in distributed representations[17][18] and self-organizing maps.

For example, in sparse distributed memory (SDM), the patterns encoded by neural networks are used as memory addresses for content-addressable memory, with 'neurons' essentially serving as address encoders and decoders.[19][20]
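A toy autoassociative SDM in the spirit of Kanerva's scheme (a sketch, not the NASA implementation) makes the address encoder/decoder role concrete: random 'hard locations' fire for any address within a Hamming radius, and a stored pattern is written to, and read back from, all of them at once.

    # Toy sparse distributed memory: random hard locations act as address
    # decoders; all locations within a Hamming radius take part in each
    # write and read. Sketch only, with assumed sizes.
    import numpy as np

    rng = np.random.default_rng(0)
    n_bits, n_locations, radius = 256, 2000, 112

    addresses = rng.integers(0, 2, (n_locations, n_bits))  # random hard locations
    counters = np.zeros((n_locations, n_bits), dtype=int)  # content counters

    def nearby(address):
        """Hard locations within the Hamming radius of the address."""
        return np.sum(addresses != address, axis=1) <= radius

    def write(address, data):
        counters[nearby(address)] += 2 * data - 1   # +1 for a 1 bit, -1 for a 0 bit

    def read(address):
        return (counters[nearby(address)].sum(axis=0) > 0).astype(int)

    pattern = rng.integers(0, 2, n_bits)
    write(pattern, pattern)              # autoassociative write
    noisy = pattern.copy()
    noisy[:20] ^= 1                      # corrupt 20 bits of the address
    print("bits recovered:", np.sum(read(noisy) == pattern), "of", n_bits)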

Computers store information in 'dense' representations such as a 32-bit word, where all combinations of 1s and 0s are possible. By contrast, brains use sparse distributed representations (SDR).[21] The human neocortex has roughly 16 billion neurons, but at any given time only a small percentage are active.

Like the SDM developed by NASA in the 1980s[19] and the vector space models used in latent semantic analysis, HTM uses sparse distributed representations.[22] The SDRs used in HTM are binary representations of data consisting of many bits, with a small percentage of the bits active (1s).

A further advantage of SDRs is that, because the meaning of a representation is distributed across all active bits, the similarity between two representations can be used as a measure of the semantic similarity of the objects they represent.

The bits in SDRs have semantic meaning, and that meaning is distributed across the bits.[22] The semantic folding theory[23] builds on these SDR properties to propose a new model for language semantics, in which words are encoded into word-SDRs and the similarity between terms, sentences, and texts can be calculated with simple distance measures.
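A minimal sketch of that distance-based comparison, using hand-made hypothetical word-SDRs rather than anything produced by the actual semantic folding encoder:

    # Hand-made, hypothetical word-SDRs (sets of active bit positions out of
    # 2,048); the real semantic folding encoder derives these from text.
    def overlap(a, b):
        """Number of shared active bits: a simple semantic similarity score."""
        return len(a & b)

    cat = {3, 40, 101, 256, 700, 901, 1403, 1998}
    dog = {3, 40, 101, 256, 733, 901, 1490, 1998}   # shares animal/pet bits with cat
    car = {7, 88, 310, 512, 840, 1202, 1600, 1901}

    print(overlap(cat, dog))   # 6 -> semantically close
    print(overlap(cat, car))   # 0 -> unrelated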

A theory of hierarchical cortical computation based on Bayesian belief propagation was proposed earlier by Tai Sing Lee and David Mumford.[24] While HTM is mostly consistent with these ideas, it adds details about handling invariant representations in the visual cortex.[25] Like any system that models details of the neocortex, HTM can be viewed as an artificial neural network.

The goal of current HTMs is to capture as much of the functionality of neurons and the network (as they are currently understood) as possible within the capability of typical computers, and in areas that can be made readily useful, such as image processing.

For example, feedback from higher levels and motor control are not attempted because it is not yet understood how to incorporate them, and binary synapses are used instead of variable synaptic weights because they were determined to be sufficient for current HTM capabilities.

LAMINART and similar neural networks researched by Stephen Grossberg attempt to model both the infrastructure of the cortex and the behavior of neurons in a temporal framework to explain neurophysiological and psychophysical data.

Hierarchical Temporal Memory (HTM)

Hierarchical Temporal Memory (HTM) is a biologically-constrained theory of intelligence: a framework strictly based on neuroscience and on the physiology and interaction of pyramidal neurons in the neocortex of the mammalian brain.

Demo: C++ implementation of Numenta's HTM Cortical Learning Algorithm

This is a demo of my C++ implementation of Numenta's Hierarchical Temporal Memory (HTM) Cortical Learning Algorithm (CLA). The algorithm implementation ...

Introducing the Numenta Platform for Intelligent Computing (NuPIC)

From OSCON 2013. This new open source library is based on concepts first described in Jeff Hawkins' book On Intelligence and subsequently developed by ...

HTM Basics (was "CLA Basics")

This is a somewhat dated recording from a former Numenta employee, Rahul Agarwal. However, it has a lot of good information about HTM theory. Some of it is now ...

Numenta: Our Story

Numenta - Leading a New Era of Machine Intelligence. Numenta has developed a cohesive theory, core software technology, and numerous applications, all ...

Learning using a genetic algorithm on a neural network

For more details about the neural network and the programming, click here (in French). This is an ..

Sequence Memory

Jeff Hawkins Numenta OSCON 7/24/2013


Unsupervised Machine Learning - Hierarchical Clustering with Mean Shift Scikit-learn and Python

This machine learning tutorial covers unsupervised learning with Hierarchical clustering. This is clustering where we allow the machine to determine how many ...

Simple Temporal Memory, Demo 1

Here I explain and demo a project I have been working on over the summer. My GitHub project: ..