The brains of mammals are highly efficient learning machines, and many aspects of mammalian learning have yet to be incorporated into machine learning algorithms. For instance, vision is typically treated as a spatial problem in which a learning system is trained on labeled examples of object images, yet mammals learn from continuously flowing unlabeled data. It is also generally accepted that the visual cortex in mammals is organized as a hierarchy and that many aspects of visual perception can be modeled using Bayesian computations. This dissertation introduces algorithms and networks that combine hierarchical and temporal learning with Bayesian inference for pattern recognition. These algorithms and networks, collectively called Hierarchical Temporal Memory (HTM), can be used to learn hierarchical-temporal models of data. Temporal continuity is used to learn multiple levels of the hierarchy without supervision. When applied to a visual pattern recognition problem, the HTM algorithms exhibit invariant recognition, robustness to noise, and generalization. Inference in the hierarchy is performed using Bayesian belief propagation equations adapted to this problem setting. To understand the generalization properties of HTMs, a generative model for HTMs is developed. This model enables the generation of synthetic data from HTM networks, which are used to analyze and characterize learning and generalization in hierarchical-temporal systems. Two existing hierarchical pattern recognition models are mapped to HTMs to explain the source of generalization in those models. Finally, the HTM Bayesian belief propagation equations are used to suggest a mathematical model for cortical microcircuits. The microcircuit model is derived by combining known anatomical constraints with the computational specifications of HTM belief propagation. The proposed model has a laminar and columnar organization that matches many known anatomical features. The proposed circuits are then used to model two well-known physiological phenomena.
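To make the inference step concrete, the sketch below shows Bayesian belief propagation in a minimal two-level tree, in the spirit of the hierarchy described above: two child nodes each send a bottom-up message (a likelihood over their learned coincidence patterns) to a parent, which combines the messages with a prior over higher-level causes. All names, table sizes, and probabilities here are illustrative assumptions, not the dissertation's actual networks or learned parameters.

```python
import numpy as np

# Toy two-level hierarchy: two child nodes feed one parent node.
# Each child reports a likelihood over a small set of learned
# "coincidence patterns"; the parent fuses the children's bottom-up
# messages with a prior over higher-level causes (tree-structured
# belief propagation). Sizes and tables are made up for illustration.

rng = np.random.default_rng(0)

n_causes = 3    # hypotheses at the parent node
n_patterns = 4  # coincidence patterns per child node

# One conditional table per child: P(child pattern | parent cause),
# shape (n_causes, n_patterns), rows sum to 1.
cpt = [rng.dirichlet(np.ones(n_patterns), size=n_causes) for _ in range(2)]

def bottom_up_message(child_likelihood, table):
    """Message to the parent: marginalize over the child's patterns."""
    return table @ child_likelihood  # shape: (n_causes,)

def parent_belief(child_likelihoods, tables, prior):
    """Combine all child messages with the prior, then normalize."""
    belief = prior.copy()
    for lik, table in zip(child_likelihoods, tables):
        belief *= bottom_up_message(lik, table)
    return belief / belief.sum()

prior = np.ones(n_causes) / n_causes  # uniform prior over causes
child_liks = [np.array([0.7, 0.1, 0.1, 0.1]),
              np.array([0.1, 0.8, 0.05, 0.05])]
belief = parent_belief(child_liks, cpt, prior)
print(belief)  # posterior over the parent's causes; entries sum to 1
```

In a deeper hierarchy the same operation repeats level by level: each node's normalized belief becomes the likelihood message it passes upward, which is what lets temporal groups learned at lower levels support invariant recognition at the top.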