We have combined competitive and Hebbian learning in a neural network designed to learn and recall complex spatiotemporal sequences. In such sequences, a particular item may occur more than once, or a sequence may share states with another sequence. Processing these repeated/shared states is a difficult problem that arises frequently in robotics. The proposed model comprises two groups of synaptic weights: competitive interlayer connections, which encode the spatial features of the input sequence, and Hebbian intralayer connections, which encode its temporal features. Three additional mechanisms allow the network to handle shared states: context units, neurons excluded from learning, and redundancy in the encoding of sequence states. The network operates by determining the current and the next state of the learned sequences. The model is simulated on various sets of robot trajectories to evaluate its storage and retrieval abilities, the effects of sequence sampling, its robustness to noise, and its fault tolerance.
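The division of labor described above can be illustrated with a minimal sketch, not the authors' exact model: competitive interlayer weights act as winner-take-all prototypes that encode spatial states, while a Hebbian intralayer matrix records transitions between winning units, so recall follows the strongest learned transition from the current state. The class name, the prototype-based initialization, and the omission of the context-unit and redundancy mechanisms for shared states are all simplifying assumptions.

```python
import numpy as np

class SequenceNet:
    """Hypothetical sketch: competitive (spatial) prototypes plus a
    Hebbian (temporal) transition matrix. Context units and state
    redundancy from the full model are omitted for brevity."""

    def __init__(self, prototypes, lr=0.2):
        # Competitive interlayer weights: one prototype vector per unit.
        self.w = np.array(prototypes, dtype=float)
        # Hebbian intralayer weights: transition strengths between units.
        self.t = np.zeros((len(self.w), len(self.w)))
        self.lr = lr

    def winner(self, x):
        # Winner-take-all: the unit whose prototype is closest to the input.
        return int(np.argmin(np.linalg.norm(self.w - np.asarray(x), axis=1)))

    def learn(self, sequence):
        prev = None
        for x in sequence:
            k = self.winner(x)
            # Competitive update: move the winning prototype toward the input.
            self.w[k] += self.lr * (np.asarray(x) - self.w[k])
            # Hebbian update: strengthen the transition prev -> k.
            if prev is not None:
                self.t[prev, k] += 1.0
            prev = k

    def recall(self, x0, steps):
        # From the state matching x0, follow the strongest transitions.
        k = self.winner(x0)
        path = [k]
        for _ in range(steps):
            k = int(np.argmax(self.t[k]))
            path.append(k)
        return path

# Toy trajectory: four waypoints of a square path.
states = [[0, 0], [1, 0], [1, 1], [0, 1]]
net = SequenceNet(states)
net.learn(states)
print(net.recall([0.1, -0.1], 3))  # -> [0, 1, 2, 3]
```

Given a noisy cue near the first waypoint, recall reproduces the stored order of winning units, which mirrors the abstract's description of the network determining the current and the next state of a learned sequence.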