This letter studies the impact of iterative Hebbian learning algorithms on the underlying dynamics of recurrent neural networks. First, an iterative supervised learning algorithm is discussed. An essential improvement of this algorithm is that the attractor information items are indexed by external stimuli rather than only by initial conditions, as Hopfield originally proposed. Modifying the stimuli mainly results in a change of the entire internal dynamics, enlarging the set of attractors and potential memory bags. The impact of learning on the network's dynamics is the following: the more information is stored as limit-cycle attractors of the neural network, the more chaos prevails as the background dynamical regime of the network. In fact, the background chaos spreads widely and adopts a very unstructured shape, similar to white noise.

Next, we introduce a new form of supervised learning that is more plausible from a biological point of view: the network has to learn to react to an external stimulus by cycling through a sequence that is no longer specified a priori. Based on its spontaneous dynamics, the network decides "on its own" which dynamical patterns to associate with the stimuli. Compared with classical supervised learning, substantial improvements in storage capacity and computational cost are observed. Moreover, because this new form of supervised learning is more "respectful" of the network's intrinsic dynamics, it preserves much more structure in the resulting chaos: traces of the learned attractors remain observable in the chaotic regime. This complex but still highly informative regime is referred to as "frustrated chaos."
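The idea of indexing stored attractors by an external stimulus, rather than by the initial condition alone, can be illustrated with a minimal Hopfield-style sketch. Everything below (network size, stimulus strength, synchronous updates) is an illustrative assumption, not the letter's actual model: a weak stimulus term is simply added to the recurrent field during retrieval, biasing the dynamics toward the cued pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64  # number of binary (+/-1) neurons (illustrative size)
P = 3   # number of patterns stored via the classical Hebbian rule
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian (outer-product) storage, as in Hopfield's original proposal.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)

def recall(x, stimulus=None, steps=50):
    """Synchronous recurrent updates; an optional external stimulus
    biases the field so retrieval is cued by the stimulus, not only
    by the initial condition."""
    x = x.copy()
    for _ in range(steps):
        field = W @ x
        if stimulus is not None:
            field = field + stimulus  # stimulus-indexed retrieval
        x = np.where(field >= 0, 1, -1)
    return x

# Retrieval from a noisy version of pattern 0, helped by a weak stimulus cue.
noisy = patterns[0].copy()
flip = rng.choice(N, size=8, replace=False)
noisy[flip] *= -1
out = recall(noisy, stimulus=0.3 * patterns[0])
print("overlap with pattern 0:", np.mean(out == patterns[0]))
```

In this toy setting the stimulus acts as a persistent bias on the recurrent field, so different stimuli select different attractors from the same weight matrix, which is the qualitative mechanism the letter exploits.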