Learning and generating serially ordered sequences of actions is a core component of cognition both in organisms and in artificial cognitive systems. When these systems are embodied and situated in partially unknown environments, specific constraints arise for any neural mechanism of sequence generation. In particular, sequential action must resist fluctuating sensory information and be capable of generating sequences in which the individual actions may vary unpredictably in duration. We provide a solution to this problem within the framework of Dynamic Field Theory by proposing an architecture in which dynamic neural networks create stable states at each stage of a sequence. These neural attractors are destabilized in a cascade of bifurcations triggered by a neural representation of a condition of satisfaction for each action. We implement the architecture on a robotic vehicle in a color search task, demonstrating both sequence learning and sequence generation on the basis of low-level sensory information.
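The core mechanism of the abstract, stable neural attractors for each step of a sequence, destabilized in turn by a condition-of-satisfaction (CoS) signal, can be illustrated with a minimal numerical sketch. This is not the paper's implementation: the parameter values, the fixed CoS event times, and the one-node-per-step simplification are all illustrative assumptions.

```python
import numpy as np

def sigmoid(x, beta=4.0):
    """Soft threshold turning activation into neural output."""
    return 1.0 / (1.0 + np.exp(-beta * x))

# Illustrative parameters, not taken from the paper
N = 4        # number of ordinal nodes (sequence length)
h = -2.0     # resting level: below threshold, a node is silent
tau = 10.0   # time constant of the node dynamics
w_exc = 6.0  # self-excitation: makes the "on" state a stable attractor
w_inh = 8.0  # mutual inhibition: only one node can be active at a time

u = np.full(N, h)
u[0] = 1.0                     # initial boost starts the sequence
cos_times = {400, 900, 1700}   # hypothetical CoS events at unpredictable
                               # times (individual actions vary in duration)
order = []

for t in range(2500):
    s = sigmoid(u)
    # attractor dynamics: an active node sustains itself through
    # self-excitation and suppresses all competitors
    du = -u + h + w_exc * s - w_inh * (s.sum() - s)
    u += du / tau

    active = int(np.argmax(u))
    if sigmoid(u[active]) > 0.5 and (not order or order[-1] != active):
        order.append(active)

    if t in cos_times:
        # the CoS signal destabilizes the current attractor ...
        u[active] = h - 3.0
        # ... and a transient boost lets the successor node settle
        # into its own stable state (a bifurcation in the dynamics)
        if active + 1 < N:
            u[active + 1] = 1.0

print(order)  # → [0, 1, 2, 3]
```

Between CoS events the active node sits in a self-sustained attractor, so the sequence state is robust to fluctuating input for an arbitrary duration; progress happens only through the cascade of destabilizations, as in the architecture the abstract describes.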