A study of adaptive locomotive behaviors of a biped robot: patterns generation and classification
SAB'10 Proceedings of the 11th international conference on Simulation of adaptive behavior: from animals to animats
Neurobiological studies have shown that the Anterior Cingulate Cortex of the brain is primarily responsible for avoiding repeated mistakes. A vigilance threshold, which denotes the tolerance to risk, distinguishes a learning mechanism that takes risks from one that avoids them; this tolerance plays an important role in such learning. Results have shown differences in learning capacity between risk-taking and risk-averse behaviors. In this paper, we propose a learning mechanism that is able to learn from both negative and positive feedback. It is composed of two phases: an evaluation phase and a decision-making phase. In the evaluation phase, we use a Kohonen Self-Organizing Map to represent success and failure. Decision-making is based on an early warning mechanism that enables the robot to avoid repeating past mistakes. We present our approach with an implementation on a simulated planar biped robot, controlled by a reflexive low-level neural controller. The learning system adapts the dynamics and range of a hip sensor neuron of the controller so that the robot can walk on flat or sloped terrain. Results show that the success and failure maps learn better with a threshold that is more tolerant of risk, which makes the controller robust even in the presence of slope variations.
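The two-phase mechanism described in the abstract can be sketched in code. The following is a minimal illustration, not the paper's implementation: it assumes a one-dimensional Kohonen map trained on controller-parameter samples labeled as failures, and a decision rule that rejects a candidate parameter lying within a vigilance threshold of the failure map (the "early warning"). The class and function names, map size, and learning schedule are all hypothetical choices made for the sketch.

```python
import numpy as np

class KohonenMap1D:
    """Minimal 1-D Kohonen Self-Organizing Map (illustrative sketch)."""

    def __init__(self, n_units, dim, lr=0.5, sigma=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.uniform(0.0, 1.0, size=(n_units, dim))  # unit weights
        self.lr, self.sigma = lr, sigma

    def bmu(self, x):
        # Index of the best-matching unit (closest weight vector).
        return int(np.argmin(np.linalg.norm(self.w - x, axis=1)))

    def train(self, samples, epochs=50):
        for t in range(epochs):
            lr = self.lr * (1.0 - t / epochs)  # linearly decaying learning rate
            for x in samples:
                b = self.bmu(x)
                # Gaussian neighborhood centered on the BMU along the 1-D grid.
                d = np.arange(len(self.w)) - b
                h = np.exp(-(d ** 2) / (2.0 * self.sigma ** 2))
                self.w += lr * h[:, None] * (x - self.w)

    def distance(self, x):
        # Distance from x to the nearest unit of the map.
        return float(np.min(np.linalg.norm(self.w - x, axis=1)))

def early_warning(candidate, failure_map, vigilance):
    """Reject a candidate parameter if it falls within `vigilance`
    of a previously learned failure; a smaller vigilance is more
    risk-tolerant, a larger one more risk-averse (assumed convention)."""
    return failure_map.distance(candidate) < vigilance

# Usage sketch: learn a failure map from hip-parameter values that made
# the simulated robot fall (illustrative data), then screen a candidate
# before trying it on the robot.
failures = np.array([[0.10], [0.15], [0.20]])
fmap = KohonenMap1D(n_units=5, dim=1)
fmap.train(failures)
risky = early_warning(np.array([0.15]), fmap, vigilance=0.1)
```

In this sketch a second map trained on successes would complete the evaluation phase; the decision phase consults the failure map first, so past mistakes are vetoed before a new trial is attempted.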