This chapter presents a neurocontroller architecture for autonomous mobile robot navigation. Its main characteristic is that it is computationally inexpensive. It gives a learning robot the ability to autonomously categorize input data from the environment, to cope with the stability-plasticity dilemma, and to learn a state-to-action mapping that enables it to navigate a workspace while avoiding obstacles. The neurocontroller is composed of three main modules: an adaptive categorization module, implemented by an unsupervised learning neural architecture called FAST (Flexible Adaptable-Size Topology); a reinforcement learning module (SARSA); and a short-term memory, or planning, module intended to accelerate the learning of behaviors. We describe the use of the neurocontroller in three navigation tasks, each involving a different kind of sensor: 1) obstacle avoidance using infrared proximity sensors, 2) foraging using a color CCD camera, and 3) wall-following using a grey-level linear vision system.
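The reinforcement learning module named above is SARSA, the standard on-policy temporal-difference algorithm. A minimal tabular sketch is given below; the environment interface (`reset`/`step`), the epsilon-greedy policy, and all hyperparameter values are illustrative assumptions, not the chapter's actual setup, which pairs SARSA with FAST-generated state categories rather than a fixed state table.

```python
import random

def sarsa_train(env, n_states, n_actions, episodes=200,
                alpha=0.1, gamma=0.9, epsilon=0.1):
    """Tabular SARSA sketch.

    `env` is assumed to expose reset() -> state and
    step(action) -> (next_state, reward, done); this interface
    is an assumption for illustration.
    """
    Q = [[0.0] * n_actions for _ in range(n_states)]

    def policy(s):
        # Epsilon-greedy action selection over the current Q estimates.
        if random.random() < epsilon:
            return random.randrange(n_actions)
        return max(range(n_actions), key=lambda a: Q[s][a])

    for _ in range(episodes):
        s = env.reset()
        a = policy(s)
        done = False
        while not done:
            s2, r, done = env.step(a)
            a2 = policy(s2)
            # On-policy update: the target uses the action actually
            # chosen in the next state, not the greedy maximum (Q-learning).
            target = r + (0.0 if done else gamma * Q[s2][a2])
            Q[s][a] += alpha * (target - Q[s][a])
            s, a = s2, a2
    return Q
```

The on-policy character of SARSA (bootstrapping from the action the robot will actually take) tends to yield more cautious behavior during exploration than Q-learning, which matters when the learner is a physical robot near obstacles.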