Learning from Innate Behaviors: A Quantitative Evaluation of Neural Network Controllers
Machine Learning - Special issue on learning in autonomous robots
By beginning with simple reactive behaviors and gradually building up to more memory-dependent behaviors, it may be possible for connectionist systems to eventually reach the level of planning. This paper focuses on an intermediate step in this incremental process: exploring the appropriate means of providing guidance to adapting controllers. Two methods of reinforcement learning, one local and one global, are contrasted: a special form of back-propagation and an evolutionary algorithm. These methods are applied to a neural network controller for a simple robot. A number of experiments are described in which the presence of explicit goals and the immediacy of reinforcement are varied, revealing how different types of guidance affect the final control behavior. The results show that the respective advantages and disadvantages of the two adaptation methods are complementary, suggesting that a hybrid of the two may be the most effective approach. Concluding remarks discuss the next incremental steps toward more complex control behaviors.
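The local/global contrast in the abstract can be illustrated with a minimal sketch. The toy task, the two-weight controller, and both search loops below are illustrative assumptions, not the paper's actual methods: the "local" method uses finite-difference gradient steps as a stand-in for reinforcement back-propagation, and the "global" method is a simple mutation-plus-selection evolutionary algorithm.

```python
import random

random.seed(0)

# Toy stand-in for a robot task: the controller is just a pair of weights,
# and reward is higher the closer the weights are to a hypothetical ideal
# policy. Real settings would evaluate reward by running the controller.
TARGET = [0.8, -0.3]  # hypothetical "ideal" weights

def reward(w):
    # Negative squared distance to the target: a stand-in for the
    # reinforcement accumulated by the robot in its environment.
    return -sum((wi - ti) ** 2 for wi, ti in zip(w, TARGET))

def local_search(steps=200, lr=0.5, eps=1e-3):
    """Local adaptation: follow a finite-difference estimate of the
    reward gradient from a single starting controller."""
    w = [0.0, 0.0]
    for _ in range(steps):
        grad = []
        for i in range(len(w)):
            wp = list(w)
            wp[i] += eps
            grad.append((reward(wp) - reward(w)) / eps)
        w = [wi + lr * g for wi, g in zip(w, grad)]
    return w

def evolutionary_search(generations=100, pop_size=20, sigma=0.1):
    """Global adaptation: keep a population of controllers, select the
    fittest, and produce the next generation by mutation."""
    pop = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=reward, reverse=True)
        parents = pop[: pop_size // 4]
        pop = [
            [wi + random.gauss(0, sigma) for wi in random.choice(parents)]
            for _ in range(pop_size)
        ]
        pop[: len(parents)] = parents  # elitism: carry the best forward
    return max(pop, key=reward)

w_local = local_search()
w_evo = evolutionary_search()
print("local search:", [round(x, 2) for x in w_local])
print("evolutionary search:", [round(x, 2) for x in w_evo])
```

On this smooth toy reward both methods converge, which is exactly where their trade-off disappears; the contrast the paper studies arises when reinforcement is delayed or goals are implicit, where gradient-following can stall but population search keeps exploring.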