The development of robots that learn from experience remains a persistent challenge for artificial intelligence. This paper describes a robot learning method that enables a mobile robot to simultaneously acquire the abilities to avoid objects, follow walls, seek goals, and control its velocity through interaction with the environment, without human assistance. The robot acquires these behaviors by learning how fast it should move along predefined trajectories with respect to the current state of the input vector. This enables the robot to perform object avoidance, wall following, and goal seeking by choosing to follow fast trajectories near the forward direction, the closest object, or the goal location, respectively. Trajectory velocities can be learned relatively quickly because the required knowledge is obtained directly from the robot's interactions with the environment, without incurring the credit assignment problem. We provide experimental results verifying the method on a mobile robot that simultaneously acquires all three behaviors.
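The scheme described above can be sketched in code. The following is a minimal, hypothetical illustration only, assuming a discrete set of predefined trajectories identified by heading angle and a learned table mapping (sensor state, trajectory) to a safe velocity; the class, method names, and parameters are assumptions for illustration, not the paper's actual implementation.

```python
import math

class TrajectoryVelocityLearner:
    """Hypothetical sketch: learn per-trajectory velocities, then select a
    behavior by favoring fast trajectories near a reference direction."""

    def __init__(self, headings):
        self.headings = headings   # candidate trajectory headings (radians)
        self.velocity = {}         # (state, heading) -> learned safe velocity

    def learned_velocity(self, state, heading):
        # Default to a slow, cautious velocity for unseen (state, trajectory) pairs.
        return self.velocity.get((state, heading), 0.1)

    def update(self, state, heading, observed_safe_velocity):
        # Learning is direct: the safe velocity achieved along a trajectory is
        # observed immediately, so no temporal credit assignment is required.
        self.velocity[(state, heading)] = observed_safe_velocity

    def choose_trajectory(self, state, target_heading, bias=1.0):
        # Score each trajectory by its learned velocity, discounted by angular
        # distance from the behavior's reference direction: forward (object
        # avoidance), the closest object (wall following), or the goal bearing
        # (goal seeking).
        def score(h):
            angular_error = abs(math.atan2(math.sin(h - target_heading),
                                           math.cos(h - target_heading)))
            return self.learned_velocity(state, h) - bias * angular_error
        return max(self.headings, key=score)
```

For example, if the forward trajectory has been learned to be fast in a given state, `choose_trajectory(state, 0.0)` selects it; the `bias` weight (an assumed parameter) trades off trajectory speed against closeness to the reference direction.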