This work is concerned with practical issues surrounding the application of reinforcement learning to a mobile robot. The robot's task is to navigate in a controlled environment and to collect objects using its gripper. Our aim is to build a control system that enables the robot to learn incrementally and to adapt to changes in the environment. The former is known as multi-task learning; the latter is usually referred to as continual, or 'lifelong', learning. First, we emphasize the connection between adaptive state-space quantisation and continual learning. Second, we describe a novel method for multi-task learning in reinforcement environments. This method is based on constructive neural networks and uses instance-based learning and dynamic programming to compute a task-dependent, agent-internal state space. Third, we describe how the learning system is integrated with the control architecture of the robot. Finally, we investigate the capabilities of the learning algorithm with respect to the transfer of information between related reinforcement learning tasks, such as navigation tasks in different environments. It is hoped that this method will speed up reinforcement learning and enable an autonomous robot to adapt its behaviour as the environment changes.
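To make the combination of instance-based learning and dynamic programming concrete, the following is a minimal illustrative sketch, not the paper's implementation: a nearest-neighbour quantiser grows a set of prototype states from the observations it encounters (instance-based learning), and tabular value iteration (dynamic programming) is then run over transition counts collected in that discrete state space. The class names, the one-dimensional corridor task, and all parameters are invented for illustration only.

```python
import numpy as np

class NearestNeighbourQuantiser:
    """Instance-based state space: each sufficiently novel observation
    becomes a new prototype; familiar observations map to the nearest
    existing prototype."""
    def __init__(self, radius):
        self.radius = radius
        self.prototypes = []

    def state(self, obs):
        obs = np.asarray(obs, dtype=float)
        if self.prototypes:
            dists = [float(np.linalg.norm(obs - p)) for p in self.prototypes]
            best = int(np.argmin(dists))
            if dists[best] <= self.radius:
                return best
        self.prototypes.append(obs)          # grow the state space
        return len(self.prototypes) - 1

def value_iteration(P, R, n_states, gamma=0.95, sweeps=200):
    """Dynamic programming over counted transitions P[s][a][s'] = count,
    with R[s][a] the last reward observed for taking a in s."""
    V = np.zeros(n_states)
    for _ in range(sweeps):
        for s in range(n_states):
            q = []
            for a, succ in P.get(s, {}).items():
                total = sum(succ.values())
                q.append(sum(c / total * (R[s][a] + gamma * V[s2])
                             for s2, c in succ.items()))
            if q:
                V[s] = max(q)
    return V

# Toy 1-D corridor: the single action steps the robot 0.1 to the right;
# reward 1 on reaching the right end, after which the episode restarts.
quant = NearestNeighbourQuantiser(radius=0.05)
P, R, x = {}, {}, 0.0
for _ in range(100):
    s = quant.state([x])
    x2 = x + 0.1
    r = 1.0 if x2 >= 0.85 else 0.0
    s2 = quant.state([x2])
    P.setdefault(s, {}).setdefault(0, {})
    P[s][0][s2] = P[s][0].get(s2, 0) + 1
    R.setdefault(s, {})[0] = r
    x = 0.0 if r else x2

V = value_iteration(P, R, len(quant.prototypes))
```

After the run, the learned values decay with distance from the goal, so states nearer the right end of the corridor are valued more highly; crucially, the discrete state space was never fixed in advance but was grown from the robot's own experience, which is what ties adaptive quantisation to continual learning.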