Self-improving reactive agents: case studies of reinforcement learning frameworks
In: Proceedings of the First International Conference on Simulation of Adaptive Behavior (From Animals to Animats)
Programming robots is a tedious task, so there is growing interest in building robots that can learn by themselves. Self-improvement through trial and error, however, is often slow and can be hazardous in a hostile environment. By teaching robots how tasks can be achieved, learning time can be shortened and hazards minimized. This paper presents a general approach to building robots that improve their performance both from their own experience and from being taught. Based on this approach, together with other techniques for speeding up learning, a simulated robot learned three moderately complex behaviors, which were then integrated in a subsumption style so that the robot could navigate and recharge itself. Notably, a real robot was able to use what was learned in the simulator to operate quite successfully in the real world.
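The abstract's central idea, combining trial-and-error reinforcement learning with teaching, can be illustrated in miniature. The sketch below is not the paper's implementation; it is an assumed toy setup: a one-dimensional corridor world, tabular Q-learning, and a hypothetical "teacher" trajectory that is replayed several times before the agent refines the policy by its own epsilon-greedy exploration. All names, parameters, and the environment itself are illustrative assumptions.

```python
import random

# Toy corridor world (an assumption, not from the paper): states 0..5,
# goal at state 5. Actions: 0 = move left, 1 = move right.
N_STATES, GOAL = 6, 5
ACTIONS = (0, 1)

def step(state, action):
    """Deterministic transition; reward 1.0 only on reaching the goal."""
    nxt = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

def q_update(Q, s, a, r, s2, done, alpha=0.5, gamma=0.9):
    """One-step tabular Q-learning backup."""
    target = r if done else r + gamma * max(Q[s2])
    Q[s][a] += alpha * (target - Q[s][a])

Q = [[0.0, 0.0] for _ in range(N_STATES)]

# "Teaching": record a teacher-provided trajectory (always move right)
# and replay it several times so the goal reward propagates backward.
teacher, s = [], 0
while True:
    s2, r, done = step(s, 1)
    teacher.append((s, 1, r, s2, done))
    s = s2
    if done:
        break

for _ in range(20):  # replayed lessons shorten learning time
    for (s, a, r, s2, done) in reversed(teacher):
        q_update(Q, s, a, r, s2, done)

# "Self-improvement": epsilon-greedy trial and error on top of the
# taught policy, refining the same Q-table.
random.seed(0)
for _ in range(50):
    s = 0
    for _ in range(20):
        if random.random() < 0.1:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[s][x])
        s2, r, done = step(s, a)
        q_update(Q, s, a, r, s2, done)
        s = s2
        if done:
            break

greedy = [max(ACTIONS, key=lambda a: Q[s][a]) for s in range(GOAL)]
print(greedy)  # greedy policy after teaching plus trial and error
```

The design point the sketch tries to make concrete is the one in the abstract: replaying taught experiences seeds the value table before exploration begins, so the agent never has to stumble onto the goal purely by chance.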