Very few learning systems applied to problem solving have focused on learning operator definitions from interaction with a completely unknown environment. To achieve better learning convergence, several agents that learn separately are allowed to exchange their learned sets of planning operators. Learning proceeds by building plans, executing those plans in the environment, analyzing the results of the execution, and combining the new evidence with prior evidence. Operators are generated incrementally by combining rote learning, induction, and a variant of reinforcement learning. The results show that allowing communication among the individual learning (and planning) agents yields a much higher percentage of successful plans and a better convergence rate than the individual agents achieve alone.