Most work on navigation for mobile robots does not take existing solutions to similar problems into account when learning a policy for a new problem, and consequently solves each navigation problem from scratch. In this article we investigate a knowledge transfer technique that enables the reuse of a previously learned policy from one or more related source tasks in a new task. We represent the learned knowledge as a stochastic abstract policy, which can be induced from a training set of navigation examples: state-action sequences executed successfully by a robot to achieve a specific goal in a given environment. We propose both a probabilistic and a nondeterministic abstract policy, in order to preserve the occurrence of all actions identified in the inductive process. Experiments carried out attest to the effectiveness and efficiency of our proposal.
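To make the induction step concrete, the sketch below derives both policy variants from a handful of example trajectories. This is a minimal illustration, not the authors' implementation: the abstract-state descriptions, function names, and toy data are assumptions made for this example.

```python
# Minimal sketch of inducing probabilistic and nondeterministic abstract
# policies from successful state-action trajectories. All names and data
# here are illustrative assumptions, not the paper's actual API.
from collections import Counter, defaultdict

def induce_probabilistic_policy(trajectories):
    """Map each abstract state to a distribution over the actions observed there."""
    counts = defaultdict(Counter)
    for trajectory in trajectories:                # one successful episode
        for abstract_state, action in trajectory:  # (state, action) pairs
            counts[abstract_state][action] += 1
    policy = {}
    for state, action_counts in counts.items():
        total = sum(action_counts.values())
        # Relative frequency of each action; every action observed in the
        # examples keeps nonzero probability, preserving its occurrence.
        policy[state] = {a: c / total for a, c in action_counts.items()}
    return policy

def induce_nondeterministic_policy(trajectories):
    """Map each abstract state to the *set* of actions observed there."""
    policy = defaultdict(set)
    for trajectory in trajectories:
        for abstract_state, action in trajectory:
            policy[abstract_state].add(action)
    return dict(policy)

# Toy example: abstract states are relational descriptions of the robot's
# situation; actions are navigation primitives.
examples = [
    [("near(door)", "enter"), ("in(corridor)", "forward")],
    [("near(door)", "enter"), ("in(corridor)", "turn_left")],
    [("near(door)", "wait"),  ("in(corridor)", "forward")],
]
print(induce_probabilistic_policy(examples))
# e.g. {'near(door)': {'enter': 0.67, 'wait': 0.33}, 'in(corridor)': {...}}
print(induce_nondeterministic_policy(examples))
# e.g. {'near(door)': {'enter', 'wait'}, 'in(corridor)': {'forward', 'turn_left'}}
```

The probabilistic variant weights actions by how often they appeared in the source-task examples, while the nondeterministic variant merely records which actions are admissible in each abstract state; either can then guide exploration in a new, related navigation task.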