In this article, we investigate the role of abstraction principles in knowledge transfer for agent control learning tasks. We analyze abstraction from a formal point of view and characterize three distinct facets: aspectualization, coarsening, and conceptual classification. The taxonomy we develop allows us to interrelate existing approaches to abstraction, leading to a code of practice for designing knowledge representations that support knowledge transfer. We detail how aspectualization can be utilized to achieve knowledge transfer in reinforcement learning. We propose the use of so-called structure space aspectualizable knowledge representations, which explicate structural properties of the state space, and present a posteriori structure space aspectualization (APSST) as a method to extract generally sensible behavior from a learned policy. The resulting policy can be used for knowledge transfer to support learning new tasks in different environments. Finally, we present a case study demonstrating the transfer of generally sensible navigation skills from a simple simulation to a real-world robotic platform.
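To give a flavor of the idea, the following is a minimal sketch (not the authors' implementation) of aspectualization in tabular Q-learning: the full state is projected onto a structural aspect, so values learned in one environment apply to any state with the same structural features. The state layout `(x, y, local_walls)` and the feature names are hypothetical, chosen only for illustration.

```python
# Illustrative sketch: aspectualization as a projection of the full state
# onto a structural subspace, so a tabular policy learned in one grid
# world can be reused in another. Not the article's actual method.
from collections import defaultdict

def aspectualize(state):
    """Project a full state (x, y, local_walls) onto its structural
    aspect: here, only the local wall configuration is kept."""
    x, y, local_walls = state  # hypothetical state layout
    return local_walls

# Q-table keyed by the aspectualized (structure-space) state.
Q = defaultdict(float)

def update(state, action, reward, next_state,
           alpha=0.1, gamma=0.9, actions=("N", "E", "S", "W")):
    """One Q-learning backup performed in structure space."""
    s, s2 = aspectualize(state), aspectualize(next_state)
    best_next = max(Q[(s2, a)] for a in actions)
    Q[(s, action)] += alpha * (reward + gamma * best_next - Q[(s, action)])

# Because the key ignores absolute coordinates, two states sharing the
# same wall configuration share one Q-value, so the learned preference
# carries over to any environment exhibiting that structural situation.
update((2, 3, "wall_north"), "E", 1.0, (3, 3, "wall_north"))
update((7, 1, "wall_north"), "E", 1.0, (8, 1, "wall_north"))
```

Note that both updates, despite coming from different coordinates, train the single entry `Q[("wall_north", "E")]`; this coordinate-independence is what makes the structure-space policy transferable.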