Affordances, motivations, and the world graph theory
Adaptive Behavior - Special issue on biologically inspired models of navigation
Reinforcement learning with hierarchies of machines
NIPS '97 Proceedings of the 1997 conference on Advances in neural information processing systems 10
Between MDPs and semi-MDPs: a framework for temporal abstraction in reinforcement learning
Artificial Intelligence
Introduction to Reinforcement Learning
Recent Advances in Hierarchical Reinforcement Learning
Discrete Event Dynamic Systems
PolicyBlocks: An Algorithm for Creating Useful Macro-Actions in Reinforcement Learning
ICML '02 Proceedings of the Nineteenth International Conference on Machine Learning
Discovering Hierarchy in Reinforcement Learning with HEXQ
ICML '02 Proceedings of the Nineteenth International Conference on Machine Learning
Intra-Option Learning about Temporally Abstract Actions
ICML '98 Proceedings of the Fifteenth International Conference on Machine Learning
Eligibility Traces for Off-Policy Policy Evaluation
ICML '00 Proceedings of the Seventeenth International Conference on Machine Learning
Reusing Old Policies to Accelerate Learning on New MDPs
Using Options for Knowledge Transfer in Reinforcement Learning
Identifying useful subgoals in reinforcement learning by local graph partitioning
ICML '05 Proceedings of the 22nd international conference on Machine learning
Autonomous shaping: knowledge transfer in reinforcement learning
ICML '06 Proceedings of the 23rd international conference on Machine learning
Hierarchical reinforcement learning with the MAXQ value function decomposition
Journal of Artificial Intelligence Research
Transfer of samples in batch reinforcement learning
Proceedings of the 25th international conference on Machine learning
Transfer of task representation in reinforcement learning using policy-based proto-value functions
Proceedings of the 7th international joint conference on Autonomous agents and multiagent systems - Volume 3
Spatial Abstraction: Aspectualization, Coarsening, and Conceptual Classification
Proceedings of the international conference on Spatial Cognition VI: Learning, Reasoning, and Talking about Space
Representing and Selecting Landmarks in Autonomous Learning of Robot Navigation
ICIRA '08 Proceedings of the First International Conference on Intelligent Robotics and Applications: Part I
Transfer via soft homomorphisms
Proceedings of The 8th International Conference on Autonomous Agents and Multiagent Systems - Volume 2
Improving Batch Reinforcement Learning Performance through Transfer of Samples
STAIRS 2008: Proceedings of the Fourth Starting AI Researchers' Symposium
Autonomous robot skill acquisition
AAAI'08 Proceedings of the 23rd national conference on Artificial intelligence - Volume 3
Learning to generalize and reuse skills using approximate partial policy homomorphisms
SMC'09 Proceedings of the 2009 IEEE international conference on Systems, Man and Cybernetics
A reinforcement learning model using macro-actions in multi-task grid-world problems
SMC'09 Proceedings of the 2009 IEEE international conference on Systems, Man and Cybernetics
Transfer Learning for Reinforcement Learning Domains: A Survey
The Journal of Machine Learning Research
Skill combination for reinforcement learning
IDEAL'07 Proceedings of the 8th international conference on Intelligent data engineering and automated learning
Generalization and transfer learning in noise-affected robot navigation tasks
EPIA'07 Proceedings of the 13th Portuguese conference on Progress in Artificial Intelligence
Learning relational options for inductive transfer in relational reinforcement learning
ILP'07 Proceedings of the 17th international conference on Inductive logic programming
Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems - Volume 1
Structural knowledge transfer by spatial abstraction for reinforcement learning agents
Adaptive Behavior - Animals, Animats, Software Agents, Robots, Adaptive Systems
Review: learning like a baby: A survey of artificial intelligence approaches
The Knowledge Engineering Review
Robot learning from demonstration by constructing skill trees
International Journal of Robotics Research
Abstraction and generalization in reinforcement learning: a summary and framework
ALA'09 Proceedings of the Second international conference on Adaptive and Learning Agents
Beyond reward: the problem of knowledge and data
ILP'11 Proceedings of the 21st international conference on Inductive Logic Programming
Learning exploration strategies in model-based reinforcement learning
Proceedings of the 2013 international conference on Autonomous agents and multi-agent systems
Proceedings of the 2nd Workshop on Machine Learning for Interactive Systems: Bridging the Gap Between Perception, Action and Communication
Strategic cognitive sequencing: a computational cognitive neuroscience approach
Computational Intelligence and Neuroscience - Special issue on Neurocognitive Models of Sense Making
The options framework provides methods for reinforcement learning agents to build new high-level skills. However, because options are usually learned in the same state space as the problem the agent is solving, they cannot be reused in similar tasks that have different state spaces. We introduce the notion of learning options in agent-space (the space generated by a feature set that is present, and retains the same semantics, across successive problem instances) rather than in problem-space. Agent-space options can be reused in later tasks that share the same agent-space but have different problem-spaces. We present experimental results demonstrating the use of agent-space options in building transferable skills, and show that they perform best when used in conjunction with problem-space options.
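The core idea in the abstract can be sketched in code. The following is a minimal illustration, not the authors' implementation: all names (`AgentSpaceOption`, `run_option`, `phi`, the beacon tasks) are hypothetical. It shows how an option whose initiation set, policy, and termination condition are defined over agent-space features can be executed unchanged in two tasks whose problem-space state representations differ, provided each task supplies its own mapping from problem-space states to agent-space observations.

```python
# Hypothetical sketch of an agent-space option: the option's components
# consume agent-space observations, never raw problem-space states.
from dataclasses import dataclass
from typing import Callable, List, Tuple

AgentObs = Tuple[float, ...]


@dataclass
class AgentSpaceOption:
    """An option (initiation set, policy, termination) over agent-space."""
    initiate: Callable[[AgentObs], bool]
    policy: Callable[[AgentObs], int]
    terminate: Callable[[AgentObs], bool]


def run_option(option: AgentSpaceOption, state, phi, step) -> List:
    """Execute `option` in a task whose problem-space states are mapped
    into agent-space by the task-specific sensor map `phi`."""
    trajectory = [state]
    obs = phi(state)
    if not option.initiate(obs):
        return trajectory
    while not option.terminate(obs):
        action = option.policy(obs)
        state = step(state, action)   # problem-space transition
        trajectory.append(state)
        obs = phi(state)
    return trajectory


# Toy demonstration: a "walk toward the beacon" option reused in two 1-D
# tasks with different problem-space layouts. Agent-space here is the
# signed offset to the beacon, which keeps its semantics in both tasks.
walk_to_beacon = AgentSpaceOption(
    initiate=lambda o: o[0] != 0,
    policy=lambda o: +1 if o[0] > 0 else -1,  # move to reduce the offset
    terminate=lambda o: o[0] == 0,
)


def make_task(beacon: int):
    phi = lambda s: (beacon - s,)   # agent-space: offset to the beacon
    step = lambda s, a: s + a       # problem-space dynamics
    return phi, step


phi_a, step_a = make_task(beacon=3)
phi_b, step_b = make_task(beacon=-2)  # different problem-space layout

print(run_option(walk_to_beacon, 0, phi_a, step_a)[-1])  # prints 3
print(run_option(walk_to_beacon, 0, phi_b, step_b)[-1])  # prints -2
```

The same `walk_to_beacon` object is reused across both tasks; only the task-specific sensor map `phi` changes, which is the sense in which agent-space options are portable across problem-spaces.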