Reinforcement learning is a popular and successful framework for many agent-related problems because only limited environmental feedback is necessary for learning. While many algorithms exist for learning effective policies in such problems, reinforcement learning is often applied to real-world problems, which typically have large state spaces and therefore suffer from the "curse of dimensionality." One effective method for speeding up reinforcement learning algorithms is to leverage expert knowledge. In this paper, we propose a method for dynamically augmenting the agent's feature set in order to speed up value-function-based reinforcement learning. A domain expert divides the feature set into a series of subsets such that a novel problem concept can be learned from each successive subset; domain knowledge is also used to order the feature subsets by their importance for learning. Our algorithm uses the ordered feature subsets to learn tasks significantly faster than if the entire feature set were used from the start. Incremental Feature-Set Augmentation (IFSA) is fully implemented and tested in three different domains: Gridworld, Blackjack, and RoboCup Soccer Keepaway. All experiments show that IFSA can significantly speed up learning, motivating the applicability of this novel RL method.
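The idea described above can be sketched in code: a value-function learner starts with only the most important, expert-ordered feature subset and later "augments" its representation with the next subset, keeping the weights learned so far. This is a minimal illustrative sketch, not the paper's exact algorithm; the class name `IFSALearner`, the linear Q-function, the zero initialization of new weights, and the augmentation trigger are all assumptions made for the example.

```python
import random


class IFSALearner:
    """Sketch of Incremental Feature-Set Augmentation (IFSA) wrapped
    around a linear Q-learner. Illustrative only: the real method's
    update rule and augmentation schedule may differ."""

    def __init__(self, feature_subsets, n_actions,
                 alpha=0.1, gamma=0.95, epsilon=0.1):
        # feature_subsets: groups of feature indices, ordered by
        # expert-judged importance (most important first).
        self.subsets = feature_subsets
        self.n_actions = n_actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.active = list(feature_subsets[0])  # start with first subset
        self.next_subset = 1
        # one weight per (active feature, action); grows on augmentation
        self.w = [[0.0] * len(self.active) for _ in range(n_actions)]

    def q(self, features, a):
        # linear value estimate over the currently active features only
        return sum(self.w[a][i] * features[f]
                   for i, f in enumerate(self.active))

    def act(self, features):
        # epsilon-greedy action selection
        if random.random() < self.epsilon:
            return random.randrange(self.n_actions)
        return max(range(self.n_actions),
                   key=lambda a: self.q(features, a))

    def update(self, features, a, reward, next_features, done):
        # one-step TD (Q-learning style) update on the active weights
        target = reward if done else reward + self.gamma * max(
            self.q(next_features, b) for b in range(self.n_actions))
        delta = target - self.q(features, a)
        for i, f in enumerate(self.active):
            self.w[a][i] += self.alpha * delta * features[f]

    def augment(self):
        # reveal the next expert-ordered feature subset; existing
        # weights are kept so earlier learning transfers, and weights
        # for the new features start at zero
        if self.next_subset < len(self.subsets):
            new = list(self.subsets[self.next_subset])
            self.active += new
            for a in range(self.n_actions):
                self.w[a] += [0.0] * len(new)
            self.next_subset += 1
```

When to call `augment()` is itself a design choice (e.g., after a fixed number of episodes, or once performance on the current feature subset plateaus); the sketch leaves that policy to the caller.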