Macro-operators: a weak method for learning. Artificial Intelligence; Lecture Notes in Computer Science 178.
Machine Learning, special issue on case-based reasoning.
Between MDPs and semi-MDPs: a framework for temporal abstraction in reinforcement learning. Artificial Intelligence.
A Heuristic Approach to the Discovery of Macro-Operators. Machine Learning.
Temporal abstraction in reinforcement learning.
Autonomous discovery of temporal abstractions from interaction with an environment.
Learning and applying temporal patterns through experience.
A formal framework for speedup learning from problems and solutions. Journal of Artificial Intelligence Research.
A selective macro-learning algorithm and its application to the N × N sliding-tile puzzle. Journal of Artificial Intelligence Research.
Player Co-Modelling in a Strategy Board Game: Discovering How to Play Fast. Cybernetics and Systems.
Proceedings of the 2008 Eighth Joint Conference on Knowledge-Based Software Engineering.
Learning reusable action sequences can support the development of expertise in many domains, either by improving the quality of decisions or by speeding up their execution. This paper introduces and evaluates a method that learns action sequences over generalized states from prior problem-solving experience. From the sequences it observes, the method induces the context that underlies each sequence of actions. Empirical results indicate that the sequences and contexts learned for a class of problems match those that experts deem important for that class, and that they can be used to select appropriate action sequences when solving new problems of the same class.
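The two steps the abstract describes — collecting action sequences from prior experience and inducing the context that underlies each — can be sketched as below. Every detail here is an illustrative assumption rather than the paper's actual method: states are represented as sets of ground facts, recurring subsequences are taken as candidate macros, and a context is induced as the intersection of the states in which a sequence was executed.

```python
from collections import defaultdict

def extract_macros(traces, min_support=2, min_len=2, max_len=4):
    """Learn (action sequence, context) pairs from solution traces.

    Each trace is a list of (state, action) pairs; a state is a frozenset
    of facts. This representation is an assumption for the sketch, not
    taken from the paper.
    """
    occurrences = defaultdict(list)  # sequence -> states where it began
    for trace in traces:
        actions = [a for _, a in trace]
        for i in range(len(actions)):
            for j in range(i + min_len, min(i + max_len, len(actions)) + 1):
                occurrences[tuple(actions[i:j])].append(trace[i][0])
    macros = {}
    for seq, states in occurrences.items():
        if len(states) >= min_support:
            # Induce the context as the facts shared by every state in
            # which the sequence was executed (a crude generalization).
            macros[seq] = frozenset.intersection(*states)
    return macros

def applicable_macros(macros, state):
    """Select the learned sequences whose context holds in `state`."""
    return [seq for seq, ctx in macros.items() if ctx <= state]
```

A new problem state is then matched against the induced contexts: any macro whose context is a subset of the current state is a candidate sequence to apply, which mirrors the abstract's claim that contexts can be used to select appropriate action sequences.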