Probabilistic finite state machines have become a popular modeling tool for representing sequential processes, ranging from images and speech signals to text documents and spatial and genomic maps. In this paper, I describe two hierarchical abstraction mechanisms for simplifying the learning (estimation) and optimization (control) of complex Markov processes: spatial decomposition and temporal aggregation. I present several approaches to combining spatial and temporal abstraction, drawing upon recent work of my group as well as that of others. I show how spatiotemporal abstraction enables improved solutions to three difficult sequential estimation and decision problems: hidden state modeling and control, learning parallel plans, and coordinating with multiple agents.
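To make the idea of temporal aggregation concrete, here is a minimal illustrative sketch (not taken from the paper) of a temporally extended action in the style of an "option": a primitive policy paired with a termination set, executed in a toy one-dimensional corridor Markov process. The state space, the `run_option` helper, and the "go-right" policy are all hypothetical names chosen for this example.

```python
# A minimal sketch of temporal aggregation: a temporally extended
# action (an "option") is a primitive policy plus a termination set,
# executed in a toy 1-D corridor MDP with states 0..5.
N_STATES = 6

def step(state, action):
    """Primitive transition: action is -1 (left) or +1 (right)."""
    return min(max(state + action, 0), N_STATES - 1)

def run_option(state, policy, terminal_states):
    """Execute primitive actions chosen by `policy` until a state in
    `terminal_states` is reached; return (final_state, duration)."""
    duration = 0
    while state not in terminal_states:
        state = step(state, policy(state))
        duration += 1
    return state, duration

# Hypothetical "go right to the end" option: always move right,
# terminating at the last state.
go_right = lambda s: +1
final, k = run_option(0, go_right, terminal_states={N_STATES - 1})
print(final, k)  # the option abstracts 5 primitive steps into one decision
```

The point of the abstraction is that a planner or learner can treat the multi-step option as a single decision, reasoning over far fewer, coarser choices than the underlying primitive process requires.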