Recent decision-theoretic planning algorithms can find optimal solutions to large problems by using Factored Markov Decision Processes (FMDPs). However, these algorithms require perfect prior knowledge of the structure of the problem. In this paper, we propose SDYNA, a general framework for addressing large reinforcement learning problems by trial and error, with no initial knowledge of their structure. SDYNA integrates incremental planning algorithms based on FMDPs with supervised learning techniques that build structured representations of the problem. We describe SPITI, an instantiation of SDYNA that combines incremental decision tree induction, used to learn the structure of the problem, with an incremental version of the Structured Value Iteration algorithm. We show that SPITI can build a factored representation of a reinforcement learning problem and may improve its policy faster than tabular reinforcement learning algorithms, by exploiting the generalization property of decision tree induction algorithms.
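The structure-learning idea can be sketched in a few lines: from observed transitions, induce one decision tree per state variable that predicts its next value from the current state and action, so that irrelevant variables never appear in the tree and the model generalizes to unseen states. This is a minimal illustrative sketch, not SPITI's implementation: it uses batch ID3-style induction where SPITI uses incremental induction (ITI), and the toy dynamics (x0' = x0 OR a, with x1 and x2 irrelevant) are an assumption chosen for the example.

```python
import math
from itertools import product
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(samples, labels, f):
    """Information gain of splitting the samples on binary feature f."""
    gain = entropy(labels)
    for v in (0, 1):
        sub = [labels[i] for i, s in enumerate(samples) if s[f] == v]
        if sub:
            gain -= (len(sub) / len(labels)) * entropy(sub)
    return gain

def induce_tree(samples, labels, features):
    """Batch ID3-style induction; SPITI itself updates its trees
    incrementally (ITI-style) as each new transition is observed."""
    if len(set(labels)) <= 1 or not features:
        return Counter(labels).most_common(1)[0][0]   # leaf: majority value
    best = max(features, key=lambda f: info_gain(samples, labels, f))
    rest = [f for f in features if f != best]
    children = {}
    for v in (0, 1):
        idx = [i for i, s in enumerate(samples) if s[best] == v]
        children[v] = (induce_tree([samples[i] for i in idx],
                                   [labels[i] for i in idx], rest)
                       if idx else Counter(labels).most_common(1)[0][0])
    return (best, children)

def predict(tree, inp):
    """Walk the tree down to a leaf for one (state, action) input."""
    while isinstance(tree, tuple):
        f, children = tree
        tree = children[inp[f]]
    return tree

# Toy factored dynamics (an assumption for illustration): binary state
# variables x0, x1, x2 and a binary action a, with x0' = x0 OR a, so x1
# and x2 are irrelevant to x0'. Train on every input except one held out.
holdout = (1, 1, 1, 1)
samples = [list(c) for c in product((0, 1), repeat=4) if c != holdout]
labels = [s[0] | s[3] for s in samples]        # observed next value of x0

tree = induce_tree(samples, labels, [0, 1, 2, 3])
print(tree)                                    # splits only on x0 and a
print(predict(tree, list(holdout)))            # generalizes: prints 1
```

The induced tree tests only x0 and the action, never x1 or x2, which is why it predicts the held-out state correctly; this compactness is also what an incremental Structured Value Iteration step can exploit when planning over the learned trees.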