Model Minimization in Hierarchical Reinforcement Learning
Proceedings of the 5th International Symposium on Abstraction, Reformulation and Approximation
Current approaches to modelling and solving Markov Decision Processes (MDPs) scale poorly with the size of the MDP. Model minimization methods address this issue by exploiting redundancy in the problem specification to reduce the size of the MDP model. Symmetries in a problem specification can give rise to special forms of redundancy that existing minimization methods do not exploit. In this work we extend the model minimization framework proposed by Dean and Givan to include symmetries, basing our framework on concepts derived from finite state automata and group theory.
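The idea of collapsing symmetric states can be illustrated with a minimal sketch (a hypothetical example, not the paper's algorithm): a five-state corridor MDP that is invariant under the reflection s -> 4 - s combined with swapping the two actions. Grouping states into orbits of this symmetry yields a smaller, equivalent model. All names below are illustrative assumptions.

```python
# Hedged sketch: reducing a toy MDP by a known symmetry.
# A 5-state corridor with reward at both ends is symmetric under the
# state map s -> 4 - s paired with the action swap L <-> R.
from itertools import product

STATES = range(5)
ACTIONS = ["L", "R"]

def step(s, a):
    """Deterministic transition in the corridor."""
    return max(s - 1, 0) if a == "L" else min(s + 1, 4)

def reward(s, a):
    """Reward 1 for reaching either end of the corridor."""
    return 1.0 if step(s, a) in (0, 4) else 0.0

# The symmetry: a state map and a matching action map.
sigma_s = lambda s: 4 - s
sigma_a = lambda a: "R" if a == "L" else "L"

# Verify it is an automorphism: dynamics and reward commute with sigma.
for s, a in product(STATES, ACTIONS):
    assert step(sigma_s(s), sigma_a(a)) == sigma_s(step(s, a))
    assert reward(sigma_s(s), sigma_a(a)) == reward(s, a)

# Orbits of the symmetry become the states of the reduced MDP.
orbits = {frozenset({s, sigma_s(s)}) for s in STATES}
print(len(STATES), "states reduced to", len(orbits))  # prints "5 states reduced to 3"
```

Here the reduced model has three abstract states, {0,4}, {1,3}, and {2}; any value function or policy computed on the quotient lifts back to the original MDP because the symmetry preserves both transitions and rewards.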