In reinforcement learning, as the dimensionality of the state space increases, using state abstraction becomes essential. Among the methods proposed for this problem, decision-tree-based methods are appealing because they provide automatic state abstraction. However, existing methods use univariate, and therefore axis-aligned, splits at decision nodes, imposing a hyper-rectangular partitioning of the state space. In some applications, multivariate splits can produce smaller and more accurate trees. In this paper, we use oblique decision trees, an instance of multivariate trees, to implement state abstraction for reinforcement learning agents. Simulation results on the mountain car and puddle world tasks show significant improvements in the average received reward, the average number of steps needed to finish the task, and the size of the trees, in both the learning and test phases.
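To make the distinction concrete, the sketch below contrasts the two kinds of node tests discussed in the abstract: an axis-aligned (univariate) split that compares a single state feature with a threshold, and an oblique (multivariate) split that thresholds a linear combination of state features. This is a minimal illustration, not the paper's algorithm; the class names, weights, and thresholds are assumptions chosen only to show the shape of each test.

import numpy as np

class AxisAlignedNode:
    """Univariate test: compare one state feature with a threshold.
    Such splits partition the state space into hyper-rectangles."""
    def __init__(self, feature_index, threshold):
        self.feature_index = feature_index
        self.threshold = threshold

    def goes_left(self, state):
        return state[self.feature_index] <= self.threshold


class ObliqueNode:
    """Multivariate test: compare a linear combination of state features
    with a threshold, allowing splits at arbitrary angles to the axes."""
    def __init__(self, weights, bias):
        self.weights = np.asarray(weights, dtype=float)
        self.bias = float(bias)

    def goes_left(self, state):
        return float(np.dot(self.weights, state)) + self.bias <= 0.0


if __name__ == "__main__":
    # Hypothetical 2-D state (e.g., position and velocity in mountain car).
    state = np.array([-0.3, 0.02])

    axis_node = AxisAlignedNode(feature_index=0, threshold=0.0)
    oblique_node = ObliqueNode(weights=[1.0, 25.0], bias=0.1)

    print("axis-aligned split -> left branch:", axis_node.goes_left(state))
    print("oblique split      -> left branch:", oblique_node.goes_left(state))

In a tree built from oblique nodes, a single split can separate regions of the state space whose boundary is not parallel to any axis, which is why multivariate splits can yield smaller trees on tasks such as mountain car.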