TD-Gammon, a self-teaching backgammon program, achieves master-level play
Neural Computation
Introduction to Reinforcement Learning
Function approximation via tile coding: automating parameter choice
SARA'05 Proceedings of the 6th international conference on Abstraction, Reformulation and Approximation
In large and continuous state-action spaces, reinforcement learning relies heavily on function approximation. Tile coding is a well-known function approximator that has been successfully applied to many reinforcement learning tasks. In this paper we introduce hyperplane tile coding, in which the usual constant-valued tiles are replaced by parameterized hyperplanes that locally approximate the action-value function. We compared the performance of hyperplane tile coding against standard tile coding on three well-known benchmark problems. Our results suggest that hyperplane tiles improve the generalization capabilities of the tile coding approximator: with hyperplane tile coding, broad generalizations over the problem space cause only a gradual degradation of performance, whereas with standard tile coding they can degrade performance dramatically.
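The idea can be sketched in code. In standard tile coding each active tile contributes a single learned constant; in the variant described above, each tile instead stores a small local linear model (an intercept plus a slope per input dimension), so the prediction varies within the tile. The sketch below is a minimal one-dimensional illustration under assumptions of our own (class and parameter names, uniform tilings, LMS-style updates); it is not the authors' implementation.

```python
import numpy as np

class HyperplaneTileCoder:
    """Minimal 1-D sketch of tile coding with hyperplane (linear) tiles.

    Each tile holds [intercept, slope] rather than one constant weight,
    so its contribution is intercept + slope * x. All names and update
    rules here are illustrative assumptions, not taken from the paper.
    """

    def __init__(self, n_tilings=8, n_tiles=10, low=0.0, high=1.0, alpha=0.1):
        self.n_tilings = n_tilings
        self.n_tiles = n_tiles
        self.low = low
        self.width = (high - low) / n_tiles
        # per tiling, per tile: [intercept, slope]; +1 tile for edge overlap
        self.w = np.zeros((n_tilings, n_tiles + 1, 2))
        self.alpha = alpha / n_tilings  # spread the step size across tilings

    def _active(self, x):
        """Yield (tiling, tile index) pairs active for input x."""
        for t in range(self.n_tilings):
            # each tiling is shifted by a fraction of the tile width
            offset = t * self.width / self.n_tilings
            idx = int((x - self.low + offset) / self.width)
            yield t, min(max(idx, 0), self.n_tiles)

    def predict(self, x):
        # sum of the local linear models of all active tiles
        return sum(self.w[t, i, 0] + self.w[t, i, 1] * x
                   for t, i in self._active(x))

    def update(self, x, target):
        # LMS update: gradient of the squared error w.r.t. [intercept, slope]
        err = target - self.predict(x)
        for t, i in self._active(x):
            self.w[t, i, 0] += self.alpha * err        # intercept term
            self.w[t, i, 1] += self.alpha * err * x    # slope term
```

For example, training the coder on noisy samples of a smooth target such as sin(2πx) yields a piecewise-linear fit whose segments follow the local slope of the function, which is the kind of within-tile generalization that constant tiles cannot express.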