Temporal difference (TD) methods constitute a class of methods for learning predictions in multi-step prediction problems, parameterized by a recency factor λ. Currently the most important application of these methods is temporal credit assignment in reinforcement learning. Well-known reinforcement learning algorithms, such as AHC or Q-learning, may be viewed as instances of TD learning. This paper examines the issues of efficient and general implementation of TD(λ) for arbitrary λ, for use with reinforcement learning algorithms that optimize the discounted sum of rewards. The traditional approach, based on eligibility traces, is argued to suffer from both inefficiency and lack of generality. The TTD (Truncated Temporal Differences) procedure is proposed as an alternative that, although it only approximates TD(λ), requires very little computation per action and can be used with arbitrary function representation methods. The idea from which it is derived is fairly simple and not new, but apparently unexplored so far. Encouraging experimental results are presented, suggesting that using λ > 0 with the TTD procedure yields a significant learning speedup at essentially the same cost as conventional TD(0) learning.
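The core of the truncation idea can be illustrated with a short sketch: instead of maintaining an eligibility trace for every state, the λ-return target for the oldest state in a small buffer of the last m transitions is computed by a single backward pass, mixing one-step and multi-step targets with weight λ. This is a hedged illustration, not the paper's exact algorithm; the function name, argument layout, and buffer convention are assumptions introduced here.

```python
def ttd_return(rewards, values, gamma, lam):
    """Truncated TD(lambda) return for the oldest state in an m-step buffer.

    rewards: r_t, ..., r_{t+m-1}          (length m)
    values:  V(s_{t+1}), ..., V(s_{t+m})  (length m, current value estimates)
    gamma:   discount factor
    lam:     recency factor lambda in [0, 1]
    """
    # Bootstrap from the value estimate at the far end of the window.
    z = values[-1]
    # Backward recursion: z_k = r_k + gamma * ((1-lam) * V(s_{k+1}) + lam * z_{k+1}).
    for k in reversed(range(len(rewards))):
        z = rewards[k] + gamma * ((1.0 - lam) * values[k] + lam * z)
    return z
```

With λ = 0 this collapses to the ordinary TD(0) target r_t + γV(s_{t+1}) regardless of the buffer length, while λ = 1 gives the truncated Monte Carlo return bootstrapped at the window boundary; intermediate λ interpolates between the two at a per-action cost that depends only on the buffer length, not on the size of the state space.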