The success of reinforcement learning in practical problems depends on the ability to combine function approximation with temporal difference methods such as value iteration. Experiments in this area have produced mixed results; there have been both notable successes and notable disappointments. Theory has been scarce, mostly due to the difficulty of reasoning about function approximators that generalize beyond the observed data. We provide a proof of convergence for a wide class of temporal difference methods involving function approximators such as k-nearest-neighbor, and show experimentally that these methods can be useful. The proof is based on a view of function approximators as expansion or contraction mappings. In addition, we present a novel view of approximate value iteration: an approximate algorithm for one environment turns out to be an exact algorithm for a different environment.
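To make the contraction-mapping view concrete, here is a minimal sketch (not code from the paper) of fitted value iteration on a hypothetical one-dimensional chain MDP, where each exact Bellman backup is followed by a k-nearest-neighbor averager. The state count, reward shape, and k are illustrative assumptions; the point is that the averaging step is a max-norm nonexpansion, so composing it with the discounted backup preserves the contraction and the iteration converges.

```python
import numpy as np

# Sketch: fitted value iteration with a k-nearest-neighbor averager.
# Assumed toy problem: a 50-state chain with two actions (step left or
# right) and a reward bump near position 0.8. All numbers are made up
# for illustration.

np.random.seed(0)

n_states = 50
gamma = 0.9
positions = np.linspace(0.0, 1.0, n_states)
rewards = np.exp(-((positions - 0.8) ** 2) / 0.01)  # reward peak near 0.8

def bellman_backup(V):
    """Exact backup for two deterministic actions: move left or right."""
    left = np.maximum(np.arange(n_states) - 1, 0)
    right = np.minimum(np.arange(n_states) + 1, n_states - 1)
    return rewards + gamma * np.maximum(V[left], V[right])

def knn_average(V, k=3):
    """k-nearest-neighbor averager: each state's value becomes the mean
    of the values at its k nearest states. Fixed nonnegative weights
    summing to one, hence a nonexpansion in the max norm."""
    out = np.empty_like(V)
    for s in range(n_states):
        nearest = np.argsort(np.abs(positions - positions[s]))[:k]
        out[s] = V[nearest].mean()
    return out

V = np.zeros(n_states)
for i in range(200):
    V_new = knn_average(bellman_backup(V))
    # The composed update is a gamma-contraction in the sup norm, so the
    # sup-norm change shrinks geometrically and the loop terminates.
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new
print(f"converged after {i} iterations; max fitted value = {V.max():.3f}")
```

The same sketch also illustrates the "exact algorithm for a different environment" reading: the k-NN step can be folded into the transition model, so the fitted iteration above is exact value iteration for a modified MDP in which every transition is followed by a random jump to one of the k nearest states.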