We target the problem of closed-loop learning of control policies that map visual percepts to continuous actions. Our algorithm, called Reinforcement Learning of Joint Classes (RLJC), adaptively discretizes the joint space of visual percepts and continuous actions. In a sequence of attempts to remove perceptual aliasing, it incrementally builds a decision tree whose internal nodes test either the input perceptual space or the output action space. The leaves of this decision tree induce a piecewise constant, optimal state-action value function, which is computed by a reinforcement learning algorithm that uses the tree as a function approximator. The optimal policy is then derived by selecting, for a given percept, the action that leads to the leaf maximizing the value function. The approach is general and also applies to learning mappings from continuous percepts to continuous actions. A simulated visual navigation problem illustrates the applicability of RLJC.
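The core idea above can be sketched in a few lines of Python. This is a hypothetical, simplified illustration, not the paper's implementation: the `Node` structure, the split-test representation, and the candidate-sampling trick in `greedy_action` are assumptions chosen for clarity, and the paper's splitting criteria and reinforcement learning updates are omitted.

```python
# Sketch of a decision tree over the joint (percept, action) space.
# Internal nodes test one coordinate of the concatenated vector, which
# may belong either to the percept or to the action; leaves hold a
# constant Q-value, giving a piecewise constant state-action value
# function (hypothetical structures, for illustration only).

from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    dim: Optional[int] = None     # None marks a leaf
    threshold: float = 0.0        # go left if joint[dim] < threshold
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    q_value: float = 0.0          # constant Q-value stored at a leaf

    def leaf_q(self, joint):
        """Q-value of the leaf that the joint (percept, action) vector falls into."""
        if self.dim is None:
            return self.q_value
        child = self.left if joint[self.dim] < self.threshold else self.right
        return child.leaf_q(joint)

def greedy_action(tree, percept, candidate_actions):
    """Derive the policy: pick the action whose joint leaf maximizes Q.

    With continuous actions, one practical option (assumed here) is to
    evaluate a finite set of candidate actions against the tree.
    """
    return max(candidate_actions,
               key=lambda a: tree.leaf_q(list(percept) + list(a)))
```

A tree with a single split on the action coordinate, for instance, already yields a policy that prefers whichever action half-space carries the larger leaf Q-value for the observed percept.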