Interactive learning of mappings from visual percepts to actions
ICML '05 Proceedings of the 22nd international conference on Machine learning
Solving a visual, interactive task can often be thought of as building a mapping from visual stimuli to appropriate actions. Clearly, the extracted visual characteristics that index into the repertoire of actions must be sufficiently rich to distinguish situations that demand distinct actions. Spatial combinations of local features permit, in principle, the construction of features at various levels of discriminative power. We present an algorithm for selecting relevant spatial combinations of visual features by exercising a given task in a closed-loop learning process based on Reinforcement Learning. The algorithm operates by progressively splitting the perceptual space into distinct regions. Whenever the agent detects perceptual aliasing of distinct world states, it constructs a spatial combination of visual features that disambiguates the aliased states. We demonstrate the efficacy of our algorithm on a version of the classical "Car on the Hill" control problem where position and velocity are presented to the agent visually, in such a way that the task is unsolvable using individual point features.
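The progressive splitting of the perceptual space described above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the class and method names are hypothetical, regions are modeled simply as conjunctions of required feature tests, and the detection of perceptual aliasing (which in the paper is driven by the reinforcement-learning loop) is left to the caller.

```python
class AdaptivePerceptClassifier:
    """Sketch of a classifier that progressively splits the perceptual space.

    A region is represented as a frozenset of feature tests: a percept
    (a set of detected visual features) belongs to a region when it
    contains all of the region's required features. When the learning
    loop reports perceptual aliasing inside a region, the region is
    refined with an additional discriminative feature combination.
    """

    def __init__(self):
        # Start with a single, undivided region: no feature tests required.
        self.regions = [frozenset()]

    def classify(self, features):
        """Return the most specific region whose tests the percept passes."""
        matching = [r for r in self.regions if r <= features]
        # The most specific matching region (largest set of satisfied tests).
        return max(matching, key=len)

    def split(self, region, feature):
        """Refine an aliased region with an additional feature test.

        Percepts in `region` that also exhibit `feature` now fall into
        the new, more specific region; the remaining percepts stay in
        the original (now residual) region.
        """
        refined = region | {feature}
        if refined not in self.regions:
            self.regions.append(refined)
        return refined
```

In this sketch, refining a region on a disambiguating feature mirrors the paper's idea of constructing a spatial feature combination whenever two aliased world states demand distinct actions; the RL agent would then maintain separate value estimates for the two resulting regions.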