Affordance mining: forming perception through action

  • Authors:
  • Liam Ellis (CVL, Linköping University, Linköping, Sweden and CVSSP, University of Surrey, Guildford, UK)
  • Michael Felsberg (CVL, Linköping University, Linköping, Sweden)
  • Richard Bowden (CVSSP, University of Surrey, Guildford, UK)

  • Venue:
  • ACCV'10: Proceedings of the 10th Asian Conference on Computer Vision, Part IV
  • Year:
  • 2010

Abstract

This work employs data mining algorithms to discover visual entities that are strongly associated with autonomously discovered modes of action in an embodied agent. Mappings are learnt from these perceptual entities onto the agent's action space. In general, low-dimensional action spaces are better suited to unsupervised learning than high-dimensional percept spaces, so structure can be discovered in the action space and used to organise the perceptual space. Local feature configurations that are strongly associated with one particular 'type' of action (and not with the other action types) are considered likely to be relevant in eliciting that action type. By learning mappings from these relevant features onto the action space, the system is able to respond in real time to novel visual stimuli. The proposed approach is demonstrated on an autonomous navigation task, where the system identifies the visual entities relevant to the task and generates appropriate responses.
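The core idea of the abstract — mining visual features that co-occur with exactly one discovered action type, then using them to respond to novel percepts — can be illustrated with a minimal sketch. This is an illustrative assumption of the pipeline, not the authors' implementation: the function names, the lift-based association score, and the toy visual "words" are all hypothetical stand-ins for the paper's feature configurations and action-space clusters.

```python
# Hedged sketch: mine visual words strongly associated with a single
# action type, then respond to novel percepts by voting among mined words.
# All names and the lift threshold are illustrative assumptions.
from collections import Counter, defaultdict

def mine_affordances(episodes, lift_threshold=2.0):
    """episodes: list of (visual_words, action_label) pairs, where the
    action label stands in for a cluster discovered in the low-dimensional
    action space. Returns {word: action_label} for words whose
    co-occurrence with one action type exceeds `lift_threshold` times
    that action's base rate, and that never co-occur with other types."""
    action_counts = Counter(a for _, a in episodes)
    word_action = defaultdict(Counter)
    word_counts = Counter()
    for words, action in episodes:
        for w in set(words):
            word_action[w][action] += 1
            word_counts[w] += 1
    n = len(episodes)
    mined = {}
    for w, per_action in word_action.items():
        for a, c in per_action.items():
            p_a_given_w = c / word_counts[w]  # P(action | word present)
            p_a = action_counts[a] / n        # base rate of the action
            # keep only words predictive of exactly one action type
            if len(per_action) == 1 and p_a_given_w / p_a >= lift_threshold:
                mined[w] = a
    return mined

def respond(mined, words):
    """Map a novel percept to an action by voting among mined words."""
    votes = Counter(mined[w] for w in words if w in mined)
    return votes.most_common(1)[0][0] if votes else None

# Toy demonstration with hypothetical visual words for a navigation task.
episodes = [
    (["wall-left", "floor"], "turn-right"),
    (["wall-left", "sky"], "turn-right"),
    (["wall-right", "floor"], "turn-left"),
    (["wall-right"], "turn-left"),
    (["floor"], "forward"),
    (["floor", "sky"], "forward"),
]
mined = mine_affordances(episodes)
# "wall-left" occurs only with turn-right episodes, so it is mined;
# "floor" co-occurs with all action types and is discarded.
action = respond(mined, ["wall-left", "floor"])  # "turn-right"
```

The one-cluster restriction mirrors the abstract's requirement that a feature be associated with a particular action type "and not all other action types"; in the paper this discrimination is done by data mining over local feature configurations rather than a simple lift score.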