Learning Functional Object-Categories from a Relational Spatio-Temporal Representation

  • Authors:
  • Muralikrishna Sridhar;Anthony G. Cohn;David C. Hogg

  • Affiliations:
  • School of Computing, University of Leeds, Leeds, UK (emails: krishna@comp.leeds.ac.uk; agc@comp.leeds.ac.uk; dch@comp.leeds.ac.uk)

  • Venue:
  • Proceedings of ECAI 2008: the 18th European Conference on Artificial Intelligence
  • Year:
  • 2008

Abstract

We propose a framework that learns functional object-categories from spatio-temporal data sets such as those abstracted from video. The data is represented as a single activity graph that encodes qualitative spatio-temporal patterns of interaction between objects. Event classes are induced by statistical generalization; the instances of each class encode similar patterns of spatio-temporal relationships between objects. Equivalence classes of objects are discovered on the basis of their similar roles across multiple event instantiations. Each object is represented in a multidimensional space that captures its role in all the events. Unsupervised learning in this space yields functional object-categories. Experiments in the domain of food preparation suggest that our techniques represent a significant step towards unsupervised learning of functional object categories from spatio-temporal patterns of object interaction.
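To make the final step of the pipeline concrete, the sketch below illustrates how objects might be embedded in a "role space" derived from induced event classes and then clustered into functional categories. The event instances, the count-based role vectors, and the use of k-means are illustrative assumptions for this sketch, not the paper's actual representation or clustering method (which operates over a qualitative activity graph and role assignments within events).

```python
# Minimal sketch (assumptions, not the authors' method): describe each object
# by how often it participates in each induced event class, then cluster
# objects in that role space to obtain functional categories.
from collections import defaultdict

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical event instances: (event class, participating objects).
event_instances = [
    ("pour", ["kettle", "cup"]),
    ("pour", ["jug", "bowl"]),
    ("stir", ["spoon", "cup"]),
    ("stir", ["spoon", "bowl"]),
    ("cut", ["knife", "bread"]),
    ("cut", ["knife", "cheese"]),
]

# Build a role vector per object: counts of appearances in each event class.
event_classes = sorted({ev for ev, _ in event_instances})
counts = defaultdict(lambda: np.zeros(len(event_classes)))
for ev, participants in event_instances:
    for obj in participants:
        counts[obj][event_classes.index(ev)] += 1

objects = sorted(counts)
X = np.stack([counts[obj] for obj in objects])

# Cluster objects in role space; clusters approximate functional categories
# (e.g. containers, utensils, foodstuffs). k=3 is an arbitrary choice here.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for obj, label in zip(objects, labels):
    print(f"{obj}: category {label}")
```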