Learning temporal structure for task-based control

  • Authors:
  • Kingsley Sage, A. Jonathan Howell, Hilary Buxton, Antonis Argyros

  • Affiliations:
  • Kingsley Sage, A. Jonathan Howell, Hilary Buxton: Department of Informatics, Centre for Research in Cognitive Science, University of Sussex, Brighton BN1 9QH, UK
  • Antonis Argyros: Institute of Computer Science, Foundation for Research and Technology Hellas, P.O. Box 1385, GR 711 10 Heraklion, Crete, Greece

  • Venue:
  • Image and Vision Computing
  • Year:
  • 2008

Abstract

We present an extension of variable length Markov models (VLMMs) that allows them to model continuous input data, and show that the generative properties of these VLMMs are a powerful tool for dealing with real-world tracking issues. We explore methods for addressing the temporal correspondence problem in the context of a practical hand tracker; solving this problem is essential to support expectation in task-based control using these behavioural models. The hand tracker forms part of a larger multi-component distributed system, providing 3-D hand position data to a gesture recogniser client. We show how the performance of such a hand tracker can be improved by using feedback from the gesture recogniser client. In particular, feedback based on generative extrapolation of the recogniser's internal models is shown to help the tracker deal with mid-term occlusion. We also show that VLMMs can be used to inform the prior in an expectation maximisation (EM) process for joint spatial and temporal learning of image features.
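
To make the generative-extrapolation idea concrete, the sketch below shows a minimal VLMM over a discrete symbol alphabet. It assumes continuous observations (such as 3-D hand positions) have already been quantised into symbols, e.g. by vector quantisation; the `VLMM` class, the `max_depth` parameter and the `extrapolate` helper are illustrative names for this sketch, not the paper's implementation. `extrapolate` shows how a recogniser's internal model could be rolled forward to supply expected observations to a tracker during a mid-term occlusion.

```python
# Minimal sketch of a variable length Markov model (VLMM) used
# generatively to bridge tracking drop-outs. Assumes continuous input
# has been quantised to discrete symbols; all names and parameters here
# are illustrative, not the authors' implementation.
from collections import Counter, defaultdict

class VLMM:
    def __init__(self, max_depth=4):
        self.max_depth = max_depth
        # Maps a context (tuple of past symbols) to counts of the next symbol.
        self.counts = defaultdict(Counter)

    def train(self, sequence):
        """Accumulate next-symbol counts for every context up to max_depth."""
        for i in range(len(sequence)):
            for d in range(1, self.max_depth + 1):
                if i - d < 0:
                    break
                context = tuple(sequence[i - d:i])
                self.counts[context][sequence[i]] += 1

    def predict(self, history):
        """Most likely next symbol, using the longest stored context that
        matches the end of the history (the 'variable length' lookup)."""
        for d in range(min(self.max_depth, len(history)), 0, -1):
            context = tuple(history[-d:])
            if context in self.counts:
                return self.counts[context].most_common(1)[0][0]
        return None  # no matching context: fall back to the tracker default

    def extrapolate(self, history, n_steps):
        """Generatively roll the model forward, e.g. to supply expected
        observations to the tracker while the target is occluded."""
        h = list(history)
        for _ in range(n_steps):
            s = self.predict(h)
            if s is None:
                break
            h.append(s)
        return h[len(history):]

# Usage: train on quantised behaviour sequences, then predict through a
# simulated mid-term occlusion lasting three frames.
model = VLMM(max_depth=3)
model.train(list("abcabcabcabc"))
print(model.extrapolate(list("abc"), 3))  # -> ['a', 'b', 'c']
```

The longest-suffix lookup in `predict` is what makes the model variable length: high-order contexts are used only where the training data supports them, with graceful fallback to shorter contexts otherwise.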