A tree-structured model of visual appearance applied to gaze tracking

  • Authors: Jeffrey B. Mulligan
  • Affiliations: NASA Ames Research Center
  • Venue: ISVC'05 Proceedings of the First International Conference on Advances in Visual Computing
  • Year: 2005


Abstract

In some computer vision applications, we may need to analyze large numbers of similar frames depicting various aspects of an event. In this situation, the appearance may change significantly within the sequence, hampering efforts to track particular features. Active shape models [1] offer one approach to this problem, by "learning" the relationship between appearance and world-state from a small set of hand-labeled training examples. In this paper we propose a method for partitioning the input image set which addresses two problems: first, it provides an automatic method for selecting a set of training images for hand-labeling; second, it results in a partitioning of the image space into regions suitable for local model adaptation. Repeated application of the partitioning procedure results in a tree-structured representation of the image space. The resulting structure can be used to define corresponding neighborhoods in the shape model parameter space; a new image may be processed efficiently by first inserting it into the tree, and then solving for model parameters within the corresponding restricted domain. The ideas are illustrated with examples from an outdoor gaze-tracking application.
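The abstract describes the general scheme without specifying the partitioning criterion. The sketch below is one plausible reading, not the paper's actual algorithm: each image is represented by an appearance vector, the set is split recursively by 2-means clustering to build the tree, each leaf nominates a medoid as a candidate image for hand-labeling, and a new image is processed by descending the tree to the leaf whose centroid it is nearest. All names (`PartitionNode`, `insert`, `training_example`) and the choice of 2-means and medoids are illustrative assumptions.

```python
import numpy as np


class PartitionNode:
    """Node in a binary tree that partitions an image set by appearance.

    Images are represented as fixed-length appearance vectors (rows of a
    2-D array). The splitting rule here (2-means) is an assumption; the
    abstract does not say which partitioning procedure is used.
    """

    def __init__(self, images, depth=0, min_size=4, max_depth=4, rng=None):
        self.images = np.asarray(images, dtype=float)
        self.left = self.right = None
        self.centroids = None  # stays None for leaf nodes
        rng = np.random.default_rng(0) if rng is None else rng
        if depth >= max_depth or len(self.images) < min_size:
            return
        X = self.images
        # Initialize 2-means from two distinct sample points.
        c = X[rng.choice(len(X), 2, replace=False)]
        lab = None
        for _ in range(10):
            d = np.linalg.norm(X[:, None] - c[None], axis=2)
            lab = d.argmin(axis=1)
            if len(set(lab)) < 2:      # degenerate split: keep as a leaf
                return
            c = np.stack([X[lab == k].mean(axis=0) for k in (0, 1)])
        self.centroids = c
        self.left = PartitionNode(X[lab == 0], depth + 1, min_size, max_depth, rng)
        self.right = PartitionNode(X[lab == 1], depth + 1, min_size, max_depth, rng)

    def insert(self, x):
        """Descend to the leaf region whose centroid is nearest to x."""
        if self.centroids is None:
            return self
        d = np.linalg.norm(self.centroids - np.asarray(x, dtype=float), axis=1)
        return (self.left if d[0] <= d[1] else self.right).insert(x)

    def training_example(self):
        """A representative image for hand-labeling: the medoid of the leaf."""
        m = self.images.mean(axis=0)
        return self.images[np.linalg.norm(self.images - m, axis=1).argmin()]
```

In the paper's setting, solving for shape-model parameters would then be restricted to the neighborhood associated with the leaf returned by `insert`, rather than searched over the whole parameter space.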