Learning-based dynamic coupling of discrete and continuous trackers

  • Authors:
  • Gabriel Tsechpenakis; Dimitris Metaxas; Carol Neidle

  • Affiliations:
  • Center for Computational Biomedicine, Imaging and Modeling (CBIM), Computer Science Department, Rutgers University, Piscataway, NJ; Linguistics Program, Department of Modern Foreign Languages and Literatures, Boston University, Boston, MA

  • Venue:
  • Computer Vision and Image Understanding - Special issue on modeling people: Vision-based understanding of a person's shape, appearance, movement, and behaviour
  • Year:
  • 2006


Abstract

We present a data-driven dynamic coupling between discrete and continuous methods for tracking objects with many degrees of freedom, which overcomes the limitations of previous techniques. In our approach, two trackers run in parallel, and the coupling between them is driven by the tracking error. We use a model-based continuous method to achieve accurate results and, in cases of failure, we re-initialize the model using our discrete tracker. This method maintains the accuracy of a more tightly coupled system while increasing its efficiency. At any given frame, our discrete tracker uses the current and several previous frames to search a database for the best matching solution. For improved robustness, object configuration sequences, rather than single configurations, are stored in the database. We apply our framework to the problem of 3D hand tracking from image sequences and to the discrimination between fingerspelling and continuous signs in American Sign Language.
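The control flow described in the abstract (a continuous tracker running by default, with a discrete database lookup triggered by high tracking error) can be sketched as a simple loop. This is a minimal illustration, not the authors' implementation: the functions `continuous_step` and `error_of`, the L2 sequence-matching in `nearest_sequence`, the window length `k`, and the error threshold are all hypothetical placeholders for the model-based tracker, error measure, and database search the paper actually uses.

```python
import numpy as np

def nearest_sequence(db, window):
    """Hypothetical discrete tracker: return the database entry whose
    stored observation sequence best matches the recent frame window
    (here, plain L2 distance stands in for the paper's matching)."""
    dists = [np.linalg.norm(obs - window) for obs, _ in db]
    return db[int(np.argmin(dists))]

def track(frames, continuous_step, error_of, db, k=3, err_thresh=1.0):
    """Hybrid loop: run the continuous, model-based tracker each frame;
    when its error exceeds a threshold, re-initialize the model from the
    configuration sequence retrieved by the discrete tracker."""
    state = db[0][1][-1]            # start from some stored configuration
    history, states = [], []
    for frame in frames:
        state = continuous_step(state, frame)   # model-based update
        history.append(frame)
        if error_of(state, frame) > err_thresh and len(history) >= k:
            window = np.stack(history[-k:])     # current + previous frames
            _, config_seq = nearest_sequence(db, window)
            state = config_seq[-1]              # re-initialize the model
        states.append(state)
    return states
```

Storing whole configuration *sequences* (the second element of each `db` entry) rather than single configurations mirrors the robustness argument in the abstract: matching a short window of frames disambiguates configurations that look alike in a single frame.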