Articulated object tracking by rendering consistent appearance parts

  • Authors:
  • Zachary Pezzementi, Sandrine Voros, Gregory D. Hager

  • Affiliations:
  • Laboratory for Computational Science and Robotics, Johns Hopkins University, Baltimore, MD (all authors)

  • Venue:
  • ICRA '09: Proceedings of the 2009 IEEE International Conference on Robotics and Automation
  • Year:
  • 2009

Abstract

We describe a general methodology for tracking three-dimensional objects in monocular and stereo video that makes use of GPU-accelerated filtering and rendering in combination with machine learning techniques. The method operates on targets consisting of kinematic chains with known geometry. The tracked target is divided into one or more areas of consistent appearance. The appearance of each area is represented by a classifier trained to assign a class-conditional probability to image feature vectors. A search is then performed on the configuration space of the target to find the maximum-likelihood configuration. In the search, candidate hypotheses are evaluated by rendering a 3D model of the target object and measuring its consistency with the class probability map. The method is demonstrated for tool tracking on videos from two surgical domains, as well as in a human hand-tracking task.
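
The core of the approach is a likelihood that compares a rendered model silhouette against a per-pixel class probability map. The following is a minimal, illustrative Python sketch of that scoring-and-search loop under stated assumptions, not the authors' implementation: `render_mask` is a hypothetical stand-in (a 2D rectangle) for the paper's GPU rendering of the articulated 3D model, the classifier output is replaced by a synthetic probability map, and the configuration search is a simple enumeration rather than the paper's search procedure.

```python
import numpy as np

def log_likelihood(prob_map, mask):
    """Score a candidate configuration as a sum of per-pixel log-probabilities.

    prob_map : HxW array of class-conditional probabilities P(target | pixel)
    mask     : HxW boolean array, True where the rendered model covers the pixel
    """
    eps = 1e-9  # guard against log(0)
    fg = np.log(prob_map + eps)        # pixel explained as target
    bg = np.log(1.0 - prob_map + eps)  # pixel explained as background
    return np.where(mask, fg, bg).sum()

def render_mask(config, shape):
    """Hypothetical placeholder for GPU rendering of the articulated model.

    Here a configuration is just an axis-aligned rectangle (x, y, w, h);
    the real method renders a 3D kinematic-chain model with known geometry.
    """
    x, y, w, h = config
    mask = np.zeros(shape, dtype=bool)
    mask[int(y):int(y + h), int(x):int(x + w)] = True
    return mask

def search(prob_map, candidates):
    """Return the maximum-likelihood configuration from a candidate set."""
    scores = [log_likelihood(prob_map, render_mask(c, prob_map.shape))
              for c in candidates]
    return candidates[int(np.argmax(scores))]

# Toy usage: a probability map with one high-probability 20x20 region,
# recovered by exhaustive search over rectangle positions.
prob_map = np.full((64, 64), 0.1)
prob_map[10:30, 15:35] = 0.9
candidates = [(x, y, 20, 20) for x in range(0, 45, 5) for y in range(0, 45, 5)]
print(search(prob_map, candidates))  # -> (15, 10, 20, 20)
```

In this formulation each pixel contributes log p when the rendered silhouette covers it and log(1 - p) otherwise, so the score rewards configurations whose silhouette agrees with the classifier's per-pixel foreground evidence over the whole image.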