Robust tracking of human body parts for collaborative human computer interaction

  • Authors:
  • Ediz Polat; Mohammed Yeasin; Rajeev Sharma

  • Affiliations:
  • Computer Science and Engineering Department, 220 Pond Lab., Pennsylvania State University, University Park, PA (all authors)

  • Venue:
  • Computer Vision and Image Understanding
  • Year:
  • 2003

Abstract

Visual analysis and tracking of human motion in a video sequence is useful in, and motivated by, a wide spectrum of applications, for example, surveillance and human computer interaction. The ability to track multiple people and their body parts (i.e., face and hands) in a complex environment is crucial for designing a collaborative natural human computer interaction (HCI). One of the most challenging issues in this context is detecting and tracking the body parts of multiple people robustly in an unconstrained environment. A more specific problem that arises when tracking multiple body parts is data association uncertainty: assigning measurements to the proper tracks under occlusion and close interaction of body parts. This paper describes a framework for tracking the body parts (hands and faces) of multiple people in 2D/3D in an unconstrained environment. We use a probabilistic model to fuse color and motion information to localize the body parts and employ a multiple hypothesis tracking (MHT) algorithm to track these features simultaneously. In real-world scenes, the extracted features usually contain spurious measurements, which create unconvincing trajectories and needless computation. To deal with this problem, we incorporate a path coherence function along with MHT to reduce the number of hypotheses, which in turn reduces the computational cost and improves the structure of the trajectories. The performance of the framework has been validated through experiments on synthetic and real image sequences.
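
As a rough illustration of the pruning idea mentioned in the abstract, the following Python sketch computes a simple path coherence cost (smoothness of direction and speed across consecutive track segments) and discards candidate measurements whose extension of a track would be too incoherent. The weights, threshold, and function names here are assumptions for illustration only, not taken from the paper.

    import math

    def path_coherence(p_prev, p_curr, p_next, w_dir=0.5, w_speed=0.5):
        # Smoothness cost of extending a track (p_prev -> p_curr) with p_next.
        # Low values mean direction and speed change little between the two
        # consecutive segments; high values indicate an abrupt change.
        d1 = (p_curr[0] - p_prev[0], p_curr[1] - p_prev[1])
        d2 = (p_next[0] - p_curr[0], p_next[1] - p_curr[1])
        n1, n2 = math.hypot(d1[0], d1[1]), math.hypot(d2[0], d2[1])
        if n1 == 0.0 or n2 == 0.0:
            return float("inf")  # degenerate segment: treat as incoherent
        cos_theta = (d1[0] * d2[0] + d1[1] * d2[1]) / (n1 * n2)
        dir_cost = 1.0 - cos_theta                                # 0 when direction is unchanged
        speed_cost = 1.0 - 2.0 * math.sqrt(n1 * n2) / (n1 + n2)   # 0 when segment lengths match
        return w_dir * dir_cost + w_speed * speed_cost

    def prune_candidates(track_tail, measurements, threshold=0.3):
        # Keep only measurements whose association with the track stays smooth.
        p_prev, p_curr = track_tail
        return [m for m in measurements
                if path_coherence(p_prev, p_curr, m) <= threshold]

In an MHT setting, such a gate would be applied before hypotheses are enumerated, so measurements that would produce physically implausible trajectories never spawn new hypothesis branches.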