Human motion tracking by combining view-based and model-based methods for monocular video sequences

  • Authors:
  • Jihun Park; Sangho Park; J. K. Aggarwal

  • Affiliations:
  • Department of Computer Engineering, Hongik University, Seoul, Korea; Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX; Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX

  • Venue:
  • ICCSA'03: Proceedings of the 2003 International Conference on Computational Science and Its Applications, Part III
  • Year:
  • 2003

Abstract

Reliable tracking of moving humans is essential to motion estimation, video surveillance, and human-computer interfaces. This paper presents a new approach to human motion tracking that combines view-based and model-based techniques. Monocular color video is processed at both the pixel level and the object level. At the pixel level, a Gaussian mixture model is used to train on and classify individual pixel colors. At the object level, a 3D human body model projected onto the 2D image plane is used to fit the image data. Our method does not use inverse kinematics, owing to its singularity problem. Whereas many other approaches rely on stochastic sampling for model-based motion tracking, our method depends purely on parameter optimization: we convert the human motion tracking problem into a parameter optimization problem. The cost function for the optimization estimates the degree of overlap between the foreground silhouette of the input image and the silhouette of the projected 3D body model. The overlap is computed with computational geometry by converting a set of pixels from the image domain into a polygon in the real projection-plane domain. Our method has been used to recognize various human motions, and the tracking results from video sequences are very encouraging.
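
The pixel-level step can be illustrated with a brief sketch. The snippet below is not the authors' implementation: it assumes scikit-learn's GaussianMixture, an RGB color space, and a hypothetical choice of three mixture components per class, and it labels a pixel as foreground whenever the foreground mixture assigns it the higher likelihood.

```python
# Hedged sketch of pixel-level color classification with Gaussian mixtures.
# Component count, color space, and the two-class likelihood-ratio rule are
# illustrative assumptions, not details taken from the paper.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_color_models(fg_pixels, bg_pixels, n_components=3):
    """Fit one color GMM per class from training pixels of shape (N, 3)."""
    fg_gmm = GaussianMixture(n_components=n_components).fit(fg_pixels)
    bg_gmm = GaussianMixture(n_components=n_components).fit(bg_pixels)
    return fg_gmm, bg_gmm

def classify_frame(frame, fg_gmm, bg_gmm):
    """Label each pixel True (foreground) if the foreground GMM scores higher."""
    pixels = np.asarray(frame, dtype=float).reshape(-1, 3)
    fg_ll = fg_gmm.score_samples(pixels)   # per-pixel log-likelihood under fg model
    bg_ll = bg_gmm.score_samples(pixels)   # per-pixel log-likelihood under bg model
    return (fg_ll > bg_ll).reshape(frame.shape[:2])  # binary silhouette mask
```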
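
The object-level cost can likewise be sketched as a function of the model's pose parameters. The paper computes the overlap exactly, using computational geometry on polygons in the projection plane; the sketch below instead approximates it on rasterized binary masks and minimizes the negative overlap with a derivative-free optimizer. The project_model callable, the Nelder-Mead choice, and the IoU-style normalization are assumptions for illustration only.

```python
# Hedged sketch: silhouette-overlap cost driving model-based tracking by
# parameter optimization. project_model(params) is a user-supplied function
# (hypothetical here) returning a binary HxW mask of the posed 3D body model
# projected onto the image plane; the paper uses exact polygon overlap rather
# than this raster approximation.
import numpy as np
from scipy.optimize import minimize

def overlap_cost(params, foreground_mask, project_model):
    """Negative overlap between the observed and projected-model silhouettes."""
    model_mask = project_model(params)
    intersection = np.logical_and(foreground_mask, model_mask).sum()
    union = np.logical_or(foreground_mask, model_mask).sum()
    return -intersection / max(union, 1)   # maximizing overlap = minimizing its negative

def track_frame(init_params, foreground_mask, project_model):
    """Fit pose parameters for one frame by local, sampling-free optimization."""
    result = minimize(
        overlap_cost,
        x0=init_params,                     # e.g. joint angles plus root translation
        args=(foreground_mask, project_model),
        method="Nelder-Mead",               # derivative-free local search
    )
    return result.x
```

In a frame-by-frame tracking loop, the optimized parameters of one frame would presumably seed init_params for the next, which is what makes a purely local, sampling-free optimization plausible in place of stochastic sampling.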