Cooperative estimation of human motion and surfaces using multiview videos

  • Authors:
  • Weilan Luo; Toshihiko Yamasaki; Kiyoharu Aizawa

  • Venue:
  • Computer Vision and Image Understanding
  • Year:
  • 2013

Abstract

We propose a human motion tracking method that not only captures the motion of a skeleton model but also generates a sequence of surfaces from images acquired by multiple synchronized cameras. Our method extracts articulated postures with 42 degrees of freedom from a sequence of visual hulls. We seek a globally optimized solution for the likelihood using local memorization of the "fitness" of each body segment. Our method efficiently avoids local minima by using a mean combination and an articulated combination of particles selected according to the weights of the different body segments. The surface is produced by deforming a template model, and the details are recovered by fitting the deformed surface to 2D silhouette rims. The extracted posture and the estimated surface are cooperatively refined by registering the corresponding body segments. In our experiments, the mean error between the samples of the deformed reference model and the target is about 2 cm, and the mean matching difference between the images projected from the estimated surfaces and the original images is about 6%.
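
The per-segment "mean combination" of particles mentioned in the abstract can be illustrated with a short sketch. The Python snippet below is a minimal, hypothetical illustration only, assuming each of the 42 degrees of freedom is assigned to one body segment and each particle carries a non-negative per-segment fitness weight; the function and variable names are ours and are not taken from the paper.

```python
import numpy as np

def mean_combination(particles, segment_weights, segment_dofs):
    """Combine pose particles segment by segment (illustrative sketch).

    particles       : (N, 42) array, each row a full-body pose hypothesis.
    segment_weights : (N, S) array, per-particle fitness of each of the S
                      body segments (assumed non-negative).
    segment_dofs    : list of S index arrays mapping each segment to its
                      degrees of freedom within the 42-dim pose vector.
    """
    combined = np.zeros(particles.shape[1])
    for s, dofs in enumerate(segment_dofs):
        w = segment_weights[:, s]
        w = w / w.sum()                      # normalize weights for segment s
        # weighted mean of this segment's DOFs over all particles
        combined[dofs] = w @ particles[:, dofs]
    return combined

# Toy usage: 100 particles, 42 DOFs split into 7 hypothetical segments of 6 DOFs.
rng = np.random.default_rng(0)
particles = rng.normal(size=(100, 42))
segment_dofs = [np.arange(i, i + 6) for i in range(0, 42, 6)]
segment_weights = rng.random((100, len(segment_dofs)))
pose = mean_combination(particles, segment_weights, segment_dofs)
print(pose.shape)  # (42,)
```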