Vision-Based Motion Capture of Interacting Multiple People

  • Authors:
  • Hiroaki Egashira; Atsushi Shimada; Daisaku Arita; Rin-Ichiro Taniguchi

  • Affiliations:
  • Department of Intelligent Systems, Kyushu University, Fukuoka, Japan 819-0395 (all authors); Daisaku Arita also: Institute of Systems, Information Technologies and Nanotechnologies, Fukuoka, Japan 814-0001

  • Venue:
  • ICIAP '09 Proceedings of the 15th International Conference on Image Analysis and Processing
  • Year:
  • 2009

Abstract

Vision-based motion capture is becoming popular for acquiring human motion information in various interactive applications. To enlarge its applicability, we have been developing a vision-based motion capture system that can estimate the postures of multiple people simultaneously using multiview image analysis. Our approach consists of two phases: first, each person is extracted, or segmented, from the input multiview images; then, single-person posture analysis is applied to each segmented region. The segmentation is performed in voxel space, which is reconstructed by visual cone intersection of the multiview silhouettes, and a graph cut algorithm is employed to achieve optimal segmentation. Posture analysis follows a model-based approach in which a skeleton model of the human figure is matched to the multiview silhouettes using a particle filter together with physical constraints on human body movement. Several experimental studies show that the proposed method acquires the postures of multiple people correctly and efficiently even when they touch each other.
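
The reconstruction step mentioned in the abstract, visual cone intersection of multiview silhouettes, can be illustrated with a short voxel-carving sketch. This is not the authors' implementation: the projection matrices, silhouette images, and voxel grid below are hypothetical placeholders, and the subsequent graph-cut segmentation and particle-filter posture estimation are not shown.

```python
# Minimal sketch of visual cone intersection (shape-from-silhouette).
# Assumes calibrated cameras given as 3x4 projection matrices and
# binary foreground silhouettes; all inputs are hypothetical.
import numpy as np

def carve_voxels(voxel_centers, projections, silhouettes):
    """Keep a voxel only if it projects inside the silhouette in every view.

    voxel_centers: (N, 3) array of voxel centre points in world coordinates.
    projections:   list of 3x4 camera projection matrices.
    silhouettes:   list of binary (H, W) arrays, one per camera.
    Returns a boolean occupancy mask of length N.
    """
    n = len(voxel_centers)
    homo = np.hstack([voxel_centers, np.ones((n, 1))])   # homogeneous points, (N, 4)
    occupied = np.ones(n, dtype=bool)
    for P, sil in zip(projections, silhouettes):
        img = homo @ P.T                                  # project to image plane, (N, 3)
        z = img[:, 2]
        with np.errstate(divide="ignore", invalid="ignore"):
            u = img[:, 0] / z
            v = img[:, 1] / z
        h, w = sil.shape
        # A voxel survives this view only if it lies in front of the camera
        # and its projection falls on a foreground (silhouette) pixel.
        inside = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(n, dtype=bool)
        hit[inside] = sil[v[inside].astype(int), u[inside].astype(int)] > 0
        occupied &= hit                                   # intersection of visual cones
    return occupied
```

In the pipeline described by the abstract, the resulting occupancy volume would then be partitioned into per-person regions by the graph cut algorithm, and single-person, particle-filter-based posture analysis would be run on each region.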