Unsupervised skeleton extraction and motion capture from 3D deformable matching

  • Authors: Quanshi Zhang, Xuan Song, Xiaowei Shao, Ryosuke Shibasaki, Huijing Zhao
  • Affiliations: Center for Spatial Information Science, University of Tokyo, Japan (Q. Zhang, X. Song, X. Shao, R. Shibasaki); Key Laboratory of Machine Perception (MoE), Peking University, China (H. Zhao)
  • Venue: Neurocomputing
  • Year: 2013


Abstract

This paper presents a novel method to extract skeletons of complex articulated objects from 3D point cloud sequences collected by the Kinect. Our approach is more robust than traditional video-based and stereo-based approaches, as the Kinect directly provides 3D information without requiring markers, 2D-to-3D transition assumptions, or feature-point extraction. We track all the raw 3D points on the object and use the point trajectories to determine the object skeleton. Point tracking is achieved by 3D non-rigid matching based on a Markov Random Field (MRF) deformation model. To reduce the large computational cost of the non-rigid matching, we propose a coarse-to-fine procedure. To the best of our knowledge, this is the first method to extract skeletons of highly deformable objects from 3D point cloud sequences by point tracking. Experiments demonstrate the method's strong performance, and the extracted skeletons are successfully applied to motion capture.
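
The abstract outlines a pipeline: track raw 3D points via MRF-based non-rigid matching, then infer the skeleton from the resulting point trajectories. As a rough illustration of the trajectory-to-skeleton step only (not the paper's actual algorithm), the Python sketch below clusters trajectories into approximately rigid parts and places joints where adjacent parts meet. The function names, the k-means grouping, and the 5 cm contact threshold are all hypothetical choices for the sake of the example.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_rigid_parts(trajectories, n_parts=5):
    """Cluster point trajectories into approximately rigid parts.

    trajectories: (P, T, 3) array, P points tracked over T frames.
    Points on the same rigid part move coherently, so their
    flattened trajectories tend to cluster together.
    """
    P, T, _ = trajectories.shape
    # Subtract the per-frame centroid to remove global translation.
    centered = trajectories - trajectories.mean(axis=0, keepdims=True)
    features = centered.reshape(P, T * 3)
    return KMeans(n_clusters=n_parts, n_init=10).fit_predict(features)

def estimate_joints(trajectories, labels, contact_thresh=0.05):
    """Place a joint between each pair of parts whose closest points
    (in the first frame) are within contact_thresh meters; a crude
    stand-in for a proper articulated-joint fit."""
    frame0 = trajectories[:, 0, :]
    parts = np.unique(labels)
    joints = []
    for i, a in enumerate(parts):
        for b in parts[i + 1:]:
            pa, pb = frame0[labels == a], frame0[labels == b]
            d = np.linalg.norm(pa[:, None] - pb[None, :], axis=2)
            if d.min() < contact_thresh:
                ia, ib = np.unravel_index(d.argmin(), d.shape)
                joints.append((a, b, (pa[ia] + pb[ib]) / 2))
    return joints

# Toy usage on random data; real input would be tracked Kinect points.
traj = np.random.rand(200, 30, 3)
labels = segment_rigid_parts(traj, n_parts=4)
joints = estimate_joints(traj, labels)
```

This sketch assumes trajectories are already available; in the paper they come from the MRF-based non-rigid matching, which is the computationally expensive step the coarse-to-fine procedure addresses.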