Forward non-rigid motion tracking for facial MoCap

  • Authors:
  • Xiaoyong Fang;Xiaopeng Wei;Qiang Zhang;Dongsheng Zhou

  • Affiliations:
  • Xiaoyong Fang: Key Laboratory of Advanced Design and Intelligent Computing (Dalian University), Ministry of Education, Dalian 116622, China; School of Computer and Information Science, Hunan Institute of Tech ...
  • Xiaopeng Wei, Qiang Zhang, Dongsheng Zhou: Key Laboratory of Advanced Design and Intelligent Computing (Dalian University), Ministry of Education, Dalian 116622, China

  • Venue:
  • The Visual Computer: International Journal of Computer Graphics
  • Year:
  • 2014


Abstract

In existing motion capture (MoCap) data processing methods, manual intervention is almost always unavoidable, and most of it stems from the data tracking step. This paper addresses the problem of tracking non-rigid 3D facial motion from sequences of raw MoCap data in the presence of noise, outliers, and markers that are missing for long periods. We present a novel dynamic spatiotemporal framework that solves the problem automatically. First, based on a 3D facial topological structure, a sophisticated non-rigid motion interpreter (SNRMI) is put forward; together with a dynamic searching scheme, it can not only track the non-missing data to the maximum extent but also recover missing data accurately (it can recover more than five adjacent markers that are missing for a long time, about 5 seconds). To rule out incorrect tracks of markers placed on open structures (such as the mouth and eyes), a semantics-based heuristic checking method is proposed. Second, since existing methods do not take the noise-propagation problem into account, a forward processing framework is presented to address it. A further contribution is that the proposed method tracks facial non-rigid motion automatically and in a forward manner, which greatly reduces and may even eliminate the need for human intervention during facial MoCap data processing. Experimental results demonstrate the effectiveness, robustness, and accuracy of our system.
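
To make the forward-tracking idea concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of frame-to-frame marker assignment with recovery of missing markers from a facial topology graph, loosely following the description above. The function name `track_forward`, the `search_radius` parameter, and the neighbour-graph format are assumptions introduced only for this illustration.

    # Illustrative sketch only; the SNRMI and semantic checks described in the
    # abstract are far more sophisticated than this example.
    import numpy as np

    def track_forward(prev_markers, observations, neighbors, search_radius=15.0):
        """Assign raw observations of the current frame to labelled markers.

        prev_markers : dict  label -> (3,) position in the previous frame
        observations : (M, 3) array of unlabelled points in the current frame
        neighbors    : dict  label -> list of topologically adjacent labels
        Returns a dict label -> (3,) position for the current frame; markers
        that cannot be matched are recovered from their tracked neighbours.
        """
        tracked, unmatched = {}, set(range(len(observations)))

        # 1) Nearest-neighbour matching inside a dynamic search radius.
        for label, prev_pos in prev_markers.items():
            if not unmatched:
                break
            cand = list(unmatched)
            d = np.linalg.norm(observations[cand] - prev_pos, axis=1)
            best = int(np.argmin(d))
            if d[best] <= search_radius:
                tracked[label] = observations[cand[best]]
                unmatched.remove(cand[best])

        # 2) Recover missing markers: translate the previous position by the
        #    mean displacement of the tracked topological neighbours.
        for label, prev_pos in prev_markers.items():
            if label in tracked:
                continue
            disp = [tracked[n] - prev_markers[n]
                    for n in neighbors.get(label, []) if n in tracked]
            tracked[label] = prev_pos + (np.mean(disp, axis=0) if disp
                                         else np.zeros(3))
        return tracked

In such a scheme the labelled result of one frame seeds the next, so the sequence is processed strictly forward; the recovery step here is only a crude stand-in for the topology-driven interpretation and semantic checking the paper proposes.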