Posture reconstruction method for mapping joint angles of motion capture experiments to simulation models

  • Authors:
  • Jared Gragg; Jingzhou Yang; Robyn Boothby

  • Affiliations:
  • Human-Centric Design Research Laboratory, Department of Mechanical Engineering, Texas Tech University, Lubbock, TX (all authors)

  • Venue:
  • ICDHM'11: Proceedings of the Third International Conference on Digital Human Modeling
  • Year:
  • 2011

Abstract

Motion capture experiments are often used in conjunction with digital human modeling to offer insight into the simulation of real-world tasks or as a means of validating existing simulations. However, there is a gap between the motion capture experiments and the simulation models, because the motion capture system operates in Cartesian space while the simulation models are defined in joint space. This paper bridges that gap by presenting a methodology that maps the joint angles of motion capture experiments to simulation models so that the model reproduces the same posture. The posture reconstruction method is an optimization-based approach in which the cost function is a constant and the constraints require that (1) the distances between the simulation model joint centers and the corresponding experimental subject joint centers are equal to zero, and (2) all joint angles lie within their joint limits. Examples are used to demonstrate the effectiveness of the proposed method.
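
The formulation in the abstract (constant cost, joint-center-matching equality constraints, joint-limit bounds) can be illustrated with a minimal optimization sketch. The snippet below is an assumption-laden example, not the authors' implementation: it uses a planar three-link chain with made-up segment lengths, joint limits, and captured joint-center positions, and solves the feasibility problem with SciPy's SLSQP solver.

```python
# A minimal sketch of the optimization formulation described in the abstract:
# a constant cost function, equality constraints driving the model's joint
# centers onto the captured joint centers, and bounds enforcing joint limits.
# The planar 3-link chain, link lengths, limits, and target positions are
# illustrative assumptions only.

import numpy as np
from scipy.optimize import minimize

LINK_LENGTHS = np.array([0.30, 0.28, 0.25])   # assumed segment lengths (m)
JOINT_LIMITS = [(-np.pi / 2, np.pi / 2)] * 3  # assumed joint limits (rad)

def joint_centers(q):
    """Forward kinematics: positions of the distal joint centers of a
    planar serial chain rooted at the origin."""
    centers = []
    angle, pos = 0.0, np.zeros(2)
    for length, qi in zip(LINK_LENGTHS, q):
        angle += qi
        pos = pos + length * np.array([np.cos(angle), np.sin(angle)])
        centers.append(pos)
    return np.concatenate(centers)

# Joint centers "measured" by the motion capture system (synthetic values
# generated from a known posture so the problem is feasible).
captured_centers = joint_centers(np.array([0.4, -0.2, 0.6]))

# (1) Constant cost function: the solver only needs a feasible posture.
cost = lambda q: 0.0

# (2) Equality constraints: model joint centers coincide with captured ones.
match_centers = {"type": "eq",
                 "fun": lambda q: joint_centers(q) - captured_centers}

result = minimize(cost, x0=np.zeros(3), method="SLSQP",
                  bounds=JOINT_LIMITS, constraints=[match_centers])

print("reconstructed joint angles (rad):", result.x)
```

Because the objective is constant, the solver is effectively searching for any posture that satisfies the joint-center and joint-limit constraints; a real skeletal model would replace the toy forward kinematics with the simulation model's own kinematic chain.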