Person identification using full-body motion and anthropometric biometrics from Kinect videos

  • Authors:
  • Brent C. Munsell;Andrew Temlyakov;Chengzheng Qu;Song Wang

  • Affiliations:
  • Claflin University, Orangeburg, SC;University of South Carolina, Columbia, SC;University of South Carolina, Columbia, SC;University of South Carolina, Columbia, SC

  • Venue:
  • ECCV'12 Proceedings of the 12th international conference on Computer Vision - Volume Part III
  • Year:
  • 2012

Abstract

For person identification, motion and anthropometric biometrics are known to be less sensitive to photometric differences and more robust to obstructions such as glasses, hair, and hats. Existing gait-based methods rely on the accurate identification and acquisition of the gait cycle. This typically requires the subject to repeatedly perform a single action in a costly motion-capture facility, or the use of 2D videos with simple backgrounds in which the person can be easily segmented and tracked. These requirements limit the use of gait-based biometrics in real scenarios that may involve a variety of actions with varying levels of complexity. We propose a new person identification method that uses motion and anthropometric biometrics acquired from an inexpensive Kinect RGBD sensor. Different from previous gait-based methods, we use all the body joints found by the Kinect SDK to analyze motion patterns and anthropometric features over the entire track sequence. We show that the proposed method can identify people performing different actions (e.g., walking and running) with varying levels of complexity. When compared to a state-of-the-art gait-based method that uses depth images produced by the Kinect sensor, the proposed method demonstrated better person identification performance.
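To illustrate the general idea of extracting anthropometric and motion cues from Kinect skeleton tracks, the sketch below computes mean bone lengths and simple per-joint displacement statistics from a sequence of joint positions. This is a minimal, hypothetical example: the joint names, bone pairs, and feature definitions are assumptions for illustration and are not the paper's exact feature set.

```python
import numpy as np

# Illustrative subset of the 20 joints reported by the Kinect SDK,
# with "bones" defined as adjacent joint pairs (assumed, not the paper's list).
BONES = [
    ("shoulder_center", "head"),
    ("hip_center", "shoulder_center"),
    ("shoulder_left", "elbow_left"),
    ("elbow_left", "wrist_left"),
    ("hip_left", "knee_left"),
    ("knee_left", "ankle_left"),
]

def anthropometric_features(frames):
    """Mean bone lengths over a tracked sequence.

    frames: list of dicts mapping joint name -> (x, y, z) in metres.
    Averaging over frames smooths per-frame depth noise.
    """
    lengths = []
    for a, b in BONES:
        per_frame = [np.linalg.norm(np.subtract(f[a], f[b])) for f in frames]
        lengths.append(np.mean(per_frame))
    return np.array(lengths)

def motion_features(frames):
    """Simple motion descriptor: mean and std of per-frame joint displacement."""
    joints = sorted(frames[0].keys())
    pos = np.array([[f[j] for j in joints] for f in frames])  # shape (T, J, 3)
    disp = np.linalg.norm(np.diff(pos, axis=0), axis=2)       # shape (T-1, J)
    return np.concatenate([disp.mean(axis=0), disp.std(axis=0)])
```

In practice, feature vectors of this kind could be concatenated and passed to a standard classifier (e.g., nearest-neighbor or SVM) for identification; the paper's actual matching procedure may differ.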