Towards scalable view-invariant gait recognition: multilinear analysis for gait

  • Authors:
  • Chan-Su Lee; Ahmed Elgammal

  • Affiliations:
  • Department of Computer Science, Rutgers University, Piscataway, NJ; Department of Computer Science, Rutgers University, Piscataway, NJ

  • Venue:
  • AVBPA'05 Proceedings of the 5th international conference on Audio- and Video-Based Biometric Person Authentication
  • Year:
  • 2005


Abstract

In this paper we introduce a novel approach for learning a view-invariant gait representation that does not require synthesizing particular views or any camera calibration. Given walking sequences captured from multiple views for multiple people, we fit a multilinear generative model using higher-order singular value decomposition, which decomposes the data into view factors, body-configuration factors, and gait-style factors. The gait-style factor is a view-invariant, time-invariant, and speed-invariant gait signature that can then be used for recognition. In the recognition phase, a new walking cycle of an unknown person in an unknown view is automatically aligned to the learned model, and an iterative procedure is then used to solve for both the gait-style parameters and the view. The proposed framework scales gracefully: a new person can be added to the already learned model even if only a single cycle from a single view is available.
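The two core steps the abstract describes are a multilinear (HOSVD) decomposition of the training data and an iterative solve for style and view at recognition time. Below is a minimal NumPy sketch of what such a pipeline could look like. The tensor layout (style x view x body-configuration), the function names `unfold`, `hosvd`, and `estimate_style_and_view`, and the alternating least-squares update are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def hosvd(tensor):
    """Higher-order SVD: a core tensor plus one orthonormal factor matrix per mode.

    For a (styles x views x body-configuration) data tensor, the factors play
    the roles of gait-style, view, and body-configuration bases."""
    factors = [np.linalg.svd(unfold(tensor, m), full_matrices=False)[0]
               for m in range(tensor.ndim)]
    core = tensor
    for m, U in enumerate(factors):
        # Contract mode m of the core with U^T to project onto the new basis.
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, m, 0), axes=1), 0, m)
    return core, factors

def estimate_style_and_view(y, core, B, n_iter=20):
    """Alternating least-squares estimate of style and view coefficients for a
    new body-configuration vector y: the view is held fixed while the style is
    solved, and vice versa (a stand-in for the paper's iterative step)."""
    r_s, r_v, _ = core.shape
    s = np.ones(r_s) / np.sqrt(r_s)        # neutral style initialisation
    v = np.ones(r_v) / np.sqrt(r_v)        # neutral view initialisation
    for _ in range(n_iter):
        # Fix the view, solve the linear system for the style coefficients.
        A_s = B @ np.tensordot(core, v, axes=([1], [0])).T   # (d_body, r_s)
        s, *_ = np.linalg.lstsq(A_s, y, rcond=None)
        # Fix the style, solve for the view coefficients.
        A_v = B @ np.tensordot(core, s, axes=([0], [0])).T   # (d_body, r_v)
        v, *_ = np.linalg.lstsq(A_v, y, rcond=None)
    return s, v

# Synthetic example: 10 people, 4 views, 60-dimensional body-configuration
# vectors per aligned walking cycle (random data stands in for real cycles).
D = np.random.rand(10, 4, 60)
core, (S, V, B) = hosvd(D)
style, view = estimate_style_and_view(D[3, 2], core, B)
```

In the actual system the body-configuration vectors would come from aligned walking cycles rather than random data, and the recovered style coefficients would be matched against the enrolled gait-style signatures for recognition.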