Cross-view and multi-view gait recognitions based on view transformation model using multi-layer perceptron

  • Authors:
  • Worapan Kusakunniran;Qiang Wu;Jian Zhang;Hongdong Li

  • Affiliations:
  • School of Computer Science and Engineering, University of New South Wales, Australia and NICTA, National ICT Australia, Australia;iNEXT, UTS Research Centre for Innovation in IT Services and Applications, University of Technology Sydney, Australia;School of Computer Science and Engineering, University of New South Wales, Australia and NICTA, National ICT Australia, Australia;Research School of Information Sciences and Engineering, Australian National University, Australia

  • Venue:
  • Pattern Recognition Letters
  • Year:
  • 2012

Abstract

Gait has been shown to be an efficient biometric feature for human identification at a distance. However, the performance of gait recognition can be affected by view variation, which makes cross-view gait recognition difficult. A novel method is proposed to address this difficulty using a view transformation model (VTM). The VTM is constructed through regression, adopting a multi-layer perceptron (MLP) as the regression tool: it estimates the gait feature under one view from a well-selected region of interest (ROI) on the gait feature under another view. Trained VTMs can therefore normalize gait features across views into the same view before gait similarity is measured. Moreover, this paper proposes a new multi-view gait recognition method which estimates the gait feature under one view using selected gait features from several other views. Extensive experimental results demonstrate that the proposed method significantly outperforms baseline methods in the literature for both cross-view and multi-view gait recognition. In particular, in our experiments, average accuracies of 99%, 98% and 93% are achieved for multi-view gait recognition using 5 cameras, 4 cameras and 3 cameras respectively.
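The core idea of the abstract can be sketched in code: learn a regression from (an ROI of) gait features under a source view to gait features under a target view, then transform probe features into the gallery's view before matching. The sketch below is illustrative only, assuming synthetic features, a stand-in ROI, and scikit-learn's `MLPRegressor`; it is not the authors' implementation.

```python
# Hedged sketch of a view transformation model (VTM) trained as an MLP
# regression, per the abstract. Feature dimensions, the ROI choice, and
# the synthetic view relationship are all assumptions for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic gait features: N training subjects, D dimensions per view.
N, D = 200, 32
source_view = rng.standard_normal((N, D))      # e.g. features at view A
# Assume the target view is an unknown smooth function of the source view.
W = rng.standard_normal((D, D)) * 0.1
target_view = np.tanh(source_view @ W)         # e.g. features at view B

# A "well-selected ROI" would pick the most informative dimensions of the
# source feature; here the first half serves as a simple stand-in.
roi = slice(0, D // 2)

# Train the VTM: regress the target-view feature from the ROI of the
# source-view feature.
vtm = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
vtm.fit(source_view[:, roi], target_view)

# Normalize a probe feature from the source view into the target view,
# so similarity (e.g. Euclidean distance) is measured in a single view.
probe = source_view[:1]
estimated = vtm.predict(probe[:, roi])
distance = np.linalg.norm(estimated - target_view[:1])
```

The multi-view extension described in the abstract would follow the same pattern, except that the regressor's input concatenates selected features from several other views rather than an ROI from a single view.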