View-invariant human feature extraction for video-surveillance applications

  • Authors:
  • Gregory Rogez; J. J. Guerrero; Carlos Orrite

  • Affiliations:
  • Computer Vision Lab - I3A, University of Zaragoza, Spain; Robotics, Perception and Real Time Group, I3A, University of Zaragoza, Spain; Computer Vision Lab - I3A, University of Zaragoza, Spain

  • Venue:
  • AVSS '07 Proceedings of the 2007 IEEE Conference on Advanced Video and Signal Based Surveillance
  • Year:
  • 2007

Abstract

We present a view-invariant human feature extractor (shape+pose) for pedestrian monitoring in man-made environments. Our approach consists of two steps. First, a series of view-based models is built offline by discretizing the viewpoint with respect to the camera into several training views. Then, during the online stage, the homography that relates the image points to the closest and most suitable training plane is computed from the dominant 3D directions of the scene. The input image is warped to this training view and processed with the corresponding view-based model. After model fitting, the inverse transformation is applied to the resulting human features, yielding a segmented silhouette and a 2D pose estimate in the original input image. Experimental results demonstrate that the system performs well, independently of the direction of motion, when applied to monocular sequences with strong perspective effects.