Multiview fusion for canonical view generation based on homography constraints

  • Authors:
  • Ambrish Tyagi, James W. Davis, Mark Keck

  • Affiliations:
  • Ohio State University

  • Venue:
  • Proceedings of the 4th ACM international workshop on Video surveillance and sensor networks
  • Year:
  • 2006


Abstract

Activity and gait recognition are among the applications that require view-specific input. In a real surveillance scenario it is impractical to assume that the desired canonical view will always be available. We present a framework to generate the canonical view of any translating object in a scene monitored by multiple cameras. The method is capable of recovering this view even when none of the cameras observes it individually. In this two-step process, the camera and scene geometry are first used to identify the sagittal plane of the object, which defines the canonical view. Next, each original view is warped to the canonical view through planar homographies learned from geometric constraints. The warped images are then combined by evidence fusion to recover a shape-energy map, from which the final binary silhouette of the object's shape is obtained. Results presented for various indoor and outdoor sequences demonstrate the efficacy of this method in generating the shape of the object as seen from the canonical view, while resolving occlusions.
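The warp-and-fuse stage described in the abstract can be sketched as follows. This is not the authors' implementation: the function names, the use of inverse warping with nearest-neighbor sampling, and the simple averaging-plus-threshold fusion rule are illustrative assumptions; the paper's actual fusion scheme and homography estimation are not reproduced here. The sketch assumes each camera's binary silhouette and a 3x3 homography mapping that view onto the canonical-view image plane are already available.

```python
import numpy as np

def warp_to_canonical(silhouette, H, out_shape):
    """Inverse-warp a binary silhouette into the canonical view.
    H is a 3x3 homography mapping source-view pixels to canonical-view pixels.
    Nearest-neighbor sampling is an illustrative simplification."""
    h, w = out_shape
    # Homogeneous coordinates of every canonical-view pixel.
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    # Pull each canonical pixel back into the source view via H^-1.
    src = np.linalg.inv(H) @ pts
    src /= src[2]
    x = np.round(src[0]).astype(int)
    y = np.round(src[1]).astype(int)
    valid = (x >= 0) & (x < silhouette.shape[1]) & \
            (y >= 0) & (y < silhouette.shape[0])
    out = np.zeros(h * w)
    out[valid] = silhouette[y[valid], x[valid]]
    return out.reshape(h, w)

def fuse_views(silhouettes, homographies, out_shape, threshold=0.5):
    """Combine warped views into a shape-energy map, then threshold it
    to a binary silhouette (averaging is a stand-in for the paper's
    evidence-fusion rule)."""
    energy = np.mean([warp_to_canonical(s, H, out_shape)
                      for s, H in zip(silhouettes, homographies)], axis=0)
    return energy, (energy >= threshold).astype(np.uint8)
```

In this toy form, pixels supported by many views accumulate high energy, so occlusions visible in only a few views are suppressed by the threshold, mirroring the occlusion-resolving behavior the abstract claims.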