People tracking and segmentation using spatiotemporal shape constraints

  • Authors:
  • Junqiu Wang; Yasushi Makihara; Yasushi Yagi

  • Affiliations:
  • Osaka University, Ibaraki, Japan (all authors)

  • Venue:
  • VNBA '08: Proceedings of the 1st ACM Workshop on Vision Networks for Behavior Analysis
  • Year:
  • 2008

Abstract

We present an efficient people tracking and segmentation algorithm for gait recognition. Although most existing gait recognition algorithms assume that people have already been tracked and that silhouettes are available for gait classification, tracking and segmentation are very difficult, especially for articulated objects such as human beings. We improve the performance of tracking and segmentation based on spatiotemporal shape constraints. First, we track people using an adaptive mean-shift tracker, which produces initial results consisting of bounding boxes and foreground likelihood images. These initial results are generally not accurate enough to be used directly for gait recognition. We refine them by matching against silhouette template sequences in batch mode to find the optimal silhouette-based gait paths corresponding to the input. Since this matching is computationally expensive, we propose a novel, efficient distance computation method to accelerate the spatiotemporal silhouette matching. The spatiotemporal shape priors are then embedded into the Min-Cut algorithm to segment out the people. Experiments on indoor and outdoor sequences demonstrate the effectiveness of the proposed approach.
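
To make the first stage concrete, below is a minimal sketch of mean-shift tracking over a color-histogram back-projection, using OpenCV's cv2.meanShift; the back-projection plays the role of the foreground likelihood image mentioned in the abstract. This is not the authors' adaptive tracker: the model adaptation, spatiotemporal template matching, and Min-Cut refinement from the paper are not reproduced here, and the histogram bins and termination criteria are illustrative assumptions.

```python
import cv2


def track_person(frames, init_box):
    """Track a person with mean shift over a hue-histogram
    back-projection; yield (bounding box, likelihood image) per frame.

    frames:   iterable of BGR images (the first frame defines the model)
    init_box: (x, y, w, h) bounding box of the person in the first frame
    """
    x, y, w, h = init_box
    frames = iter(frames)
    first = next(frames)

    # Hue histogram of the initial region serves as the appearance model.
    hsv_roi = cv2.cvtColor(first[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv_roi], [0], None, [32], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

    # Stop after 10 mean-shift iterations or a 1-pixel shift.
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)

    box = init_box
    for frame in frames:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # The back-projection acts as a per-pixel foreground likelihood.
        likelihood = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
        _, box = cv2.meanShift(likelihood, box, term)
        yield box, likelihood
```

A downstream step in the spirit of the paper would collect these boxes and likelihood images over a window of frames and refine them jointly against silhouette template sequences, rather than trusting each per-frame box on its own.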