Learning Exemplar-Based Categorization for the Detection of Multi-View Multi-Pose Objects

  • Authors:
  • Ying Shan; Feng Han; Harpreet S. Sawhney; Rakesh Kumar

  • Affiliations:
  • Sarnoff Corporation, 201 Washington Road, Princeton, NJ (all authors)

  • Venue:
  • CVPR '06 Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Volume 2
  • Year:
  • 2006

Abstract

This paper proposes a novel approach for multi-view, multi-pose object detection using discriminative shape-based exemplars. The key idea is motivated by the repeated observation that manually clustering multi-view, multi-pose training data into different categories and then combining separately trained two-class classifiers greatly improves detection performance. A novel computational framework is proposed that unifies the processes of categorization, training an individual classifier for each intra-class category, and training a strong classifier that combines the individual classifiers. These processes share a single objective function that is optimized using two nested AdaBoost loops: the outer AdaBoost loop selects discriminative exemplars, and the inner AdaBoost loop selects discriminative features on the selected exemplars. The proposed approach replaces the time-consuming manual process of exemplar selection and addresses the labeling ambiguity inherent in that process. Moreover, the approach fully complies with the standard AdaBoost-based object detection framework in terms of real-time implementation. Experiments on multi-view, multi-pose people and vehicle data demonstrate the efficacy of the proposed approach.
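
The sketch below illustrates the nested-AdaBoost structure described in the abstract: an outer boosting loop that selects exemplar-based weak classifiers, and an inner boosting loop that selects discriminative features computed relative to each candidate exemplar. It is a minimal illustration under assumptions, not the paper's method: the feature map (per-dimension similarity to an exemplar) stands in for the paper's shape-based exemplar features, and all function and parameter names are hypothetical.

```python
"""Minimal nested-AdaBoost sketch: outer loop picks exemplars,
inner loop picks features relative to each exemplar. Illustrative
assumptions only; not the paper's shape-based features."""
import numpy as np


def train_stump(F, y, w):
    """Best weighted decision stump over feature matrix F (n_samples x n_feats)."""
    best = None
    for j in range(F.shape[1]):
        for thr in np.unique(F[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (F[:, j] - thr) > 0, 1, -1)
                err = np.sum(w[pred != y])
                if best is None or err < best[0]:
                    best = (err, j, thr, pol)
    return best  # (weighted_error, feature_index, threshold, polarity)


def inner_adaboost(F, y, w_outer, n_rounds=5):
    """Inner loop: boost decision stumps (feature selection) on exemplar features."""
    w = w_outer / w_outer.sum()
    stumps, alphas = [], []
    for _ in range(n_rounds):
        err, j, thr, pol = train_stump(F, y, w)
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(pol * (F[:, j] - thr) > 0, 1, -1)
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        stumps.append((j, thr, pol))
        alphas.append(alpha)

    def classify(Fnew):
        s = np.zeros(len(Fnew))
        for (j, thr, pol), a in zip(stumps, alphas):
            s += a * np.where(pol * (Fnew[:, j] - thr) > 0, 1, -1)
        return np.sign(s)
    return classify


def exemplar_features(X, exemplar):
    """Assumed stand-in feature map: per-dimension similarity to the exemplar."""
    return -np.abs(X - exemplar)


def outer_adaboost(X, y, n_exemplars=3, inner_rounds=5):
    """Outer loop: each boosting round selects one exemplar-based weak classifier."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    candidates = X[y == 1]            # candidate exemplars are positive samples
    strong = []                       # list of (alpha, exemplar, classifier)
    for _ in range(n_exemplars):
        best = None
        for ex in candidates:
            F = exemplar_features(X, ex)
            clf = inner_adaboost(F, y, w, inner_rounds)
            err = np.sum(w[clf(F) != y])
            if best is None or err < best[0]:
                best = (err, ex, clf)
        err, ex, clf = best
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = clf(exemplar_features(X, ex))
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        strong.append((alpha, ex, clf))

    def predict(Xnew):
        s = np.zeros(len(Xnew))
        for alpha, ex, clf in strong:
            s += alpha * clf(exemplar_features(Xnew, ex))
        return np.sign(s)
    return predict


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: two positive "poses" (clusters) vs. one negative cluster.
    pos = np.vstack([rng.normal([2, 2], 0.3, (20, 2)),
                     rng.normal([-2, 2], 0.3, (20, 2))])
    neg = rng.normal([0, -1], 0.5, (40, 2))
    X = np.vstack([pos, neg])
    y = np.hstack([np.ones(40), -np.ones(40)])
    detector = outer_adaboost(X, y)
    print("training accuracy:", np.mean(detector(X) == y))
```

In this toy setting, the outer loop tends to pick one exemplar per positive cluster, mirroring the abstract's point that exemplar selection implicitly categorizes multi-view, multi-pose training data without manual clustering.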