Multi-component models for object detection

  • Authors:
  • Chunhui Gu; Pablo Arbeláez; Yuanqing Lin; Kai Yu; Jitendra Malik

  • Affiliations:
  • Google Inc., Mountain View, CA; UC Berkeley, Berkeley, CA; NEC Labs America, Cupertino, CA; Baidu Inc., Beijing, China; UC Berkeley, Berkeley, CA

  • Venue:
  • ECCV'12: Proceedings of the 12th European Conference on Computer Vision, Part IV
  • Year:
  • 2012


Abstract

In this paper, we propose a multi-component approach for object detection. Rather than attempting to represent an object category with a monolithic model, or pre-defining a reduced set of aspects, we form visual clusters from the data that are tight in appearance and configuration spaces. We train an individual classifier for each component, and then learn a second classifier that operates at the category level by aggregating responses from multiple components. To reduce computational cost during detection, we adopt the idea of object window selection; our segmentation-based selection mechanism produces fewer than 500 windows per image while preserving high object recall. Compared to the leading methods on the challenging PASCAL VOC 2010 dataset, our multi-component approach obtains highly competitive results. Furthermore, unlike monolithic detection methods, our approach allows the transfer of finer-grained semantic information from the components, such as keypoint locations and segmentation masks.
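The sketch below illustrates the two-stage scoring structure the abstract describes: per-component classifiers whose responses on candidate windows are aggregated by a second, category-level classifier. It is a minimal illustration, not the authors' implementation; the number of components, the use of linear SVMs, the feature representation, and all function names are assumptions made for the example.

```python
# Hypothetical sketch of two-stage multi-component scoring (not the paper's code).
# Assumes window features are already extracted as NumPy arrays.
import numpy as np
from sklearn.svm import LinearSVC

N_COMPONENTS = 5  # assumed number of appearance/configuration clusters


def train_component_classifiers(features_per_component, labels_per_component):
    """Train one binary classifier per visual component (cluster)."""
    classifiers = []
    for X, y in zip(features_per_component, labels_per_component):
        clf = LinearSVC(C=1.0)
        clf.fit(X, y)
        classifiers.append(clf)
    return classifiers


def component_responses(classifiers, window_features):
    """Collect each component's confidence score for every candidate window."""
    scores = [clf.decision_function(window_features) for clf in classifiers]
    return np.stack(scores, axis=1)  # shape: (n_windows, n_components)


def train_category_classifier(classifiers, window_features, window_labels):
    """Second-stage classifier operating on aggregated component responses."""
    responses = component_responses(classifiers, window_features)
    category_clf = LinearSVC(C=1.0)
    category_clf.fit(responses, window_labels)
    return category_clf


def detect(classifiers, category_clf, window_features):
    """Score candidate windows (e.g. fewer than 500 per image) at the category level."""
    responses = component_responses(classifiers, window_features)
    return category_clf.decision_function(responses)
```

In this reading, the second-stage classifier learns how much to trust each component for a given window, which is one way the component-level structure can be aggregated into a single category-level detection score.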