TriCoS: a tri-level class-discriminative co-segmentation method for image classification

  • Authors and affiliations:
  • Yuning Chai — Computer Vision Group, ETH Zurich, Switzerland
  • Esa Rahtu — Machine Vision Group, University of Oulu, Finland
  • Victor Lempitsky — Yandex, Russia
  • Luc Van Gool — Computer Vision Group, ETH Zurich, Switzerland
  • Andrew Zisserman — Visual Geometry Group, University of Oxford, United Kingdom

  • Venue:
  • ECCV'12: Proceedings of the 12th European Conference on Computer Vision, Part I
  • Year:
  • 2012


Abstract

The aim of this paper is to leverage foreground segmentation to improve classification performance on weakly annotated datasets – those with no annotation beyond class labels. We introduce TriCoS, a new co-segmentation algorithm that considers all training images jointly and automatically segments out the most class-discriminative foreground in each image. These foreground segmentations are then used to train a classification system. TriCoS solves the co-segmentation problem by minimizing losses at three different levels: the category level, enforcing foreground/background consistency across images of the same category; the image level, enforcing spatial continuity within each image; and the dataset level, enforcing discrimination between classes. In an extensive set of experiments, we evaluate the algorithm on three benchmark datasets: Caltech-UCSD Birds-200-2010, Stanford Dogs, and Oxford Flowers 102. Combined with a modern image classifier, TriCoS outperforms previously published classification methods as well as other co-segmentation methods.
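The tri-level loss described in the abstract can be pictured as one joint energy over all foreground masks. The notation below is an illustrative sketch under assumed symbols (masks S_i, labels y_i, weights lambda), not the paper's own formulation:

```latex
% Illustrative sketch only; symbols S_i (foreground mask of image i),
% y_i (its class label), and the weights \lambda are assumptions.
E(S_1,\dots,S_N) =
  \lambda_{\mathrm{cat}}  \sum_{c} E_{\mathrm{cat}}\bigl(\{S_i : y_i = c\}\bigr)   % category level: masks within class c agree
+ \lambda_{\mathrm{img}}  \sum_{i=1}^{N} E_{\mathrm{img}}(S_i)                     % image level: spatial continuity of each mask
+ \lambda_{\mathrm{data}} \, E_{\mathrm{data}}\bigl(\{S_i\}, \{y_i\}\bigr)         % dataset level: foregrounds discriminate classes
```

Minimizing such an energy jointly over all training images yields the class-discriminative foregrounds that are then fed to the classifier.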