Learning and incorporating top-down cues in image segmentation

  • Authors:
  • Xuming He; Richard S. Zemel; Debajyoti Ray

  • Affiliations:
  • Department of Computer Science, University of Toronto (all authors)

  • Venue:
  • ECCV'06: Proceedings of the 9th European Conference on Computer Vision, Part I
  • Year:
  • 2006

Abstract

Bottom-up approaches, which rely mainly on continuity principles, are often insufficient to form accurate segments in natural images. To improve performance, recent methods have begun to incorporate top-down cues, or object information, into segmentation. In this paper, we propose an approach that incorporates category-based information into segmentation by formulating it as an image labelling problem. Our approach exploits bottom-up image cues to create an over-segmented representation of an image. The segments are then merged by assigning labels that correspond to object categories. The model is trained on a database of images and is designed to be modular: it learns a number of image contexts, which simplify training and extend the range of object classes and image database sizes that the system can handle. The learning method estimates model parameters by maximizing a lower bound on the data likelihood. We evaluate performance on three real-world image databases, comparing our system to a standard classifier, other conditional random field approaches, and a bottom-up segmentation method.
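
The abstract describes a two-stage pipeline: a bottom-up over-segmentation, followed by assigning an object-category label to each segment so that segments sharing a label merge into object regions. The sketch below illustrates only that generic structure; the superpixel method (SLIC), the per-segment feature (mean colour), and the plug-in classifier are hypothetical stand-ins for illustration, not the paper's learned model or its image contexts.

```python
# A minimal sketch of the generic "over-segment, then label the segments"
# pipeline outlined in the abstract. This is NOT the paper's model: the
# superpixel method (SLIC), the per-segment feature (mean colour), and the
# plug-in classifier are stand-in assumptions used only for illustration.
import numpy as np
from skimage.segmentation import slic


def oversegment(image, n_segments=200):
    """Bottom-up over-segmentation based on continuity cues (here, SLIC)."""
    return slic(image, n_segments=n_segments, compactness=10, start_label=0)


def segment_features(image, segments):
    """Compute one feature vector per segment (mean colour as a placeholder)."""
    n = segments.max() + 1
    return np.stack([image[segments == s].mean(axis=0) for s in range(n)])


def label_image(image, classifier, n_segments=200):
    """Assign an object-category label to every pixel via its segment.

    `classifier` is any fitted object with a predict() method mapping a
    feature vector to a category index (a stand-in for the learned model).
    """
    segments = oversegment(image, n_segments)
    feats = segment_features(image, segments)
    seg_labels = np.asarray(classifier.predict(feats))  # one label per segment
    return seg_labels[segments]                          # broadcast to pixels
```

Labelling at the segment level rather than the pixel level is what lets bottom-up continuity cues do the fine-grained work, while the category information only has to decide among a few hundred segments per image.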