SLEDGE: Sequential Labeling of Image Edges for Boundary Detection

  • Authors:
  • Nadia Payet; Sinisa Todorovic

  • Affiliations:
  • School of EECS, Oregon State University, Kelley Engineering Building, Corvallis, USA (both authors)

  • Venue:
  • International Journal of Computer Vision
  • Year:
  • 2013

Abstract

Our goal is to detect boundaries of objects or surfaces occurring in an arbitrary image. We present a new approach that discovers boundaries by sequential labeling of a given set of image edges. A visited edge is labeled as on or off a boundary, based on the edge's photometric and geometric properties, and on evidence of its perceptual grouping with already identified boundaries. We use both local Gestalt cues (e.g., proximity and good continuation) and the global Helmholtz principle of non-accidental grouping. A new formulation of the Helmholtz principle is specified as the entropy of a layout of image edges. For boundary discovery, we formulate a new policy-iteration algorithm, called SLEDGE. Training of SLEDGE is iterative. In each training image, SLEDGE labels a sequence of edges, which induces loss with respect to the ground truth. These sequences are then used as training examples for learning SLEDGE in the next iteration, such that the total loss is minimized. For extracting the image edges that are input to SLEDGE, we use our new low-level detector, which finds salient pixel sequences that separate distinct textures within the image. On the benchmark Berkeley Segmentation Datasets 300 and 500, our approach proves robust and effective. We outperform the state of the art in both recall and precision for different input sets of image edges.
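To make the sequential-labeling idea in the abstract concrete, the following is a minimal, hypothetical sketch: edges are visited one at a time and labeled on/off a boundary from their own photometric strength, a Gestalt proximity cue against already accepted boundary edges, and a toy entropy-of-layout term standing in for the Helmholtz cue. All names (Edge, proximity, layout_entropy, the weights and threshold) are illustrative assumptions, not the authors' actual SLEDGE features, training procedure, or implementation.

```python
# Hypothetical sketch of sequential edge labeling; not the authors' SLEDGE code.
from dataclasses import dataclass
from typing import List, Tuple
import math


@dataclass
class Edge:
    contrast: float                                  # photometric cue (assumed: gradient strength)
    endpoints: Tuple[Tuple[float, float], Tuple[float, float]]  # geometric layout


def proximity(e1: Edge, e2: Edge) -> float:
    """Gestalt proximity cue: closeness of the two edges' nearest endpoints."""
    d = min(math.dist(p, q) for p in e1.endpoints for q in e2.endpoints)
    return math.exp(-d / 10.0)  # 10.0 is an arbitrary length scale (assumption)


def layout_entropy(edges: List[Edge]) -> float:
    """Toy stand-in for the entropy of an edge layout: entropy of the
    histogram of edge orientations (8 bins over [0, pi))."""
    if not edges:
        return 0.0
    bins = [0] * 8
    for e in edges:
        (x1, y1), (x2, y2) = e.endpoints
        angle = math.atan2(y2 - y1, x2 - x1) % math.pi
        bins[min(int(angle / math.pi * 8), 7)] += 1
    total = sum(bins)
    probs = [b / total for b in bins if b > 0]
    return -sum(p * math.log(p) for p in probs)


def label_edges_sequentially(edges: List[Edge], threshold: float = 0.5) -> List[bool]:
    """Visit edges in order of decreasing contrast; accept an edge as 'on' a
    boundary if its contrast plus grouping evidence exceeds a threshold."""
    boundary: List[Edge] = []
    labels = {}
    for e in sorted(edges, key=lambda e: e.contrast, reverse=True):
        grouping = max((proximity(e, b) for b in boundary), default=0.0)
        # A smaller entropy increase suggests a less "accidental" layout.
        entropy_gain = layout_entropy(boundary) - layout_entropy(boundary + [e])
        score = 0.6 * e.contrast + 0.3 * grouping + 0.1 * entropy_gain  # weights assumed
        on = score > threshold
        labels[id(e)] = on
        if on:
            boundary.append(e)
    return [labels[id(e)] for e in edges]  # labels in the original edge order


if __name__ == "__main__":
    edges = [
        Edge(contrast=0.9, endpoints=((0, 0), (10, 0))),    # strong, long edge
        Edge(contrast=0.8, endpoints=((11, 0), (20, 1))),   # good continuation of the first
        Edge(contrast=0.1, endpoints=((50, 50), (52, 60))), # weak, isolated edge
    ]
    print(label_edges_sequentially(edges))  # e.g., [True, True, False]
```

In the paper itself, the per-edge decision is learned (and trained iteratively via policy iteration against ground-truth boundaries) rather than given by fixed hand-set weights as in this sketch.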