Automatic 3D object segmentation in multiple views using volumetric graph-cuts

  • Authors:
  • N. D. F. Campbell, G. Vogiatzis, C. Hernández, R. Cipolla

  • Affiliations:
  • University of Cambridge, Department of Engineering, Cambridge, Cambridgeshire CB2 1PZ, UK (Campbell, Cipolla); Toshiba Research Europe, 208 Cambridge Science Park, Milton Road, Cambridge, CB4 0GZ, UK (Vogiatzis, Hernández)

  • Venue:
  • Image and Vision Computing
  • Year:
  • 2010

Abstract

We propose an algorithm for automatically obtaining a segmentation of a rigid object in a sequence of images that are calibrated for camera pose and intrinsic parameters. Until recently, the best segmentation results have been obtained by interactive methods that require manual labelling of image regions. Our method requires no user input but instead relies on the camera fixating on the object of interest during the sequence. We begin by learning a model of the object's colour from the image pixels around the fixation points. We then extract image edges and combine these with the object colour information in a volumetric binary MRF model. The globally optimal segmentation of 3D space is obtained by a graph-cut optimisation. From this segmentation an improved colour model is extracted and the whole process is iterated until convergence. Our first finding is that the fixation constraint, which requires that the object of interest is more or less central in the image, is enough to determine what to segment and to initialise an automatic segmentation process. Second, we find that by performing a single segmentation in 3D, we implicitly exploit a 3D rigidity constraint, expressed as silhouette coherency, which significantly improves silhouette quality over independent 2D segmentations. We demonstrate the validity of our approach by providing segmentation results on real sequences.
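The alternation described in the abstract — segment with a graph cut, re-estimate the colour model from the result, repeat until convergence — can be sketched on a toy example. This is an illustrative simplification, not the authors' implementation: it uses 1D grey values in place of a 3D voxel colour MRF, squared distance to two means in place of a learned colour model, and a plain Edmonds-Karp max-flow in place of an optimised graph-cut solver. All function names here are hypothetical.

```python
from collections import deque

def max_flow_cut(cap, source, sink):
    """Edmonds-Karp max-flow on a dense capacity matrix; returns the
    source side of the minimum s-t cut as a boolean list."""
    n = len(cap)
    flow = [[0.0] * n for _ in range(n)]
    while True:
        # BFS for an augmenting path with positive residual capacity.
        parent = [-1] * n
        parent[source] = source
        queue = deque([source])
        while queue and parent[sink] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[sink] == -1:
            break  # no augmenting path left: flow is maximal
        # Push the bottleneck residual along the path found.
        bottleneck, v = float("inf"), sink
        while v != source:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v] - flow[u][v])
            v = u
        v = sink
        while v != source:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
    # Min cut = nodes still reachable from the source in the residual graph.
    seen = [False] * n
    seen[source] = True
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in range(n):
            if not seen[v] and cap[u][v] - flow[u][v] > 0:
                seen[v] = True
                queue.append(v)
    return seen

def segment(values, fg_mean, bg_mean, smoothness=0.1):
    """Globally optimal binary MRF labelling of a 1D signal by one graph cut.
    Unary terms are squared distances to the two colour-model means;
    pairwise terms penalise label changes between neighbours."""
    n = len(values)
    source, sink = 0, n + 1                       # pixel i is graph node i + 1
    cap = [[0.0] * (n + 2) for _ in range(n + 2)]
    for i, v in enumerate(values):
        cap[source][i + 1] = (v - bg_mean) ** 2   # cost of labelling i background
        cap[i + 1][sink] = (v - fg_mean) ** 2     # cost of labelling i object
    for i in range(n - 1):                        # smoothness between neighbours
        cap[i + 1][i + 2] = smoothness
        cap[i + 2][i + 1] = smoothness
    seen = max_flow_cut(cap, source, sink)
    return [seen[i + 1] for i in range(n)]        # True = object side of the cut

def iterate_segmentation(values, fixation_mean, iters=5):
    """Alternate between segmenting and re-estimating the colour model,
    starting from an object mean sampled around the 'fixation point'."""
    fg, bg = fixation_mean, sum(values) / len(values)
    labels = [False] * len(values)
    for _ in range(iters):
        labels = segment(values, fg, bg)
        fg_vals = [v for v, obj in zip(values, labels) if obj]
        bg_vals = [v for v, obj in zip(values, labels) if not obj]
        if fg_vals:
            fg = sum(fg_vals) / len(fg_vals)
        if bg_vals:
            bg = sum(bg_vals) / len(bg_vals)
    return labels
```

For example, `iterate_segmentation([0.0, 0.0, 1.0, 1.0, 0.0], 0.9)` starts from a rough object mean of 0.9 (as if sampled near the fixation point), labels the two bright samples as object, then tightens both means from the resulting labelling. The paper's method applies this same loop to a voxel grid, so a single cut enforces silhouette coherency across all views at once.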