GrabcutD: improved grabcut using depth information

  • Authors:
  • Karthikeyan Vaiapury, Anil Aksay, Ebroul Izquierdo

  • Affiliations:
  • Queen Mary University of London, London, United Kingdom (all authors)

  • Venue:
  • Proceedings of the 2010 ACM workshop on Surreal media and virtual cloning
  • Year:
  • 2010

Abstract

Popular state-of-the-art segmentation methods such as GrabCut include a matting technique to calculate the alpha values at the boundaries of segmented regions. Conventional GrabCut relies only on color information to achieve segmentation. Recently, there have been attempts to improve GrabCut using motion in video sequences. However, in stereo or multi-view analysis, there is additional information that could also be used to improve segmentation. Clearly, depth-based approaches carry the potential discriminative power of ascertaining whether an object is nearer or farther. In this work, we propose and evaluate a GrabCut segmentation technique based on a combination of color and depth information. We show the usefulness of the approach when stereo information is available and evaluate it on standard datasets against state-of-the-art results.
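The abstract does not give the fusion rule, but the core idea of combining a color cue with a depth cue can be illustrated with a minimal sketch. The function below is a hypothetical illustration, not the authors' method: it blends a per-pixel color-based foreground probability (such as GrabCut's GMMs would provide) with a depth-based probability derived from assumed mean foreground/background depths, using an assumed weighting parameter `w_depth`.

```python
import numpy as np

def depth_augmented_fg_prob(color_fg_prob, depth, depth_fg_mean, depth_bg_mean,
                            w_depth=0.5):
    """Blend a color-based foreground probability with a depth cue.

    color_fg_prob: per-pixel P(fg | color) in [0, 1], e.g. from color GMMs
    depth: per-pixel depth values (smaller = nearer)
    depth_fg_mean / depth_bg_mean: assumed mean depths of fg and bg
    w_depth: assumed weight given to the depth cue (hypothetical parameter)
    """
    # Depth likelihood: a pixel whose depth is close to the foreground
    # mean (and far from the background mean) gets a high fg probability.
    d_fg = np.abs(depth - depth_fg_mean)
    d_bg = np.abs(depth - depth_bg_mean)
    depth_fg_prob = d_bg / (d_fg + d_bg + 1e-9)
    # Weighted blend of the two cues.
    return (1 - w_depth) * color_fg_prob + w_depth * depth_fg_prob

# Toy example: top row is near (foreground), bottom row is far (background),
# while the color cue alone is ambiguous.
depth = np.array([[1.0, 1.1], [4.8, 5.2]])
color_p = np.array([[0.6, 0.4], [0.5, 0.3]])
fused = depth_augmented_fg_prob(color_p, depth, depth_fg_mean=1.0,
                                depth_bg_mean=5.0)
labels = fused > 0.5  # hard segmentation; top row fg, bottom row bg
```

In a GrabCut-style pipeline this fused probability would replace the purely color-based unary term before graph-cut optimization; the point of the sketch is only that depth disambiguates pixels whose color cue is weak.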