Depth-supported real-time video segmentation with the Kinect

  • Authors:
  • Alexey Abramov; Karl Pauwels; Jeremie Papon; Florentin Wörgötter; Babette Dellen

  • Affiliations:
  • Georg-August University, BCCN Göttingen, III Physikalisches Institut, Germany; Computer Architecture and Technology Department, University of Granada, Spain; Georg-August University, BCCN Göttingen, III Physikalisches Institut, Germany; Georg-August University, BCCN Göttingen, III Physikalisches Institut, Germany; Institut de Robòtica i Informàtica Industrial (CSIC-UPC), Barcelona, Spain

  • Venue:
  • WACV '12 Proceedings of the 2012 IEEE Workshop on the Applications of Computer Vision
  • Year:
  • 2012


Abstract

We present a real-time technique for the spatiotemporal segmentation of color/depth movies. Images are segmented using a parallel Metropolis algorithm implemented on a GPU, utilizing both color and depth information acquired with the Microsoft Kinect. Segments represent the equilibrium states of a Potts model; tracking of segments is achieved by warping the obtained segment labels to the next frame using real-time optical flow, which reduces the number of iterations the Metropolis method requires to reach the new equilibrium state. By including depth information in the framework, true object boundaries can be found more easily, which also improves the temporal coherence of the method. The algorithm has been tested on medium-resolution videos showing human manipulations of objects. The framework provides an inexpensive front end for the visual preprocessing of videos in industrial settings and robot labs, and can potentially be used in various applications.
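To illustrate the core idea, the following is a minimal, single-threaded sketch of Metropolis relaxation of a Potts model for image segmentation. It is a hypothetical simplification, not the paper's GPU implementation: it uses a grayscale image instead of color/depth channels, and the coupling function, parameter values (`beta`, `n_labels`), and update schedule are illustrative assumptions.

```python
import numpy as np

def potts_metropolis(image, n_labels=8, n_iters=20, beta=2.0, seed=0):
    """image: (H, W) grayscale array in [0, 1]; returns an (H, W) label map.

    Illustrative sketch only: couplings and parameters are assumptions,
    not taken from the paper.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape
    labels = rng.integers(0, n_labels, size=(h, w))

    def energy_at(y, x, lab):
        # Potts energy contribution of pixel (y, x) holding label `lab`:
        # agreeing with a similar-looking 4-neighbor lowers the energy.
        e = 0.0
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                # coupling strength, strong for visually similar pixels
                j = np.exp(-abs(image[y, x] - image[ny, nx]) / 0.1)
                if labels[ny, nx] == lab:
                    e -= j
        return e

    for _ in range(n_iters):
        for y in range(h):
            for x in range(w):
                old = labels[y, x]
                new = rng.integers(0, n_labels)
                d_e = energy_at(y, x, new) - energy_at(y, x, old)
                # Metropolis acceptance rule: always accept downhill moves,
                # accept uphill moves with probability exp(-beta * dE)
                if d_e <= 0 or rng.random() < np.exp(-beta * d_e):
                    labels[y, x] = new
    return labels
```

In the paper's setting, the previous frame's labels (warped by optical flow) would seed `labels` instead of the random initialization used here, so far fewer sweeps are needed to reach the new equilibrium.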