MRF guided anisotropic depth diffusion for Kinect range image enhancement

  • Authors:
  • Karthik Mahesh Varadarajan; Markus Vincze

  • Affiliations:
  • TU Wien, Vienna, Austria; TU Wien, Vienna, Austria

  • Venue:
  • ACCV'12 Proceedings of the 11th international conference on Computer Vision - Volume 2
  • Year:
  • 2012

Abstract

Projected texture based 3D sensing modalities are being increasingly used for a variety of 3D computer vision applications. However, these sensing modalities, exemplified by the Microsoft Kinect sensor, suffer from severe drawbacks that hamper the quality of the range estimates output by the sensor. It is well known that the quality of reconstruction of the projected texture for range estimation is a function of the material properties of objects in the image. Objects colored black, yellow, or deep red often do not reflect the texture in a manner suitable for the detector to estimate range values. Furthermore, shiny or highly reflective objects can scatter the projected texture patterns. Skewed surface orientation, occlusions, object self-shadows and intra-object mutual shadows, transparency, and other factors also create problems with projected texture reconstruction. Depth interpolation techniques have been used in the past to alleviate these concerns; these techniques, however, destroy depth structures crucial for segmentation and detection processes. To address these shortcomings, we present a novel MRF based color-depth fusion algorithm which uses information from the RGB sensor of the Kinect and couples it with the depth content to produce fine-structured, high-fidelity depth maps. This algorithm can be implemented in hardware on the Kinect device, thereby improving the depth resolution and fidelity of the sensor while eliminating range errors and shadows.
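
The central idea of coupling the Kinect's RGB image with its depth map, so that depth is propagated into holes and shadowed regions but not across object boundaries, can be illustrated with a color-guided anisotropic diffusion step. The sketch below is not the authors' MRF formulation; it is a minimal Python/NumPy illustration in which the diffusion conductance is derived from RGB intensity differences (Perona-Malik style), so depth spreads within color-homogeneous regions while discontinuities at image edges are preserved. The function name `guided_depth_diffusion` and all parameter values are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's exact method): color-guided
# anisotropic diffusion that fills missing Kinect depth (encoded as 0)
# while keeping measured values fixed and respecting RGB edges.
import numpy as np

def guided_depth_diffusion(depth, rgb, iters=200, sigma_c=10.0, lam=0.2):
    """depth: HxW float array, 0 marks missing range values.
    rgb: HxWx3 uint8 color image aligned to the depth map."""
    d = depth.astype(np.float64).copy()
    valid = depth > 0                       # measured pixels stay fixed
    gray = rgb.astype(np.float64).mean(axis=2)

    def conductance(diff):
        # Perona-Malik style edge-stopping function on color differences.
        return np.exp(-(diff / sigma_c) ** 2)

    for _ in range(iters):
        # Neighbor shifts (north, south, west, east) with edge replication.
        dn = np.roll(d, 1, axis=0);  dn[0, :]  = d[0, :]
        ds = np.roll(d, -1, axis=0); ds[-1, :] = d[-1, :]
        dw = np.roll(d, 1, axis=1);  dw[:, 0]  = d[:, 0]
        de = np.roll(d, -1, axis=1); de[:, -1] = d[:, -1]

        gn = np.roll(gray, 1, axis=0);  gn[0, :]  = gray[0, :]
        gs = np.roll(gray, -1, axis=0); gs[-1, :] = gray[-1, :]
        gw = np.roll(gray, 1, axis=1);  gw[:, 0]  = gray[:, 0]
        ge = np.roll(gray, -1, axis=1); ge[:, -1] = gray[:, -1]

        # Diffusion flow from each neighbor, gated by color similarity.
        update = (conductance(gray - gn) * (dn - d) +
                  conductance(gray - gs) * (ds - d) +
                  conductance(gray - gw) * (dw - d) +
                  conductance(gray - ge) * (de - d))
        d += lam * update
        d[valid] = depth[valid]             # re-impose measured depth
    return d

# Illustrative usage (hypothetical inputs):
# refined = guided_depth_diffusion(kinect_depth, kinect_rgb, iters=300)
```

In this simplified view, the RGB image plays the role of the guidance signal that the paper's MRF formulation would encode through its pairwise potentials: where the color image is smooth, depth is free to diffuse and fill shadows or dropouts; where a strong color edge exists, the conductance drops toward zero and depth discontinuities are retained.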