Shake'n'sense: reducing interference for overlapping structured light depth cameras
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
We present a method for reducing interference between multiple structured light-based depth sensors that operate in the same spectrum and have rigidly attached projectors and cameras. A small amount of motion is applied to a subset of the sensors so that each unit sees its own projected pattern sharply but sees blurred versions of the patterns of the other units. If high spatial frequency patterns are used, each sensor sees its own pattern with higher contrast than the patterns of the other units, which simplifies pattern disambiguation. We analyze this method for a group of commodity Microsoft Kinect color-plus-depth sensors with overlapping views. We demonstrate that applying a small vibration with a simple motor to a subset of the Kinect sensors reduces interference, which manifests as holes and noise in the depth maps. Using an array of six Kinects, our system reduced interference-related missing data from 16.6% to 1.4% of the total pixels. Another experiment with three Kinects showed an 82.2% reduction in the measurement error introduced by interference. A side effect is blurring in the color images of the moving units, which is mitigated with post-processing. We believe our technique will allow inexpensive commodity depth sensors to form the basis of dense large-scale capture systems.
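The core idea can be illustrated with a small simulation: a high spatial frequency pattern viewed by a camera that moves rigidly with its own projector stays sharp, while a pattern from a non-co-moving unit is motion-blurred and loses contrast. The sketch below is illustrative only and not the authors' implementation; the random-dot pattern, blur length, and the use of standard deviation as a contrast proxy are assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic high spatial frequency structured light patterns (sparse random
# dot fields), standing in for Kinect-style speckle patterns. Sizes and dot
# density are arbitrary illustrative choices.
H, W = 240, 320
own_pattern = (rng.random((H, W)) < 0.1).astype(float)    # this unit's projector
other_pattern = (rng.random((H, W)) < 0.1).astype(float)  # an overlapping unit

def motion_blur(img, length=7):
    """Crude horizontal box blur approximating the relative motion between a
    vibrating unit and a pattern it did not project (assumed blur length)."""
    kernel = np.ones(length) / length
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, img)

# The camera rigidly attached to its own projector sees its pattern sharply,
# while the interfering pattern from the other unit is blurred.
observed_own = own_pattern
observed_other = motion_blur(other_pattern)

# Standard deviation as a simple contrast proxy: the sharp own pattern keeps
# high contrast, the blurred interfering pattern loses it, which is what makes
# pattern disambiguation easier in the described method.
print("own pattern contrast        :", round(observed_own.std(), 3))
print("interfering pattern contrast:", round(observed_other.std(), 3))
```

Running this shows the blurred interfering pattern with markedly lower contrast than the unit's own pattern, mirroring the mechanism by which the applied vibration reduces cross-sensor interference.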