One of the main problems of monoscopic video see-through augmented reality (AR) is the lack of reliable depth information. This makes it difficult to correctly represent complex spatial interactions between real and virtual objects, e.g., when rendering shadows. The most obvious graphical artifact is the incorrect display of the occlusion of virtual models by real objects: since the graphical models are rendered opaquely over the camera image, they always appear to occlude all objects in the real environment, regardless of the actual spatial relationship. In this paper, we propose to utilize a new type of hardware in order to solve some of the basic challenges of AR rendering. We introduce a time-of-flight range sensor into AR, which produces a 2D map of the distances to real objects in the environment. The distance map is registered with the high-resolution color images delivered by a digital video camera. When displaying the virtual models in AR, the distance map is used to decide, at each pixel, whether the camera image or the virtual object is visible. This way, the occlusion of virtual models by real objects can be correctly represented. Preliminary results obtained with our approach show that useful occlusion handling based on time-of-flight range data is possible.
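The per-pixel visibility decision described above can be sketched as follows. This is a minimal illustration under assumed conventions, not the paper's implementation: the function name and the nested-list image representation are ours, depths are in the same metric units for both maps, and a non-finite virtual depth marks pixels not covered by any virtual geometry. The virtual fragment is shown only where its depth is smaller than the registered time-of-flight distance to the real scene; everywhere else the camera image remains visible.

```python
import math

def composite_with_occlusion(camera_rgb, virtual_rgb, virtual_depth, real_depth):
    """Composite a rendered virtual image over a camera image using
    per-pixel depth comparison against a registered range map.

    camera_rgb, virtual_rgb: H x W nested lists of RGB tuples.
    virtual_depth: H x W depths of the rendered virtual geometry
                   (math.inf where no virtual object covers the pixel).
    real_depth:    H x W distances from the time-of-flight sensor,
                   registered to the camera image.
    """
    height = len(camera_rgb)
    width = len(camera_rgb[0])
    out = []
    for y in range(height):
        row = []
        for x in range(width):
            d_virtual = virtual_depth[y][x]
            d_real = real_depth[y][x]
            # Virtual object is visible only where it exists and lies
            # in front of the real surface measured at this pixel.
            if math.isfinite(d_virtual) and d_virtual < d_real:
                row.append(virtual_rgb[y][x])
            else:
                row.append(camera_rgb[y][x])
        out.append(row)
    return out
```

In a real system this comparison would run on the GPU (e.g., by writing the range map into the depth buffer before rendering the virtual models), but the per-pixel logic is the same.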