In this paper we present a novel plausible rendering method for mixed-reality systems, useful in many real-world application scenarios such as architecture, product visualization, and edutainment. For virtual objects to blend seamlessly into the real environment, the real lighting conditions and the mutual illumination effects between real and virtual objects must be taken into account while maintaining interactive frame rates. The most important of these effects are indirect illumination and shadows cast between real and virtual objects. Our approach combines Instant Radiosity and Differential Rendering. In contrast to some previous solutions, we need to render the scene only once to obtain the mutual effects between the virtual and real scene. In addition, we avoid artifacts such as double shadows and inconsistent color bleeding that appear in previous work. The dynamic real illumination is derived from the image stream of a fish-eye lens camera. The scene is illuminated by virtual point lights, which use imperfect shadow maps to compute visibility. A sufficiently fast reconstruction of the real scene is performed at run-time with Microsoft's Kinect sensor, so a time-consuming manual pre-modeling step is not necessary. Our results show that the presented method greatly improves the illusion in mixed-reality applications and significantly diminishes the artificial look of virtual objects superimposed onto real scenes.
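The Differential Rendering step mentioned above can be sketched as follows. This is a minimal illustration of the standard differential compositing idea (background photo plus the radiance change caused by the virtual objects), not the authors' single-pass implementation; the function name, the NumPy representation, and the explicit object mask are assumptions made for the example.

```python
import numpy as np

def differential_composite(photo, render_full, render_empty, mask):
    """Differential-rendering composite (illustrative sketch).

    photo        -- camera image of the real scene, H x W x 3 floats in [0, 1]
    render_full  -- rendering of real + virtual scene under the estimated lighting
    render_empty -- rendering of the real scene alone under the same lighting
    mask         -- H x W, 1 where a virtual object directly covers the pixel
    """
    # Radiance change caused by the virtual objects: shadows they cast and
    # color they bleed onto real surfaces show up as a (mostly negative or
    # small positive) delta.
    delta = render_full - render_empty
    # Virtual-object pixels take the full rendering; real-surface pixels keep
    # the photograph plus the delta, so real detail is preserved.
    composite = np.where(mask[..., None] > 0, render_full, photo + delta)
    return np.clip(composite, 0.0, 1.0)
```

For example, a real pixel photographed at 0.5 that falls into a virtual shadow (rendered 0.6 without the object, 0.4 with it) darkens to 0.3, while pixels covered by the virtual object show the rendering directly.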