This paper presents a methodology for integrating a small, single-point laser range finder into a wearable augmented reality system. We first present a way of creating object-aligned annotations with very little user effort. Second, we describe techniques to segment and pop up foreground objects. Finally, we introduce a method that uses the laser range finder to incrementally build 3D panoramas from a fixed observer's location. To build a 3D panorama semi-automatically, we track the system's orientation and combine the sparse range data acquired as the user looks around with real-time image processing to construct geometry around the user's position. With full 3D panoramic geometry, new virtual objects can be placed in the scene with proper lighting and occlusion by real-world objects, which increases the expressivity of the AR experience.
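The core of the semi-automatic panorama step, combining tracked orientation with sparse range readings, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the function names and the yaw/pitch angle convention are hypothetical, and the real-time image-processing stage that densifies the sparse samples into full geometry is omitted.

```python
import math

def range_sample_to_point(yaw_deg, pitch_deg, range_m):
    """Convert one laser range sample, taken at the tracked head
    orientation (yaw and pitch in degrees), into a 3D point in an
    observer-centered frame (x right, y up, z forward).

    Assumes the laser beam is aligned with the view direction."""
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    # Unit direction of the laser beam for this orientation.
    x = math.cos(pitch) * math.sin(yaw)
    y = math.sin(pitch)
    z = math.cos(pitch) * math.cos(yaw)
    # Scale the direction by the measured range to get a surface point.
    return (range_m * x, range_m * y, range_m * z)

def accumulate_panorama_points(samples):
    """Accumulate sparse (yaw, pitch, range) samples gathered as the
    user looks around into a point set centered on the observer."""
    return [range_sample_to_point(y, p, r) for (y, p, r) in samples]

# Example: a wall 3 m straight ahead, and an object 2 m to the right.
points = accumulate_panorama_points([(0.0, 0.0, 3.0),
                                     (90.0, 0.0, 2.0)])
```

In a real system such points would seed surface fitting around the fixed observer position; here they simply show how each sparse range reading anchors one 3D sample of the surrounding scene.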