Implicit 3D modeling and tracking for anywhere augmentation

  • Authors:
  • Sehwan Kim, Stephen DiVerdi, Jae Sik Chang, Taehyuk Kang, Ronald Iltis, Tobias Höllerer

  • Affiliations:
  • University of California, Santa Barbara, CA (all authors)

  • Venue:
  • Proceedings of the 2007 ACM symposium on Virtual reality software and technology
  • Year:
  • 2007


Abstract

This paper presents an online 3D modeling and tracking methodology that uses aerial photographs for mobile augmented reality. Instead of relying on models created in advance, the system generates a 3D model of a real building on the fly by combining frontal and aerial views with the help of an optical sensor, an inertial sensor, a GPS unit, and a few mouse clicks. A user's initial pose is estimated from an aerial photograph, retrieved from a database according to the user's GPS coordinates, together with pitch measurements from an inertial sensor. To track the user's position and orientation in real time, feature-based tracking is carried out using salient points on the edges and sides of a building that the user keeps in view. We implemented camera pose estimators using both a least-squares approach and an unscented Kalman filter (UKF). The UKF approach yields more stable and reliable vision-based tracking. We evaluate the speed and accuracy of both approaches, and we demonstrate the usefulness of our computations as important building blocks for an Anywhere Augmentation scenario.
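To make the UKF-based pose estimation concrete, the sketch below shows how an unscented Kalman filter can fuse a 2D feature observation into a camera pose estimate. This is a minimal illustration, not the paper's implementation: the simplified state [x, y, z, yaw], the random-walk motion model, the pinhole measurement model `h`, the single-landmark update, and all parameter values are assumptions made for the example.

```python
import numpy as np

# Assumed focal length in pixels (illustrative, not from the paper).
FOCAL = 500.0

def h(state, landmark):
    """Project a known 3D landmark (e.g. a salient point on a building
    edge) into the image for a camera with state = [x, y, z, yaw]."""
    p, yaw = state[:3], state[3]
    c, s = np.cos(yaw), np.sin(yaw)
    # World-to-camera rotation about the vertical axis;
    # the camera looks along its +x axis.
    R = np.array([[ c,   s,  0.0],
                  [-s,   c,  0.0],
                  [0.0, 0.0, 1.0]])
    q = R @ (landmark - p)
    return FOCAL * np.array([q[1] / q[0], q[2] / q[0]])

def f(state):
    """Random-walk motion model: pose assumed constant between frames."""
    return state

def ukf_step(x, P, z, landmark, Q, R_meas, alpha=1.0, beta=2.0):
    """One predict/update cycle of a standard scaled UKF."""
    n = x.size
    kappa = 3.0 - n                      # common heuristic choice
    lam = alpha**2 * (n + kappa) - n
    # Sigma points: the mean plus/minus columns of a matrix square root.
    S = np.linalg.cholesky((n + lam) * P)
    chi = np.vstack([x, x + S.T, x - S.T])          # (2n+1, n)
    Wm = np.full(2 * n + 1, 0.5 / (n + lam))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = Wm[0] + (1 - alpha**2 + beta)

    # Predict: propagate sigma points through the motion model.
    chi_f = np.array([f(c) for c in chi])
    x_pred = Wm @ chi_f
    P_pred = Q + sum(w * np.outer(d, d)
                     for w, d in zip(Wc, chi_f - x_pred))

    # Update: compare predicted projections against the 2D measurement z.
    Z = np.array([h(c, landmark) for c in chi_f])
    z_pred = Wm @ Z
    Pzz = R_meas + sum(w * np.outer(d, d) for w, d in zip(Wc, Z - z_pred))
    Pxz = sum(w * np.outer(dx, dz)
              for w, dx, dz in zip(Wc, chi_f - x_pred, Z - z_pred))
    K = Pxz @ np.linalg.inv(Pzz)                    # Kalman gain
    return x_pred + K @ (z - z_pred), P_pred - K @ Pzz @ K.T

# Illustrative usage: initial pose from GPS + aerial photo, then one
# filter step on a noisy feature observation.
x = np.array([0.0, 0.0, 1.7, 0.0])       # hypothetical initial pose
P = np.diag([4.0, 4.0, 0.1, 0.05])       # GPS-scale position uncertainty
landmark = np.array([10.0, 2.0, 5.0])    # assumed known building point
z = h(x, landmark) + np.random.randn(2)  # simulated 2D observation
x, P = ukf_step(x, P, z, landmark,
                Q=1e-3 * np.eye(4), R_meas=2.0 * np.eye(2))
```

In practice the update would run over many tracked feature points per frame rather than a single landmark; the UKF's sigma-point propagation handles the nonlinear projection without the explicit Jacobians a least-squares or EKF formulation would require, which is consistent with the abstract's observation that the UKF yields more stable tracking.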