Handheld Augmented Reality involving gravity measurements

  • Authors:
  • Daniel Kurz; Selim Benhimane

  • Affiliations:
  • metaio GmbH, Infanteriestraße 19, Haus 4b, 80797 Munich, Germany (both authors)

  • Venue:
  • Computers & Graphics
  • Year:
  • 2012

Abstract

This article is a revised version of an earlier work on Gravity-Aware Handheld Augmented Reality (AR) (Kurz and Benhimane, 2011 [1]), which investigates how different stages of handheld AR applications can benefit from knowing the direction of gravity as measured with inertial sensors. It presents approaches that incorporate the gravity vector to improve the description and matching of feature points, the detection and tracking of planar templates, and the visual quality of rendered virtual 3D objects. In handheld AR, both the camera and the display are located in the user's hand and can therefore be moved freely. The pose of the camera is generally determined with respect to piecewise planar objects that have a static and known orientation relative to gravity. In the presence of (close to) vertical surfaces, we show how Gravity-Aligned Feature Descriptors (GAFDs) improve the initialization of tracking algorithms that rely on feature point descriptors, in terms of both quality and performance. For (close to) horizontal surfaces, we propose to use the gravity vector to rectify the camera image and to detect and describe features in the rectified image. The resulting Gravity-Rectified Feature Descriptors (GREFDs) provide improved precision-recall characteristics and enable faster initialization, in particular under steep viewing angles. Gravity-rectified camera images also allow for real-time 6 DoF pose estimation using an edge-based object detection algorithm that handles only 4 DoF similarity transforms. Finally, the rendering of virtual 3D objects can be made more realistic and plausible by taking into account the direction of the gravitational force in addition to the relative pose between the handheld device and a real object. Compared with the original paper, this work provides a more elaborate evaluation of the presented algorithms. We propose a method that enables the evaluation of inertial-sensor-aided visual tracking methods without real inertial sensor data: by synthesizing gravity measurements from ground-truth camera poses, we benchmark our algorithms on a large existing dataset. Based on this approach, we also develop and evaluate a gravity-adaptive approach that performs image rectification only when it is beneficial.
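
To make the core ideas concrete, here are three minimal, illustrative Python sketches (not the authors' code; all function names and interfaces are assumptions, and NumPy/OpenCV are used for brevity).

The first sketch covers the rectification used for (close to) horizontal surfaces: given the gravity direction in camera coordinates, it builds the pure-rotation homography H = K R K⁻¹ of a virtual camera that looks straight along gravity, so a horizontal plane appears fronto-parallel before features are detected and described (the GREFD pipeline).

```python
import numpy as np
import cv2

def gravity_rectify(image, K, g_cam):
    """Warp `image` so that a horizontal plane appears fronto-parallel.

    K     -- 3x3 camera intrinsics
    g_cam -- gravity direction in camera coordinates (e.g. an averaged
             accelerometer reading); need not be unit length
    """
    g = np.asarray(g_cam, dtype=float)
    g = g / np.linalg.norm(g)
    z = np.array([0.0, 0.0, 1.0])             # optical axis of the virtual camera
    axis = np.cross(g, z)
    s = np.linalg.norm(axis)                  # sine of the rotation angle
    c = float(np.dot(g, z))                   # cosine of the rotation angle
    if s < 1e-8:                              # camera already looks along gravity
        R = np.eye(3)
    else:
        rvec = (axis / s) * np.arctan2(s, c)  # axis-angle rotation vector
        R, _ = cv2.Rodrigues(rvec)
    H = K @ R @ np.linalg.inv(K)              # rotation-only homography
    h, w = image.shape[:2]
    return cv2.warpPerspective(image, H, (w, h))
```

A production version would additionally translate and scale H so the warped content stays inside the output viewport, which matters under the steep viewing angles the paper targets.

For (close to) vertical surfaces, a Gravity-Aligned Feature Descriptor replaces the usual gradient-based orientation of a keypoint with a globally consistent one. One way to obtain such an orientation (reusing the imports above; the helper `gafd_angle` is hypothetical, and the paper's exact projection model may differ) is to project the camera-frame gravity vector into the image at the keypoint:

```python
def gafd_angle(u, v, K, g_cam, eps=1e-3):
    """Descriptor orientation for a keypoint at pixel (u, v): the 2D angle
    of the gravity vector projected into the image at that location."""
    g = np.asarray(g_cam, dtype=float)
    g = g / np.linalg.norm(g)
    m = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray of the keypoint
    p0 = K @ m                                    # projects back to (u, v)
    p1 = K @ (m + eps * g)                        # nudge the ray along gravity
    d = p1[:2] / p1[2] - p0[:2] / p0[2]           # projected gravity direction
    return np.arctan2(d[1], d[0])                 # angle assigned to the keypoint
```

This angle would be written into the keypoint (e.g. the `angle` field of a `cv2.KeyPoint`) before the descriptor is computed, so matching no longer depends on repeatable gradient orientations.

Finally, the evaluation methodology of synthesizing gravity measurements from ground-truth camera poses reduces to rotating the world-frame gravity direction into the camera frame. A minimal sketch, assuming the ground truth provides a world-to-camera rotation R_wc and that gravity points along the negative z-axis of the world frame (a dataset-dependent convention):

```python
def synthesize_gravity(R_wc, g_world=(0.0, 0.0, -1.0)):
    """Synthetic 'inertial' gravity measurement from a ground-truth pose.

    R_wc    -- 3x3 rotation taking world coordinates to camera coordinates
    g_world -- gravity direction in the dataset's world frame (an assumed
               convention; adjust per dataset)
    """
    g = np.asarray(R_wc, dtype=float) @ np.asarray(g_world, dtype=float)
    return g / np.linalg.norm(g)
```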