Resolving Multiple Occluded Layers in Augmented Reality

  • Authors:
  • Mark A. Livingston;J. Edward Swan II;Joseph L. Gabbard;Tobias H. Höllerer;Deborah Hix;Simon J. Julier;Yohan Baillot;Dennis Brown


  • Venue:
  • ISMAR '03 Proceedings of the 2nd IEEE/ACM International Symposium on Mixed and Augmented Reality
  • Year:
  • 2003


Abstract

A useful function of augmented reality (AR) systems is their ability to visualize occluded infrastructure directly in a user's view of the environment. This is especially important for our application context, which utilizes mobile AR for navigation and other operations in an urban environment. A key problem in the AR field is how to best depict occluded objects in such a way that the viewer can correctly infer the depth relationships between different physical and virtual objects. Showing a single occluded object with no depth context presents an ambiguous picture to the user. But showing all occluded objects in the environment leads to the "Superman's X-ray vision" problem, in which the user sees too much information to make sense of the depth relationships of objects.

Our efforts differ qualitatively from previous work in AR occlusion, because our application domain involves far-field occluded objects, which are tens of meters distant from the user. Previous work has focused on near-field occluded objects, which are within or just beyond arm's reach, and which use different perceptual cues. We designed and evaluated a number of sets of display attributes. We then conducted a user study to determine which representations best express occlusion relationships among far-field objects. We identify a drawing style and opacity settings that enable the user to accurately interpret three layers of occluded objects, even in the absence of perspective constraints.
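The abstract's idea of rendering several occluded layers with per-layer opacity can be sketched with standard back-to-front "over" alpha compositing. The layer count, intensities, and opacity values below are illustrative assumptions for a single pixel, not the paper's actual drawing style or settings:

```python
# Sketch (assumed values, not the paper's settings): blend a visible surface
# and three occluded layers, each drawn with its own opacity, using standard
# back-to-front "over" compositing for one grayscale pixel.

def composite_over(layers):
    """Blend (intensity, alpha) layers back-to-front; return final intensity."""
    result = 0.0
    for intensity, alpha in layers:  # first element is the farthest layer
        result = alpha * intensity + (1.0 - alpha) * result
    return result

# Hypothetical opacities that decrease with occlusion depth, so nearer
# (less deeply occluded) layers contribute more to the final pixel.
layers_back_to_front = [
    (1.0, 1.00),  # visible background surface, fully opaque
    (0.8, 0.25),  # deepest occluded layer, most transparent
    (0.6, 0.40),  # middle occluded layer
    (0.4, 0.60),  # nearest occluded layer, most opaque
]
pixel = composite_over(layers_back_to_front)
```

With these made-up opacities the nearest occluded layer dominates the blend while the deeper layers still contribute faintly, which is the kind of graded visibility the study's opacity settings aim for.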