A study on the manipulation of 2D objects in a projector/camera-based augmented reality environment

  • Authors:
  • Stephen Voida (Georgia Institute of Technology, Atlanta, Georgia)
  • Mark Podlaseck (IBM Research, T.J. Watson, Hawthorne, New York)
  • Rick Kjeldsen (IBM Research, T.J. Watson, Hawthorne, New York)
  • Claudio Pinhanez (IBM Research, T.J. Watson, Hawthorne, New York)

  • Venue:
  • Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
  • Year:
  • 2005


Abstract

Are the object manipulation techniques traditionally used in head-mounted displays (HMDs) applicable to projector/camera-based augmented reality systems? This paper examines the differences between HMD- and projector/camera-based AR interfaces in light of a manipulation task involving documents and applications projected onto common office surfaces such as tables, walls, cabinets, and the floor. We report a Wizard of Oz study in which subjects were first asked to create gesture/voice commands to move 2D objects on those surfaces and were then exposed to gestures created by the authors. Among the options, subjects could select the object to be manipulated using a voice command; a touching, pointing, or grabbing gesture; or a virtual mouse. The results show a strong preference for a manipulation interface based on pointing gestures that use small hand movements and involve minimal body movement. Direct touching of the object was also common when the object being manipulated was within arm's reach. Based on these results, we expect the preferred interface to resemble, in many ways, the egocentric model traditionally used in AR.