Are the object manipulation techniques traditionally used in head-mounted displays (HMDs) applicable to projection-based augmented reality systems? This paper examines the differences between HMD- and projector/camera-based AR interfaces through a manipulation task involving documents and applications projected onto common office surfaces such as tables, walls, cabinets, and the floor. We report a Wizard of Oz study in which subjects were first asked to create gesture/voice commands to move 2D objects on those surfaces and were then exposed to gestures created by the authors. Among the options, subjects could select the object to be manipulated by voice command; by touching, pointing, or grabbing gestures; or with a virtual mouse. The results show a strong preference for a manipulation interface based on pointing gestures that use small hand movements and involve minimal body movement. Direct touching was also common when the object being manipulated was within the subject's arm's reach. Based on these results, we expect the preferred interface to resemble, in many ways, the egocentric model traditionally used in AR.