We contrast the Chameleon Lens, which uses 3D movement of a mobile device held in the nonpreferred hand to support panning and zooming, with the Pinch-Flick-Drag metaphor of directly manipulating the view using multi-touch gestures. Lens-like approaches have significant potential because they can support navigation-selection, navigation-annotation, and other such compound tasks by off-loading navigation to the nonpreferred hand while the preferred hand annotates, marks a location, or draws a path on the screen. Our experimental results show that the Chameleon Lens is significantly slower than Pinch-Flick-Drag for the navigation subtask in isolation. But our studies also reveal that for navigation between a few known targets the lens performs significantly faster, that differences between the Chameleon Lens and Pinch-Flick-Drag rapidly diminish as users gain experience, and that in the context of a compound navigation-annotation task, the lens performs as well as Pinch-Flick-Drag despite its deficit for the navigation subtask itself.
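To make the lens metaphor concrete, here is a minimal sketch of how 3D device displacement might be mapped onto a pan/zoom view update: lateral motion pans, while motion along the depth axis zooms, echoing the idea of physically moving the device through an information space. The `View` class, the gain constants, and the exponential zoom mapping are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class View:
    """Viewport over a 2D document: center in document units, plus zoom factor."""
    cx: float = 0.0
    cy: float = 0.0
    zoom: float = 1.0

# Illustrative gains (assumptions); a real system would calibrate these.
PAN_GAIN = 1.0    # document units per metre of lateral device motion at zoom 1
ZOOM_GAIN = 2.0   # octaves of zoom per metre of motion along the depth axis

def chameleon_update(view: View, dx: float, dy: float, dz: float) -> View:
    """Map a 3D device displacement (metres) to a new view.

    dx, dy: lateral motion -> panning.
    dz: motion toward the user (positive) -> zooming in, as if bringing
    the display closer to the underlying information space.
    """
    new_zoom = view.zoom * (2.0 ** (ZOOM_GAIN * dz))
    # Pan is scaled down by the current zoom so the same hand motion
    # covers less document distance at high magnification (finer control).
    return View(
        cx=view.cx + PAN_GAIN * dx / view.zoom,
        cy=view.cy + PAN_GAIN * dy / view.zoom,
        zoom=new_zoom,
    )
```

Because the mapping consumes only a 3D displacement, the nonpreferred hand can drive it continuously while the preferred hand is free to annotate or mark on the touchscreen, which is the compound-task benefit the abstract highlights.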