The rapid development of large interactive wall displays has been accompanied by research on methods that let people interact with the display at a distance. The basic method for target acquisition is to ray-cast a cursor from one's pointing finger or hand position; the problem is that selection is slow and error-prone for small targets. A better method is the bubble cursor, which resizes the cursor's activation area to effectively enlarge the target. The catch is that this technique's effectiveness depends on the proximity of surrounding targets: it helps in sparse spaces but far less when targets are densely packed. Our method, the speech-filtered bubble ray, uses speech to transform a dense target space into a sparse one. The strategy builds on what people already do: when pointing to distant objects in a physical workspace, people typically disambiguate their choice through speech. For example, a person can point to a stack of books and say "the green one": gesture indicates the approximate location for the search, and speech filters unrelated books out of it. Our technique works the same way: a person specifies a property of the desired object, and only objects matching that property influence the bubble's resizing. In a controlled evaluation, people were faster with, and preferred, the speech-filtered bubble ray over both the standard bubble ray and standard ray casting.
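The filtering idea above can be illustrated with a minimal sketch. This is not the authors' implementation; the target representation, the `color` attribute, and the `bubble_select` helper are hypothetical, and the bubble cursor is reduced to its core behavior of selecting the nearest candidate target, with the spoken property simply shrinking the candidate set.

```python
import math

def bubble_select(cursor, targets, spoken_property=None):
    """Simplified bubble-cursor selection: pick the target whose center is
    nearest to the cursor. When a spoken property is given, only matching
    targets are considered, so the bubble effectively expands over a
    sparser space. (Hypothetical sketch, not the paper's implementation.)"""
    candidates = [t for t in targets
                  if spoken_property is None or t["color"] == spoken_property]
    if not candidates:
        return None
    return min(candidates,
               key=lambda t: math.hypot(t["x"] - cursor[0], t["y"] - cursor[1]))

targets = [
    {"name": "red book",   "color": "red",   "x": 100, "y": 100},
    {"name": "blue book",  "color": "blue",  "x": 105, "y": 102},
    {"name": "green book", "color": "green", "x": 130, "y": 110},
]

# Unfiltered: the nearest target in the dense cluster wins.
print(bubble_select((104, 101), targets)["name"])           # blue book
# Saying "green" removes the distractors, so the distant green book wins.
print(bubble_select((104, 101), targets, "green")["name"])  # green book
```

The design point is that the speech filter does not change the pointing mechanism at all; it only prunes which targets compete for the cursor, which is why the technique degrades gracefully to the plain bubble cursor when no property is spoken.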