An experimental study of future “natural” multimodal human-computer interaction
CHI '93 INTERACT '93 and CHI '93 Conference Companion on Human Factors in Computing Systems
CHI 98 Conference Summary on Human Factors in Computing Systems
Expression constraints in multimodal human-computer interaction
Proceedings of the 5th international conference on Intelligent user interfaces
Optimization criteria for checkpoint placement
Communications of the ACM
Participatory Design: Principles and Practices
Participatory Design: Principles and Practices
Toward universal mobile interaction for shared displays
CSCW '04 Proceedings of the 2004 ACM conference on Computer supported cooperative work
Maximizing the guessability of symbolic input
CHI '05 Extended Abstracts on Human Factors in Computing Systems
Cooperative gestures: multi-user gestural interactions for co-located groupware
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
A study of hand shape use in tabletop gesture interaction
CHI '06 Extended Abstracts on Human Factors in Computing Systems
CoSearch: a system for co-located collaborative web search
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Algorithmic mediation for collaborative exploratory search
Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval
User-defined gestures for surface computing
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
WeSearch: supporting collaborative search and sensemaking on a tabletop display
Proceedings of the 2010 ACM conference on Computer supported cooperative work
Analysis of natural gestures for controlling robot teams on multi-touch tabletop surfaces
Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces
Investigating multi-touch and pen gestures for diagram editing on interactive surfaces
Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces
WebSurface: an interface for co-located collaborative information gathering
Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces
Collaborative Search: Who, What, Where, When, Why, and How
Collaborative Search: Who, What, Where, When, Why, and How
Understanding users' preferences for surface gestures
Proceedings of Graphics Interface 2010
Search on surfaces: Exploring the potential of interactive tabletops for collaborative search tasks
Information Processing and Management: an International Journal
User-defined motion gestures for mobile interaction
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Typing on flat glass: examining ten-finger expert typing patterns on touch surfaces
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
"It's simply integral to what I do": enquiries into how the web is weaved into everyday life
Proceedings of the 21st international conference on World Wide Web
Putting your best foot forward: investigating real-world mappings for foot-based gestures
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
A Wizard-of-Oz elicitation study examining child-defined gestures with a whole-body interface
Proceedings of the 12th International Conference on Interaction Design and Children
The cube: a very large-scale interactive engagement space
Proceedings of the 2013 ACM international conference on Interactive tabletops and surfaces
SpeeG2: a speech- and gesture-based interface for efficient controller-free text input
Proceedings of the 15th ACM on International conference on multimodal interaction
New sensing technologies like Microsoft's Kinect provide a low-cost way to add interactivity to large display surfaces, such as TVs. In this paper, we interview 25 participants to learn about scenarios in which they would like to use a web browser on their living room TV. We then conduct an interaction-elicitation study in which users suggest speech and gesture interactions for fifteen common web browser functions. We present the most popular suggested interactions, and supplement these findings with observational analyses of common gesture and speech conventions adopted by our participants. We also reflect on the design of multimodal, multi-user interaction-elicitation studies, and introduce new metrics for interpreting the findings of user-elicitation studies.