A multi-touch three dimensional touch-sensitive tablet
CHI '85 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
The drawing prism: a versatile graphic input device
SIGGRAPH '85 Proceedings of the 12th annual conference on Computer graphics and interactive techniques
Design of Man-Computer Dialogues
Two-handed gesture in multi-modal natural dialog
UIST '92 Proceedings of the 5th annual ACM symposium on User interface software and technology
A design space for multimodal systems: concurrent processing and data fusion
CHI '93 Proceedings of the INTERACT '93 and CHI '93 Conference on Human Factors in Computing Systems
Prototyping an intelligent agent through Wizard of Oz
CHI '93 Proceedings of the INTERACT '93 and CHI '93 Conference on Human Factors in Computing Systems
Passive real-world interface props for neurosurgical visualization
CHI '94 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
A survey of design issues in spatial input
UIST '94 Proceedings of the 7th annual ACM symposium on User interface software and technology
Evaluation of the CyberGlove as a whole-hand input device
ACM Transactions on Computer-Human Interaction (TOCHI)
The VIEP system: interacting with collaborative multimedia
UIST '96 Proceedings of the 9th annual ACM symposium on User interface software and technology
Two-handed direct manipulation on the responsive workbench
I3D '97 Proceedings of the 1997 symposium on Interactive 3D graphics
Multi-modal HCI: combination of gesture and speech recognition
CHI '93 INTERACT '93 and CHI '93 Conference Companion on Human Factors in Computing Systems
Two-handed virtual manipulation
ACM Transactions on Computer-Human Interaction (TOCHI)
Supporting creative work tasks: the potential of multimodal tools to support sketching
C&C '99 Proceedings of the 3rd conference on Creativity & cognition
In Our Image: Interface Design in the 1990s
IEEE MultiMedia
An Experimental Study of Input Modes for Multimodal Human-Computer Interaction
ICMI '00 Proceedings of the Third International Conference on Advances in Multimodal Interfaces
The human-computer interaction handbook
Children's and adults' multimodal interaction with 2D conversational agents
CHI '05 Extended Abstracts on Human Factors in Computing Systems
Evaluating tangible objects for multimodal interaction design
OZCHI '05 Proceedings of the 17th Australia conference on Computer-Human Interaction: Citizens Online: Considerations for Today and the Future
A longitudinal evaluation of hands-free speech-based navigation during dictation
International Journal of Human-Computer Studies
The catchment feature model: a device for multimodal fusion and a bridge between signal and sense
EURASIP Journal on Applied Signal Processing
Affective multimodal mirror: sensing and eliciting laughter
Proceedings of the international workshop on Human-centered multimedia
HCI Beyond the GUI: Design for Haptic, Speech, Olfactory, and Other Nontraditional Interfaces
Explorative studies on multimodal interaction in a PDA- and desktop-based scenario
ICMI '08 Proceedings of the 10th international conference on Multimodal interfaces
A Wizard of Oz study for an AR multimodal interface
ICMI '08 Proceedings of the 10th international conference on Multimodal interfaces
Hands-free, speech-based navigation during dictation: difficulties, consequences, and solutions
Human-Computer Interaction
Robust understanding in multimodal interfaces
Computational Linguistics
Adding speech recognition support to UML tools
Journal of Visual Languages and Computing
Tangible User Interfaces: Past, Present, and Future Directions
Foundations and Trends in Human-Computer Interaction
Knowledge-guided inference for voice-enabled CAD
Computer-Aided Design
Integrating semantics into multimodal interaction patterns
MLMI'07 Proceedings of the 4th international conference on Machine learning for multimodal interaction
Gaze-X: adaptive, affective, multimodal interface for single-user office scenarios
ICMI'06/IJCAI'07 Proceedings of the ICMI 2006 and IJCAI 2007 international conference on Artificial intelligence for human computing
MozArt: a multimodal interface for conceptual 3D modeling
ICMI '11 Proceedings of the 13th international conference on multimodal interfaces
Evaluation of gesture based interfaces for medical volume visualization tasks
Proceedings of the 10th International Conference on Virtual Reality Continuum and Its Applications in Industry
A comparison between spoken queries and menu-based interfaces for in-car digital music selection
INTERACT'05 Proceedings of the 2005 IFIP TC13 international conference on Human-Computer Interaction
An evaluation of an augmented reality multimodal interface using speech and paddle gestures
ICAT'06 Proceedings of the 16th international conference on Advances in Artificial Reality and Tele-Existence
To move or to remove?: a human-centric approach to understanding gesture interpretation
Proceedings of the Designing Interactive Systems Conference
PixelTone: a multimodal interface for image editing
CHI '13 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Capacitive sensor-based hand gesture recognition in ambient intelligence scenarios
Proceedings of the 6th International Conference on PErvasive Technologies Related to Assistive Environments
Context-based hand gesture recognition for the operating room
Pattern Recognition Letters
An experiment was conducted in which people used gestures and speech to manipulate graphic images on a computer screen. A human was substituted for the gesture and speech recognition devices, so that subjects were not constrained by the limits of current recognition technology. The analysis showed that people strongly prefer to combine gestures and speech for graphics manipulation, and that they intuitively use multiple hands and multiple fingers in all three dimensions. The gestures and speech observed were surprisingly uniform and simple. These results provide strong encouragement for the future development of integrated multi-modal interaction systems.
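
The uniformity and simplicity of the observed commands suggest that even a simple fusion strategy could bind spoken deictic references ("that", "there") to concurrent pointing gestures. Below is a minimal Python sketch of such time-window alignment; it is illustrative only and not taken from the study, and the event types, the deictic word list, and the one-second tolerance are all assumptions.

# Illustrative sketch (not from the study): late fusion of spoken deictic
# words with pointing gestures by timestamp proximity.
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple

@dataclass
class GestureEvent:
    t: float   # timestamp in seconds
    x: float   # pointed-at screen coordinates
    y: float

@dataclass
class SpeechToken:
    t: float   # timestamp of the spoken word
    word: str

DEICTICS = ("that", "there")  # assumed vocabulary of deictic references

def resolve_deictics(tokens: List[SpeechToken],
                     gestures: List[GestureEvent],
                     window: float = 1.0) -> Optional[Dict[str, Tuple[float, float]]]:
    """Bind each deictic word to the gesture nearest in time,
    within a +/- `window` second tolerance; None if any binding fails."""
    bindings: Dict[str, Tuple[float, float]] = {}
    for tok in tokens:
        if tok.word not in DEICTICS:
            continue
        candidates = [g for g in gestures if abs(g.t - tok.t) <= window]
        if not candidates:
            return None  # no gesture close enough in time
        nearest = min(candidates, key=lambda g: abs(g.t - tok.t))
        bindings[tok.word] = (nearest.x, nearest.y)
    return bindings

# Example: "move that there" accompanied by two pointing gestures.
tokens = [SpeechToken(0.0, "move"), SpeechToken(0.4, "that"),
          SpeechToken(1.1, "there")]
gestures = [GestureEvent(0.5, 120.0, 80.0), GestureEvent(1.2, 400.0, 300.0)]
print(resolve_deictics(tokens, gestures))
# -> {'that': (120.0, 80.0), 'there': (400.0, 300.0)}

A deployed system would need to score competing interpretations and tune the alignment window empirically rather than resolving each reference greedily.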