Progress in computer vision and speech recognition technologies has recently enabled multimodal interfaces that use speech and gestures. These technologies offer promising alternatives to existing interfaces because they emulate the natural way in which humans communicate. However, no systematic work has been reported that formally evaluates these new speech/gesture interfaces. This paper is concerned with the formal experimental evaluation of new human-computer interactions enabled by speech and hand gestures. It describes an experiment, conducted with 23 subjects, that evaluates selection strategies for interaction with large screen displays. The multimodal interface designed for this experiment does not require the user to be in physical contact with any device; video cameras and long-range microphones serve as the system's inputs. Three selection strategies are evaluated, and results for different target sizes and positions are reported in terms of accuracy, selection time, and user preference. Design implications for vision/speech-based interfaces are inferred from these results. The study also raises new questions and topics for future research.
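The evaluation described above reports each selection strategy's accuracy and selection time across trials. As a minimal sketch of how such per-strategy summary statistics might be computed, the snippet below aggregates hypothetical trial records; the strategy names, fields, and values are illustrative assumptions, not the paper's actual conditions or data.

```python
from statistics import mean

# Hypothetical trial records: (strategy, target_size_px, hit, selection_time_s).
# Strategy names and values are illustrative, not taken from the paper.
trials = [
    ("point_and_dwell", 40, True,  1.9),
    ("point_and_dwell", 80, True,  1.4),
    ("point_and_speak", 40, False, 2.3),
    ("point_and_speak", 80, True,  1.7),
    ("speak_only",      40, True,  2.8),
    ("speak_only",      80, True,  2.5),
]

def summarize(trials):
    """Per-strategy accuracy, with mean selection time over successful trials."""
    stats = {}
    for strategy, _size, hit, t in trials:
        s = stats.setdefault(strategy, {"hits": 0, "n": 0, "times": []})
        s["n"] += 1
        if hit:
            s["hits"] += 1
            s["times"].append(t)  # time only counts when the target was hit
    return {
        name: {"accuracy": s["hits"] / s["n"],
               "mean_time_s": round(mean(s["times"]), 2)}
        for name, s in stats.items()
    }

print(summarize(trials))
```

In practice such summaries would also be broken down by target size and position, mirroring the factors the experiment varies.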