Operators of traffic control rooms must often respond quickly to critical incidents using a complex array of keyboards, mice, very large screen monitors and other peripheral equipment. To support the aim of finding more natural interfaces for this challenging application, this paper presents PEMMI (Perceptually Effective Multimodal Interface), a prototype transport management control system that takes video-based hand gesture and speech recognition as inputs. A specific theme within this research is determining the optimum strategy for gesture input, in terms of both single-point target selection and suitable multimodal feedback for selection. Users were found to prefer larger selection areas for targets in gesture interfaces, and to select within 44% of the target's selection radius. The minimum effective target size for 'device-free' gesture interfaces was found to be 80 pixels (on a 1280x1024 screen). The paper also shows that feedback on gesture input via large screens is enhanced by combining audio and visual cues to guide the user's multimodal input. Audio feedback in particular improved user response time by an average of 20% over existing gesture selection strategies for multimodal tasks.
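The two sizing findings above can be turned into a simple hit-test rule. The sketch below is illustrative only and not from the paper: it assumes the 80-pixel minimum refers to target width (diameter) on a 1280x1024 display, and uses the 44% figure to flag where most selections are expected to land. All function and constant names are hypothetical.

```python
import math

# Reported minimum effective target size (width in px) on a 1280x1024 screen.
MIN_TARGET_SIZE_PX = 80
# Users tended to select within 44% of the target's selection radius.
TYPICAL_HIT_FRACTION = 0.44

def hit_test(pointer, target_centre, target_size_px):
    """True if the gesture pointer lands inside the target, after
    clamping the target to the reported minimum effective size."""
    radius = max(target_size_px, MIN_TARGET_SIZE_PX) / 2.0
    dx = pointer[0] - target_centre[0]
    dy = pointer[1] - target_centre[1]
    return math.hypot(dx, dy) <= radius

def is_typical_selection(pointer, target_centre, target_size_px):
    """True if the selection falls inside the central region
    (44% of the radius) where most users were observed to select."""
    radius = max(target_size_px, MIN_TARGET_SIZE_PX) / 2.0
    dx = pointer[0] - target_centre[0]
    dy = pointer[1] - target_centre[1]
    return math.hypot(dx, dy) <= TYPICAL_HIT_FRACTION * radius
```

One design consequence of the 44% observation: a system tuning target sizes could treat selections in the outer 56% of the radius as low-confidence and ask for multimodal confirmation (e.g. a spoken "yes") rather than acting immediately.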