Motivated by the fundamental role that rhythms apparently play in speech and gestural communication among humans, this study was undertaken to substantiate a biologically motivated model for synchronizing speech and gesture input in human-computer interaction. Our approach presents a novel method that conceptualizes a multimodal user interface on the basis of timed agent systems. Multiple agents poll presemantic information from different sensory channels (speech and hand gestures) and integrate it into multimodal data structures that can be processed by an application system, itself also built from agents. This article motivates and presents technical work that exploits rhythmic patterns in the development of biologically and cognitively motivated mediator systems between humans and machines.
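The integration step described above, in which agents poll presemantic percepts from separate channels and bind them into multimodal structures under temporal constraints, could be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the `Percept` structure, the `fuse` function, and the 0.35 s window (standing in for a rhythm-derived synchronization interval) are all hypothetical.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Percept:
    channel: str      # sensory channel, e.g. "speech" or "gesture"
    content: str      # presemantic token (recognized word or gesture label)
    timestamp: float  # seconds since session start

def fuse(speech: List[Percept], gestures: List[Percept],
         window: float = 0.35) -> List[Tuple[Percept, Optional[Percept]]]:
    """Pair each speech percept with the temporally closest gesture
    percept that falls within the synchronization window; speech
    percepts with no gesture nearby are passed through unpaired."""
    fused: List[Tuple[Percept, Optional[Percept]]] = []
    for s in speech:
        best: Optional[Percept] = None
        for g in gestures:
            dt = abs(g.timestamp - s.timestamp)
            if dt <= window and (best is None or
                                 dt < abs(best.timestamp - s.timestamp)):
                best = g
        fused.append((s, best))
    return fused

# A deictic utterance in the style of "put that there": the word "that"
# co-occurs with a pointing gesture, while "put" has no nearby gesture.
speech = [Percept("speech", "put", 0.30), Percept("speech", "that", 1.20)]
gestures = [Percept("gesture", "point", 1.05)]
pairs = fuse(speech, gestures)
```

Under this sketch, `pairs[0]` carries no gesture (the nearest one lies 0.75 s away, outside the window), while `pairs[1]` binds "that" to the pointing gesture 0.15 s earlier; in the article's architecture the analogous binding would be driven by rhythmic timing rather than a fixed window.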