We present a collaborative approach to a detailed understanding of how pointing gestures are used alongside referring expressions. This effort is undertaken in the context of human-machine interaction and integrates empirical studies, theory of grammar and logic, and simulation techniques. In particular, we attempt to measure the precision of the area focused by a pointing gesture, the so-called pointing cone. The pointing cone serves as a central concept both in a formal account of multi-modal integration at the linguistic speech-gesture interface and in a computational model for processing multi-modal deictic expressions.
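To make the pointing-cone concept concrete, a minimal sketch follows of how a system might test whether a candidate object lies inside a pointing cone. This is an illustrative assumption, not the authors' implementation: the cone is taken to be characterized by an apex at the pointing hand, a direction vector, and an apex angle; all names (`in_pointing_cone`, parameter names) are hypothetical.

```python
import math


def in_pointing_cone(apex, direction, target, apex_angle_deg):
    """Hypothetical test: does `target` fall inside a pointing cone?

    apex            -- 3D position of the pointing hand (cone tip)
    direction       -- 3D pointing direction vector (need not be unit length)
    target          -- 3D position of the candidate referent
    apex_angle_deg  -- full opening angle of the cone, in degrees
    """
    # Vector from the cone apex to the candidate object.
    v = [t - a for t, a in zip(target, apex)]
    norm_v = math.sqrt(sum(c * c for c in v))
    norm_d = math.sqrt(sum(c * c for c in direction))
    if norm_v == 0 or norm_d == 0:
        return False  # degenerate input: no direction or target at the apex

    # Angular deviation between the pointing ray and the object.
    cos_angle = sum(a * b for a, b in zip(direction, v)) / (norm_d * norm_v)
    deviation_deg = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))

    # Inside the cone iff the deviation is within half the apex angle.
    return deviation_deg <= apex_angle_deg / 2.0
```

In a reference-resolution setting, such a predicate could filter the set of candidate referents before linguistic constraints from the accompanying expression are applied; the measured precision of pointing would then determine an empirically grounded value for the apex angle.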