Modelling grounding and discourse obligations using update rules
Proceedings of the 1st North American Chapter of the Association for Computational Linguistics Conference (NAACL 2000)
Conversing with the user based on eye-gaze patterns
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Towards a model of face-to-face grounding
Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL '03), Volume 1
Estimating User's Conversational Engagement Based on Gaze Behaviors
Proceedings of the 8th International Conference on Intelligent Virtual Agents (IVA '08)
International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction
Thanks to progress in computer vision and human sensing technologies, human behaviors such as gaze and head pose can now be accurately measured in real time. Previous studies on multimodal user interfaces and intelligent virtual agents have presented many interesting applications that exploit such sensing technologies [1, 2]. However, little work has addressed how to extract communication signals from the large amounts of sensing data these technologies produce, or how to use such signals for dialogue management in conversational agents.
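As a purely illustrative sketch of what "extracting a communication signal" from raw gaze data could look like (this is not the method described here; the function name gaze_on_agent_ratio, the per-frame gaze labels, and the window size are all assumptions for illustration), one might compute, over a sliding window, the fraction of gaze samples directed at the agent:

from typing import List, Sequence

def gaze_on_agent_ratio(gaze_targets: Sequence[str],
                        window: int = 30) -> List[float]:
    """For each sliding window of per-frame gaze labels, return the
    fraction of frames in which the user looked at the agent.

    gaze_targets: per-frame labels such as "agent", "object", "away"
    window: number of consecutive frames per window (hypothetical choice)
    """
    ratios = []
    for start in range(max(len(gaze_targets) - window + 1, 0)):
        chunk = gaze_targets[start:start + window]
        ratios.append(sum(1 for t in chunk if t == "agent") / window)
    return ratios

# Example with 30 Hz gaze labels; a persistently low ratio could be
# treated as one candidate cue of low conversational engagement.
labels = ["agent"] * 40 + ["away"] * 20 + ["agent"] * 10
print(gaze_on_agent_ratio(labels, window=30))

Such a window-based ratio is only one conceivable feature; how to select, combine, and act on such signals in dialogue management is exactly the open question raised above.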