Eye gaze patterns in conversations: there is more to conversational agents than meets the eyes
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Evaluating look-to-talk: a gaze-aware interface in a collaborative environment
CHI '02 Extended Abstracts on Human Factors in Computing Systems
Identifying the addressee in human-human-robot interactions based on head pose and speech
Proceedings of the 6th International Conference on Multimodal Interfaces
Incremental dialogue processing in a micro-domain
EACL '09 Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics
Data mining to support human-machine dialogue for autonomous agents
ADMI '11 Proceedings of the 7th International Conference on Agents and Data Mining Interaction
Proceedings of the 15th ACM International Conference on Multimodal Interaction
This paper presents a simple yet effective model for managing attention and interaction control in multimodal spoken dialogue systems. The model allows the user to switch attention between the system and other humans, and allows the system to stop and resume speaking accordingly. An evaluation in a tutoring setting shows that the user's attention can be monitored effectively using head-pose tracking, and that this is more reliable than push-to-talk.
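The abstract's pause-and-resume behavior can be illustrated with a minimal sketch. The paper does not specify its implementation here, so everything below is an assumption for illustration: a controller that treats the user as attending to the system when the tracked head yaw stays within a threshold, pauses system speech when attention is lost, and resumes it when attention returns. The class name, threshold, and hysteresis-free logic are all hypothetical, not taken from the paper.

```python
class AttentionController:
    """Illustrative attention/interaction controller driven by head pose.

    Assumption: attention is inferred from head yaw alone; a yaw within
    +/- yaw_threshold_deg of the system counts as attending. Real systems
    would smooth the pose estimate and add hysteresis.
    """

    def __init__(self, yaw_threshold_deg=20.0):
        self.yaw_threshold = yaw_threshold_deg
        self.attending = False   # is the user facing the system?
        self.speaking = False    # is the system currently speaking?
        self.paused = False      # was an utterance interrupted?

    def update_head_pose(self, yaw_deg):
        """Update state from one head-pose estimate (yaw in degrees)."""
        self.attending = abs(yaw_deg) <= self.yaw_threshold
        if self.speaking and not self.attending:
            # User turned to another person: stop speaking, remember it.
            self.speaking = False
            self.paused = True
        elif self.paused and self.attending:
            # User turned back: resume the interrupted utterance.
            self.speaking = True
            self.paused = False
        return self.attending

    def start_speaking(self):
        # Only begin an utterance while the user is attending.
        if self.attending:
            self.speaking = True


ctrl = AttentionController()
ctrl.update_head_pose(0.0)    # user faces the system -> attending
ctrl.start_speaking()         # system begins speaking
ctrl.update_head_pose(60.0)   # user turns away -> system pauses
ctrl.update_head_pose(5.0)    # user turns back -> system resumes
```

The same state machine could be driven by a push-to-talk signal instead of head pose, which is the baseline the evaluation compares against.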