In this paper, we report on an interactive system and the results of a formal user study that was carried out with the aim of comparing two approaches to estimating users' interest in a multimodal presentation based on their eye gaze. The scenario consists of a virtual showroom where two 3D agents present product items in an entertaining way and adapt their performance according to users' (in)attentiveness. In order to infer users' attention and visual interest with regard to interface objects, our system analyzes eye movements in real time. Interest detection algorithms used in previous research determine an object of interest based on the time that eye gaze dwells on that object. However, such algorithms are ill suited to dynamic presentations, where the user's focus of attention must be assessed against continuously changing presentation content. Here, the current context of the object of interest has to be considered, i.e., whether the visual object is part of (or contributes to) the current presentation content or not. Therefore, we propose to estimate the interest (or non-interest) of a user by means of dynamic Bayesian networks that can take into account the current context of the attention-receiving object. In this way, the presentation agents can provide timely and appropriate responses. The benefits of our approach are demonstrated both theoretically and empirically.
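To make the contrast concrete, the following Python sketch juxtaposes a dwell-time interest detector of the kind used in previous research with a minimal dynamic-Bayesian-network-style forward filter that conditions on whether the gazed-at object belongs to the current presentation content. All function names, probabilities, and the two-state model structure are illustrative assumptions for exposition, not the authors' actual model or parameters.

```python
# Minimal sketch contrasting the two estimation approaches described
# above. All names, probabilities, and the two-state structure are
# illustrative assumptions, not the paper's implementation.

def dwell_time_interest(gaze_samples, threshold_s=0.5, sample_dt=0.05):
    """Baseline from prior work: declare interest in an object once
    cumulative gaze dwell time on it exceeds a fixed threshold."""
    dwell = {}
    for obj in gaze_samples:      # one gazed-at object id (or None) per sample
        if obj is None:
            continue
        dwell[obj] = dwell.get(obj, 0.0) + sample_dt
        if dwell[obj] >= threshold_s:
            return obj            # first object to cross the threshold
    return None


def dbn_interest_filter(observations, p_stay=0.9, prior=0.5):
    """Context-sensitive alternative: forward filtering over a binary
    hidden state (interested / not interested), one time slice per
    gaze sample. Each observation is (gaze_on_object, in_presentation):
    gaze on an object that belongs to the current presentation content
    counts as much stronger evidence of interest than off-topic gaze."""
    # Assumed emission model, given as
    # (P(obs | interested), P(obs | not interested)).
    emission = {
        (True, True):   (0.90, 0.20),  # on-topic gaze: strong evidence
        (True, False):  (0.40, 0.30),  # off-topic gaze: weak evidence
        (False, True):  (0.10, 0.80),
        (False, False): (0.60, 0.70),
    }
    belief = prior                     # P(interested)
    history = []
    for gaze_on, in_context in observations:
        # Transition: interest tends to persist between time slices.
        pred = p_stay * belief + (1.0 - p_stay) * (1.0 - belief)
        like_i, like_n = emission[(gaze_on, in_context)]
        belief = like_i * pred / (like_i * pred + like_n * (1.0 - pred))
        history.append(belief)
    return history


if __name__ == "__main__":
    # Five samples of gaze on an object outside the presentation,
    # then five on an object that is part of it: the filtered belief
    # rises slowly at first, then sharply once the gaze is on-topic.
    obs = [(True, False)] * 5 + [(True, True)] * 5
    print([round(b, 2) for b in dbn_interest_filter(obs)])
```

The point of the sketch is purely structural: the dwell-time detector treats every fixation alike, whereas the filtered belief weighs the same gaze evidence differently depending on whether the fixated object contributes to the current presentation content.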