A gaze-responsive self-disclosing display
CHI '90 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
The use of eye movements in human-computer interaction techniques: what you look at is what you get
ACM Transactions on Information Systems (TOIS) - Special issue on computer-human interaction
Eye gaze patterns in conversations: there is more to conversational agents than meets the eyes
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
The impact of eye gaze on communication using humanoid avatars
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
What's in the eyes for attentive input
Communications of the ACM
Where to look: a study of human-robot engagement
Proceedings of the 9th international conference on Intelligent user interfaces
BT Technology Journal
Conversing with the user based on eye-gaze patterns
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Towards a model of face-to-face grounding
ACL '03 Proceedings of the 41st Annual Meeting on Association for Computational Linguistics - Volume 1
Describing and generating multimodal contents featuring affective lifelike agents with MPML
New Generation Computing
Gaze-based infotainment agents
Proceedings of the international conference on Advances in computer entertainment technology
Highly Realistic 3D Presentation Agents with Visual Attention Capability
SG '07 Proceedings of the 8th international symposium on Smart Graphics
MPML3D: a reactive framework for the multimodal presentation markup language
IVA'06 Proceedings of the 6th international conference on Intelligent Virtual Agents
AutoSelect: What You Want Is What You Get: Real-Time Processing of Visual Attention and Affect
PIT'06 Proceedings of the 2006 international tutorial and research conference on Perception and Interactive Technologies
Estimating User's Conversational Engagement Based on Gaze Behaviors
IVA '08 Proceedings of the 8th international conference on Intelligent Virtual Agents
Autonomous Turn-Taking Agent System Based on Behavior Model
Proceedings of the 13th International Conference on Human-Computer Interaction. Part III: Ubiquitous and Intelligent Interaction
Enhancements to Online Help: Adaptivity and Embodied Conversational Agents
UAHCI '09 Proceedings of the 5th International Conference on Universal Access in Human-Computer Interaction. Part II: Intelligent and Ubiquitous Interaction Environments
IVA '09 Proceedings of the 9th International Conference on Intelligent Virtual Agents
Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems: Volume 1
Communicating with multiple users for embodied conversational agents in quiz game context
International Journal of Intelligent Information and Database Systems
International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction
Proceedings of the 2010 workshop on Eye gaze in intelligent human machine interaction
Interacting with a gaze-aware virtual character
Proceedings of the 2010 workshop on Eye gaze in intelligent human machine interaction
Attentive user interface for interaction within virtual reality environments based on gaze analysis
HCII'11 Proceedings of the 14th international conference on Human-computer interaction: interaction techniques and environments - Volume Part II
International Journal of Human-Computer Studies
Estimating conversational dominance in multiparty interaction
Proceedings of the 14th ACM international conference on Multimodal interaction
Gaze awareness in conversational agents: Estimating a user's conversational engagement from eye gaze
ACM Transactions on Interactive Intelligent Systems (TiiS) - Special issue on interaction with smart objects, Special section on eye gaze and conversation
The paper describes an infotainment application in which lifelike characters present two MP3 players in a virtual showroom. The key feature of the system is that the presenter agents analyze the user's gaze behavior in real time and can adapt the presentation flow accordingly. In particular, a user's (non-)interest in interface objects, as well as preference in decision situations, is estimated automatically using eye gaze as the sole input modality. A formal study compared two versions of the application. Results indicate that attentive presentation agents support successful grounding of deictic agent gestures and natural gaze behavior.