The perception of gaze plays a crucial role in human-human interaction. Gaze has been shown to matter for a number of aspects of communication and dialogue, especially for managing the flow of the dialogue and participant attention, for deictic referencing, and for the communication of attitude. When developing embodied conversational agents (ECAs) and talking heads, modeling and delivering accurate gaze targets is therefore essential. Traditionally, systems communicating through talking heads have been displayed to the human conversant on 2D displays, such as flat monitors. This approach imposes severe limitations on the accurate communication of gaze, since 2D displays are associated with several powerful effects and illusions, most importantly the Mona Lisa gaze effect, where the gaze of the projected head appears to follow the observer regardless of viewing angle. We describe the Mona Lisa gaze effect and its consequences in the interaction loop, and propose a new approach for displaying talking heads using a 3D projection surface (a physical model of a human head) as an alternative to the traditional flat-surface projection. We investigate and compare the accuracy of gaze-direction perception and the Mona Lisa gaze effect on 2D and 3D projection surfaces in a five-subject gaze-perception experiment. The experiment confirms that a 3D projection surface completely eliminates the Mona Lisa gaze effect and delivers very accurate gaze direction that is independent of the observer's viewing angle. Based on the data collected in this experiment, we refine the formulation of the Mona Lisa gaze effect. The data, when reinterpreted, confirms the predictions of the new model for both 2D and 3D projection surfaces. Finally, we discuss the gaze-direction requirements of different spatially interactive systems, and propose new applications and experiments for interaction in human-ECA and human-robot settings made possible by this technology.
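The contrast between the two projection surfaces can be summarized with a minimal geometric sketch (an illustrative model, not the paper's formulation): on a flat display, the gaze offset rendered on screen travels with the observer's viewing direction, so an avatar looking "into the camera" appears to look at every observer; on a 3D head, gaze is anchored in world coordinates. All angles and function names below are hypothetical.

```python
def perceived_gaze_2d(displayed_offset_deg: float, observer_angle_deg: float) -> float:
    """Mona Lisa effect on a flat display: the rendered gaze offset is
    carried along with the observer's viewing direction, so the perceived
    world-frame gaze direction shifts with the observer's position."""
    return observer_angle_deg + displayed_offset_deg

def perceived_gaze_3d(world_gaze_deg: float, observer_angle_deg: float) -> float:
    """3D projection surface: gaze is fixed in world coordinates and is
    independent of where the observer stands."""
    return world_gaze_deg

# An avatar rendered "looking straight ahead" (0 degrees), seen from three
# observer positions: on 2D every observer feels looked at; on 3D only the
# observer at 0 degrees does.
for obs in (-30.0, 0.0, 30.0):
    print(obs, perceived_gaze_2d(0.0, obs), perceived_gaze_3d(0.0, obs))
```

Under this toy model, the 2D perceived direction always coincides with the observer's own angle (everyone is "followed"), while the 3D perceived direction stays at the intended target regardless of viewing angle, matching the experimental finding described above.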