The impact of eye gaze on communication using humanoid avatars. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Taming Mona Lisa: Communicating gaze faithfully in 2D and 3D facial projections. ACM Transactions on Interactive Intelligent Systems (TiiS).
Designing effective gaze mechanisms for virtual agents. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
A head-eye coordination model for animating gaze shifts of virtual characters. Proceedings of the 4th Workshop on Eye Gaze in Intelligent Human Machine Interaction.
Embodied social agents, through their ability to afford embodied interaction using nonverbal human communicative cues, hold great promise in application areas such as education, training, rehabilitation, and collaborative work. Gaze cues are particularly important for achieving significant social and communicative goals. In this research, I explore how agents, both virtual agents and humanlike robots, might achieve such goals through the use of various gaze mechanisms. To this end, I am developing computational control models of gaze behavior that treat gaze as the output of a system with multiple multimodal inputs. These inputs can be characterized at different levels of interaction, from non-interactive (e.g., the physical characteristics of the agent itself) to fully interactive (e.g., the speech and gaze behavior of a human interlocutor). This research will produce a set of control models, each focused on a different gaze mechanism, combined into an open-source library of gaze behaviors usable by both human-robot and human-virtual agent interaction designers. System-level evaluations in naturalistic settings will validate this gaze library for its ability to evoke positive social and cognitive responses in human users.
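To make the input-output framing concrete, the sketch below illustrates what such a control model's interface might look like: non-interactive inputs (the agent's own physical characteristics) and fully interactive inputs (the interlocutor's speech and gaze state) feed a policy that selects a gaze act. This is a minimal, hypothetical illustration only; all names (`AgentTraits`, `InterlocutorState`, `select_gaze_target`) and the toy policy are my assumptions, not the research's actual models or library API.

```python
from dataclasses import dataclass

# Hypothetical names throughout; this is an interface sketch, not the actual library.

@dataclass
class AgentTraits:
    """Non-interactive inputs: fixed physical characteristics of the agent."""
    max_head_speed_deg_s: float   # how fast the head can turn
    eye_rotation_limit_deg: float # mechanical/animation limit of the eyes

@dataclass
class InterlocutorState:
    """Fully interactive inputs: live state of the human partner."""
    is_speaking: bool
    gazing_at_agent: bool

def select_gaze_target(traits: AgentTraits, partner: InterlocutorState) -> str:
    """Toy gaze mechanism: map multimodal inputs to a symbolic gaze act."""
    if partner.is_speaking and partner.gazing_at_agent:
        return "mutual_gaze"      # reciprocate attention to the speaker
    if partner.is_speaking:
        return "face_directed"    # monitor the speaker's face
    return "gaze_aversion"        # avert gaze, e.g., while holding the turn

robot = AgentTraits(max_head_speed_deg_s=90.0, eye_rotation_limit_deg=35.0)
print(select_gaze_target(robot, InterlocutorState(is_speaking=True,
                                                  gazing_at_agent=True)))
# prints "mutual_gaze"
```

A real control model would replace the symbolic policy with continuous head-eye trajectories shaped by the agent's traits, but the separation between input levels is the point of the sketch.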