In this chapter, we first summarize the findings of two previous studies on the limitations of flat displays for embodied conversational agents (ECAs) in face-to-face human-agent interaction. We then motivate the need for a three-dimensional display of faces to guarantee accurate delivery of gaze and directional movements, and present Furhat, a novel, simple, highly effective, and human-like back-projected robot head that uses computer animation to deliver facial movements and is equipped with a pan-tilt neck. After a detailed account of why and how Furhat was built, we discuss the advantages of optically projected animated agents for interaction in terms of situatedness, environment and context awareness, and social, human-like face-to-face interaction with robots, in which subtle nonverbal and social facial signals can be communicated. At the end of the chapter, we present a recent application of Furhat as a multimodal multiparty interaction system, exhibited at the London Science Museum as part of a robot festival. We conclude by discussing future developments, applications, and opportunities for this technology.
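To illustrate why a physical pan-tilt neck enables the directional gaze that a flat display cannot deliver, the sketch below computes the pan (yaw) and tilt (pitch) angles needed to orient a head toward a 3D target, such as one interlocutor among several in a multiparty setting. This is a minimal illustration under an assumed coordinate frame (x right, y up, z forward); the function name and frame are hypothetical and not part of any Furhat API.

```python
import math

def pan_tilt_to_target(head, target):
    """Return (pan, tilt) in degrees that aim a pan-tilt neck located at
    `head` toward `target`, both (x, y, z) tuples in an assumed frame
    with x right, y up, z forward. Hypothetical helper for illustration."""
    dx = target[0] - head[0]
    dy = target[1] - head[1]
    dz = target[2] - head[2]
    pan = math.degrees(math.atan2(dx, dz))                    # left/right rotation
    tilt = math.degrees(math.atan2(dy, math.hypot(dx, dz)))   # up/down rotation
    return pan, tilt

# A listener standing 1 m ahead and 1 m to the right, at head height:
pan, tilt = pan_tilt_to_target((0.0, 0.0, 0.0), (1.0, 0.0, 1.0))
```

Because each interlocutor gets a distinct pan angle, observers can tell unambiguously who is being looked at, avoiding the "Mona Lisa" effect of 2D faces, whose gaze appears directed at every viewer simultaneously.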