Furhat: a back-projected human-like robot head for multiparty human-machine interaction

  • Authors:
  • Samer Al Moubayed; Jonas Beskow; Gabriel Skantze; Björn Granström

  • Affiliations:
  • Department of Speech, Music, and Hearing, KTH Royal Institute of Technology, Stockholm, Sweden (all authors)

  • Venue:
  • COST'11: Proceedings of the 2011 International Conference on Cognitive Behavioural Systems
  • Year:
  • 2011

Abstract

In this chapter, we first present a summary of findings from two previous studies on the limitations of using flat displays with embodied conversational agents (ECAs) in the context of face-to-face human-agent interaction. We then motivate the need for a three-dimensional display of faces to guarantee accurate delivery of gaze and directional movements, and present Furhat, a novel, simple, highly effective, and human-like back-projected robot head that uses computer animation to deliver facial movements and is equipped with a pan-tilt neck. After presenting a detailed account of why and how Furhat was built, we discuss the advantages of using optically projected animated agents for interaction. We consider such agents in terms of situatedness, environment, context awareness, and social, human-like face-to-face interaction with robots in which subtle nonverbal and social facial signals can be communicated. At the end of the chapter, we present a recent application of Furhat as a multimodal multiparty interaction system that was exhibited at the London Science Museum as part of a robot festival. We conclude the chapter by discussing future developments, applications, and opportunities for this technology.