Expressive facial speech synthesis on a robotic platform

  • Authors:
  • Xingyan Li; Bruce MacDonald; Catherine I. Watson

  • Affiliations:
  • Department of Electrical and Computer Engineering, University of Auckland, New Zealand (all authors)

  • Venue:
  • IROS'09: Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems
  • Year:
  • 2009

Abstract

This paper presents our expressive facial speech synthesis system, Eface, for a social or service robot. Eface aims to enable a robot to deliver information clearly, with empathetic speech and an expressive virtual face. The empathetic speech is built on the Festival speech synthesis system and gives robots the capability to speak with different voices and emotions. Two versions of a virtual face have been implemented to display the robot's expressions. One, with just over 100 polygons, has lower hardware requirements but looks less natural. The other, with over 1000 polygons, looks realistic but consumes more CPU resources and requires more capable video hardware. The whole system is incorporated into the popular open-source robot interface Player, which makes client programs easy to write and debug. It is also convenient to use the same system across different robot platforms. We have implemented this system on a physical robot and tested it in a robotic nurse assistant scenario.