Model-based talking face synthesis for anthropomorphic spoken dialog agent system

  • Authors:
  • Tatsuo Yotsukura; Shigeo Morishima; Satoshi Nakamura

  • Affiliations:
  • ATR Spoken Language Translation Research Laboratories, Keihanna Science City, Kyoto, Japan; Seikei University, Musashino-shi, Tokyo, Japan; ATR Spoken Language Translation Research Laboratories, Keihanna Science City, Kyoto, Japan

  • Venue:
  • MULTIMEDIA '03: Proceedings of the eleventh ACM international conference on Multimedia
  • Year:
  • 2003


Abstract

Towards natural human-machine communication, interface technologies based on speech and image information have been intensively developed. An anthropomorphic dialog agent is an ideal system for this purpose, as it integrates spoken dialog with natural facial expressions. This paper reports on our project to create a general-purpose toolkit for building an easily customizable anthropomorphic agent. Few existing tools are at once intuitive, easy to understand, fully interactive, and open source; our anthropomorphic agent is designed to fulfill these requirements. The toolkit consists of four modules: multimodal dialog integration, speech recognition, speech synthesis, and face image synthesis. These modules are highly modularized and interlinked by a simple communication protocol. In this paper, we focus on the construction of the agent's face image synthesis, for which lip movement control synchronized to the speech signal and facial emotion expression are the most important components. We developed a face image synthesis module (FSM) that requires only one frontal face image and can be used by users of any skill level. A user's original agent can be generated through a simple adjustment of the frontal face image to a generic wire-frame model. The paper describes the overall system architecture, and specifically the agent's face image synthesis part.
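The abstract does not specify the communication protocol that links the four modules. As a purely hypothetical sketch of how highly modularized components might be interlinked by a simple text-message protocol, one could route plain "COMMAND argument" strings through a central dispatcher (all module names and message formats below are illustrative assumptions, not the authors' design):

```python
# Hypothetical sketch: four agent modules exchanging simple text messages
# through a central dispatcher. Module names (asr, tts, fsm) and message
# formats are illustrative assumptions only.

class Dispatcher:
    """Routes plain-text messages between registered modules."""

    def __init__(self):
        self.modules = {}

    def register(self, name, handler):
        # handler: a callable taking a message string, returning a reply string
        self.modules[name] = handler

    def send(self, target, message):
        # Messages are plain "COMMAND arg" strings; each module replies with a string.
        return self.modules[target](message)


dispatcher = Dispatcher()
# Stub modules standing in for the toolkit's components:
dispatcher.register("asr", lambda msg: "RECOGNIZED hello")                      # speech recognition
dispatcher.register("tts", lambda msg: "SYNTHESIZED " + msg.split(" ", 1)[1])   # speech synthesis
dispatcher.register("fsm", lambda msg: "LIPSYNC_STARTED")                       # face image synthesis

# The dialog-integration module would drive the others:
text = dispatcher.send("asr", "LISTEN")
audio = dispatcher.send("tts", "SPEAK hello")
face = dispatcher.send("fsm", "ANIMATE hello")
print(text, "|", audio, "|", face)
```

The point of such a text-based scheme is that any module can be replaced or customized independently, which matches the abstract's emphasis on modularity and easy customization.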