VHML - Directing a Talking Head

  • Authors:
  • Andrew Marriott, Simon Beard, John Stallo, Quoc Huynh


  • Venue:
  • AMT '01 Proceedings of the 6th International Computer Science Conference on Active Media Technology
  • Year:
  • 2001

Abstract

The computer revolution in Active Media Technology has recently made it possible to build Talking Head interfaces to applications and information. Users may interact, with plain-English queries, with a lifelike computer-generated image that responds with computer-generated speech using textual information drawn from a knowledge base. This paper details research being done at Curtin University on creating a Virtual Human Markup Language (VHML) that allows these interactive Talking Heads to be directed by text marked up in XML; this direction makes the interaction more effective. The language is designed to accommodate the various aspects of Human-Computer Interaction with regard to Facial Animation, Body Animation, Dialogue Manager interaction, Text-to-Speech production, Emotional Representation, and Hypermedia and Multimedia information. The paper also points to audio and visual examples of the language in use, as well as a user evaluation of an interactive Talking Head that uses VHML. VHML is currently used in several Talking Head applications as well as in a Mentoring System. Finally, we discuss planned future experiments using VHML for two Talking Head demonstrations and evaluations. The VHML development and implementation is part of a three-year European Union Fifth Framework project called InterFace.
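To give a sense of what "text marked up in XML" means here, the sketch below shows a hypothetical VHML-style fragment. The element and attribute names are illustrative assumptions based on the abstract's description (emotion, speech, and dialogue direction), not quotations from the actual VHML specification.

```xml
<!-- Illustrative sketch only: element names approximate the style of
     markup the paper describes and may differ from the real VHML spec. -->
<vhml>
  <person disposition="friendly">
    <paragraph>
      <happy>Hello, and welcome back!</happy>
      <pause length="short"/>
      <sad>Unfortunately, I could not find any new results for your query.</sad>
    </paragraph>
  </person>
</vhml>
```

In such a scheme, the emotion tags would simultaneously direct the Text-to-Speech prosody and the Talking Head's facial animation, so the same marked-up text drives both the audio and the visual channels of the interaction.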