Active agent oriented multimodal interface system

  • Authors:
  • Osamu Hasegawa; Katsunobu Itou; Takio Kurita; Satoru Hayamizu; Kazuyo Tanaka; Kazuhiko Yamamoto; Nobuyuki Otsu

  • Affiliations:
  • Electrotechnical Laboratory, Tsukuba, Ibaraki, Japan (all authors)

  • Venue:
  • IJCAI'95 Proceedings of the 14th international joint conference on Artificial intelligence - Volume 1
  • Year:
  • 1995


Abstract

This paper presents a prototype interface system with an active, human-like agent. In everyday human communication, non-verbal expressions play important roles: they convey emotional information and also control the timing of interaction. This project attempts to introduce multimodality into human-computer interaction. Our human-like agent, with its realistic facial expressions, identifies the user by sight and interacts actively and individually with each user in spoken language. That is, the agent sees the user, visually recognizes who the person is, maintains eye contact through its facial display, and initiates spoken-language interaction by speaking to the user first.
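The interaction loop the abstract describes (see user, identify them visually, keep eye contact, speak first) could be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the class, method, and data names (`Agent`, `perceive`, `known_users`) are all hypothetical stand-ins, and the face-recognition step is reduced to a dictionary lookup.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Hypothetical stand-in for the active agent's perception-action loop."""
    known_users: dict            # face-feature key -> user name (stand-in for face recognition)
    log: list = field(default_factory=list)

    def perceive(self, face_feature):
        """Identify the user by sight; the agent then acts first, not the user."""
        user = self.known_users.get(face_feature, "unknown visitor")
        self.log.append(f"eye-contact:{user}")   # facial display keeps eye contact with the user
        self.log.append(f"say:Hello, {user}!")   # agent opens the spoken interaction itself
        return user

agent = Agent(known_users={"feat-42": "Alice"})
agent.perceive("feat-42")   # agent greets Alice without being addressed first
```

The point of the sketch is the ordering: identification precedes speech, so the agent, not the user, initiates the dialogue.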