Embodied conversational agent based on semantic web

  • Authors:
  • Mikako Kimura; Yasuhiko Kitamura

  • Affiliations:
  • Department of Informatics, Kwansei Gakuin University, Sanda, Hyogo, Japan (both authors)

  • Venue:
  • PRIMA'06: Proceedings of the 9th Pacific Rim International Conference on Agent Computing and Multi-Agent Systems
  • Year:
  • 2006

Abstract

Embodied conversational agents (ECAs) are cartoon-like characters that interact with users through conversation and gestures on a computer screen. ECAs make human-computer interaction friendlier because users can rely on familiar, human-like communication skills such as natural conversation. Incorporated into Web browsers, ECAs are useful as Web guides: they walk us through Web pages while chatting with us. To build such an agent, we need to describe a scenario that explains the Web pages. Conventionally, such scenarios are written manually by developers or programmers in a dialogue description language such as AIML (Artificial Intelligence Markup Language), so they are difficult to keep up to date when the Web pages change. In this paper, we propose a scheme that automatically generates the utterances of a Web guide agent from the Web pages themselves. To this end, the agent must understand the contents of the pages and talk according to them, so we use RDF (Resource Description Framework) to represent the semantic contents of Web pages. To make the agent talk according to those contents, we use the RDF query language SPARQL (SPARQL Protocol And RDF Query Language) and extend AIML to embed SPARQL queries in it. As a prototype, we developed a Web guide system employing an ECA.
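The pipeline the abstract describes (query the page's RDF description, then fill a dialogue template with the results) can be sketched as follows. This is a minimal, hypothetical illustration: the triple data, the `query` function (a toy stand-in for a real SPARQL SELECT over an RDF store), and the template wording are all invented here, not taken from the paper's system.

```python
# Sketch: generate a guide agent's utterance from semantic page data.
# In the paper, the page contents are RDF and the retrieval is a SPARQL
# query embedded in an extended AIML template; both are simplified below.

# Toy "RDF" triples describing a Web page: (subject, predicate, object).
TRIPLES = [
    ("page:top", "dc:title", "Kwansei Gakuin University"),
    ("page:top", "site:topic", "admissions"),
    ("page:top", "site:updated", "2006-04-01"),
]

def query(subject, predicate):
    """Toy stand-in for SPARQL: return all objects matching (s, p, ?o)."""
    return [o for s, p, o in TRIPLES if s == subject and p == predicate]

def generate_utterance(page):
    """Fill an AIML-like utterance template with values retrieved from the
    triples, so the agent's speech tracks the page content automatically."""
    title = query(page, "dc:title")[0]
    topic = query(page, "site:topic")[0]
    return f"Welcome to {title}! This page is about {topic}."

print(generate_utterance("page:top"))
# Prints: Welcome to Kwansei Gakuin University! This page is about admissions.
```

Because the utterance is generated from the page's semantic description rather than hand-written, updating the RDF data when the page changes updates the agent's dialogue automatically, which is the maintenance problem the paper targets.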