Understanding "not-understood": towards an ontology of error conditions for agent communication

  • Authors:
  • Anita Petrinjak; Renée Elio

  • Affiliations:
  • Department of Computing Science, University of Alberta, Edmonton, Alberta, Canada (both authors)

  • Venue:
  • AI'03: Proceedings of the 16th Canadian Society for Computational Studies of Intelligence Conference on Advances in Artificial Intelligence
  • Year:
  • 2003

Abstract

This paper presents the notion of an agent interaction model, from which error conditions for agent communication can be defined: cases in which an agent generates a not-understood message. Such a model specifies task and agent interdependencies, agent roles, and predicate properties at a domain-independent level of abstraction. It also defines which agent beliefs may be updated, revised, or accessed through a communication act from another agent in a particular role. An agent generates a not-understood message when it fails to explain elements of a received message in terms of this underlying interaction model; the specific model violation is included as the content of the not-understood message. As such, not-understood messages constitute a kind of 'run-time error' that signals mismatches between agents' respective belief states, relative to the general interaction model that defines legal and pragmatic communication actions. The interaction model can also set policies for belief revision in response to a not-understood message, which may be necessary when task allocation or coordination relationships change at run time.
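
To make the mechanism described in the abstract concrete, the sketch below is an illustrative Python rendering, not code from the paper; names such as InteractionModel, Message, violation, and handle are hypothetical. It shows, under those assumptions, how a receiving agent might check an incoming message against a declarative interaction model and, when the message cannot be explained, reply with a not-understood message whose content is the specific model violation.

```python
from dataclasses import dataclass, field

# Hypothetical types; names are illustrative, not taken from the paper.

@dataclass
class Message:
    sender: str         # sending agent
    sender_role: str    # role the sender plays in the interaction
    performative: str   # e.g. "inform", "request", "not-understood"
    predicate: str      # content predicate, e.g. "task-assigned"
    args: tuple = ()    # content arguments (the violation, for not-understood)

@dataclass
class InteractionModel:
    # (role, performative, predicate) triples the model permits
    allowed: set = field(default_factory=set)
    # beliefs (predicates) each role may update or revise via communication
    writable_beliefs: dict = field(default_factory=dict)

    def violation(self, msg: Message):
        """Return a description of the first model violation, or None if the
        message can be explained in terms of the interaction model."""
        if (msg.sender_role, msg.performative, msg.predicate) not in self.allowed:
            return f"role {msg.sender_role} may not {msg.performative} {msg.predicate}"
        if msg.predicate not in self.writable_beliefs.get(msg.sender_role, set()):
            return f"role {msg.sender_role} may not modify belief {msg.predicate}"
        return None

def handle(model: InteractionModel, msg: Message):
    """Process a received message; on a model violation, generate a
    not-understood reply carrying that violation as its content."""
    reason = model.violation(msg)
    if reason is not None:
        return Message(sender="self", sender_role="receiver",
                       performative="not-understood",
                       predicate="reason", args=(reason,))
    # ... otherwise update, revise, or access beliefs as the model permits ...
    return None
```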