Goal generation with relevant and trusted beliefs

  • Authors:
  • Célia da Costa Pereira; Andrea G. B. Tettamanzi

  • Affiliations:
  • Università degli Studi di Milano, Crema (CR), Italy (both authors)

  • Venue:
  • Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems - Volume 1
  • Year:
  • 2008

Abstract

A rational agent adopts (or changes) its goals when new information (beliefs) becomes available or when its desires (e.g., the tasks it is supposed to carry out) change. In conventional approaches to goal generation, in which a goal is regarded as a "particular" desire, a goal is adopted if and only if all conditions leading to its generation are satisfied. Such approaches implicitly assume that all beliefs are equally relevant and that their sources are completely trusted. However, this is not a realistic setting. Depending on how much the agent trusts the source of a piece of information, it may decide how strongly to take that piece of information into account during goal generation. Moreover, not all beliefs are equally relevant to the adoption of a given goal, and a given belief may not be equally relevant to the adoption of different goals. We propose an approach to desire/goal generation that takes into account both the relevance of beliefs and the degree of trust in the sources from which the corresponding pieces of information come. We give two algorithms for updating the mental state of an agent in this new setting and three ways of comparing the resulting fuzzy set of desires. Finally, we state two fundamental postulates that any rational goal election function should obey.
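
To make the idea of trust- and relevance-weighted goal generation concrete, here is a minimal, hypothetical sketch in Python. The max-min aggregation of relevance and source trust used below is an illustrative assumption, not the formulation given in the paper, and the names `Belief` and `desire_degree` are invented for this example.

```python
from dataclasses import dataclass


@dataclass
class Belief:
    proposition: str
    trust: float  # degree of trust in the belief's source, in [0, 1]


def desire_degree(beliefs, relevance):
    """Aggregate trusted beliefs into a justification degree for one desire.

    `relevance` maps a proposition to its relevance for the desire, in [0, 1];
    beliefs with relevance 0 do not affect the result. A relevant belief can
    contribute at most its trust degree (min), and the desire is justified to
    the degree of its best-supported relevant belief (max).
    """
    degree = 0.0
    for b in beliefs:
        r = relevance.get(b.proposition, 0.0)
        degree = max(degree, min(r, b.trust))
    return degree


if __name__ == "__main__":
    beliefs = [
        Belief("forecast_says_rain", trust=0.8),
        Belief("friend_says_sunny", trust=0.4),
    ]
    # Relevance of each belief to the (hypothetical) desire "take umbrella".
    relevance_for_take_umbrella = {
        "forecast_says_rain": 1.0,
        "friend_says_sunny": 0.2,
    }
    print(desire_degree(beliefs, relevance_for_take_umbrella))  # 0.8
```

With fully trusted sources and uniformly relevant beliefs, the degree collapses to the all-or-nothing adoption of conventional approaches; graded trust and relevance instead yield a fuzzy set of desires that a goal election function can then rank.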