Engineering trust alignment: Theory, method and experimentation

  • Authors:
  • Andrew Koster; Marco Schorlemmer; Jordi Sabater-Mir

  • Affiliations:
  • Andrew Koster: Artificial Intelligence Research Institute, CSIC, Bellaterra, Barcelona, Spain, and Universitat Autònoma de Barcelona, Bellaterra, Barcelona, Spain
  • Marco Schorlemmer: Artificial Intelligence Research Institute, CSIC, Bellaterra, Barcelona, Spain, and Universitat Autònoma de Barcelona, Bellaterra, Barcelona, Spain
  • Jordi Sabater-Mir: Artificial Intelligence Research Institute, CSIC, Bellaterra, Barcelona, Spain

  • Venue:
  • International Journal of Human-Computer Studies
  • Year:
  • 2012

Abstract

In open multi-agent systems, trust models are an important tool for agents to achieve effective interactions. However, in such open systems the agents do not necessarily use the same, or even similar, trust models, leading to semantic differences between the trust evaluations of different agents. Hence, to successfully use communicated trust evaluations, the agents need to align their trust models. We show that currently proposed solutions, such as common ontologies or ontology alignment methods, introduce additional problems, and we propose a novel approach. We show how a trust alignment can be formed by considering the interactions that agents share, and we describe a mathematical framework that formulates precisely how these interactions support the trust evaluations of both agents. We show how this framework can be used in the alignment process and explain how an alignment should be learned. Finally, we demonstrate this alignment process in practice: we use a first-order regression algorithm to learn an alignment and test it in an example scenario.
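
To make the idea concrete, the following is a minimal, hypothetical sketch of learning a trust alignment from shared interactions. The paper learns alignments with a first-order (relational) regression algorithm; here an ordinary linear least-squares fit over propositional interaction features stands in for it, so the agent names, features, and numbers below are illustrative assumptions rather than the authors' implementation.

    # A minimal sketch of trust alignment from shared interactions.
    # The paper uses first-order (relational) regression; a plain linear
    # least-squares fit over propositional features is substituted here.
    # All names, features, and values are illustrative assumptions.
    import numpy as np

    # Each shared interaction: features both agents observe, plus the trust
    # evaluation each agent derived from it with its own trust model.
    shared_interactions = [
        # (features, trust_reported_by_alice, trust_computed_by_bob)
        ([1.0, 0.9, 0.2], 0.8, 0.65),
        ([1.0, 0.4, 0.7], 0.5, 0.30),
        ([1.0, 0.1, 0.9], 0.2, 0.05),
        ([1.0, 0.7, 0.5], 0.7, 0.50),
    ]

    # Bob learns an alignment: a mapping from (Alice's evaluation, shared
    # interaction features) to the evaluation Bob's own model would give.
    X = np.array([[t_alice] + feats for feats, t_alice, _ in shared_interactions])
    y = np.array([t_bob for _, _, t_bob in shared_interactions])
    weights, *_ = np.linalg.lstsq(X, y, rcond=None)

    def translate(alice_trust, features):
        """Reinterpret a trust value communicated by Alice in Bob's terms."""
        return float(np.array([alice_trust] + features) @ weights)

    # Later, Alice gossips about a target Bob has never interacted with;
    # Bob translates her evaluation before relying on it.
    print(translate(0.6, [1.0, 0.5, 0.6]))

The key design point illustrated is that the alignment is grounded in interactions both agents have actually shared: only for those can Bob compare Alice's communicated evaluation against what his own trust model produces, and the learned mapping is then applied to evaluations about targets Bob has not observed himself.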