A computational trust model with trustworthiness against liars in multiagent systems

  • Authors:
  • Manh Hung Nguyen; Dinh Que Tran

  • Affiliations:
  • Manh Hung Nguyen: Post and Telecommunication Institute of Technology (PTIT), Ha Noi, Vietnam; IRD, UMI 209 UMMISCO, Institut de la Francophonie pour l'Informatique (IFI), Vietnam
  • Dinh Que Tran: Post and Telecommunication Institute of Technology (PTIT), Ha Noi, Vietnam

  • Venue:
  • ICCCI'12 Proceedings of the 4th international conference on Computational Collective Intelligence: technologies and applications - Volume Part I
  • Year:
  • 2012

Abstract

Trust is considered a crucial factor for agents deciding which partners to select during their interactions in open distributed multiagent systems. Most current trust models combine experience trust and reference trust, and make use of a propagation mechanism that enables agents to share their final trust with partners. These models rest on the assumption that all agents are reliable when they share their trust with others. However, they are no longer appropriate for multiagent system applications in which some agents may be unwilling to share their information, or may share wrong data by lying to their partners. In this paper, we introduce a computational model of trust that combines experience trust and reference trust. Furthermore, our model offers a mechanism that enables agents to judge the trustworthiness of referees when they obtain reference trust from their various partners, some of whom may be liars.
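
The sketch below is only an illustration of the general idea described in the abstract (combining experience trust with referee-weighted reference trust, and penalizing referees whose reports conflict with one's own observations); the function names, weights, and update rule are assumptions for exposition, not the formulas proposed in the paper.

```python
# Illustrative sketch, not the paper's actual model: trust in a partner is a
# convex combination of (a) experience trust from one's own interactions and
# (b) reference trust from referees, where each referee's report is discounted
# by a learned trustworthiness score so that liars lose influence over time.

def experience_trust(ratings):
    """Average of an agent's own past interaction ratings in [0, 1]."""
    return sum(ratings) / len(ratings) if ratings else 0.5  # neutral prior

def reference_trust(reports, referee_trustworthiness):
    """Referees' reported trust values, weighted by each referee's
    current trustworthiness."""
    total_weight = sum(referee_trustworthiness[r] for r in reports)
    if total_weight == 0:
        return 0.5  # no credible referees: fall back to a neutral value
    weighted = sum(referee_trustworthiness[r] * v for r, v in reports.items())
    return weighted / total_weight

def overall_trust(exp, ref, w_exp=0.6):
    """Final trust as a convex combination of the two components
    (w_exp is an assumed, tunable weight)."""
    return w_exp * exp + (1 - w_exp) * ref

def update_referee_trustworthiness(current, reported, observed, rate=0.1):
    """Lower a referee's trustworthiness when its report deviates from the
    trustor's own observed experience; raise it when the report agrees."""
    error = abs(reported - observed)
    return max(0.0, min(1.0, current + rate * (1 - 2 * error)))

# Example (hypothetical data): referee "b" reports a value far from the
# agent's own experience, so its low trustworthiness limits its influence.
ratings = [0.8, 0.9, 0.7]               # own interactions with partner j
reports = {"a": 0.85, "b": 0.20}        # referees' reported trust in j
trustworthiness = {"a": 0.9, "b": 0.3}  # learned reliability of each referee

exp = experience_trust(ratings)
ref = reference_trust(reports, trustworthiness)
final = overall_trust(exp, ref)
```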