The Case for Fairness of Trust Management

  • Authors: Adam Wierzbicki
  • Affiliation: Department of Computer Networks, Polish-Japanese Institute of Information Technology, Warsaw, Poland
  • Venue: Electronic Notes in Theoretical Computer Science (ENTCS)
  • Year: 2008

Abstract

All trust management systems must take into account the possibility of error: of misplaced trust. Therefore, regardless of whether it uses reputation, and whether it is centralized or distributed, a trust management system must be evaluated with consideration for the consequences of misplaced or abused trust. Thus, the issue of fairness has always been implicitly considered in the design and evaluation of trust management systems. This paper attempts to show that such implicit consideration, using the utilitarian paradigm of maximizing the sum of agents' utilities, is insufficient. Two case studies presented in the paper concern the design of a new reputation system that uses implicit and emphasized negative feedback, and the evaluation of reputation systems' robustness to discrimination. The case studies demonstrate that considering fairness explicitly leads to different trust management system design and evaluation. Trust management systems can realize a goal of system fairness, identified with distributional fairness of agents' utilities. This goal can be realized in a laboratory setting, where all other factors that affect utilities can be excluded and the system can be tested against modeled adversaries. Taking the fairness of agent behavior explicitly into account when building trust or distrust can help to realize the goal of fairness of trust management systems.
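The abstract's central contrast is between the utilitarian criterion (maximize the sum of agents' utilities) and distributional fairness of those utilities. The following minimal sketch illustrates why the utilitarian view alone is insufficient: two utility distributions with the same total welfare can differ sharply in how evenly that welfare is spread. The Gini coefficient used here is one common inequality measure chosen for illustration; the paper does not prescribe this particular metric, and the utility vectors are hypothetical.

```python
# Contrast of the utilitarian criterion (sum of utilities) with a
# distributional-fairness view of the same outcomes. The Gini coefficient
# below is an illustrative inequality measure, not the paper's own metric.

def utilitarian_welfare(utilities):
    """Utilitarian criterion: total utility, blind to its distribution."""
    return sum(utilities)

def gini(utilities):
    """Gini coefficient: 0 = perfectly equal, approaching 1 = maximally unequal."""
    n = len(utilities)
    total = sum(utilities)
    if n == 0 or total == 0:
        return 0.0
    # Mean absolute difference over all ordered pairs, normalized by 2 * n * total.
    diff_sum = sum(abs(a - b) for a in utilities for b in utilities)
    return diff_sum / (2 * n * total)

# Two hypothetical outcomes with the same utilitarian welfare but very
# different fairness, e.g. one free-riding agent exploiting the others
# versus an even split among all agents.
skewed = [9, 1, 1, 1]   # one agent captures most of the utility
even   = [3, 3, 3, 3]   # the same total, spread evenly

for name, u in [("skewed", skewed), ("even", even)]:
    print(f"{name}: sum = {utilitarian_welfare(u)}, gini = {gini(u):.2f}")
    # skewed: sum = 12, gini = 0.50
    # even:   sum = 12, gini = 0.00
```

Under the utilitarian paradigm the two outcomes are indistinguishable; an explicit fairness criterion is what separates them, which is the kind of distinction the paper's case studies rely on.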