All trust management systems must take into account the possibility of error: of misplaced trust. Therefore, regardless of whether it uses reputation or not, and whether it is centralized or distributed, a trust management system must be evaluated with consideration for the consequences of misplaced or abused trust. Thus, the issue of fairness has always been implicitly considered in the design and evaluation of trust management systems. This paper attempts to show that such an implicit consideration, based on the utilitarian paradigm of maximizing the sum of agents' utilities, is insufficient. Two case studies presented in the paper concern the design of a new reputation system that uses implicit and emphasized negative feedback, and the evaluation of reputation systems' robustness to discrimination. The case studies demonstrate that considering fairness explicitly leads to different trust management system designs and evaluations. Trust management systems can realize the goal of system fairness, identified with the distributional fairness of agents' utilities. This goal can be achieved in a laboratory setting, where all other factors that affect utilities can be excluded and where the system can be tested against modeled adversaries. Taking the fairness of agent behavior explicitly into account when building trust or distrust can help to realize the goal of fairness in trust management systems.
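The contrast drawn above between the utilitarian paradigm and distributional fairness can be made concrete with a small numerical sketch. The example below is illustrative and not taken from the paper: it uses the Gini coefficient as one possible distributional-fairness measure (the paper does not prescribe this particular measure) to show that two utility distributions with an identical utilitarian sum can differ sharply in how fairly utility is distributed among agents.

```python
# Illustrative sketch (assumption: Gini coefficient as the distributional-
# fairness measure; the abstract does not specify one). Two populations of
# agents receive the same total utility, so a purely utilitarian evaluation
# cannot distinguish them, while a distributional measure can.

def gini(utilities):
    """Gini coefficient of a utility distribution.

    Returns 0.0 for a perfectly equal distribution; values approaching 1.0
    indicate that most utility is concentrated in few agents.
    """
    xs = sorted(utilities)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    # Standard formula over the values sorted in ascending order.
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

equal  = [5, 5, 5, 5]    # sum of utilities = 20, evenly distributed
skewed = [17, 1, 1, 1]   # sum of utilities = 20, concentrated in one agent

print(sum(equal), sum(skewed))   # identical utilitarian sums: 20 20
print(gini(equal))               # 0.0  (perfectly fair distribution)
print(gini(skewed))              # 0.6  (highly unequal distribution)
```

A trust management system evaluated only by the sum of utilities would rate both outcomes identically; an explicit fairness criterion, as the paper argues, separates them.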