Evaluation and Design of Online Cooperative Feedback Mechanisms for Reputation Management
IEEE Transactions on Knowledge and Data Engineering
Reputation is a primary mechanism for trust management in decentralized systems, and many reputation-based trust functions have been proposed in the literature. However, picking the right trust function for a given decentralized system is a non-trivial task. One has to consider and balance a variety of factors, including computation and communication costs, scalability, and resilience to manipulation by attackers. Although the first two are relatively easy to evaluate, evaluating the resilience of trust functions is challenging. Most existing work bases its evaluation on static attack models, which is unrealistic: it fails to reflect the adaptive nature of adversaries, who are often real human users rather than simple computing agents. In this paper, we highlight the importance of modeling adaptive attackers when evaluating reputation-based trust functions, and propose an adaptive framework, called COMPARS, for evaluating the resilience of reputation systems. Given the complexity of reputation systems, it is often difficult, if not impossible, to derive an attacker's optimal strategy exactly. COMPARS therefore takes a practical approach that attempts to capture the reasoning process of an attacker as it decides its next action in a reputation system. Specifically, given a trust function and an attack goal, COMPARS generates an attack tree to estimate the possible outcomes of an attacker's action sequences up to certain points in the future. Through these attack trees, COMPARS simulates the optimal attack strategy against a specific reputation function f, which is then used to evaluate the resilience of f. By doing so, COMPARS allows one to conduct a fair and consistent comparison of different reputation functions.
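The attack-tree idea described above can be illustrated with a minimal sketch. The code below is not the COMPARS implementation; it assumes, for illustration, a simple beta-reputation trust function and an attacker with only two actions per step (inject an unfair positive or an unfair negative rating). It expands every action sequence up to a fixed depth and keeps the one that best serves the attacker's goal, mirroring the lookahead reasoning COMPARS performs. All names and parameters here are hypothetical.

```python
from itertools import product

def beta_trust(pos, neg):
    # Illustrative trust function: beta-reputation expected value,
    # i.e. the posterior probability of good behavior given
    # `pos` positive and `neg` negative ratings.
    return (pos + 1) / (pos + neg + 2)

def best_attack(pos, neg, depth, goal):
    """Exhaustively expand an attack tree `depth` levels deep.

    Each node branches on the attacker's next action: inject an unfair
    positive rating ('promote') or an unfair negative one ('slander').
    Returns the action sequence that maximizes `goal` over the trust
    score reached at the leaves.
    """
    best_seq, best_val = (), goal(beta_trust(pos, neg))
    for seq in product(("promote", "slander"), repeat=depth):
        p, n = pos, neg
        for action in seq:
            if action == "promote":
                p += 1
            else:
                n += 1
        val = goal(beta_trust(p, n))
        if val > best_val:
            best_seq, best_val = seq, val
    return best_seq, best_val

# Hypothetical attack goal: drive the target's trust score as low as possible.
seq, score = best_attack(pos=8, neg=2, depth=3, goal=lambda t: -t)
```

Resilience of a trust function could then be quantified, for instance, as how far the simulated optimal attack moves the trust score from its honest value; running the same search against two candidate functions gives the consistent head-to-head comparison the paper argues for.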