Smart cheaters do prosper: defeating trust and reputation systems

  • Authors:
  • Reid Kerr; Robin Cohen

  • Affiliations:
  • University of Waterloo, Waterloo, Ontario, Canada; University of Waterloo, Waterloo, Ontario, Canada

  • Venue:
  • Proceedings of The 8th International Conference on Autonomous Agents and Multiagent Systems - Volume 2
  • Year:
  • 2009

Abstract

Traders in electronic marketplaces may behave dishonestly, cheating other agents. A multitude of trust and reputation systems have been proposed to cope with this problem. These systems are often evaluated by measuring their performance against simple agents that cheat randomly. Unfortunately, they are seldom evaluated from the perspective of security: can a motivated attacker defeat the protection? Previously, it was argued that existing systems may suffer from vulnerabilities that permit effective, profitable cheating despite the use of the system. In this work, we experimentally substantiate the presence of these vulnerabilities by successfully implementing and testing a number of such 'attacks', which consist only of sequences of sales (honest and dishonest) that can be executed in the system. This investigation also reveals two new, previously unnoted cheating techniques. Our success in executing these attacks compellingly makes a key point: security must be a central design goal for developers of trust and reputation systems.
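
The abstract notes that each attack is nothing more than a sequence of honest and dishonest sales executed through the marketplace. The sketch below illustrates that general idea against a hypothetical reputation system that scores a seller by the mean of its past transaction ratings; the NaiveReputationSystem class, the build-then-cheat strategy, and all numeric values are assumptions made for illustration, not the specific systems or attacks evaluated in the paper.

    # Minimal sketch: an "attack" expressed purely as a sequence of sales,
    # run against an assumed reputation system that scores a seller by the
    # mean of past ratings (1 = honest delivery, 0 = cheat). All names,
    # thresholds, and payoffs here are hypothetical.

    class NaiveReputationSystem:
        def __init__(self):
            self.ratings = {}  # seller id -> list of 0/1 ratings

        def record(self, seller, honest):
            self.ratings.setdefault(seller, []).append(1 if honest else 0)

        def score(self, seller):
            r = self.ratings.get(seller, [])
            return sum(r) / len(r) if r else 0.0

    def run_attack(system, seller="attacker", threshold=0.8):
        profit = 0.0
        # Phase 1: many small honest sales to build reputation cheaply.
        for _ in range(20):
            system.record(seller, honest=True)
            profit += 1.0            # small margin per honest sale
        # Phase 2: once buyers trust the seller, cheat on one high-value sale.
        if system.score(seller) >= threshold:
            system.record(seller, honest=False)
            profit += 100.0          # value captured by the single cheat
        return profit, system.score(seller)

    if __name__ == "__main__":
        profit, final_score = run_attack(NaiveReputationSystem())
        # The attacker profits overall, and its score stays high enough
        # (20/21, roughly 0.95) to keep transacting afterwards.
        print(f"profit={profit:.1f}, reputation={final_score:.2f}")

Under these assumptions the attacker ends with a positive profit and a reputation that remains above the trust threshold, which is the kind of outcome a security-oriented evaluation is meant to detect.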