On correctness and privacy in distributed mechanisms

  • Authors: Felix Brandt; Tuomas Sandholm
  • Affiliations: Carnegie Mellon University, Pittsburgh, PA (both authors)
  • Venue: AMEC'05: Proceedings of the 2005 International Conference on Agent-Mediated Electronic Commerce: Designing Trading Agents and Mechanisms
  • Year: 2005

Abstract

Mechanisms that aggregate the possibly conflicting preferences of individual agents are studied extensively in economics, operations research, and, more recently, computer science. Perhaps surprisingly, the classic literature assumes that participating agents act selfishly, possibly untruthfully if it is to their advantage, whereas the mechanism center is usually assumed to be honest and trustworthy. We argue that cryptography offers various concepts and building blocks to ensure the secure, i.e., correct and private, execution of mechanisms. We propose models, with and without a center, that guarantee correctness and preserve the privacy of preferences, relying on diverse assumptions such as the trustworthiness of the center or the hardness of computation. The decentralized model, in which the agents jointly “emulate” a virtual mechanism center, is particularly interesting for two reasons. First, it provides privacy without relying on a trusted third party. Second, it enables the provably correct execution of randomized mechanisms (which is not the case in the centralized model). We furthermore point out how untruthful and multi-step mechanisms can improve privacy. In particular, we show that the fully private emulation of a preference elicitor can result in unconditional privacy of a (non-empty) subset of preferences.
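
To make the decentralized "emulation" idea concrete, the following is a minimal, hypothetical Python sketch (not the protocol from the paper) of agents jointly emulating a vote-counting center via additive secret sharing: each agent splits its one-hot preference vector into random shares, hands one share to every agent, and only aggregated shares are published, so the tally is reconstructed without any individual preference being revealed beyond what the tally itself implies. The names share, tally_without_center, and PRIME are illustrative assumptions; the sketch covers only linear aggregation among honest-but-curious agents, whereas the paper also addresses provably correct execution against misbehaving parties.

    import secrets

    PRIME = 2**61 - 1  # a large prime; all share arithmetic is done modulo this value

    def share(value, n):
        """Split `value` into n additive shares that sum to `value` modulo PRIME."""
        shares = [secrets.randbelow(PRIME) for _ in range(n - 1)]
        shares.append((value - sum(shares)) % PRIME)
        return shares

    def tally_without_center(votes, num_alternatives):
        """Toy emulation of a vote-counting center by the agents themselves.

        Each agent encodes its preference as a one-hot vector, splits every
        coordinate into additive shares, and sends one share to each agent.
        Each agent then publishes only the sum of the shares it received;
        adding those public sums reconstructs the tally, while any proper
        subset of agents learns nothing beyond the published tally.
        """
        n = len(votes)
        # received[j][a] accumulates the shares that agent j holds for alternative a
        received = [[0] * num_alternatives for _ in range(n)]
        for voter, choice in enumerate(votes):
            one_hot = [1 if a == choice else 0 for a in range(num_alternatives)]
            for a, coord in enumerate(one_hot):
                for agent, s in enumerate(share(coord, n)):
                    received[agent][a] = (received[agent][a] + s) % PRIME
        # Each agent publishes its aggregated shares; summing them yields the tally.
        tally = [sum(received[agent][a] for agent in range(n)) % PRIME
                 for a in range(num_alternatives)]
        return tally

    if __name__ == "__main__":
        votes = [0, 2, 2, 1, 2]  # hypothetical preferences of five agents over three alternatives
        print(tally_without_center(votes, num_alternatives=3))  # -> [1, 1, 3]

The design choice here is that the aggregation rule is linear, so the "virtual center" reduces to publishing sums of shares; mechanisms with non-linear winner determination or randomized outcomes require the heavier cryptographic machinery the abstract alludes to.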