Improving privacy in distributed constraint optimization

  • Authors:
  • Michael D. Smith; Rachel Greenstadt

  • Affiliations:
  • Harvard University; Harvard University

  • Venue:
  • Doctoral thesis, Harvard University
  • Year:
  • 2007

Abstract

Multi-agent systems that work with people to accomplish tasks require access to information that their users consider private. Mechanisms that protect this private information from the other participants, and accurate characterizations of the extent to which these mechanisms do so, are essential for the adoption of such systems. This thesis examines these issues in the context of algorithms for distributed constraint optimization (DCOP), a prominent technique for multi-agent coordination. Prior research on DCOP algorithms has focused on the tradeoffs between efficiency and optimality and has largely ignored privacy questions. To characterize the level of privacy protection in DCOP algorithms, this thesis defines four privacy properties: the Global Loss Property, the Maximum Adversary Property, the Maximum Victim Property, and the Cost-For-Loss Property. These properties provide a global view of the amount of private information lost during optimization as well as a more local view of the way that the leakage of private information affects individual participants. The thesis analyzes the extent to which existing metrics assess privacy loss as defined by these properties and introduces new methods for measuring those properties not assessed by existing metrics. An experimental analysis of DCOP algorithms shows that the privacy loss of distributed algorithms varies widely and is affected by a range of design decisions, including the topology the agents use for communication, whether the algorithm is asynchronous, and the computational resources of the participants. The thesis establishes that some distributed algorithms, particularly Adopt and DPOP, outperform centralized algorithms on most privacy properties, but not all. However, for all the algorithms studied, some participants suffer unacceptable levels of privacy loss, indicating a need for algorithms with improved privacy-protection properties. This privacy loss is the result of four identified vulnerabilities: initial, intersection, domain, and solution. This thesis presents a new algorithm, SSDPOP, that uses the cryptographic technique of secret sharing to eliminate the initial vulnerability, a major source of privacy loss in DCOP. Overall, SSDPOP significantly reduces both global privacy loss and the maximal privacy loss of any individual agent, while introducing only small computational overhead.
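The abstract states that SSDPOP relies on secret sharing to eliminate the initial vulnerability. The following minimal sketch, assuming a simple additive scheme over a prime modulus (an illustration of the general primitive, not the thesis's actual protocol, with all names and parameters chosen here), shows how a private constraint cost can be split into shares that individually reveal nothing, yet still support share-wise aggregation.

    import random

    # Illustrative additive secret sharing over a prime field.
    # MODULUS, function names, and the usage below are assumptions for this sketch,
    # not the SSDPOP implementation described in the thesis.
    MODULUS = 2**31 - 1  # a Mersenne prime, so arithmetic stays in a field

    def share(secret: int, n_agents: int) -> list[int]:
        """Split a private value into n additive shares; any n-1 shares look random."""
        shares = [random.randrange(MODULUS) for _ in range(n_agents - 1)]
        last = (secret - sum(shares)) % MODULUS
        return shares + [last]

    def reconstruct(shares: list[int]) -> int:
        """Only the sum of all shares recovers the secret."""
        return sum(shares) % MODULUS

    if __name__ == "__main__":
        private_cost = 42                      # a hypothetical private constraint cost
        shares = share(private_cost, n_agents=3)
        print(shares)                          # each agent holds one meaningless-looking share
        print(reconstruct(shares))             # -> 42, recoverable only from all shares

Because the scheme is additive, sums of shared values can be computed share by share and reconstructed only at the end, which is the general reason such a primitive suits aggregating costs during optimization without exposing any single agent's private valuation.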