Using Prior Knowledge to Improve Distributed Hill Climbing

  • Authors: Roger Mailler
  • Affiliations: Artificial Intelligence Center, USA
  • Venue: IAT '06: Proceedings of the IEEE/WIC/ACM International Conference on Intelligent Agent Technology
  • Year: 2006

Abstract

The Distributed Probabilistic Protocol (DPP) is a new approximate algorithm for solving Distributed Constraint Satisfaction Problems (DCSPs) that exploits prior knowledge to improve convergence speed and efficiency. The protocol is most easily understood as a hybrid of the Distributed Breakout Algorithm (DBA) and the Distributed Stochastic Algorithm (DSA): like DBA, agents exchange "improve" messages to control the search process, but like DSA, they change their values probabilistically. DPP improves upon these algorithms by having agents exchange probability distributions that describe the likelihood of holding particular "improve" values. An agent can use these distributions to estimate the probability that it has the best improve value among its neighbors, or to compute the error introduced by not informing other agents of changes to its improve value. As a result, the protocol uses considerably fewer messages than either DBA or DSA, does not require the user to choose a randomness parameter as DSA does, and converges onto good solutions more quickly. Empirically, DPP is shown to be very competitive with both DSA and DBA.
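The core estimate described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each agent holds a discrete distribution (a dict mapping improve value to probability) for each neighbor, that neighbors are independent, and that ties are broken against the agent; the function name `prob_best` is hypothetical.

```python
import random

def prob_best(my_improve, neighbor_dists):
    """Estimate the probability that my_improve strictly exceeds every
    neighbor's improve value.

    neighbor_dists: list of dicts mapping improve value -> probability,
    one dict per neighbor; neighbors are assumed independent.
    """
    p = 1.0
    for dist in neighbor_dists:
        # P(this neighbor's improve value is below mine)
        p *= sum(pr for val, pr in dist.items() if val < my_improve)
    return p

# Example: two neighbors with simple discrete improve distributions.
neighbors = [
    {0: 0.5, 1: 0.3, 2: 0.2},
    {0: 0.7, 2: 0.3},
]
p = prob_best(2, neighbors)  # 0.8 * 0.7 = 0.56
```

In a DSA-style step, an agent could then change its value with probability `p` (e.g. `if random.random() < p: change_value()`), so the change probability is derived from the exchanged distributions rather than supplied as a user-chosen parameter.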