Getting What You Pay For: Is Exploration in Distributed Hill Climbing Really Worth it?
WI-IAT '10 Proceedings of the 2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology - Volume 02
Distributed hill-climbing algorithms are a powerful, practical technique for solving large Distributed Constraint Satisfaction Problems (DCSPs) such as distributed scheduling, resource allocation, and distributed optimization. Although incomplete, an ideal hill-climbing algorithm finds a solution that is very close to optimal while also minimizing the cost (i.e., the bandwidth, processing cycles, etc.) required to find it. The Distributed Stochastic Algorithm (DSA) is a hill-climbing technique in which each agent changes its value with probability p whenever that change would reduce the number of constraint violations. Traditionally, p is a constant chosen by a developer at design time to work for the general case, so the algorithm neither changes nor learns over the time taken to find a solution. In this paper, we replace the constant p with different probability distribution functions, in the context of solving graph-coloring problems, to determine whether DSA can be improved when probability values are agent-specific. We experiment with both non-adaptive and adaptive distribution functions, and we evaluate our results by the number of violations remaining in a solution and the total number of messages exchanged.
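The basic DSA loop described in the abstract can be sketched as a single-process simulation of synchronized rounds (a minimal illustration, not the authors' implementation; all names here are hypothetical): each agent owns one node of a graph-coloring instance and, in each round, moves to its best conflict-reducing color with probability p.

```python
import random

def dsa_graph_coloring(neighbors, num_colors, p=0.6, max_rounds=200, seed=0):
    """Minimal DSA sketch for graph coloring.

    neighbors: dict mapping each node to a list of adjacent nodes.
    Each round, every agent (node) that is in conflict computes its
    conflict-minimizing color and switches to it with probability p.
    Returns the final coloring and the number of violated edges.
    """
    rng = random.Random(seed)
    nodes = list(neighbors)
    color = {v: rng.randrange(num_colors) for v in nodes}

    def conflicts(v, c):
        # Number of neighbors of v that currently hold color c.
        return sum(1 for u in neighbors[v] if color[u] == c)

    for _ in range(max_rounds):
        updates = {}
        for v in nodes:
            current = conflicts(v, color[v])
            if current == 0:
                continue  # agent is satisfied; no message needed
            best = min(range(num_colors), key=lambda c: conflicts(v, c))
            # Stochastic step: improving moves are taken with probability p.
            if conflicts(v, best) < current and rng.random() < p:
                updates[v] = best
        if not updates:
            break  # no agent wants to move this round
        color.update(updates)  # apply all moves "simultaneously"

    violated = sum(conflicts(v, color[v]) for v in nodes) // 2
    return color, violated
```

The agent-specific variants studied in the paper would correspond to drawing each agent's p from a distribution function (possibly updated adaptively during the run) rather than using the single constant shown here.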