DisFC is an ABT-like algorithm that, instead of sending the value taken by the high-priority agent, sends the subset of the low-priority agent's domain that is compatible with that value. With this strategy, plus the use of sequence numbers, some level of privacy is achieved: each agent knows its own value in the solution but ignores the values of the others. However, sending the whole compatible domain each time an agent changes its value may cause a privacy loss on shared constraints that was initially overlooked. To address this issue, we propose DisFClies, an algorithm that works like DisFC but may lie about the compatible domains of other agents. It requires a single extra condition: if an agent sends a lie, it must tell the truth within finite time afterwards. We prove that the algorithm is sound and complete and that it terminates. We provide experimental results on the privacy gained, at the extra cost of more search.
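The core message content described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the names `compatible_domain`, `lied_domain`, and the not-equal constraint are assumptions chosen for the example; the truth-in-finite-time obligation of DisFClies is only noted in a comment.

```python
# Hypothetical sketch of what a high-priority agent sends in DisFC:
# not its chosen value, but the subset of the low-priority agent's
# domain that is compatible with that value.

def compatible_domain(my_value, neighbor_domain, constraint):
    """Subset of the neighbor's domain consistent with my_value
    under a binary constraint(my_value, neighbor_value)."""
    return {v for v in neighbor_domain if constraint(my_value, v)}

def lied_domain(true_compatible, spurious_values):
    """A lie in the DisFClies sense: report some values as compatible
    that are not. The protocol requires the true compatible domain to
    be sent in finite time afterwards."""
    return true_compatible | set(spurious_values)

# Example: a not-equal constraint between two agents.
neq = lambda a, b: a != b
truth = compatible_domain(2, {1, 2, 3}, neq)   # {1, 3}
lie = lied_domain(truth, [2])                  # {1, 2, 3}
```

Sending `lie` hides which of the high-priority agent's values produced the filtering, at the cost of extra search before the truth is restored.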