The binary Byzantine Agreement problem requires n-1 receivers to agree on the binary value broadcast by a sender, even when some of these n processes may be faulty. We investigate the message complexity of protocols that solve this problem in the case of crash failures. In particular, we derive matching upper and lower bounds on the total, worst-case, and average-case number of messages needed in the failure-free executions of such protocols.

More specifically, we prove that any protocol that tolerates up to t faulty processes requires a total of at least n + t - 1 messages in its failure-free executions, and therefore at least ⌈(n + t - 1)/2⌉ messages in the worst case and min(P0, P1) · (n + t - 1) messages in the average case, where Pv is the probability that the value of the bit the sender wants to broadcast is v. We also give protocols that solve the problem using only the minimum number of messages for each of these three complexity measures. These protocols can be implemented using 1-bit messages. Since a lower bound on the number of messages is also a lower bound on the number of message bits, the above tight bounds on the number of messages are also tight bounds on the number of message bits.
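The three lower bounds stated in the abstract can be made concrete with a small calculation. The sketch below is purely illustrative (the function name and the sample parameters n = 10, t = 3, P0 = 0.5 are not from the paper); it simply evaluates the stated formulas for given n, t, and P0.

```python
import math

def message_lower_bounds(n, t, p0):
    """Evaluate the failure-free message lower bounds from the abstract:
    total over failure-free executions, worst case, and average case,
    for n processes, up to t crash failures, and sender-bit probability
    P0 of broadcasting 0 (so P1 = 1 - p0)."""
    p1 = 1 - p0
    total = n + t - 1                        # at least n + t - 1 messages in total
    worst = math.ceil((n + t - 1) / 2)       # ⌈(n + t - 1)/2⌉ in the worst case
    average = min(p0, p1) * (n + t - 1)      # min(P0, P1) · (n + t - 1) on average
    return total, worst, average

# Hypothetical example: 10 processes, 3 tolerated crashes, unbiased sender bit.
total, worst, average = message_lower_bounds(10, 3, 0.5)
```

Because the matching protocols use 1-bit messages, the same three quantities also bound the number of message bits.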