We study networks that can sort n items even when a large number of the comparators in the network are faulty. We restrict our attention to networks that consist of registers, comparators, and replicators. (Replicators are used to copy an item from one register to another, and they are assumed to be fault free.) We consider the scenarios of both random and worst-case comparator faults, and we follow the general model of destructive comparator failure proposed by Assaf and Upfal [Proc. 31st IEEE Symposium on Foundations of Computer Science, St. Louis, MO, 1990, pp. 275--284], in which the two outputs of a faulty comparator can fail independently of each other.

In the case of random faults, Assaf and Upfal showed how to construct a network with $O(n \log^2 n)$ comparators that (with high probability) can sort n items even if a constant fraction of the comparators are faulty. Whether the bound on the number of comparators can be improved (to, say, $O(n \log n)$) for sorting (or merging) has remained an interesting open question. We resolve this question in this paper by proving that any n-item sorting or merging network that can tolerate a constant fraction of random failures has $\Omega(n \log^2 n)$ comparators.

In the case of worst-case faults, we show that $\Omega(kn \log n)$ comparators are necessary to construct a sorting or merging network that can tolerate up to k worst-case faults. We also show that this bound is tight for $k = O(\log n)$. The lower bound is particularly significant since it formally proves that the cost of tolerating worst-case failures is very high. Both the lower bound for random faults and the lower bound for worst-case faults are the first nontrivial lower bounds on the size of a fault-tolerant sorting or merging network.
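To make the network model concrete, the following toy simulation sketches a comparator network acting on registers, with a `faulty` set marking comparators whose outputs fail. It is not from the paper: the network used (odd-even transposition) and the choice to model a destructive fault as a comparator that passes its inputs through unchanged are simplifying assumptions for illustration only; the Assaf-Upfal model is more general, since each of the two outputs of a faulty comparator can fail independently and take arbitrary values.

```python
def comparator(a, b):
    """Fault-free comparator: routes the smaller item to the first
    output wire and the larger item to the second."""
    return (a, b) if a <= b else (b, a)

def run_network(items, wires, faulty=frozenset()):
    """Apply a comparator network to a list of register contents.

    `wires` is an ordered list of register index pairs (i, j), one per
    comparator. `faulty` holds indices (into `wires`) of comparators
    that fail. Assumption: a failed comparator leaves both registers
    unchanged -- one simple instance of a destructive fault, not the
    full Assaf-Upfal model.
    """
    regs = list(items)
    for k, (i, j) in enumerate(wires):
        if k not in faulty:
            regs[i], regs[j] = comparator(regs[i], regs[j])
    return regs

def oet_network(n):
    """Odd-even transposition network on n registers: n rounds of
    comparators on adjacent registers. Sorts any input when fault free."""
    wires = []
    for r in range(n):
        wires += [(i, i + 1) for i in range(r % 2, n - 1, 2)]
    return wires

net = oet_network(4)
print(run_network([4, 3, 2, 1], net))              # fault free: [1, 2, 3, 4]
print(run_network([4, 3, 2, 1], net, faulty={5}))  # last comparator fails: [1, 3, 2, 4]
```

In the second call, losing a single comparator already leaves the output unsorted, which gives some intuition for why tolerating even k worst-case faults forces the $\Omega(kn \log n)$ blow-up in network size proved above.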