Distributed computing using PCs volunteered by the public can provide high computing capacity at low cost. However, computational results from volunteered PCs have a non-negligible error rate, so result validation is needed to ensure overall correctness. A generally applicable technique is "redundant computing", in which each computation is performed on several separate computers, and a result is accepted only if the replicas reach a consensus. Variations in numerical processing between computers (due to a variety of hardware and software factors) can lead to different results for the same task. In some cases, this can be addressed by doing a "fuzzy comparison" of results, so that two results are considered equivalent if they agree within given tolerances. However, this approach is not applicable to applications that are "divergent", that is, for which small numerical differences can produce large differences in the results. In this paper we examine the problem of validating results of divergent applications. We present a novel approach called Homogeneous Redundancy (HR), in which the redundant instances of a computation are dispatched to numerically identical computers, allowing strict equality comparison of the results. HR has been deployed in Predictor@Home, a world-wide community effort to predict protein structure from sequence.
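The validation strategies described above can be sketched in a few lines. The following is a minimal illustration, not the Predictor@Home implementation: `fuzzy_consensus` accepts a result when enough replicas agree within a tolerance, `strict_consensus` requires exact equality (as HR permits), and `HRScheduler` shows the core HR scheduling idea of confining all replicas of a workunit to one numerically equivalent host class. The function names, the `(os, cpu_vendor)` class key, and the quorum parameter are all assumptions made for this example.

```python
import math
from collections import Counter

def fuzzy_consensus(results, tol=1e-6, quorum=2):
    """Accept a result if at least `quorum` replicas agree within `tol`.
    Suitable for non-divergent applications only."""
    for r in results:
        agreeing = [s for s in results if math.isclose(r, s, rel_tol=tol)]
        if len(agreeing) >= quorum:
            return r
    return None  # no consensus; the workunit must be re-issued

def strict_consensus(results, quorum=2):
    """Exact-equality validation: valid under HR, where all replicas
    ran on numerically identical hosts and should match bit-for-bit."""
    value, count = Counter(results).most_common(1)[0]
    return value if count >= quorum else None

class HRScheduler:
    """Dispatch every replica of a workunit to hosts in one equivalence
    class (here, hypothetically keyed by OS and CPU vendor)."""
    def __init__(self):
        self.assignment = {}  # workunit id -> host class

    def can_dispatch(self, workunit, host):
        cls = (host[0], host[1])  # (os_name, cpu_vendor)
        if workunit not in self.assignment:
            self.assignment[workunit] = cls  # first replica fixes the class
            return True
        return self.assignment[workunit] == cls
```

Under this sketch, a divergent application would use `strict_consensus` together with `HRScheduler`, trading some scheduling flexibility (fewer eligible hosts per workunit) for the ability to compare results exactly.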