Both distributed systems and multicore computers are difficult programming environments. Although an expert programmer may be able to tune distributed and multicore computers to achieve high performance, a non-expert may struggle to produce a program that even functions correctly. We argue that high-level abstractions are an effective way of making parallel computing accessible to the non-expert. An abstraction is a regularly structured framework into which a user may plug simple sequential programs to create very large parallel programs. By virtue of their regular structure and declarative specification, abstractions may be materialized on distributed, multicore, and distributed multicore systems with robust performance across a wide range of problem sizes. In previous work, we presented the All-Pairs abstraction for computing on distributed systems of single CPUs. In this paper, we extend All-Pairs to multicore systems and introduce Wavefront, which represents a number of problems in economics and bioinformatics. We demonstrate good scaling of both abstractions up to 32 cores on a single machine and hundreds of cores in a distributed system.
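To make the plug-in structure of the two abstractions concrete, the sketch below is a minimal, illustrative rendering and not the paper's implementation: All-Pairs applies a user-supplied sequential function to every pair drawn from a set of items, and Wavefront fills a grid where each cell depends on its up, left, and diagonal neighbors. The `similarity` function and the thread pool are assumptions for illustration only; the real systems dispatch work to distributed and multicore workers, and a real Wavefront engine runs each anti-diagonal in parallel.

```python
from itertools import product
from concurrent.futures import ThreadPoolExecutor

def all_pairs(items, func, workers=4):
    """All-Pairs sketch: evaluate func(a, b) for every pair in items x items.

    A thread pool stands in for the distributed/multicore workers that the
    actual abstraction engines would use.
    """
    pairs = list(product(items, items))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda p: (p[0], p[1], func(p[0], p[1])), pairs))

def similarity(a, b):
    """Hypothetical user function: size of the shared character set."""
    return len(set(a) & set(b))

def wavefront(n, func):
    """Wavefront sketch: fill an n x n grid with zero boundaries, where each
    interior cell is func(up, left, diagonal) of its neighbors.

    Shown sequentially for clarity; the abstraction exploits the fact that
    all cells on one anti-diagonal are independent and can run in parallel.
    """
    grid = [[0] * n for _ in range(n)]
    for i in range(1, n):
        for j in range(1, n):
            grid[i][j] = func(grid[i - 1][j], grid[i][j - 1], grid[i - 1][j - 1])
    return grid

if __name__ == "__main__":
    print(all_pairs(["cat", "cart", "dog"], similarity))
    print(wavefront(4, lambda up, left, diag: max(up, left, diag) + 1))
```

Both abstractions have the same shape: the user writes only the small sequential kernel (`similarity`, or the three-argument cell function), while the framework owns the iteration structure, data movement, and placement of work, which is what lets the same declarative specification run on one multicore machine or hundreds of distributed cores.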