Message Passing (MP) and Distributed Shared Memory (DSM) are the two most common approaches to distributed parallel computing. MP is difficult to use, whereas DSM does not scale. Navigational programming (NavP) achieves performance scalability and ease of programming at the same time: it combines the advantages of MP and DSM, balancing convenience with flexibility. Like MP, NavP guides its programmers with the principle of pivot-computes and is therefore efficient and scalable. Like DSM, NavP supports incremental parallelization and shared-variable programming and is therefore easy to use. The implementation and performance analysis of real-world algorithms, namely parallel Jacobi iteration and parallel Cholesky factorization, presented in this paper support the claim that the NavP approach is better suited for general-purpose parallel distributed programming than either MP or DSM.
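The core of the approach described above is that the computation migrates to the node that owns the data, rather than the data being shipped to a fixed process as in MP. The following is a minimal single-process sketch of that idea; the `Node` class, the node names, and the loop standing in for a hop-to-node primitive are illustrative assumptions, not the actual NavP/MESSENGERS API.

```python
# Toy illustration of the pivot-computes idea: a self-migrating
# computation visits each node and computes on the data that node
# owns, carrying only small agent state (here, a running total).
# The large data blocks never move.

class Node:
    """A compute node owning one partition of the distributed data
    (names and structure are illustrative assumptions)."""
    def __init__(self, name, block):
        self.name = name
        self.block = block  # locally owned data partition

def migrating_sum(nodes):
    total = 0              # small state carried by the computation
    for node in nodes:     # stands in for hopping to each node
        total += sum(node.block)  # compute where the data lives
    return total

nodes = [Node("n0", [1, 2, 3]), Node("n1", [4, 5]), Node("n2", [6])]
print(migrating_sum(nodes))  # -> 21
```

In a real NavP program the loop body would execute on a different machine at each step, which is what keeps communication proportional to the small migrating state rather than to the data size.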