An Efficient Implementation of Edmonds' Algorithm for Maximum Matching on Graphs
Journal of the ACM (JACM)
An evaluation of global address space languages: Co-Array Fortran and Unified Parallel C
Proceedings of the Tenth ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming
X10: an object-oriented approach to non-uniform cluster computing
OOPSLA '05: Proceedings of the 20th Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications
Approximating weighted matchings in parallel
Information Processing Letters
Linear time local improvements for weighted matchings in graphs
WEA '03: Proceedings of the 2nd International Conference on Experimental and Efficient Algorithms
Linear time 1/2-approximation algorithm for maximum weighted matching in general graphs
STACS '99: Proceedings of the 16th Annual Conference on Theoretical Aspects of Computer Science
A parallel approximation algorithm for the weighted maximum matching problem
PPAM '07: Proceedings of the 7th International Conference on Parallel Processing and Applied Mathematics
A performance model for fine-grain accesses in UPC
IPDPS '06: Proceedings of the 20th International Conference on Parallel and Distributed Processing
Efficient parallel algorithms for problems such as maximum weighted matching are central to many areas of combinatorial scientific computing. Manne and Bisseling [13] presented a parallel approximation algorithm that is well suited to distributed-memory computers; it is based on a distributed protocol due to Hoepman [9]. In the current paper, a partitioned global address space (PGAS) implementation is presented. PGAS gives programmers the convenience of a shared-memory model, with implicit communication between processes through ordinary loads and stores. Because the shared memory is partitioned according to the affinity of each process, data locality can also be exploited. This paper addresses the main differences between the PGAS and MPI implementations of the Manne-Bisseling algorithm, and highlights advantages of the PGAS model such as shorter and simpler code, closer resemblance to the sequential algorithm, and the choice between fine-grained and coarse-grained communication.
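The algorithm being implemented builds on the locally dominant edge idea: each vertex points at its heaviest available neighbour, and an edge whose two endpoints point at each other is added to the matching. A minimal sequential sketch of this round structure follows; it is illustrative only, the function and variable names are not from the paper, and ties are broken by vertex id here purely so that every round is guaranteed to make progress.

```python
def locally_dominant_matching(adj):
    """1/2-approximate maximum weight matching via locally dominant edges.

    adj: {u: {v: weight, ...}} symmetric adjacency map over integer vertices.
    Sequential simulation of the round structure that the distributed
    protocol parallelises (hypothetical names, not the paper's code).
    """
    # Working copy so the input graph is left untouched.
    alive = {u: dict(nbrs) for u, nbrs in adj.items() if nbrs}
    matching = []
    while alive:
        # Each remaining vertex points at its heaviest remaining neighbour;
        # ties broken by vertex id so a mutual pair always exists.
        cand = {u: max(nbrs, key=lambda v: (nbrs[v], v))
                for u, nbrs in alive.items()}
        # Mutually dominant edges: both endpoints chose each other.
        pairs = [(u, v) for u, v in cand.items()
                 if u < v and cand.get(v) == u]
        if not pairs:
            break  # defensive; the tie-break rule guarantees progress
        for u, v in pairs:
            matching.append((u, v, adj[u][v]))
            for x in (u, v):  # retire both matched endpoints
                for y in alive.pop(x, {}):
                    if y in alive:
                        alive[y].pop(x, None)
        # Drop vertices left with no remaining neighbours.
        alive = {u: n for u, n in alive.items() if n}
    return matching
```

Because every mutually dominant edge can be matched independently in the same round, this structure maps naturally onto both the fine-grained (per-edge loads and stores) and coarse-grained (batched message) communication options that the PGAS model offers.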