Object oriented parallelisation of graph algorithms using parallel iterator
AusPDC '10 Proceedings of the Eighth Australasian Symposium on Parallel and Distributed Computing - Volume 107
With the advent of multi-core processors, desktop application developers must finally confront parallel computing and its challenges. A large portion of a program's computational load lies in iterative computations, which object-oriented languages commonly handle with iterators; standard iterators, however, are inadequate for parallel programming. This paper presents a powerful parallel iterator concept that object-oriented programmers can use to traverse a collection of elements in parallel. The parallel iterator leaves the structure of the program unchanged, works with any collection type (even inherently sequential ones), and supports several scheduling schemes, which may even be selected dynamically at run-time. Along with this ease of use, the results reveal negligible overhead and the expected inherent speedup.
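To illustrate the idea described in the abstract, the following is a minimal Java sketch of a shared, thread-safe iterator; the class and method names are hypothetical and do not reflect the paper's actual library API. Several worker threads poll one iterator, so each worker's loop body keeps the shape of an ordinary sequential iterator loop, and dynamic scheduling falls out naturally: whichever thread calls `next()` first claims the next element.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of a parallel iterator: wraps any Iterable
// (even an inherently sequential collection) behind a synchronized next().
class ParallelIterator<E> {
    private final Iterator<E> inner;

    ParallelIterator(Iterable<E> collection) {
        this.inner = collection.iterator();
    }

    // Returns the next unclaimed element, or null once the collection
    // is exhausted. Synchronization makes concurrent calls safe.
    synchronized E next() {
        return inner.hasNext() ? inner.next() : null;
    }
}

public class Demo {
    public static void main(String[] args) throws Exception {
        List<Integer> data = new ArrayList<>();
        for (int i = 1; i <= 1000; i++) data.add(i);

        ParallelIterator<Integer> it = new ParallelIterator<>(data);
        AtomicInteger sum = new AtomicInteger();

        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int t = 0; t < 4; t++) {
            pool.submit(() -> {
                // The loop body is unchanged from a sequential version;
                // only the iterator it draws elements from is shared.
                for (Integer e = it.next(); e != null; e = it.next()) {
                    sum.addAndGet(e);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);

        System.out.println(sum.get()); // 1 + 2 + ... + 1000 = 500500
    }
}
```

A per-call lock is the simplest scheme (effectively dynamic scheduling with chunk size 1); coarser chunked or static schemes would reduce contention at the cost of load balance, which is the trade-off the scheduling options mentioned above address.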