The design and analysis of parallel algorithms
A Parallel Time/Hardware Tradeoff T·H = O(2^(n/2)) for the Knapsack Problem
IEEE Transactions on Computers
A parallel two-list algorithm for the knapsack problem
Parallel Computing
Fast and scalable parallel algorithms for knapsack-like problems
Journal of Parallel and Distributed Computing
Computing Partitions with Applications to the Knapsack Problem
Journal of the ACM (JACM)
Computers and Intractability: A Guide to the Theory of NP-Completeness
Comments on parallel algorithms for the knapsack problem
Parallel Computing
An efficient parallel algorithm for solving the Knapsack problem on hypercubes
Journal of Parallel and Distributed Computing
Optimal parallel algorithm for the knapsack problem without memory conflicts
Journal of Computer Science and Technology
A Parallel Algorithm for the Knapsack Problem
IEEE Transactions on Computers
An optimal parallelization of the two-list algorithm of cost O(2^(n/2))
Parallel Computing
Time-memory-processor trade-offs
IEEE Transactions on Information Theory
We show that developing an optimal parallelization of the two-list algorithm is much easier than was once thought. The key observation is that the steps of the search phase of the two-list algorithm closely mirror the steps of a procedure for merging two sorted lists, and efficient parallel merging is well understood. Armed with this observation, we present an optimal and scalable parallel two-list algorithm that is easy to understand and analyze, while achieving the best known range of processor-time tradeoffs for this problem. In particular, our algorithm runs on a CREW PRAM in time O(2^(n/2−α)) using 2^α processors, for 0 ≤ α ≤ n/2 − 2 log n + 2.
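To make the search phase concrete, here is a minimal sequential sketch of the classic two-list (Horowitz–Sahni) method that the abstract builds on. It is an illustration under simplifying assumptions, not the paper's parallel algorithm: the function names are hypothetical, and the final loop over the first half's subset sums is exactly the merge-like scan that the paper distributes across 2^α processors.

```python
from bisect import bisect_right

def subset_sums(items):
    """Enumerate the (weight, value) pair of every subset of `items`."""
    sums = [(0, 0)]
    for w, v in items:
        sums += [(sw + w, sv + v) for sw, sv in sums]
    return sums

def two_list_knapsack(items, capacity):
    """Sequential two-list sketch of the 0/1 knapsack search phase.

    Splits the n items into two halves, enumerates the O(2^(n/2))
    subset sums of each half, and searches for the best feasible
    pairing of one subset from each half.
    """
    half = len(items) // 2
    a = subset_sums(items[:half])
    b = subset_sums(items[half:])

    # Sort the second list by weight and record, for every prefix,
    # the best value seen so far (dominance pruning), so that each
    # entry of the first list needs only one binary search.
    b.sort()
    weights = [w for w, _ in b]
    best_prefix, best = [], 0
    for _, v in b:
        best = max(best, v)
        best_prefix.append(best)

    # Merge-like search: pair each feasible subset of the first half
    # with the best compatible subset of the second half.
    answer = 0
    for wa, va in a:
        if wa > capacity:
            continue
        i = bisect_right(weights, capacity - wa) - 1
        if i >= 0:
            answer = max(answer, va + best_prefix[i])
    return answer
```

Because this scan walks two sorted structures in tandem, parallelizing it reduces to the well-studied problem of parallel merging, which is the observation the abstract highlights.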