Control strategies for parallel mixed integer branch and bound
Proceedings of the 1994 ACM/IEEE conference on Supercomputing
This paper describes a parallel, non-shared-memory implementation of the classical general mixed integer branch-and-bound algorithm, with experiments on the CM-5 family of parallel processors. The main issue in such an implementation is whether task scheduling and certain data-storage functions should be handled by a single processor or spread among multiple processors. The centralized approach risks creating processing bottlenecks, while the more decentralized implementations differ more from the fundamental serial algorithm. Extensive computational tests on standard MIPLIB problems compare centralized, clustered, and fully decentralized task scheduling methods, using a novel combination of random work scattering and rendezvous-based global load balancing, along with a distributed "control by token" technique. Further experiments compare centralized and distributed schemes for storing heuristic "pseudo-cost" branching data. The distributed storage method is based on continual asynchronous reduction along a tree of redundant storage sites. On average, decentralized task scheduling appears at least as effective as central control, but pseudo-cost storage should be kept as centralized as possible.
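The decentralized scheduling idea — random work scattering — can be illustrated with a toy single-process simulation. This is only a sketch under simplifying assumptions (round-robin workers, no message latency, no bound pruning), not the paper's CM-5 implementation; the names `scatter_schedule`, `expand`, and `p_scatter` are illustrative. Each worker keeps a local queue of subproblems, and each newly generated child subproblem is, with some probability, sent to a uniformly random peer rather than kept locally, so work spreads without any central scheduler.

```python
import random
from collections import deque

def scatter_schedule(initial_tasks, n_workers=4, expand=None,
                     p_scatter=0.25, seed=0):
    """Toy simulation of random work scattering (illustrative sketch only).

    Each worker holds a local deque of subproblems.  When a worker expands
    a subproblem, each child is sent to a random peer with probability
    p_scatter, otherwise kept locally.  Returns all processed tasks.
    """
    rng = random.Random(seed)
    queues = [deque() for _ in range(n_workers)]
    for i, task in enumerate(initial_tasks):
        queues[i % n_workers].append(task)

    processed = []
    while any(queues):                      # termination detection stands in
        for w in range(n_workers):          # for the paper's control-by-token
            if not queues[w]:
                continue                    # idle worker: would seek a rendezvous
            task = queues[w].popleft()
            processed.append(task)
            for child in (expand(task) if expand else []):
                # scatter: ship the child to a random peer, else keep it local
                dest = rng.randrange(n_workers) if rng.random() < p_scatter else w
                queues[dest].append(child)
    return processed

# Example: a binary branching tree of depth 3, tasks tagged by depth.
# All 15 nodes are processed regardless of how scattering spreads them.
children = lambda t: [(t[0] + 1,), (t[0] + 1,)] if t[0] < 3 else []
done = scatter_schedule([(0,)], n_workers=4, expand=children)
```

In a real distributed setting the scattering step is an asynchronous message send, and idle workers fall back on the rendezvous-based global load balancing the abstract mentions; the simulation only captures the load-spreading behavior.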