There are many challenges to be met before efficient and reliable computation at the petascale is possible. Many scientific and engineering codes running at this scale are memory intensive, which makes thrashing a serious problem. One way to overcome this challenge is to use a dynamic number of processes, so that the total amount of memory available to the computation can be increased on demand. This paper describes modifications made to the massively parallel global optimization code pVTdirect that allow the number of processes to vary at run time. In particular, the modified code monitors memory use and spawns new processes when the amount of available memory is determined to be insufficient. The primary design challenges are discussed, and performance results are presented and analyzed.
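The monitor-then-spawn idea in the abstract can be sketched as follows. This is a hypothetical illustration, not the paper's implementation (pVTdirect is a parallel Fortran code and would use MPI dynamic process management); the names `available_mem_kb` and `maybe_spawn_worker`, the `/proc/meminfo`-style input, and the threshold policy are all assumptions made for the sketch:

```python
import subprocess

def available_mem_kb(meminfo_text):
    # Parse the MemAvailable field from /proc/meminfo-style text
    # (the Linux convention; values are reported in kB).
    for line in meminfo_text.splitlines():
        if line.startswith("MemAvailable:"):
            return int(line.split()[1])
    return 0

def maybe_spawn_worker(meminfo_text, threshold_kb,
                       spawn=lambda: subprocess.Popen(["worker"])):
    # Spawn one extra worker process when available memory drops below
    # the threshold, mirroring the "grow the process pool on demand"
    # idea; otherwise leave the pool unchanged.
    if available_mem_kb(meminfo_text) < threshold_kb:
        return spawn()
    return None
```

In the actual setting the spawned process would join the computation (e.g. via `MPI_Comm_spawn`) rather than run as an independent OS process; the sketch only shows the trigger logic.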