This paper discusses the use of run-time feedback for optimizing the execution of parallel computations. Four levels of feedback are distinguished, and the applicability and limitations of each are discussed. To address these limitations, we introduce a two-part scheduling paradigm, SEDIA (Static Exploration/Dynamic Instantiation and Activation), which performs robust scheduling in the presence of variant run-time behavior. A key component of this paradigm is an abstract model of run-time information fidelity, which evolved from our previous work on Trace Recovery and employs control-theoretic concepts.
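To make the two-part idea concrete, the sketch below illustrates one plausible reading of such a paradigm: a static phase enumerates candidate schedules (here, partitions of a task set at different grain sizes), and a dynamic phase uses run-time feedback (measured per-task cost and per-chunk scheduling overhead) to instantiate and activate the best candidate. All function names, the candidate set, and the cost model are illustrative assumptions, not the paper's actual SEDIA algorithm.

```python
import math

def static_exploration(task_count, candidate_grains=(1, 4, 16, 64)):
    """Compile-time step (hypothetical): build one candidate schedule
    per grain size by partitioning tasks into fixed-size chunks."""
    schedules = []
    for grain in candidate_grains:
        chunks = [list(range(i, min(i + grain, task_count)))
                  for i in range(0, task_count, grain)]
        schedules.append({"grain": grain, "chunks": chunks})
    return schedules

def dynamic_activation(schedules, nprocs, per_task_cost, per_chunk_overhead):
    """Run-time step (hypothetical): activate the candidate schedule
    with the lowest modeled completion time, given observed feedback."""
    def modeled_time(s):
        n = len(s["chunks"])
        # Rough model: chunks run in rounds across nprocs processors,
        # each round bounded by the grain size; every chunk also pays a
        # fixed scheduling overhead. (The last, smaller chunk is
        # approximated as a full chunk.)
        rounds = math.ceil(n / nprocs)
        return rounds * s["grain"] * per_task_cost + n * per_chunk_overhead
    return min(schedules, key=modeled_time)

schedules = static_exploration(task_count=100)
# Feedback: each task takes ~1.0 time unit; dispatching a chunk costs 0.5.
# Fine grains pay too much overhead, very coarse grains lose parallelism,
# so an intermediate grain should be activated.
chosen = dynamic_activation(schedules, nprocs=4,
                            per_task_cost=1.0, per_chunk_overhead=0.5)
print(chosen["grain"])  # → 16
```

The candidate set plays the role of the "static exploration" product, while the feedback-driven selection stands in for "dynamic instantiation and activation"; a real system would refine the model as feedback fidelity improves.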