The performance that parallel systems can achieve depends strictly on the match between workload and system characteristics. Because of these dependencies, experimental approaches are required. Measurements collected at run time by monitoring tools must be processed to select the most significant information, that is, the information able to capture the workload's behavior and explain its performance. Developers of parallel systems and parallel programs need systematic approaches for analyzing this large amount of raw data.

The Medea (Measurements Description and Evaluation) software tool provides a user-friendly environment for systematically applying workload characterization techniques to the raw data produced by monitoring parallel programs. Medea's models are especially useful for program tuning and performance debugging, for testing alternative system configurations, and for supporting benchmarking studies.

This article describes the use of the Medea tool to evaluate the performance of three applications: a kernel that uses the Jacobi relaxation method and two real-life modeling programs. The authors used the Jacobi kernel to study how two different data-distribution policies adopted by parallelizing compilers influence performance. A climate model study aided in evaluating communication protocols as a function of the characteristics of individual parallel systems. The performance debugging studies carried out on a turbulent flow model of stellar plasmas outline the portions of the code where tuning efforts should be focused.
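For readers unfamiliar with the kernel mentioned above, the following is a minimal sequential sketch of Jacobi relaxation on a 2-D grid; it is illustrative only (not the article's code) and is the kind of stencil computation whose parallel data distribution the authors study.

```python
# Illustrative Jacobi relaxation sketch (not Medea code).
# One sweep replaces each interior point with the average of its
# four neighbours; boundary values stay fixed.
def jacobi_sweep(grid):
    n = len(grid)
    new = [row[:] for row in grid]  # copy, so boundaries are preserved
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            new[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j]
                                + grid[i][j - 1] + grid[i][j + 1])
    return new

def jacobi(grid, sweeps):
    """Apply a fixed number of relaxation sweeps."""
    for _ in range(sweeps):
        grid = jacobi_sweep(grid)
    return grid
```

In a parallelizing-compiler setting, the performance question is how the rows (or blocks) of `grid` are distributed across processors, since each sweep needs neighbouring values owned by other processors; that boundary exchange is exactly where different data-distribution policies diverge.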
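Workload characterization of the kind described above typically groups measured metric vectors (for example, per-process CPU time and message counts) into classes of similar behavior by clustering. The sketch below is a plain k-means implementation under that assumption; the function names are hypothetical and it does not reproduce Medea's actual algorithms or interface.

```python
# Hedged sketch: k-means clustering of measurement vectors, the general
# technique behind workload characterization. Stdlib only; names are
# illustrative, not Medea's API.
import random

def _dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def _mean(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # initial centroids from the data
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:  # assign each point to its nearest centroid
            clusters[min(range(k), key=lambda c: _dist2(p, centroids[c]))].append(p)
        # recompute centroids; keep the old one if a cluster emptied
        centroids = [_mean(cl) if cl else centroids[i]
                     for i, cl in enumerate(clusters)]
    return centroids, clusters
```

Each resulting cluster can then be summarized by its centroid, a compact model of one class of program behavior, which is the spirit of the "most significant information" the abstract refers to.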