Traditional techniques for performance analysis provide a means of extracting and analyzing raw performance information from applications. Users then compare this raw data to their performance expectations for application constructs. This comparison can be tedious at the scale of today's architectures and software systems. To address this situation, we present a methodology and prototype that allow users to assert performance expectations explicitly in their source code using performance assertions. As the application executes, each performance assertion collects data implicitly to verify its expectation. Because the user attaches a performance expectation to individual code segments, the runtime system can discard raw data for measurements that meet their expectations, while reacting to failures with a variety of responses. We present several compelling uses of performance assertions with our operational prototype, including raising a performance exception, validating a performance model, and adapting an algorithm empirically at runtime.
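The mechanism can be illustrated with a minimal sketch in C; this is an assumption for illustration, not the prototype's actual interface. A hypothetical PA_ASSERT_TIME macro times a code segment, silently discards the measurement when the stated expectation is met, and otherwise reports the violation, with abort() standing in for the richer failure responses described above (raising a performance exception, validating a model, or switching algorithms).

#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Current time in seconds from a monotonic clock. */
static double pa_now(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (double)ts.tv_sec + (double)ts.tv_nsec * 1e-9;
}

/* Hypothetical assertion macro: execute `stmt` and check that it finishes
 * within `max_seconds`. If the expectation holds, the raw measurement is
 * simply discarded; if it fails, the violation is reported and abort()
 * stands in for raising a performance exception. */
#define PA_ASSERT_TIME(stmt, max_seconds)                                      \
    do {                                                                       \
        double pa_start_ = pa_now();                                           \
        stmt;                                                                  \
        double pa_elapsed_ = pa_now() - pa_start_;                             \
        if (pa_elapsed_ > (max_seconds)) {                                     \
            fprintf(stderr,                                                    \
                    "performance assertion failed at %s:%d: %.3fs > %.3fs\n",  \
                    __FILE__, __LINE__, pa_elapsed_, (double)(max_seconds));   \
            abort();                                                           \
        }                                                                      \
    } while (0)

/* Example code segment for which the user has a performance expectation. */
static void compute_kernel(void)
{
    volatile double sum = 0.0;
    for (long i = 0; i < 10000000L; ++i)
        sum += (double)i;
    (void)sum;
}

int main(void)
{
    /* The user asserts that the kernel completes within half a second. */
    PA_ASSERT_TIME(compute_kernel(), 0.5);
    return 0;
}

In this sketch the expectation is a simple wall-clock bound; the approach described in the abstract generalizes to other measured quantities and to responses beyond aborting, such as consulting a performance model or selecting an alternative algorithm at runtime.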