Compared to functional unit testing, automated performance testing is difficult, partly because correctness criteria are harder to express for performance than for functionality. Whereas existing approaches rely on absolute bounds on execution time, we aim to express assertions on code performance in relative, hardware-independent terms. To this end, we introduce Stochastic Performance Logic (SPL), which makes it possible to state assertions about relative method performance. Since SPL interpretation is based on statistical tests applied to performance measurements, it is possible (for a special class of formulas) to calculate the minimum probability at which a particular SPL formula holds. We prove basic properties of the logic and present an algorithm for SAT-solver-guided evaluation of SPL formulas, which optimizes the number of performance measurements that need to be made. Finally, we propose integrating SPL formulas with Java code through higher-level performance annotations, for performance testing and documentation purposes.
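To make the idea of a relative, statistically evaluated performance assertion concrete, the following is a minimal sketch, not the authors' SPL implementation: it checks a hypothetical assertion of the form "method M is at most k times slower than method N" by applying a one-sided Welch's t-test to two samples of measured execution times. The class name `SplSketch`, the method names, the fixed sample data, and the large-sample critical value (1.645 for a 5% significance level) are all illustrative assumptions.

```java
// Hedged sketch of a relative performance assertion in the spirit of SPL:
// "mean time of M <= k * mean time of N", decided by a one-sided Welch's t-test
// on two samples of measured times. Not the paper's actual SPL semantics.
public class SplSketch {

    static double mean(double[] x) {
        double s = 0;
        for (double v : x) s += v;
        return s / x.length;
    }

    // Unbiased sample variance, given a precomputed mean.
    static double var(double[] x, double m) {
        double s = 0;
        for (double v : x) s += (v - m) * (v - m);
        return s / (x.length - 1);
    }

    // Welch's t statistic for the null hypothesis mean(a) <= k * mean(b);
    // the k^2 factor scales the variance of the rescaled sample b.
    static double welchT(double[] a, double[] b, double k) {
        double ma = mean(a), mb = mean(b);
        double va = var(a, ma), vb = var(b, mb);
        return (ma - k * mb) / Math.sqrt(va / a.length + k * k * vb / b.length);
    }

    // The assertion "holds" if we fail to reject it at roughly the 5% level
    // (large-sample normal approximation; an assumed, illustrative threshold).
    static boolean holds(double[] timesM, double[] timesN, double k) {
        return welchT(timesM, timesN, k) < 1.645;
    }

    public static void main(String[] args) {
        // Fixed synthetic "measurements" (seconds) so the example is deterministic;
        // in practice these would come from repeated benchmark runs.
        double[] fast = {1.0, 1.1, 0.9, 1.05, 0.95, 1.02, 0.98, 1.0};
        double[] slow = {2.0, 2.2, 1.9, 2.1, 1.95, 2.05, 2.0, 2.1};

        System.out.println("fast <= 1.5 * slow holds: " + holds(fast, slow, 1.5));
        System.out.println("slow <= 1.5 * fast holds: " + holds(slow, fast, 1.5));
    }
}
```

The point of the relative formulation is visible in the two calls: the same assertion template is evaluated in both directions against the same data, and only the direction consistent with the measured ratio survives the test, with no hardware-specific absolute bound appearing anywhere.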