Perhaps surprisingly, no practical performance models exist for popular (and complex) client applications such as Adobe's Creative Suite, Microsoft's Office and Visual Studio, Mozilla, Halo 3, etc. There is currently no tool that automatically answers program developers', IT administrators', and end-users' simple what-if questions like "what happens to the performance of my favorite application X if I upgrade from Windows Vista to Windows 7?". This paper describes our approach to constructing practical, versatile performance models that address this problem. The goal is for these models to help application developers expand testing coverage and to help IT administrators understand the performance consequences of a software, hardware, or configuration change. This paper's main contributions are in system building and performance modeling. We believe we have built applications that are easier to model because we have proactively instrumented them to export their state and associated metrics. This application-specific monitoring is always on, and interesting data is collected from real, "in-the-wild" deployments. The models we are experimenting with are based on statistical techniques. They require no modifications to the OS or applications beyond the above instrumentation, and no explicit a priori model of how an OS or application should behave. We are in the process of learning from models we have constructed for several Microsoft products, including the Office suite, Visual Studio, and Media Player. This paper presents preliminary findings from a large user deployment (several hundred thousand user sessions) of these applications that show the coverage and limitations of such models. These findings pushed us to move beyond averages and examine in some depth why client application performance has an inherently large variance.
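The abstract summarizes the instrumentation approach without implementation detail. As a purely illustrative sketch, not the paper's actual mechanism (the operation names, exported state fields, and log path below are all hypothetical), always-on, application-specific monitoring that exports per-operation state and latency could look like:

```python
# Hypothetical sketch of always-on, application-specific instrumentation:
# each instrumented operation records its duration plus relevant
# application state, appended to a local log that is later uploaded
# from real, "in-the-wild" deployments. All names are illustrative.
import json
import time
from contextlib import contextmanager

LOG_PATH = "telemetry.log"  # illustrative; the real transport is unspecified

@contextmanager
def instrumented(operation, **state):
    """Record latency and application state for one user-visible operation."""
    start = time.perf_counter()
    try:
        yield
    finally:
        record = {
            "operation": operation,
            "latency_ms": (time.perf_counter() - start) * 1000.0,
            **state,  # application-specific state exported alongside the metric
        }
        with open(LOG_PATH, "a") as log:
            log.write(json.dumps(record) + "\n")

# Usage: wrap a user-visible action and export the state relevant to it.
with instrumented("file_open", file_size_kb=2048, plugin_count=3):
    pass  # ... application code that performs the operation ...
```

Because the monitoring wraps only user-visible operations and emits compact records, it can remain enabled in production rather than being switched on for dedicated profiling runs.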
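Similarly, here is one hedged sketch of the kind of statistical what-if model the abstract describes. The paper's actual modeling techniques are not specified in this excerpt; the telemetry schema, feature names, numeric OS-version encoding, and the choice of scikit-learn's RandomForestRegressor are all assumptions made for illustration:

```python
# Hypothetical sketch: train a statistical model on collected session
# telemetry, then answer a what-if question ("what if I upgrade the OS?")
# by editing one configuration feature and comparing predictions.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Each row is one user session exported by the instrumented application:
# hardware/OS/configuration features plus an observed latency metric.
sessions = pd.read_csv("sessions.csv")  # schema is an assumption
features = ["cpu_ghz", "ram_gb", "disk_rpm", "os_version", "app_version"]
X, y = sessions[features], sessions["file_open_latency_ms"]

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, y)

# What-if: predict latency for the same machines after an OS upgrade
# by flipping only the (numerically encoded) os_version feature.
upgraded = X.copy()
upgraded["os_version"] = 7  # e.g., Vista encoded as 6, Windows 7 as 7
print("mean predicted latency before upgrade:", model.predict(X).mean())
print("mean predicted latency after upgrade :", model.predict(upgraded).mean())
```

Note that a model like this predicts a mean; the abstract's closing observation, that client application performance has inherently large variance, is precisely why comparing full predicted distributions (not just means, as this sketch does) matters in practice.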