Perhaps surprisingly, no practical performance models exist for popular (and complex) client applications such as Adobe's Designer suite, Microsoft's Office suite and Visual Studio, Mozilla, Halo 3, etc. No current tool automatically answers the simple what-if questions of program developers, IT administrators, and end users, such as "what happens to the performance of my favorite application X if I upgrade from Windows Vista to Windows 7?". This paper describes the directions we are taking to construct practical, versatile performance models that address this problem. We are pursuing two paths. The first is to instrument applications to better export their state and associated metrics; this application-specific monitoring is always on, and interesting data is collected from real, "in-the-wild" deployments. The second is statistical modeling: the models we are experimenting with require no modifications to the OS or applications beyond the above instrumentation, and no explicit a priori model of how an OS or application should behave. We are in the process of learning from models we have constructed for several Microsoft products, including the Office suite, Visual Studio, and Media Player. This paper presents preliminary findings from a large user deployment (several hundred thousand user sessions) of these applications that show the coverage and limitations of such models. Early indications from this work point toward future modeling strategies based on large amounts of data collected in the field. We close with our thoughts on what this could imply for the SIGMETRICS community.
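To make the second path concrete, the sketch below shows the simplest possible instance of learning a performance model purely from exported application metrics, with no a priori model of behavior: an ordinary least-squares fit relating one exported state variable to one observed metric. The metric names and session data are invented for illustration and are not taken from the paper; real deployments would involve many variables and far richer statistical techniques.

```python
# Hypothetical sketch: learn a performance model from exported metrics alone.
# "Document size" and "open latency" are illustrative names, not the paper's.

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Synthetic "in-the-wild" sessions: (document size in MB, open latency in ms).
sessions = [(1, 120), (2, 180), (4, 310), (8, 560), (16, 1100)]
a, b = fit_linear([s[0] for s in sessions], [s[1] for s in sessions])

def predict_latency(size_mb):
    """Answer a what-if question: expected latency for a given document size."""
    return a * size_mb + b
```

With such a model in hand, a what-if question like "how much slower will a 10 MB document open than a 1 MB one?" becomes a pair of calls to `predict_latency`; the paper's challenge is doing this credibly at the scale and heterogeneity of hundreds of thousands of real user sessions.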