On the vast majority of today's computers, the dominant form of computation is GUI-based user interaction. In such an environment, the user's perception is the final arbiter of performance. Human-factors research shows that a user's perception of performance is affected by unexpectedly long delays. However, most performance-tuning techniques currently rely on throughput-sensitive benchmarks. While these techniques improve the average performance of the system, they do little to detect or eliminate response-time variability, in particular, unexpectedly long delays.

We introduce a measurement infrastructure that improves user-perceived performance by helping to identify and eliminate the causes of the unexpectedly long response times that users find unacceptable. We describe TIPME (The Interactive Performance Monitoring Environment), a collection of measurement tools that allowed us to quickly and easily diagnose interactive performance "bugs" in a mature operating system. We present two case studies that demonstrate the effectiveness of our measurement infrastructure. Each of the performance problems we identified drastically affects response-time variability in a mature system, demonstrating that current tuning techniques do not address this class of performance problems.
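The core idea, measuring per-event response times so that rare, perceptibly long delays stand out instead of being averaged away by throughput-oriented metrics, can be illustrated with a minimal sketch. This is not the TIPME implementation; the names (`monitor`, `report`) and the 100 ms perceptibility threshold are illustrative assumptions.

```python
import time

# Assumed threshold: delays above ~100 ms are commonly treated as
# perceptible to users in human-factors literature.
PERCEPTIBLE_DELAY = 0.1  # seconds

def monitor(handler):
    """Wrap an event handler, recording a latency sample per event."""
    samples = []

    def wrapped(event):
        start = time.monotonic()
        result = handler(event)
        samples.append(time.monotonic() - start)
        return result

    wrapped.samples = samples  # expose samples for later analysis
    return wrapped

def report(samples):
    """Summarize average latency AND its variability.

    A throughput-style benchmark would report only the mean; the
    worst case and outlier count capture the unexpectedly long
    delays that dominate user perception.
    """
    mean = sum(samples) / len(samples)
    worst = max(samples)
    outliers = [s for s in samples if s > PERCEPTIBLE_DELAY]
    return {"mean": mean, "worst": worst, "outliers": len(outliers)}
```

For example, a trace of `[0.01, 0.02, 0.3]` seconds has a modest mean (0.11 s) but one clearly perceptible 300 ms delay; the outlier count surfaces exactly the class of problem the abstract argues average-oriented tuning misses.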