HostView: annotating end-host performance measurements with user feedback
ACM SIGMETRICS Performance Evaluation Review
There has been much recent interest in automated performance diagnosis on user laptops and desktops. One aspect of performance diagnosis that has received little attention is the user's perspective on performance. To conduct research on both end-host performance diagnosis and user perception of network and application performance, we designed an end-host data collection tool called HostView. HostView not only collects network-, application-, and machine-level data, but also gathers feedback directly from users. User feedback is obtained via two mechanisms, a system-triggered questionnaire and a user-triggered feedback form, that, for example, ask users to rate the performance of their network and applications. In this paper, we describe our experience with the first deployment of HostView. Using data from 40 users, we illustrate the diversity of our users, articulate the challenges in this line of research, and report initial findings on correlating user feedback with system-level data.
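The abstract does not specify HostView's data schema or correlation method, but the core idea of annotating measurements with user feedback can be sketched. The following minimal Python sketch is illustrative only: the record types (PerfSample, Feedback), field names, and the fixed time window are hypothetical assumptions, not HostView's actual implementation.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical records; HostView's real schema is not described in the abstract.
@dataclass
class PerfSample:
    timestamp: float      # seconds since epoch
    app: str              # foreground application when the sample was taken
    rtt_ms: float         # round-trip time to some reference host

@dataclass
class Feedback:
    timestamp: float
    rating: int           # e.g., 1 (very poor) to 5 (very good)
    triggered_by: str     # "system" (questionnaire) or "user" (feedback form)

def annotate(samples: List[PerfSample],
             feedback: List[Feedback],
             window_s: float = 300.0) -> List[Tuple[Feedback, List[PerfSample]]]:
    """Pair each feedback entry with the system-level samples collected in
    the preceding window, so per-rating performance statistics can be
    computed afterwards."""
    pairs = []
    for fb in feedback:
        nearby = [s for s in samples
                  if fb.timestamp - window_s <= s.timestamp <= fb.timestamp]
        pairs.append((fb, nearby))
    return pairs

if __name__ == "__main__":
    samples = [PerfSample(100.0, "browser", 45.0),
               PerfSample(200.0, "browser", 480.0)]
    feedback = [Feedback(250.0, rating=2, triggered_by="user")]
    for fb, near in annotate(samples, feedback):
        rtts = [s.rtt_ms for s in near]
        avg = sum(rtts) / len(rtts) if rtts else None
        print(f"rating={fb.rating} ({fb.triggered_by}): avg RTT in window = {avg}")
```

A windowed join like this is one plausible first step toward the paper's stated goal of correlating user ratings with system-level data; the actual analysis in the paper may differ.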