Many applications run on modern mobile operating systems, and inevitably some of them fail in the hands of users. Diagnosing such a failure to identify the culprit, or even just reproducing it in the lab, is difficult. To gain insight into this problem, we interviewed the developers of five mobile applications and analyzed hundreds of trouble tickets. We find that support for diagnosing unexpected application behavior is lacking across major mobile platforms. Even when developers implement heavyweight logging during controlled trials, they fail to discover many of the dependencies that are later stressed in the wild. Nor are they well-equipped to monitor the large number of dependencies without taxing the phone's limited resources, such as CPU and battery. Based on these findings, we argue for three fundamental changes to failure reporting on mobile phones. The first is spatial spreading, which exploits the large number of phones in the field by spreading the monitoring work across them. The second is statistical inference, which builds a conditional distribution model between application behavior and its dependencies in the presence of partial information. The third is adaptive sampling, which dynamically varies what each phone monitors, adapting both to the changing population of phones and to what has been learned about each failure. We propose a system called MobiBug that combines these three techniques to simplify the task of diagnosing mobile applications.
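
The three techniques fit together naturally: a server hands each phone a small, weighted subset of dependencies to watch (spatial spreading), aggregates the resulting partial reports into a conditional model of failure given dependency state (statistical inference), and reweights future assignments toward the most diagnostic dependencies (adaptive sampling). The Python sketch below is purely illustrative; the coordinator class, its method names, and the reweighting heuristic are assumptions made for exposition, not the paper's actual design.

    import random
    from collections import defaultdict

    class MobiBugCoordinator:
        """Hypothetical server-side coordinator; a sketch of the three
        techniques described in the abstract, not the paper's design."""

        def __init__(self, dependencies, budget=3):
            self.dependencies = list(dependencies)
            self.budget = budget  # how many dependencies one phone may monitor
            self.weights = {d: 1.0 for d in self.dependencies}
            # counts[dep][state] -> {"ok": n, "fail": n}, with add-one smoothing
            self.counts = defaultdict(
                lambda: defaultdict(lambda: {"ok": 1, "fail": 1}))

        def assign(self):
            # Spatial spreading: each phone monitors only a small, weighted
            # random subset, so no single device pays the full CPU/battery cost.
            picks = random.choices(
                self.dependencies,
                weights=[self.weights[d] for d in self.dependencies],
                k=self.budget)
            return set(picks)  # sampled with replacement; duplicates collapse

        def report(self, observed, failed):
            # Statistical inference: update the conditional model
            # P(failure | dependency state) from a *partial* observation;
            # `observed` covers only the dependencies this phone watched.
            outcome = "fail" if failed else "ok"
            for dep, state in observed.items():
                self.counts[dep][state][outcome] += 1
            self._reweight()

        def _reweight(self):
            # Adaptive sampling: steer future assignments toward dependencies
            # whose states best separate failing runs from successful ones.
            for dep, states in self.counts.items():
                rates = [c["fail"] / (c["fail"] + c["ok"])
                         for c in states.values()]
                self.weights[dep] = 1.0 + 10.0 * (max(rates) - min(rates))

A hypothetical exchange with one phone might look like this:

    coord = MobiBugCoordinator(["network_type", "gps_state", "sdcard", "os_version"])
    to_monitor = coord.assign()  # subset sent to the phone at app launch
    coord.report({"network_type": "3g", "gps_state": "off"}, failed=True)

Keeping the per-phone budget small is what bounds the monitoring cost on any one device; coverage of the large dependency space comes from the diversity of the fleet rather than from exhaustive logging on a single phone.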