Automated testing of graphical user interfaces: a new algorithm and challenges

  • Authors:
  • Wontae Choi

  • Affiliations:
  • University of California, Berkeley, Berkeley, CA, USA

  • Venue:
  • Proceedings of the 2013 ACM workshop on Mobile development lifecycle
  • Year:
  • 2013


Abstract

Smartphones and tablets with rich graphical user interfaces are becoming increasingly popular. Hundreds of thousands of specialized applications, called apps, are available for such mobile platforms. Manual testing is the most popular technique for testing the graphical user interfaces of such apps, but it is often tedious and error-prone. In this talk I will describe a new automated technique, called SwiftHand [2], for generating sequences of test inputs for Android apps. The technique gradually learns a behavioral model of the target app during testing, uses the learned model to generate user inputs that visit unexplored states of the app, and uses the execution of the app on the generated inputs to refine the model. The technique is inspired by Angluin's L* [1] active learning algorithm and is designed to minimize the cost of learning. We have implemented our testing algorithm in a publicly available tool for Android apps written in Java. Our experimental results show that we can achieve significantly better coverage than traditional random testing and L*-based testing in a given time budget. Our algorithm also reaches peak coverage faster than both random and L*-based testing. The second half of the talk will focus on the challenges of implementing this technique for the Android mobile platform. This work is a collaboration with Prof. George Necula and Prof. Koushik Sen.
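The learn-explore-refine loop described in the abstract can be sketched in miniature. The following is a hypothetical illustration, not the SwiftHand implementation: it keeps a learned transition model (state, input) → state, always fires an input whose outcome is still unknown in the current state, and records the observed transition to refine the model. The "app" here is a toy two-screen state machine standing in for a real Android app under test; all class and method names are invented for this sketch.

```java
import java.util.*;

// Illustrative sketch of model-guided test-input generation in the spirit
// of SwiftHand (not the actual tool): learn a behavioral model while
// testing, and steer inputs toward unexplored parts of the model.
public class ModelGuidedExplorer {
    // Learned model: state -> (input -> observed next state)
    private final Map<String, Map<String, String>> model = new HashMap<>();
    // Inputs assumed enabled in every state of the toy app under test.
    private final List<String> inputs;

    public ModelGuidedExplorer(List<String> inputs) { this.inputs = inputs; }

    // Pick an input whose outcome is still unknown in the current state,
    // or return null if every input has already been observed there.
    public String chooseUnexplored(String state) {
        Map<String, String> known =
            model.computeIfAbsent(state, s -> new HashMap<>());
        for (String in : inputs)
            if (!known.containsKey(in)) return in;
        return null;
    }

    // Record an observed transition, refining the learned model.
    public void observe(String state, String input, String next) {
        model.computeIfAbsent(state, s -> new HashMap<>()).put(input, next);
    }

    public int statesLearned() { return model.size(); }

    public static void main(String[] args) {
        // Toy "app": two screens; "tap" toggles to the detail screen,
        // "back" returns to (or stays on) the main screen.
        Map<String, Map<String, String>> app = new HashMap<>();
        app.put("main",   Map.of("tap", "detail", "back", "main"));
        app.put("detail", Map.of("tap", "detail", "back", "main"));

        ModelGuidedExplorer explorer =
            new ModelGuidedExplorer(List.of("tap", "back"));
        String state = "main";
        for (int step = 0; step < 10; step++) {
            String input = explorer.chooseUnexplored(state);
            if (input == null) { state = "main"; continue; } // restart the app
            String next = app.get(state).get(input);
            explorer.observe(state, input, next);
            state = next;
        }
        System.out.println("states learned: " + explorer.statesLearned());
    }
}
```

The real algorithm must additionally handle state abstraction (deciding when two screens are "the same" state), model contradictions, and the cost of restarting the app, which is where the connection to Angluin-style active learning and its cost-minimizing adaptation comes in.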