Applying classification techniques to remotely-collected program execution data

  • Authors:
  • Murali Haran;Alan Karr;Alessandro Orso;Adam Porter;Ashish Sanil

  • Affiliations:
  • Penn State University, University Park, PA;National Institute of Statistical Sciences, Research Triangle Park, NC;Georgia Institute of Technology, Atlanta, GA;University of Maryland, College Park, MD;National Institute of Statistical Sciences, Research Triangle Park, NC

  • Venue:
  • Proceedings of the 10th European Software Engineering Conference held jointly with the 13th ACM SIGSOFT International Symposium on Foundations of Software Engineering (ESEC/FSE-13)
  • Year:
  • 2005


Abstract

There is an increasing interest in techniques that support measurement and analysis of fielded software systems. One of the main goals of these techniques is to better understand how software actually behaves in the field. In particular, many of these techniques require a way to distinguish, in the field, failing from passing executions. So far, researchers and practitioners have only partially addressed this problem: they have simply assumed that program failure status is either obvious (i.e., the program crashes) or provided by an external source (e.g., the users). In this paper, we propose a technique for automatically classifying execution data, collected in the field, as coming from either passing or failing program runs. (Failing program runs are executions that terminate with a failure, such as a wrong outcome.) We use statistical learning algorithms to build the classification models. Our approach builds the models by analyzing executions performed in a controlled environment (e.g., test cases run in-house) and then uses the models to predict whether execution data produced by a fielded instance were generated by a passing or failing program execution. We also present results from an initial feasibility study, based on multiple versions of a software subject, in which we investigate several issues vital to the applicability of the technique. Finally, we present some lessons learned regarding the interplay between the reliability of classification models and the amount and type of data collected.
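The abstract describes a two-phase workflow: build classification models from execution data gathered in a controlled environment (labeled passing/failing test runs), then apply those models to unlabeled execution data collected in the field. The sketch below is a minimal illustration of that workflow, not the paper's actual implementation; it assumes a random-forest classifier from scikit-learn and placeholder feature vectors of per-execution instrumentation counts, all of which are hypothetical choices for illustration only.

```python
# Minimal sketch of the pass/fail classification workflow described in the
# abstract. Assumptions (not from the paper): scikit-learn's
# RandomForestClassifier as the statistical learning algorithm, and feature
# vectors of instrumentation counts (e.g., method or branch execution counts).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# In-house data: one row per controlled execution (e.g., a test-case run);
# labels mark failing (1) vs. passing (0) runs. Placeholder values only.
inhouse_features = rng.poisson(lam=3.0, size=(200, 50))
inhouse_labels = rng.integers(0, 2, size=200)

# Phase 1: build the classification model from controlled (in-house) executions.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(inhouse_features, inhouse_labels)

# Fielded data: execution profiles from deployed instances, with no failure
# status attached. Placeholder values only.
fielded_features = rng.poisson(lam=3.0, size=(10, 50))

# Phase 2: predict whether each fielded execution came from a passing (0)
# or failing (1) program run.
predicted_status = model.predict(fielded_features)
print(predicted_status)
```

In this sketch the fielded executions are assumed to produce the same kind of instrumentation data as the in-house runs, which is the premise that lets a model trained in-house be applied to data collected remotely.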