Applying Text Classification Algorithms in Web Services Robustness Testing

  • Authors:
  • Nuno Laranjeiro, Rui Oliveira, Marco Vieira

  • Venue:
  • SRDS '10: Proceedings of the 29th IEEE Symposium on Reliable Distributed Systems
  • Year:
  • 2010

Abstract

Testing web services for robustness is an effective way of disclosing software bugs. However, when executing robustness tests, a very large number of service responses must be classified manually to distinguish regular responses from responses that indicate robustness problems. Besides requiring considerable time and effort, this complex classification process is prone to errors introduced by human intervention in such a laborious task. Text classification algorithms have been applied successfully in many contexts (e.g., spam identification, text categorization) and are considered a powerful tool for automating classification-based tasks. In this paper we present a study on the applicability of five widely used text classification algorithms in the context of web services robustness testing. In practice, we assess the effectiveness of Support Vector Machines, Naïve Bayes, Large Linear Classification, K-nearest neighbor (IBk), and Hyperpipes in classifying web service responses. Results indicate that these algorithms can effectively automate the identification of robustness issues while reducing human intervention. However, all of the algorithms misclassify some responses, which means that there is still room for improvement.
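
To make the classification setting concrete, the following is a minimal illustrative sketch, not the authors' implementation. It trains a text classifier to label web service responses as regular or as indicating a robustness problem, using a scikit-learn stand-in (MultinomialNB) for the Naïve Bayes algorithm studied in the paper; the example responses, labels, and TF-IDF feature choice are all assumptions for illustration.

    # Illustrative sketch: automating the classification of web service
    # responses with a text classifier, instead of labeling them by hand.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Hypothetical labeled responses; in the study these would be responses
    # collected while submitting invalid or boundary inputs to web services.
    responses = [
        "<result>order 42 accepted</result>",                    # regular behavior
        "java.lang.NullPointerException at OrderService.place",  # robustness problem
        "<fault>invalid customer id</fault>",                    # regular (handled error)
        "java.lang.ArrayIndexOutOfBoundsException: -1",          # robustness problem
    ]
    labels = ["regular", "problem", "regular", "problem"]

    # Bag-of-words (TF-IDF) features over the raw response text,
    # followed by a Naive Bayes classifier.
    model = make_pipeline(TfidfVectorizer(), MultinomialNB())
    model.fit(responses, labels)

    # Classify a new, unseen response automatically.
    print(model.predict(["java.lang.NumberFormatException: for input string"]))

A comparable setup could swap in the other studied algorithms (e.g., LinearSVC as a LIBLINEAR-style large linear classifier, or KNeighborsClassifier in place of IBk) while keeping the same feature extraction pipeline.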