Hunting for smells in natural language tests

  • Authors:
  • Benedikt Hauptmann; Maximilian Junker; Sebastian Eder; Lars Heinemann; Rudolf Vaas; Peter Braun

  • Affiliations:
  • TU Munich, Germany; TU Munich, Germany; TU Munich, Germany; CQSE, Germany; Munich Re, Germany; Validas, Germany

  • Venue:
  • Proceedings of the 2013 International Conference on Software Engineering
  • Year:
  • 2013

Abstract

Tests are central artifacts of software systems and play a crucial role in software quality. In system testing, much test execution is performed manually, following test cases written in natural language. However, these test cases are often poorly written, without best practices in mind. This leads to tests that are hard to maintain, hard to understand, and inefficient to execute. For source code and unit tests, so-called code smells and test smells have been established as indicators of poorly written code. We apply the idea of smells to natural language tests by defining a set of common Natural Language Test Smells (NLTS). Furthermore, we report on an empirical study analyzing their extent in more than 2,800 tests of seven industrial test suites.
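To illustrate the idea of a smell detector for natural language tests, the following is a minimal sketch. It flags test steps containing vague wording, one plausible kind of smell; the marker list and the `ambiguous_steps` function are hypothetical examples for illustration and do not reproduce the paper's actual NLTS catalog or detection approach.

```python
# Hypothetical vague-wording markers; the paper's actual smell
# definitions may differ.
AMBIGUOUS_TERMS = {"etc", "appropriate", "some", "properly", "if necessary"}

def ambiguous_steps(test_steps):
    """Return the steps of a natural language test case that
    contain vague wording (a simple substring heuristic)."""
    flagged = []
    for step in test_steps:
        lowered = step.lower()
        if any(term in lowered for term in AMBIGUOUS_TERMS):
            flagged.append(step)
    return flagged

# Example: a small manual test case with two vaguely worded steps.
steps = [
    "Open the login page",
    "Enter appropriate credentials",
    "Verify the dashboard, menus, etc. are shown",
]
print(ambiguous_steps(steps))
```

A real detector would need tokenization and context to avoid false positives (e.g. markers appearing inside longer words), but even a heuristic like this shows how smells in manual test descriptions can be surfaced automatically.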