Tests are central artifacts of software systems and play a crucial role in software quality. In system testing, much test execution is still performed manually, guided by test cases written in natural language. These test cases, however, are often written without best practices in mind, resulting in tests that are hard to maintain, hard to understand, and inefficient to execute. For source code and unit tests, so-called code smells and test smells are established indicators of poorly written code. We transfer the idea of smells to natural language tests by defining a set of common Natural Language Test Smells (NLTS). Furthermore, we report on an empirical study analyzing the extent of these smells in more than 2,800 tests from seven industrial test suites.
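To make the idea concrete, the following is a minimal sketch of what an automated NLTS detector might look like. The two smells shown here ("Long Test" and "Duplicated Steps"), the threshold, and all function names are illustrative assumptions, not the catalogue or tooling from the study itself.

```python
# Hedged sketch: detect two hypothetical Natural Language Test Smells.
# Smell names, threshold, and data layout are assumptions for illustration.
from collections import Counter

MAX_STEPS = 10  # assumed threshold for the "Long Test" smell


def find_smells(test_cases):
    """test_cases: dict mapping test name -> list of natural-language steps.

    Returns a list of (test name, smell name, detail) tuples.
    """
    findings = []

    # Long Test: too many manual steps make a test inefficient to execute.
    for name, steps in test_cases.items():
        if len(steps) > MAX_STEPS:
            findings.append((name, "Long Test", f"{len(steps)} steps"))

    # Duplicated Steps: the same step text appearing in several tests
    # hurts maintainability (a change must be repeated everywhere).
    step_counts = Counter(
        step.strip().lower() for steps in test_cases.values() for step in steps
    )
    for name, steps in test_cases.items():
        shared = [s for s in steps if step_counts[s.strip().lower()] > 1]
        if shared:
            findings.append((name, "Duplicated Steps", f"{len(shared)} shared steps"))

    return findings


# Tiny example suite (invented test cases):
suite = {
    "login_ok": ["Open the application", "Enter valid credentials", "Click login"],
    "login_fail": ["Open the application", "Enter invalid credentials", "Click login"],
}
for finding in find_smells(suite):
    print(finding)
```

In this toy suite, both tests are flagged for "Duplicated Steps" because they share the opening and closing steps; a real detector would of course need a richer smell catalogue and more robust text matching (e.g. clone detection on step sequences rather than exact string equality).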