Background. When an embedded product such as a mobile phone is delivered, third-party products such as games are often bundled with it in the form of Java MIDlets. Verifying compatibility between the runtime platform and the MIDlets is a labour-intensive task if input data must be generated manually for thousands of MIDlets. Aim. To make the verification more efficient, we investigate four automated input generation methods that do not require extensive modeling: random and feedback-based generation, each with and without a constant startup sequence. Method. We evaluate the methods in a factorial design experiment with manual input generation as a reference, running one original experiment and a partial replication. Result. The startup sequence gives good code coverage values for the selected MIDlets. The feedback-based method gives somewhat better code coverage than the random method, but requires real-time code coverage measurement, which slows down test execution. Conclusion. The random method with a startup sequence is the best trade-off in the current setting.
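To make the 2x2 design concrete, the following Java sketch pairs the two generators (random and feedback-based) with an optional constant startup sequence. It is a minimal illustration under stated assumptions, not the paper's actual harness: the MidletDriver interface, the key codes, and the DoubleSupplier coverage probe are all hypothetical stand-ins, and the feedback rule shown (repeat a key while measured coverage keeps rising) is one plausible policy, not necessarily the one used in the experiment.

import java.util.List;
import java.util.Random;
import java.util.function.DoubleSupplier;

// Sketch of the automated input generation methods for MIDlet testing.
public class InputGenerator {

    /** Hypothetical interface through which key events reach the MIDlet under test. */
    public interface MidletDriver {
        void sendKey(int keyCode);
    }

    // Illustrative key codes; a real harness would use the platform's
    // own key-event constants (e.g. UP, DOWN, LEFT, RIGHT, FIRE).
    private static final int[] KEYS = {1, 2, 3, 4, 5};

    private final Random random;

    public InputGenerator(long seed) {
        this.random = new Random(seed); // fixed seed keeps runs repeatable
    }

    /** Random method: optional constant startup sequence, then uniform random keys. */
    public void runRandom(MidletDriver driver, List<Integer> startup, int budget) {
        for (int key : startup) {
            driver.sendKey(key); // phase 1: identical startup sequence for every MIDlet
        }
        for (int i = 0; i < budget; i++) {
            driver.sendKey(KEYS[random.nextInt(KEYS.length)]); // phase 2: random events
        }
    }

    /**
     * Feedback-based method (sketch): a coverage probe is queried after
     * every event, which is exactly the real-time measurement overhead
     * that slows this method down relative to the random method.
     */
    public void runFeedback(MidletDriver driver, List<Integer> startup, int budget,
                            DoubleSupplier coverageProbe) {
        for (int key : startup) {
            driver.sendKey(key);
        }
        double covered = coverageProbe.getAsDouble();
        while (budget-- > 0) {
            int key = KEYS[random.nextInt(KEYS.length)];
            driver.sendKey(key);
            double now = coverageProbe.getAsDouble();
            // Simple feedback rule: repeat a key as long as it keeps
            // increasing measured coverage, then pick a new random key.
            while (now > covered && budget-- > 0) {
                covered = now;
                driver.sendKey(key);
                now = coverageProbe.getAsDouble();
            }
            covered = Math.max(covered, now);
        }
    }
}

Passing an empty list as the startup sequence gives the "without startup sequence" variants, so the same two methods cover all four cells of the factorial design. Note that only runFeedback needs the coverage probe at run time, which is consistent with the finding that the random method with a startup sequence runs faster while achieving comparable coverage.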