Black-box testing: techniques for functional testing of software and systems
An Approach to Program Testing
ACM Computing Surveys (CSUR)
Software Testing for Conventional and Logic Programming
The Art of Software Testing
Logic programming and knowledge representation: the A-Prolog perspective
Artificial Intelligence
Building a knowledge base: an example
Annals of Mathematics and Artificial Intelligence
A Comparison of Some Structural Testing Strategies
IEEE Transactions on Software Engineering
ASSAT: computing answer sets of a logic program by SAT solvers
Artificial Intelligence - Special issue on nonmonotonic reasoning
Modularity aspects of disjunctive stable models
Journal of Artificial Intelligence Research
A model-theoretic counterpart of loop formulas
IJCAI'05 Proceedings of the 19th international joint conference on Artificial intelligence
Random vs. structure-based testing of answer-set programs: an experimental comparison
LPNMR'11 Proceedings of the 11th international conference on Logic programming and nonmonotonic reasoning
ASPIDE: integrated development environment for answer set programming
LPNMR'11 Proceedings of the 11th international conference on Logic programming and nonmonotonic reasoning
Annotating answer-set programs in LANA
Theory and Practice of Logic Programming
Answer-set programming (ASP) is a well-acknowledged paradigm for declarative problem solving, yet comparatively little effort has been spent on methods to support the development of answer-set programs. In particular, systematic testing of programs, an integral part of conventional software development, has not been discussed for ASP so far. In this paper, we fill this gap and develop notions enabling the structural testing of answer-set programs, i.e., testing based on test cases chosen with respect to the internal structure of a given answer-set program. More specifically, we introduce different notions of coverage that measure to what extent a collection of test inputs covers certain important structural components of the program; in particular, we introduce metrics corresponding to path and branch coverage from conventional testing. We also discuss complexity aspects of the considered notions and outline strategies for automatically generating test inputs that yield increasing (up to total) coverage.
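To make the idea of coverage-based testing of logic programs concrete, the following is an illustrative Python sketch of one simple coverage metric, restricted to definite (negation-free) propositional programs so that no answer-set solver is needed. The metric, the `rule_coverage` function, and the toy program are inventions for illustration, not the paper's definitions; real answer-set programs with negation would require computing answer sets with a solver instead of a least model.

```python
# Sketch of a toy "rule coverage" metric for definite (negation-free)
# propositional programs. A rule counts as covered by a test input if its
# body is satisfied in the least model of the program plus that input.

def least_model(rules, facts):
    """Forward-chain to the least model of a definite program plus input facts."""
    model = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if body <= model and head not in model:
                model.add(head)
                changed = True
    return model

def rule_coverage(rules, test_inputs):
    """Fraction of rules whose body is satisfied under at least one test input."""
    covered = set()
    for facts in test_inputs:
        model = least_model(rules, facts)
        for i, (_, body) in enumerate(rules):
            if body <= model:
                covered.add(i)
    return len(covered) / len(rules)

# Toy program:  a :- b.   c :- a, d.
rules = [("a", {"b"}), ("c", {"a", "d"})]
print(rule_coverage(rules, [{"b"}]))              # only the first rule fires -> 0.5
print(rule_coverage(rules, [{"b"}, {"b", "d"}]))  # both rules fire -> 1.0
```

The second call shows how adding a test input can raise coverage, which is the behavior the paper's generation strategies aim to automate.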