Testing is the process of stimulating a system with inputs in order to reveal hidden parts of the system state. For non-deterministic systems, the difficulty arises that a single input pattern can generate several possible outcomes. Some of these outcomes distinguish between different hypotheses about the system state, while others do not. In this paper, we present a novel approach that finds, for non-deterministic systems modeled as constraints over variables, tests that distinguish among the hypotheses as well as possible. The idea is to assess the quality of a test by the ratio of distinguishing (good) to non-distinguishing (bad) outcomes. This measure refines notions previously proposed in the literature on model-based testing and can be computed using model counting techniques. We propose and analyze a greedy algorithm for this test optimization problem, using existing model counters as a building block. We report preliminary experimental results for our method and discuss possible improvements.
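The quality measure can be illustrated with a small sketch. The system, hypotheses, and outcome sets below are hypothetical toy data, not the paper's actual constraint models, and the exhaustive enumeration stands in for a real model counter; the point is only to show the ratio of distinguishing to total outcomes and its use in greedy test selection.

```python
# Toy non-deterministic system: under each hypothesis, an input maps to a
# SET of possible output patterns (the non-determinism). Hypothetical data.
OUTCOME_TABLE = {
    ('ok',     0): {(0, 0), (0, 1)},
    ('ok',     1): {(1, 1)},
    ('faulty', 0): {(0, 1), (1, 0)},
    ('faulty', 1): {(1, 1), (1, 0)},
}

def outcomes(hypothesis, test_input):
    """Possible outputs of the system under `hypothesis` for `test_input`."""
    return OUTCOME_TABLE[(hypothesis, test_input)]

def quality(test_input, hyps=('ok', 'faulty')):
    """Fraction of possible outcomes that distinguish between the hypotheses.

    An outcome is distinguishing if it can occur under exactly one
    hypothesis. The two counts computed here (distinguishing outcomes and
    all outcomes) are what a model counter would deliver on the real
    constraint encoding, instead of this brute-force enumeration.
    """
    sets = [outcomes(h, test_input) for h in hyps]
    all_outcomes = set().union(*sets)
    distinguishing = {o for o in all_outcomes
                      if sum(o in s for s in sets) == 1}
    return len(distinguishing) / len(all_outcomes)

# Greedy test selection: apply the input with the highest quality first.
best_input = max([0, 1], key=quality)
```

With this toy data, input 0 admits three outcomes of which two are distinguishing (quality 2/3), while input 1 admits two outcomes of which one is distinguishing (quality 1/2), so the greedy step picks input 0.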