When a query to a knowledge-based system fails and returns "unknown", users are confronted with a problem: Is relevant knowledge missing or incorrect? Is there a problem with the inference engine? Was the query ill-conceived? Finding the culprit in a large and complex knowledge base can be a hard and laborious task for knowledge engineers, and may be impossible for non-expert users. To support users in such situations, we developed a new tool called "WhyNot" as part of the PowerLoom knowledge representation and reasoning system. To debug a failed query, WhyNot tries to generate a small set of plausible partial proofs that can point the user to knowledge that might be missing, or to places where the system might have failed to make a relevant inference. A first version of the system has been deployed to help debug queries against a version of the Cyc knowledge base containing over 1,000,000 facts and over 35,000 rules.
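The core idea of generating "plausible partial proofs" can be illustrated with a minimal sketch (this is an illustrative reconstruction, not PowerLoom's actual implementation): backward-chain over Horn rules and, when a subgoal has no support, assume it as a plausibly-missing fact rather than failing outright, then report the partial proofs requiring the fewest assumptions.

```python
# Hypothetical sketch of WhyNot-style partial-proof search.
# FACTS, RULES, and the goal below are invented toy data; the real system
# operates on the PowerLoom/Cyc knowledge base, not on these structures.

FACTS = {("parent", "ann", "bob")}
RULES = [  # (head, [body subgoals])
    (("grandparent", "ann", "carol"),
     [("parent", "ann", "bob"), ("parent", "bob", "carol")]),
]

def partial_proofs(goal, max_missing=2):
    """Return sets of assumed-missing facts that would let `goal` succeed,
    smallest assumption sets first."""
    if goal in FACTS:
        return [frozenset()]          # proved outright, nothing missing
    results = []
    for head, body in RULES:
        if head != goal:
            continue
        combos = [frozenset()]        # accumulate assumptions across subgoals
        for subgoal in body:
            combos = [c | m
                      for c in combos
                      for m in partial_proofs(subgoal, max_missing)]
        results.extend(c for c in combos if len(c) <= max_missing)
    if not results:                   # no rule applies: assume the goal itself
        results = [frozenset([goal])]
    return sorted(set(results), key=len)

# The query fails because ("parent", "bob", "carol") is not in the KB;
# the partial proof pinpoints exactly that missing fact.
print(partial_proofs(("grandparent", "ann", "carol")))
```

Ranking candidate completions by the number of assumed facts mirrors the abstract's goal of presenting a *small* set of plausible explanations that guide the user to the gap in the knowledge base.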