Detecting knowledge base inconsistencies using automated generation of text and examples

  • Authors:
  • Vibhu O. Mittal; Johanna D. Moore

  • Affiliations:
  • Learning Research & Development Center and Department of Computer Science, University of Pittsburgh, Pittsburgh, PA (both authors)

  • Venue:
  • AAAI'96: Proceedings of the Thirteenth National Conference on Artificial Intelligence - Volume 1
  • Year:
  • 1996

Abstract

Verifying the fidelity of domain representation in large knowledge bases (KBs) is a difficult problem: domain experts are typically not experts in knowledge representation languages, and as knowledge bases grow more complex, visual inspection of the various terms, their abstract definitions, their interrelationships, and the limiting boundary cases becomes much harder. This paper presents an approach to help verify and refine abstract term definitions in knowledge bases. It assumes that it is easier for a domain expert to judge the correctness of individual concrete examples than to verify and correct all the ramifications of an abstract, intensional specification. To this end, our approach presents the user with an interface in which abstract terms in the KB are described using examples and natural language generated from the underlying domain representation. Problems in the KB are therefore manifested as problems in the generated description. The user can then highlight specific examples or parts of the explanation that seem problematic, and the system reasons about the underlying domain model by using the discourse plan generated for the description. This paper briefly describes the working of the system and illustrates three possible types of problem manifestations using an example of a specification of floating-point numbers in Lisp.
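The core idea — that a faulty intensional definition is easier to spot through the concrete examples it generates than through inspection of the definition itself — can be illustrated with a minimal sketch. This is a hypothetical illustration, not the paper's actual system: here the abstract term "floating-point literal" is encoded as a regular expression with a deliberate bug (it requires a non-empty fractional part), and the generated positive/negative example lists surface the error to an expert.

```python
import re

# Hypothetical intensional definition of "float literal" from a KB.
# Deliberately buggy: it demands digits after the decimal point,
# so a legal literal like "1." is wrongly excluded.
BUGGY_FLOAT_DEF = re.compile(r"^-?\d+\.\d+$")

def describe_by_examples(candidates, definition):
    """Partition concrete candidate strings into examples and
    non-examples of the abstract term, for expert inspection."""
    examples = [c for c in candidates if definition.match(c)]
    non_examples = [c for c in candidates if not definition.match(c)]
    return examples, non_examples

candidates = ["3.14", "-0.5", "1.", "2", "abc"]
examples, non_examples = describe_by_examples(candidates, BUGGY_FLOAT_DEF)

# The expert sees "1." listed among the NON-examples and flags it,
# localizing the problem to the definition's fractional-part clause
# without ever reading the representation language directly.
print("examples:", examples)
print("non-examples:", non_examples)
```

The point of the sketch is the workflow, not the regex: the domain expert never inspects `BUGGY_FLOAT_DEF`; they only judge concrete strings, and the mismatch they flag points back to the clause of the definition responsible.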