Evaluating the quality of a knowledge base populated from text

  • Authors: James Mayfield; Tim Finin
  • Affiliations: Johns Hopkins University; University of Maryland, Baltimore County
  • Venue: AKBC-WEKEX '12: Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction
  • Year: 2012

Abstract

The steady progress of information extraction systems has been aided by sound methodologies for evaluating their performance in controlled experiments. Annual events such as MUC, ACE, and TAC have developed evaluation approaches that enable researchers to score and rank their systems relative to reference results. Yet these evaluations have assessed only the component technologies needed by a knowledge base population system; none has required the construction of a knowledge base that is then evaluated directly. We describe an approach to the direct evaluation of a knowledge base, along with an instantiation that will be used in the 2012 TAC Knowledge Base Population track.