Human evaluation of Kea, an automatic keyphrasing system

  • Authors:
  • Steve Jones; Gordon W. Paynter

  • Affiliations:
  • Department of Computer Science, University of Waikato, Private Bag 3105, Hamilton, New Zealand (both authors)

  • Venue:
  • Proceedings of the 1st ACM/IEEE-CS joint conference on Digital libraries
  • Year:
  • 2001

Abstract

This paper describes an evaluation of the Kea automatic keyphrase extraction algorithm. Tools that automatically identify keyphrases are desirable because document keyphrases have numerous applications in digital library systems, but are costly and time-consuming to assign manually. Keyphrase extraction algorithms are usually evaluated by comparison to author-specified keywords, but this methodology has several well-known shortcomings. The results presented in this paper are based on subjective evaluations of the quality and appropriateness of keyphrases by human assessors, and make a number of contributions. First, they validate previous evaluations of Kea that rely on author keywords. Second, they show that Kea's performance is comparable to that of similar systems that have been evaluated by human assessors. Finally, they justify the use of author keyphrases as a performance metric by showing that authors generally choose good keywords.
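
The author-keyword comparison mentioned in the abstract is conventionally scored by counting how many automatically extracted phrases match the author's own list. Below is a minimal sketch of that style of metric, assuming exact string matching after light normalization; the function names and normalization scheme are illustrative and are not drawn from Kea's actual implementation.

```python
def normalize(phrase: str) -> str:
    """Lowercase and collapse whitespace so trivially different
    spellings of the same keyphrase compare equal (an assumed,
    simplified normalization)."""
    return " ".join(phrase.lower().split())

def precision_against_authors(extracted, author_keyphrases):
    """Fraction of extracted phrases that match some author keyphrase."""
    gold = {normalize(p) for p in author_keyphrases}
    if not extracted:
        return 0.0
    hits = sum(1 for p in extracted if normalize(p) in gold)
    return hits / len(extracted)

# Hypothetical example: two of the three extracted phrases
# appear in the author's list, giving precision 2/3.
extracted = ["keyphrase extraction", "digital libraries", "naive Bayes"]
authors = ["Keyphrase Extraction", "machine learning", "digital libraries"]
print(precision_against_authors(extracted, authors))  # 0.666...
```

A well-known shortcoming of this metric, and a motivation for the human assessment reported in the paper, is that a phrase absent from the author's list is scored as wrong even when human readers would judge it a good keyphrase.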