Applying machine translation evaluation techniques to textual CBR

  • Authors:
  • Ibrahim Adeyanju, Nirmalie Wiratunga, Robert Lothian, Susan Craw

  • Affiliations:
  • School of Computing, Robert Gordon University, Aberdeen, Scotland, UK (all authors)

  • Venue:
  • ICCBR'10: Proceedings of the 18th International Conference on Case-Based Reasoning Research and Development
  • Year:
  • 2010

Abstract

The need for automated text evaluation is common to several AI disciplines. In this work, we explore the use of Machine Translation (MT) evaluation metrics for Textual Case-Based Reasoning (TCBR). MT and TCBR typically propose textual solutions, and both rely on human reference texts for evaluation purposes. Current TCBR evaluation metrics such as precision and recall employ a single human reference, but these metrics are misleading when semantically similar texts are expressed with different sets of keywords. MT metrics overcome this challenge by using multiple human references. Here, we explore the use of multiple references, as opposed to a single reference, on incident reports from the medical domain. These references are created introspectively from the original dataset using the CBR similarity assumption. Results indicate that TCBR systems evaluated with these new metrics align more closely with human judgements. The generated text in TCBR is typically similar in length to the reference, since it is a revised form of an actual solution to a similar problem, unlike in MT, where generated texts can be significantly shorter. We also found that some parameters in the MT evaluation measures are not useful for TCBR because of this intrinsic difference in the text generation process.
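
To make the contrast concrete, the sketch below (an illustration, not the paper's implementation) scores a generated solution against a single human reference with keyword precision/recall, and against multiple references with BLEU, a standard MT metric; the extra references are built introspectively from a toy case base under the CBR similarity assumption. All texts, the case base, and the token-overlap similarity are invented placeholders, and NLTK's BLEU implementation is assumed.

```python
# A minimal sketch, not the paper's method: single-reference keyword
# precision/recall vs. multi-reference BLEU, with extra references drawn
# introspectively from a toy case base via the CBR similarity assumption.
# All texts, the case base, and the similarity measure are placeholders.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def keyword_precision_recall(generated, reference):
    """Single-reference keyword overlap, as in typical TCBR evaluation."""
    gen, ref = set(generated), set(reference)
    overlap = gen & ref
    precision = len(overlap) / len(gen) if gen else 0.0
    recall = len(overlap) / len(ref) if ref else 0.0
    return precision, recall

def token_overlap(a, b):
    """Toy Jaccard similarity over tokens, standing in for a real CBR measure."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def introspective_references(query_problem, case_base, sim, k=2):
    """Solutions of the k problems most similar to the query serve as extra
    references, assuming similar problems have similar solutions."""
    ranked = sorted(case_base, key=lambda c: sim(query_problem, c["problem"]),
                    reverse=True)
    return [c["solution"].split() for c in ranked[:k]]

# Semantically equivalent texts phrased with different keywords.
generated  = "replace the faulty valve and monitor the patient".split()
single_ref = "change the defective valve and observe the patient".split()

case_base = [
    {"problem": "pump valve is leaking",
     "solution": "replace the broken valve and monitor the patient"},
    {"problem": "valve on pump is faulty",
     "solution": "swap the faulty valve then observe the patient"},
    {"problem": "monitor display flickers",
     "solution": "reseat the display cable and restart the monitor"},
]
extra_refs = introspective_references("the pump valve is faulty", case_base,
                                      token_overlap, k=2)
multi_refs = [single_ref] + extra_refs

p, r = keyword_precision_recall(generated, single_ref)
print(f"single-reference precision={p:.2f}, recall={r:.2f}")  # penalises paraphrase

smooth = SmoothingFunction().method1  # avoids zero scores on short texts
print("single-ref BLEU:", sentence_bleu([single_ref], generated,
                                        smoothing_function=smooth))
print("multi-ref BLEU: ", sentence_bleu(multi_refs, generated,
                                        smoothing_function=smooth))
```

In this toy run the multi-reference BLEU score is far higher than the single-reference scores, since each n-gram of the generated text can match any reference. Note also that BLEU's brevity penalty, one of the MT-specific parameters, stays at 1 here because the generated solution and the references have matching lengths, echoing the paper's observation that length-sensitive MT parameters add little for TCBR.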