Learning preferences for referring expression generation: effects of domain, language and algorithm

  • Authors:
  • Ruud Koolen (Tilburg University, The Netherlands)
  • Emiel Krahmer (Tilburg University, The Netherlands)
  • Mariët Theune (University of Twente, The Netherlands)

  • Venue:
  • INLG '12 Proceedings of the Seventh International Natural Language Generation Conference
  • Year:
  • 2012

Abstract

One important subtask of Referring Expression Generation (REG) algorithms is to select the attributes in a definite description for a given object. In this paper, we study how much training data is required for algorithms to do this properly. We compare two REG algorithms in terms of their performance: the classic Incremental Algorithm and the more recent Graph algorithm. Both rely on a notion of preferred attributes that can be learned from human descriptions. In our experiments, preferences are learned from training sets that vary in size, in two domains and languages. The results show that depending on the algorithm and the complexity of the domain, training on a handful of descriptions can already lead to a performance that is not significantly different from training on a much larger data set.
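The attribute-selection process described above can be illustrated with a minimal sketch of the classic Incremental Algorithm (Dale & Reiter, 1995), one of the two algorithms the paper compares. The object representation (attribute-to-value dictionaries), the example scene, and the preference order shown here are illustrative assumptions, not taken from the paper; in the paper's setting, the preference order is exactly what is learned from human descriptions.

```python
def incremental_algorithm(target, distractors, preference_order):
    """Select attributes for a distinguishing description of `target`.

    Attributes are tried in the given preference order; an attribute is
    added to the description if it rules out at least one remaining
    distractor. Selection stops once no distractors remain.
    """
    description = {}
    remaining = list(distractors)
    for attr in preference_order:
        value = target.get(attr)
        if value is None:
            continue
        ruled_out = [d for d in remaining if d.get(attr) != value]
        if ruled_out:
            description[attr] = value
            remaining = [d for d in remaining if d.get(attr) == value]
        if not remaining:
            break
    # Dale & Reiter's formulation always includes the type attribute
    # (realized as the head noun), even when it rules out no distractors.
    if "type" not in description and "type" in target:
        description["type"] = target["type"]
    return description


# Hypothetical example scene: one target and two distractor objects.
target = {"type": "chair", "colour": "red", "size": "large"}
distractors = [
    {"type": "chair", "colour": "blue", "size": "large"},
    {"type": "table", "colour": "red", "size": "small"},
]
# With this preference order, colour and size are selected in turn,
# yielding a description like "the large red chair".
print(incremental_algorithm(target, distractors, ["colour", "size", "type"]))
```

Because the algorithm never backtracks, the learned preference order directly determines which (possibly redundant) attributes end up in the description, which is why the amount and quality of training data matters for its performance.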