On the limits of sentence compression by deletion

  • Authors:
  • Erwin Marsi; Emiel Krahmer; Iris Hendrickx; Walter Daelemans

  • Affiliations:
  • Tilburg University, Tilburg, The Netherlands; Tilburg University, Tilburg, The Netherlands; Antwerp University, Antwerpen, Belgium; Antwerp University, Antwerpen, Belgium

  • Venue:
  • Empirical Methods in Natural Language Generation
  • Year:
  • 2010


Abstract

Data-driven approaches to sentence compression define the task as dropping any subset of words from the input sentence while retaining important information and grammaticality. We show that only 16% of the observed compressed sentences in the domain of subtitling can be accounted for in this way. We argue that this is partly due to the lack of appropriate evaluation material and estimate that a deletion model is in fact compatible with approximately 55% of the observed data. We analyse the remaining cases, in which deletion alone failed to provide the required level of compression, and conclude that word order changes and paraphrasing are crucial there. We therefore argue for more elaborate sentence compression models that include paraphrasing and word reordering. We report preliminary results of applying a recently proposed, more powerful compression model in the context of subtitling for Dutch.
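
The deletion model described in the abstract treats a valid compression as a word-for-word subsequence of the source sentence, which is what the 16% and 55% coverage figures quantify. As a rough illustration only (not the authors' implementation; tokenization by whitespace and lowercasing are simplifying assumptions), the following Python sketch checks whether a compressed sentence could have been produced by deletion alone:

```python
def is_deletion_compression(source: str, compressed: str) -> bool:
    """Return True if `compressed` can be obtained from `source` purely by
    deleting words, i.e. its tokens form a subsequence of the source tokens.
    Whitespace tokenization and lowercasing are illustrative assumptions."""
    src_tokens = iter(source.lower().split())
    cmp_tokens = compressed.lower().split()
    # Each compressed token must appear in the source, in the same order.
    return all(token in src_tokens for token in cmp_tokens)


if __name__ == "__main__":
    # Deletion-only compression: True
    print(is_deletion_compression(
        "the minister said that he would resign today",
        "the minister would resign"))
    # Involves reordering or paraphrasing: False
    print(is_deletion_compression(
        "the minister said that he would resign today",
        "resignation announced by the minister"))
```

A check of this kind could be run over aligned source/subtitle sentence pairs to estimate how many observed compressions are reachable by deletion alone.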