Another look at the data sparsity problem

  • Authors:
  • Ben Allison; David Guthrie; Louise Guthrie

  • Affiliations:
  • Regent Court, University of Sheffield, Sheffield, UK (all authors)

  • Venue:
  • TSD'06 Proceedings of the 9th international conference on Text, Speech and Dialogue
  • Year:
  • 2006

Abstract

Performance on a statistical language processing task relies upon accurate information being found in a corpus. However, it is known (and this paper will confirm) that many perfectly valid word sequences do not appear in training corpora. The percentage of n-grams in a test document which are seen in a training corpus is defined as n-gram coverage, and work in the speech processing community [7] has shown that there is a correlation between n-gram coverage and word error rate (WER) on a speech recognition task. Other work (e.g. [1]) has shown that increasing training data consistently improves performance on a language processing task. This paper extends that work by examining n-gram coverage for far larger corpora, considering a range of document types which vary in their similarity to the training corpora, and experimenting with a broader range of pruning techniques. The paper shows that large portions of language will not be represented within even very large corpora. It confirms that more data is always better, but how much better depends upon a range of factors: the source of that additional data, the source of the test documents, and how the language model is pruned to account for sampling errors and make computation reasonable.
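The n-gram coverage statistic defined in the abstract can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code: it computes coverage over the distinct n-grams of a test document, whereas a token-level variant would count every occurrence; the function and variable names are my own.

```python
def ngrams(tokens, n):
    """Return the set of distinct n-grams (as tuples) in a token sequence."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def ngram_coverage(test_tokens, train_tokens, n):
    """Percentage of distinct test-document n-grams that appear in training."""
    test = ngrams(test_tokens, n)
    train = ngrams(train_tokens, n)
    if not test:
        return 0.0
    return 100.0 * len(test & train) / len(test)

# Toy example: 4 of the 5 distinct test bigrams occur in the training text.
train = "the cat sat on the mat".split()
test = "the cat sat on the rug".split()
print(ngram_coverage(test, train, 2))  # prints 80.0
```

Even this toy example shows the sparsity effect the paper studies: a single unseen word ("rug") drops bigram coverage well below 100%, and the gap widens as n grows.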