The entropy of English using PPM-based models

  • Authors:
  • W. J. Teahan; J. G. Cleary

  • Venue:
  • DCC '96: Proceedings of the Conference on Data Compression
  • Year:
  • 1996

Abstract

The purpose of this paper is to show that the difference between the best machine models and human models is smaller than previous results might indicate. This follows from a number of observations: firstly, the original human experiments used only 27-character English (letters plus space), whereas most computer experiments used full 128-character ASCII text; secondly, using large amounts of priming text substantially improves PPM's performance; and thirdly, the PPM algorithm can be modified to perform better on English text. The problem of estimating the entropy of English is discussed, the importance of training text for PPM is demonstrated, and it is shown that PPM's performance can be improved by "adjusting" the alphabet used. The results based on these improvements are then given, with machine performance down to 1.46 bits per character (bpc).
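The 1.46 bpc figure is an adaptive cross-entropy: the model assigns each upcoming character a probability, and the cost of the text is the average of -log2 p over all characters. As a rough, non-authoritative illustration of the priming effect described in the abstract, the Python sketch below blends character contexts PPM-style (escape method C, no exclusions, order 2, a toy primer text); it is not the authors' implementation, and all names and parameters in it are illustrative.

    import math
    from collections import defaultdict

    def ppm_bits_per_char(primer, test, max_order=2):
        """Cross-entropy (bits per character) of `test` under a simplified
        PPM-style model primed on `primer`. Counts keep adapting while the
        test text is coded, mirroring adaptive compression."""
        alphabet = sorted(set(primer) | set(test))  # fixed for the demo

        # counts[k][context_string][symbol] -> occurrence count
        counts = [defaultdict(lambda: defaultdict(int))
                  for _ in range(max_order + 1)]

        def update(history, sym):
            for k in range(max_order + 1):
                if len(history) >= k:
                    counts[k][history[-k:] if k else ''][sym] += 1

        def prob(history, sym):
            # Blend orders max_order..0 via escape probabilities (method C:
            # escape count = distinct symbols seen in the context), falling
            # back to a uniform order -1 model over the alphabet.
            escape, p = 1.0, 0.0
            for k in range(max_order, -1, -1):
                if len(history) < k:
                    continue
                ctx = counts[k].get(history[-k:] if k else '')
                if not ctx:
                    continue
                total, distinct = sum(ctx.values()), len(ctx)
                p += escape * ctx.get(sym, 0) / (total + distinct)
                escape *= distinct / (total + distinct)
            return p + escape / len(alphabet)

        history = ''
        for c in primer:          # priming text is coded for free
            update(history, c)
            history += c

        bits = 0.0
        for c in test:
            bits -= math.log2(prob(history, c))
            update(history, c)
            history += c
        return bits / len(test)

    primer = "the quick brown fox jumps over the lazy dog " * 200
    test = "the lazy dog jumps over the quick brown fox "
    print(f"primed:   {ppm_bits_per_char(primer, test):.2f} bpc")
    print(f"unprimed: {ppm_bits_per_char('', test):.2f} bpc")

Running this, the primed model codes the test text in far fewer bits per character than the unprimed one, which is the effect the paper quantifies at scale; the paper's actual coder uses longer contexts, exclusions, and arithmetic coding to reach its 1.46 bpc result.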