Prediction by Grammatical Match

  • Authors:
  • J. Michael Lake

  • Venue:
  • DCC '00 Proceedings of the Conference on Data Compression
  • Year:
  • 2000


Abstract

We present Prediction by Grammatical Match (PGM), a new general-purpose adaptive text compression framework that successfully blends finite-context and general context-free models. A PGM compressor operates incrementally: it parses a prefix of the input text, generating a set of analyses; these analyses are scored according to encoding cost, the cheapest is selected, and it is sent through an order-k PPM encoder. PGM's primary innovations include the use of a generalized PPM in selection and coding; the simultaneous use of multiple context-free grammars; the use of lexical left-corner derivations (LLCDs); and an aggressive algorithm for constructing an LR(0)-parsable metalanguage for LLCDs. LLCDs are a hybrid of bottom-up and top-down descriptions that represent grammatical information implicitly with each lexeme. The constructed metalanguage extends this to include explicit top-down steps that resolve local ambiguities in at most one strictly grammatical symbol. These properties combine to deliver excellent compression. On a test corpus of about 1 MB of Scheme program text, PGM with a generic Scheme grammar required about 26% fewer bits than PPM to represent the entire corpus, with reductions on individual files reaching as high as 55%. In addition, PGM enriches the time-compression-memory tradeoff options, since a low-order PGM can achieve bpc rates comparable to high-order PPMs at considerable savings in space. PGM compression runs in expected linear time and space for many kinds of grammars; PGM decompression runs in guaranteed linear time and space.
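The selection step the abstract describes — generate candidate analyses for a prefix, score each by its encoding cost under a context model, and keep the cheapest — can be sketched as follows. This is an illustrative toy, not the authors' implementation: `OrderKModel` is a crude stand-in for a real PPM coder, and `cheapest_analysis` and all other names here are hypothetical.

```python
import math
from collections import defaultdict

class OrderKModel:
    """Toy order-k context model: estimates the cost in bits of a symbol
    given its k preceding symbols (a crude stand-in for a PPM coder)."""

    def __init__(self, k):
        self.k = k
        # context tuple -> symbol -> count
        self.counts = defaultdict(lambda: defaultdict(int))

    def cost_bits(self, context, symbol):
        ctx = tuple(context[-self.k:])
        seen = self.counts[ctx]
        # crude add-one smoothing so unseen symbols get a finite cost
        total = sum(seen.values()) + len(seen) + 1
        return -math.log2((seen[symbol] + 1) / total)

    def update(self, context, symbol):
        self.counts[tuple(context[-self.k:])][symbol] += 1

def cheapest_analysis(analyses, model, context):
    """Score each candidate analysis (a sequence of symbols) by its total
    encoding cost under the model and return the cheapest one."""
    def total_cost(analysis):
        bits, ctx = 0.0, list(context)
        for sym in analysis:
            bits += model.cost_bits(ctx, sym)
            ctx.append(sym)
        return bits
    return min(analyses, key=total_cost)

# Hypothetical usage: after the model has seen "foo" several times in this
# context, an analysis emitting "foo" costs fewer bits than one emitting "bar".
model = OrderKModel(k=2)
for _ in range(5):
    model.update(["(", "define"], "foo")
best = cheapest_analysis([["foo"], ["bar"]], model, ["(", "define"])
```

In the full framework this choice is made incrementally as parsing proceeds, and the winning analysis is what actually passes through the PPM encoder; the point of the sketch is only the score-and-select loop.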