Compact In-Memory Models for Compression of Large Text Databases

  • Authors:
  • Justin Zobel; Hugh E. Williams

  • Venue:
  • SPIRE '99 Proceedings of the String Processing and Information Retrieval Symposium & International Workshop on Groupware
  • Year:
  • 1999

Abstract

For compression of text databases, semi-static word-based models are a pragmatic choice. Previous experiments have shown that, when there is insufficient memory to store a full word-based model, encoding rare words as sequences of characters still allows good compression, whereas a pure character-based model compresses poorly. We propose a further kind of model that reduces main-memory costs: approximate models, in which rare words are represented by similarly-spelt common words together with a sequence of edits. We investigate the compression achievable with different models, including characters, words, word pairs, and edits, and with combinations of these approaches. We show experimentally that carefully chosen combinations of models can improve the compression achievable in limited memory and greatly reduce overall memory requirements.
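The approximate-model idea can be illustrated with a short sketch: a rare word is encoded as a reference to a similarly-spelt common word in the lexicon, plus the edit operations that recover the original spelling. The Python below is a minimal illustration built on difflib, with a hypothetical lexicon and edit representation; it shows the concept only and is not the encoding used in the paper.

```python
import difflib

# Sketch of the "approximate model" idea from the abstract: a rare word
# is stored as a pointer to a similarly-spelt common word plus a short
# edit script. The lexicon, function names, and edit representation are
# illustrative assumptions, not the paper's actual scheme.

LEXICON = ["compress", "compression", "character", "database", "model"]

def closest_common_word(rare: str, lexicon) -> str:
    """Pick the lexicon word most similar to `rare` by difflib's ratio."""
    return max(lexicon, key=lambda w: difflib.SequenceMatcher(None, w, rare).ratio())

def edit_script(base: str, target: str):
    """Edit operations (in base-string coordinates) turning `base` into `target`."""
    matcher = difflib.SequenceMatcher(None, base, target)
    return [(tag, i1, i2, target[j1:j2])
            for tag, i1, i2, j1, j2 in matcher.get_opcodes()
            if tag != "equal"]

def apply_edits(base: str, ops) -> str:
    """Decode: replay the edit script against the base word."""
    out, pos = [], 0
    for tag, i1, i2, replacement in ops:
        out.append(base[pos:i1])      # copy the unchanged run
        if tag in ("replace", "insert"):
            out.append(replacement)   # emit substituted/inserted characters
        pos = i2                      # skip past deleted/replaced characters
    out.append(base[pos:])
    return "".join(out)

rare = "compressive"                  # a word too rare to keep in the model
base = closest_common_word(rare, LEXICON)
ops = edit_script(base, rare)
assert apply_edits(base, ops) == rare
print(base, ops)                      # compress [('insert', 8, 8, 'ive')]
```

In such a scheme, the decoder needs only the shared lexicon of common words plus the transmitted edit script, so a rare word costs a few edit operations rather than its own entry in the in-memory model.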