A general-purpose compression scheme for large collections
ACM Transactions on Information Systems (TOIS)
For compression of text databases, semi-static word-based models are a pragmatic choice. Previous experiments have shown that, where there is insufficient memory to store a full word-based model, encoding rare words as sequences of characters can still allow good compression, whereas a pure character-based model compresses poorly. We propose a further kind of model that reduces main-memory costs: approximate models, in which rare words are represented by similarly-spelt common words together with a sequence of edits. We investigate the compression achieved with different models, including characters, words, word pairs, and edits, and with combinations of these approaches. We show experimentally that carefully chosen combinations of models can improve the compression available in limited memory and greatly reduce overall memory requirements.
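The core idea of the approximate models described above can be illustrated with a small sketch: a rare word is replaced by a reference to a similarly-spelt common word plus an edit script that reconstructs the original. This is a minimal illustration, not the paper's actual scheme; the lexicon, similarity measure, and edit encoding here are assumptions, using Python's standard `difflib` for the edit operations.

```python
import difflib

# Hypothetical in-memory lexicon of common words (an assumption: a real
# scheme would derive this from corpus frequencies, not a fixed list).
COMMON_WORDS = ["compression", "collection", "information", "experiment"]

def encode_rare_word(rare):
    """Represent a rare word as (reference common word, edit script)."""
    # Pick the closest common word by string similarity.
    ref = max(COMMON_WORDS,
              key=lambda w: difflib.SequenceMatcher(None, w, rare).ratio())
    # Keep only the non-trivial edit operations that turn ref into rare.
    ops = [(tag, i1, i2, rare[j1:j2])
           for tag, i1, i2, j1, j2 in
           difflib.SequenceMatcher(None, ref, rare).get_opcodes()
           if tag != "equal"]
    return ref, ops

def decode_rare_word(ref, ops):
    """Rebuild the rare word by applying the edit script to the reference."""
    out, pos = [], 0
    for tag, i1, i2, repl in ops:
        out.append(ref[pos:i1])   # copy the unchanged run before this edit
        out.append(repl)          # emit inserted or replacement text
        pos = i2
    out.append(ref[pos:])         # copy any unchanged tail
    return "".join(out)

ref, ops = encode_rare_word("compressions")
assert decode_rare_word(ref, ops) == "compressions"
```

In a real coder, the reference word would be an index into the shared common-word model and the edit script would itself be entropy-coded, so a rare word costs little more than its few edits.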