In honor of the twenty-fifth anniversary of Huffman coding, four new results about Huffman codes are presented. The first result shows that a binary prefix condition code is a Huffman code iff the intermediate and terminal nodes in the code tree can be listed by nonincreasing probability so that each node in the list is adjacent to its sibling. The second result upper bounds the redundancy (expected length minus entropy) of a binary Huffman code by P_1 + log_2[2(log_2 e)/e] = P_1 + 0.086, where P_1 is the probability of the most likely source letter. The third result shows that one can always leave a codeword of length two unused and still have a redundancy of at most one. The fourth result is a simple algorithm for adapting a Huffman code to slowly varying estimates of the source probabilities. In essence, one maintains a running count of uses of each node in the code tree and lists the nodes in order of these counts. Whenever the occurrence of a message increases a node count above the count of the next node in the list, the nodes, with their attached subtrees, are interchanged.
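The count-and-swap rule of the fourth result can be sketched in Python as follows. This is a minimal illustration, not the paper's algorithm verbatim: the class name AdaptiveHuffman is invented, and every symbol's count is initialized to 1 (an assumption) so that each internal node is strictly heavier than its children and the scan for an equal-count node never lands on an ancestor of the node being promoted.

```python
import heapq
import itertools

class Node:
    """One node of the code tree; `symbol` is None for internal nodes."""
    def __init__(self, symbol=None, count=0):
        self.symbol, self.count = symbol, count
        self.parent = self.left = self.right = None

class AdaptiveHuffman:
    def __init__(self, symbols):
        # Build an initial Huffman tree with every leaf count set to 1
        # (an assumption, not from the abstract).
        tie = itertools.count()
        heap = [(1, next(tie), Node(s, 1)) for s in symbols]
        heapq.heapify(heap)
        nodes = [n for _, _, n in heap]
        while len(heap) > 1:
            ca, _, a = heapq.heappop(heap)
            cb, _, b = heapq.heappop(heap)
            p = Node(count=ca + cb)
            p.left, p.right, a.parent, b.parent = a, b, p, p
            nodes.append(p)
            heapq.heappush(heap, (p.count, next(tie), p))
        self.root = heap[0][2]
        # The node list of the abstract: all nodes by nonincreasing count.
        self.order = sorted(nodes, key=lambda n: -n.count)
        self.index = {n: i for i, n in enumerate(self.order)}
        self.leaf = {n.symbol: n for n in nodes if n.symbol is not None}

    def code(self, symbol):
        """Current codeword: 0 = left branch, 1 = right branch."""
        bits, n = [], self.leaf[symbol]
        while n.parent is not None:
            bits.append("0" if n.parent.left is n else "1")
            n = n.parent
        return "".join(reversed(bits))

    def update(self, symbol):
        """Count one use of `symbol` on its leaf and every ancestor,
        interchanging nodes (with attached subtrees) whenever a count
        would overtake an earlier node in the list."""
        node = self.leaf[symbol]
        while node is not None:
            # Find the earliest-listed node with the same count; swapping
            # with it keeps the list sorted once this count increases.
            j = self.index[node]
            while j > 0 and self.order[j - 1].count == node.count:
                j -= 1
            leader = self.order[j]
            if leader is not node and leader is not node.parent:
                self._swap(node, leader)
            node.count += 1
            node = node.parent

    def _swap(self, a, b):
        """Interchange a and b, together with their attached subtrees."""
        pa, pb = a.parent, b.parent
        if pa.left is a: pa.left = b
        else: pa.right = b
        if pb.left is b: pb.left = a
        else: pb.right = a
        a.parent, b.parent = pb, pa
        ia, ib = self.index[a], self.index[b]
        self.order[ia], self.order[ib] = b, a
        self.index[a], self.index[b] = ib, ia
```

Feeding a skewed stream then drives the frequent symbol toward a short codeword: after twenty occurrences of one symbol in a four-symbol alphabet, the swaps promote its leaf to depth one, while the node list stays sorted by nonincreasing count.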