Variations on a Theme by Huffman

  • Authors:
  • R. Gallager

  • Affiliations:
  • -

  • Venue:
  • IEEE Transactions on Information Theory
  • Year:
  • 1978

Abstract

In honor of the twenty-fifth anniversary of Huffman coding, four new results about Huffman codes are presented. The first result shows that a binary prefix condition code is a Huffman code if and only if the intermediate and terminal nodes in the code tree can be listed by nonincreasing probability so that each node in the list is adjacent to its sibling. The second result upper bounds the redundancy (expected length minus entropy) of a binary Huffman code by $P_1 + \log_2[2(\log_2 e)/e] = P_1 + 0.086$, where $P_1$ is the probability of the most likely source letter. The third result shows that one can always leave a codeword of length two unused and still have a redundancy of at most one. The fourth result is a simple algorithm for adapting a Huffman code to slowly varying estimates of the source probabilities. In essence, one maintains a running count of uses of each node in the code tree and lists the nodes in order of these counts. Whenever the occurrence of a message increases a node count above the count of the next node in the list, the nodes, with their attached subtrees, are interchanged.
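For reference, the numerical constant in the second result follows directly from the stated expression (a quick arithmetic check, not taken from the paper):

$$\log_2\!\left[\frac{2\log_2 e}{e}\right] \approx \log_2\!\left(\frac{2 \times 1.4427}{2.7183}\right) \approx \log_2(1.0615) \approx 0.086.$$

The count-and-swap update behind the fourth result can be sketched in code. The Python below is an illustrative reconstruction rather than the paper's exact procedure: it assumes a fixed, known alphabet with unit initial counts, keeps every node of the code tree in a list ordered by nonincreasing count, and, when an increment would violate that order, exchanges the node (with its attached subtree) with the earliest equal-count node in the list. The class and method names (`AdaptiveHuffman`, `update`, `code`) are invented for this sketch.

```python
import heapq
import itertools


class Node:
    """Code-tree node carrying a running usage count ('weight')."""
    def __init__(self, weight, symbol=None, left=None, right=None):
        self.weight = weight
        self.symbol = symbol          # None for intermediate nodes
        self.left = left
        self.right = right
        self.parent = None
        self.index = 0                # position in the count-ordered node list


class AdaptiveHuffman:
    """Illustrative count-and-swap adaptive Huffman code over a fixed alphabet."""

    def __init__(self, symbols):
        # Build an ordinary Huffman tree with a unit count for every symbol.
        tie = itertools.count()
        heap = [(1, next(tie), Node(1, s)) for s in symbols]
        heapq.heapify(heap)
        while len(heap) > 1:
            w1, _, a = heapq.heappop(heap)
            w2, _, b = heapq.heappop(heap)
            parent = Node(w1 + w2, None, a, b)
            a.parent = b.parent = parent
            heapq.heappush(heap, (parent.weight, next(tie), parent))
        self.root = heap[0][2]
        # List all nodes by nonincreasing count, root first, and record positions.
        self.nodes = self._collect(self.root)
        self.nodes.sort(key=lambda n: -n.weight)
        for i, n in enumerate(self.nodes):
            n.index = i
        self.leaf = {n.symbol: n for n in self.nodes if n.symbol is not None}

    def _collect(self, node):
        if node is None:
            return []
        return [node] + self._collect(node.left) + self._collect(node.right)

    def code(self, symbol):
        """Read off the current codeword for `symbol` (0 = left, 1 = right)."""
        bits, node = [], self.leaf[symbol]
        while node.parent is not None:
            bits.append("0" if node.parent.left is node else "1")
            node = node.parent
        return "".join(reversed(bits))

    def update(self, symbol):
        """Count one occurrence of `symbol`, swapping subtrees as needed so the
        node list stays ordered by nonincreasing count."""
        node = self.leaf[symbol]
        while node is not None:
            # Leader = earliest node in the list with the same count as `node`
            # (equal counts are contiguous because the list is kept sorted).
            leader, i = node, node.index - 1
            while i >= 0 and self.nodes[i].weight == node.weight:
                leader, i = self.nodes[i], i - 1
            if leader is not node and leader is not node.parent:
                self._swap(node, leader)
            node.weight += 1
            node = node.parent

    def _swap(self, a, b):
        # Exchange two nodes, together with their attached subtrees.
        pa, pb = a.parent, b.parent
        for parent, old, new in ((pa, a, b), (pb, b, a)):
            if parent.left is old:
                parent.left = new
            else:
                parent.right = new
        a.parent, b.parent = pb, pa
        self.nodes[a.index], self.nodes[b.index] = b, a
        a.index, b.index = b.index, a.index
```

A short usage example, again purely illustrative:

```python
ah = AdaptiveHuffman("abcd")
for ch in "aaabbc":
    print(ch, "->", ah.code(ch))  # codeword in force before this occurrence
    ah.update(ch)                 # then count the occurrence and reorder
```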