Learning a multi-dimensional companding function for lossy source coding

  • Authors:
  • Shin-ichi Maeda; Shin Ishii

  • Affiliations:
  • Kyoto University, Gokasyo, Uji, Kyoto, 611-0011, Japan (both authors)

  • Venue:
  • Neural Networks
  • Year:
  • 2009


Abstract

Although lossy source coding has grown in importance, no general, practical methodology for its design has been fully established. The well-known vector quantization (VQ) can represent any fixed-length lossy source code, but demands excessive computational resources. Companding vector quantization (CVQ) reduces the complexity of non-structured VQ by replacing the vector quantizer with a set of scalar quantizers, and it can represent a wide class of practically useful VQs. Although an analytical derivation of the optimal CVQ is difficult except in very limited cases, the optimization can instead be performed using data samples. Here we propose a CVQ optimization method, which includes bit allocation based on a newly derived distortion formula that generalizes Bennett's formula, and test its validity. We applied the method to transform coding and compared the performance of our CVQ with those of Karhunen-Loève transform (KLT)-based coding and non-structured VQ. We found that the trained CVQ outperforms not only KLT-based coding but also non-structured VQ for high bit-rate coding of linear mixtures of uniform sources, and that it also outperforms KLT-based coding for low bit-rate coding of a Gaussian source. To highlight the advantages of our approach, we also discuss the degradation of non-structured VQ and the limitations of theoretical analyses that are valid only for high bit-rate coding.
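To make the compand-quantize-expand structure underlying CVQ concrete, the following is a minimal sketch that uses the classic μ-law compander applied per dimension as a stand-in for the learned multi-dimensional companding function of the paper; the function names, the μ-law choice, and the fixed per-dimension bit allocation are illustrative assumptions, not the authors' trained method.

```python
import numpy as np

def mu_law_compress(x, mu=255.0):
    # Element-wise mu-law compressor mapping [-1, 1] to [-1, 1]
    # (a simple stand-in for a learned companding function).
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def mu_law_expand(y, mu=255.0):
    # Exact inverse of the compressor.
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu

def compand_quantize(x, bits_per_dim, mu=255.0):
    # CVQ structure: compress -> per-dimension uniform scalar
    # quantization -> expand. `bits_per_dim` plays the role of the
    # bit allocation (here fixed, not optimized).
    levels = 2 ** np.asarray(bits_per_dim)
    y = mu_law_compress(x, mu)           # companded coordinates in [-1, 1]
    step = 2.0 / levels                  # uniform cell width per dimension
    idx = np.clip(np.floor((y + 1.0) / step), 0, levels - 1)
    y_hat = -1.0 + (idx + 0.5) * step    # midpoint reconstruction
    return mu_law_expand(y_hat, mu)
```

Because the quantizer is a set of independent scalar quantizers in the companded domain, encoding cost grows linearly in the dimension, in contrast to the exhaustive nearest-neighbor search of non-structured VQ.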