Combining neural networks and the wavelet transform for image compression

  • Authors:
  • Tracy Denk, Keshab K. Parhi, Vladimir Cherkassky

  • Affiliations:
  • Dept. of Electrical Engineering, University of Minnesota, Minneapolis, MN (all authors)

  • Venue:
  • ICASSP '93: Proceedings of the 1993 IEEE International Conference on Acoustics, Speech, and Signal Processing - Volume I
  • Year:
  • 1993

Abstract

This paper presents a new image compression scheme that uses the wavelet transform and neural networks. Image compression is performed in three steps. First, the image is decomposed at different scales using the wavelet transform to obtain an orthogonal wavelet representation of the image. Second, the wavelet coefficients are divided into vectors, which are projected onto a subspace using a neural network. The number of coefficients required to represent a vector in the subspace is smaller than the number required to represent the original vector, resulting in data compression. Finally, the coefficients that project the vectors of wavelet coefficients onto the subspace are quantized and entropy coded. The advantages of various quantization schemes are discussed. Using these techniques, we obtain 32-to-1 compression at a peak SNR of 29 dB for the "Lenna" image.
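
The sketch below illustrates the three-step pipeline the abstract describes, not the authors' implementation. The Haar filter, 4x4 block size, subspace dimension k, and quantizer step size are all illustrative assumptions, and the trained neural-network projection is replaced by an SVD-based projection (a linear autoencoder trained on squared error converges to the same principal subspace); entropy coding is omitted.

```python
# Minimal sketch of: (1) wavelet decomposition, (2) projection of
# coefficient vectors onto a low-dimensional subspace, (3) quantization.
import numpy as np

def haar2d(image):
    """One-level 2D Haar decomposition into LL, LH, HL, HH subbands."""
    a = (image[0::2, :] + image[1::2, :]) / np.sqrt(2)  # row lowpass
    d = (image[0::2, :] - image[1::2, :]) / np.sqrt(2)  # row highpass
    ll = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    lh = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    hl = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)
    hh = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

def to_vectors(subband, block=4):
    """Divide a subband into non-overlapping block x block vectors."""
    h, w = subband.shape
    tiles = subband[:h - h % block, :w - w % block]
    tiles = tiles.reshape(h // block, block, w // block, block)
    return tiles.transpose(0, 2, 1, 3).reshape(-1, block * block)

def subspace_projection(vectors, k):
    """Project vectors onto a k-dimensional subspace (PCA stand-in
    for the paper's trained neural-network projection)."""
    mean = vectors.mean(axis=0)
    _, _, vt = np.linalg.svd(vectors - mean, full_matrices=False)
    basis = vt[:k]                      # k basis vectors of the subspace
    codes = (vectors - mean) @ basis.T  # k coefficients per vector
    return codes, basis, mean

# Example: compress the detail subbands of a synthetic 256x256 image.
rng = np.random.default_rng(0)
image = rng.standard_normal((256, 256))
ll, lh, hl, hh = haar2d(image)

step = 0.5  # uniform quantizer step size (entropy coding omitted)
for band in (lh, hl, hh):
    vecs = to_vectors(band)               # 16-dim coefficient vectors
    codes, basis, mean = subspace_projection(vecs, k=4)
    q = np.round(codes / step)            # quantized subspace coefficients
    recon = (q * step) @ basis + mean     # decoder: de-quantize, expand
    mse = np.mean((recon - vecs) ** 2)
    print(f"vectors: {vecs.shape}, codes: {codes.shape}, MSE: {mse:.4f}")
```

Here each 16-dimensional vector of wavelet coefficients is represented by 4 subspace coefficients, so the detail subbands are described with a quarter of the original coefficients before quantization; the paper's neural network learns this projection rather than computing it in closed form.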