An adaptive incremental LBG for vector quantization

  • Authors:
  • F. Shen; O. Hasegawa

  • Affiliations:
  • Department of Computational Intelligence and System Science, Tokyo Institute of Technology, R2, 4259 Nagatsuta, Midori-ku, Yokohama, 226-8503, Japan
  • Imaging Science and Engineering Lab., Tokyo Institute of Technology and PRESTO, Japan Science and Technology Agency (JST)

  • Venue:
  • Neural Networks
  • Year:
  • 2006

Abstract

This study presents a new vector quantization method that generates codewords incrementally. New codewords are inserted in regions of the input vector space where the distortion error is highest, until the desired number of codewords (or a distortion error threshold) is reached. Adoption of an adaptive distance function greatly improves the proposed method's performance. During the incremental process, a removal-insertion technique is used to fine-tune the codebook, making the proposed method independent of initial conditions. The proposed method outperforms some recently published efficient algorithms, such as Enhanced LBG (Patanè & Russo, 2001), on the traditional task: given a fixed number of codewords, find a codebook that minimizes the distortion error. It can also be used for a new task that traditional methods cannot solve: given a fixed distortion error threshold, minimize the number of codewords and find a suitable codebook. Experiments on several image compression problems indicate that the proposed method works well.
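The growth strategy the abstract describes can be illustrated with a minimal sketch: start from a single codeword, repeatedly insert a new codeword near the codeword whose region has the highest total distortion, and refine the codebook with a few Lloyd/LBG iterations. This is a hypothetical simplification for illustration only; it omits the paper's adaptive distance function and removal-insertion fine-tuning, and the function name and parameters below are assumptions, not the authors' API.

```python
import numpy as np

def incremental_lbg(data, n_codewords, n_lloyd_iters=10, seed=0):
    """Illustrative incremental codebook growth (simplified sketch,
    not the paper's full method).

    data: (N, D) array of input vectors.
    Returns an (n_codewords, D) codebook.
    """
    rng = np.random.default_rng(seed)
    # Start with a single codeword: the centroid of all data.
    codebook = data.mean(axis=0, keepdims=True)
    while len(codebook) < n_codewords:
        # Squared Euclidean distances (N, K) and nearest-codeword assignment.
        d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
        assign = d.argmin(axis=1)
        # Total distortion per codeword; insert near the worst region.
        distortion = np.array(
            [d[assign == k, k].sum() for k in range(len(codebook))]
        )
        worst = distortion.argmax()
        jitter = 1e-3 * rng.standard_normal(codebook.shape[1])
        codebook = np.vstack([codebook, codebook[worst] + jitter])
        # Refine with a few Lloyd (LBG) iterations.
        for _ in range(n_lloyd_iters):
            d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
            assign = d.argmin(axis=1)
            for k in range(len(codebook)):
                members = data[assign == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
    return codebook
```

For the second task (fixed distortion threshold, minimize codewords), the same loop would instead terminate once the total distortion drops below the threshold rather than at a fixed codebook size.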