The LBG-U Method for Vector Quantization – an Improvement over LBG Inspired from Neural Networks

  • Authors:
  • Bernd Fritzke

  • Affiliations:
  • Systembiophysik, Institut für Neuroinformatik, Ruhr-Universität Bochum, 44780 Bochum, Germany. E-mail: fritzke@neuroinformatik.ruhr-uni-bochum.de

  • Venue:
  • Neural Processing Letters
  • Year:
  • 1997


Abstract

A new vector quantization method (LBG-U) closely related to a particular class of neural network models (growing self-organizing networks) is presented. LBG-U consists mainly of repeated runs of the well-known LBG algorithm. Each time LBG converges, however, a novel measure of utility is assigned to each codebook vector. Thereafter, the vector with minimum utility is moved to a new location, LBG is run on the resulting modified codebook until convergence, another vector is moved, and so on. Since a strictly monotonic improvement of the LBG-generated codebooks is enforced, it can be proved that LBG-U terminates in a finite number of steps. Experiments with artificial data demonstrate significant improvements in terms of RMSE over LBG, combined with only modestly higher computational costs.
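
Algorithm sketch

The abstract fully specifies the control flow of LBG-U (run LBG to convergence, relocate the minimum-utility vector, repeat while improvement is strict) but not the exact formulas. The sketch below fills those in with plausible choices that are assumptions, not the paper's definitions: squared-error (Euclidean) distortion, a utility defined as the distortion increase that would result from deleting a vector (its points falling back to their second-nearest vector), and relocation of the minimum-utility vector next to the maximum-error vector with a small jitter. The function names (`lbg`, `lbg_u`, `mse`) are illustrative.

```python
import numpy as np

def mse(data, codebook, labels):
    """Mean squared quantization error of a codebook on the data."""
    return np.mean(np.sum((data - codebook[labels]) ** 2, axis=1))

def lbg(data, codebook, tol=1e-6, max_iter=100):
    """Standard LBG (batch k-means): alternate nearest-neighbor
    assignment and centroid update until the codebook stabilizes."""
    for _ in range(max_iter):
        d = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new_codebook = codebook.copy()
        for i in range(len(codebook)):
            members = data[labels == i]
            if len(members):
                new_codebook[i] = members.mean(axis=0)
        shift = np.linalg.norm(new_codebook - codebook)
        codebook = new_codebook
        if shift < tol:
            break
    d = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
    return codebook, d.argmin(axis=1)

def lbg_u(data, k, seed=0):
    """LBG-U sketch: rerun LBG, relocate the minimum-utility vector,
    and keep the new codebook only if it strictly lowers the error."""
    rng = np.random.default_rng(seed)
    codebook = data[rng.choice(len(data), k, replace=False)].copy()
    codebook, labels = lbg(data, codebook)
    best = mse(data, codebook, labels)
    while True:
        sq = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2) ** 2
        labels = sq.argmin(axis=1)
        nearest = sq.min(axis=1)
        # Error E(i): squared distortion contributed by cell i.
        error = np.array([nearest[labels == i].sum() for i in range(k)])
        # Assumed utility U(i): how much total distortion would grow if
        # vector i were deleted and its points fell back to their
        # second-nearest vectors.
        sq2 = sq.copy()
        sq2[np.arange(len(data)), labels] = np.inf
        gain = sq2.min(axis=1) - nearest
        utility = np.array([gain[labels == i].sum() for i in range(k)])
        # Move the least useful vector next to the highest-error vector
        # (small jitter so the two can separate), then rerun LBG.
        trial = codebook.copy()
        trial[utility.argmin()] = (codebook[error.argmax()]
                                   + 1e-3 * rng.standard_normal(data.shape[1]))
        trial, trial_labels = lbg(data, trial)
        trial_err = mse(data, trial, trial_labels)
        if trial_err < best:     # strictly monotonic improvement enforced,
            codebook, best = trial, trial_err
        else:                    # hence termination in finitely many steps
            return codebook
```

On clustered artificial data, `lbg_u(data, k)` should typically end with a lower quantization error than a single `lbg` run, at the cost of a handful of additional LBG passes, which matches the abstract's claim of modestly higher computational cost.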