Global convergence and empirical consistency of the generalized Lloyd algorithm

  • Authors:
  • M. J. Sabin; R. M. Gray

  • Venue:
  • IEEE Transactions on Information Theory
  • Year:
  • 1986

Abstract

The generalized Lloyd algorithm for vector quantizer design is analyzed as a descent algorithm for nonlinear programming. A broad class of convex distortion functions is considered, and any input distribution that has no singular-continuous part is allowed. A well-known convergence theorem is applied to show that iterative applications of the algorithm produce a sequence of quantizers that approaches the set of fixed-point quantizers. The methods of the theorem are extended to sequences of algorithms, yielding results on the behavior of the algorithm when an unknown distribution is approximated by a training sequence of observations. It is shown that, as the length of the training sequence grows large, 1) fixed-point quantizers for the training sequence approach the set of fixed-point quantizers for the true distribution, and 2) limiting quantizers produced by the algorithm with the training-sequence distribution perform no worse than limiting quantizers produced by the algorithm with the true distribution.
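
For readers unfamiliar with the iteration being analyzed, the sketch below shows one generalized Lloyd step applied to a training sequence (the empirical-distribution setting discussed in the abstract), using squared error as one member of the convex distortion class. This is a minimal illustration, not the paper's formulation: the function names `lloyd_iteration` and `design`, the empty-cell handling, and the stopping tolerance are all assumptions made here for concreteness.

```python
# Sketch: generalized Lloyd descent on a training sequence, assuming
# squared-error distortion. Illustrative only; names and empty-cell
# handling are this sketch's choices, not the paper's.
import numpy as np

def lloyd_iteration(codebook: np.ndarray, training: np.ndarray) -> np.ndarray:
    """One descent step: optimal partition, then optimal codevectors.

    codebook: (K, d) current codevectors
    training: (N, d) training sequence approximating the true distribution
    """
    # Step 1: nearest-neighbor partition of the training vectors
    # (minimum-distortion encoding under squared error).
    dists = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    labels = dists.argmin(axis=1)

    # Step 2: replace each codevector by the centroid of its cell, which
    # minimizes average squared error within the cell. Empty cells are
    # left unchanged here (a practical design might split a busy cell).
    new_codebook = codebook.copy()
    for k in range(codebook.shape[0]):
        cell = training[labels == k]
        if len(cell) > 0:
            new_codebook[k] = cell.mean(axis=0)
    return new_codebook

def design(codebook: np.ndarray, training: np.ndarray,
           tol: float = 1e-8, max_iter: int = 100) -> np.ndarray:
    """Iterate until the codebook is (numerically) a fixed point."""
    for _ in range(max_iter):
        updated = lloyd_iteration(codebook, training)
        if np.max(np.abs(updated - codebook)) < tol:
            break
        codebook = updated
    return codebook
```

Each step never increases the empirical distortion, which is the descent property the abstract's convergence analysis builds on; a codebook unchanged by the step is a fixed-point quantizer for the training-sequence distribution.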