The generalized Lloyd algorithm for vector quantizer design is analyzed as a descent algorithm for nonlinear programming. A broad class of convex distortion functions is considered, and any input distribution with no singular-continuous part is allowed. A well-known convergence theorem is applied to show that iterative applications of the algorithm produce a sequence of quantizers that approaches the set of fixed-point quantizers. The methods of the theorem are extended to sequences of algorithms, yielding results on the behavior of the algorithm when an unknown distribution is approximated by a training sequence of observations. It is shown that, as the length of the training sequence grows large, 1) fixed-point quantizers for the training sequence approach the set of fixed-point quantizers for the true distribution, and 2) limiting quantizers produced by the algorithm with the training-sequence distribution perform no worse than limiting quantizers produced by the algorithm with the true distribution.
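For concreteness, the following is a minimal sketch of the generalized Lloyd iteration the abstract analyzes, specialized to squared-error distortion on a finite training sequence. These are narrowing assumptions: the paper treats a broad class of convex distortion functions and general input distributions, and the function and variable names below are illustrative only.

```python
import numpy as np

def generalized_lloyd(train, codebook, max_iters=100, tol=1e-8):
    """Illustrative Lloyd iteration for vector quantizer design.

    train    : (n, d) array of training vectors (the empirical distribution)
    codebook : (k, d) array of initial reproduction vectors, modified in place
    Returns the final codebook and its empirical mean squared-error distortion.
    """
    distortion = np.inf
    for _ in range(max_iters):
        # Nearest-neighbor step: partition the training sequence by assigning
        # each vector to its minimum-distortion codeword.
        d2 = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        assign = d2.argmin(axis=1)

        # Centroid step: replace each codeword by the centroid of its cell,
        # i.e., the distortion-minimizing reproduction point for squared error.
        for j in range(codebook.shape[0]):
            cell = train[assign == j]
            if len(cell) > 0:
                codebook[j] = cell.mean(axis=0)

        # Each full iteration cannot increase the empirical distortion, so the
        # loop is a descent procedure; stop once the decrease is negligible.
        new_distortion = d2[np.arange(len(train)), assign].mean()
        if distortion - new_distortion < tol:
            break
        distortion = new_distortion
    return codebook, distortion
```

In this squared-error special case the descent property is immediate: the nearest-neighbor step cannot increase distortion for a fixed codebook, and the centroid step cannot increase it for a fixed partition, so the sequence of quantizers produced converges toward fixed points of the iteration, as the abstract states in the more general setting.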