A Deterministic Annealing Approach for Parsimonious Design of Piecewise Regression Models
IEEE Transactions on Pattern Analysis and Machine Intelligence
A refined VQ-based image compression method
Fundamenta Informaticae
Distortion-rate bounds for distributed estimation using wireless sensor networks
EURASIP Journal on Advances in Signal Processing
Robust distributed source coder design by deterministic annealing
IEEE Transactions on Signal Processing
Building descriptive and discriminative visual codebook for large-scale image applications
Multimedia Tools and Applications
Decentralized Estimation using distortion sensitive learning vector quantization
Pattern Recognition Letters
In vector quantization, one approximates an input random vector, Y, by choosing from a finite set of values known as the codebook. We consider a more general problem where one may not have direct access to Y but only to some statistically related random vector X. We observe X and would like to generate an approximation to Y from a codebook of candidate vectors. This operation, called generalized vector quantization (GVQ), is essentially that of quantized estimation. An important special case of GVQ is the problem of noisy source coding, wherein a quantized approximation of a vector, Y, is obtained from observation of its noise-corrupted version, X. The optimal GVQ encoder has high complexity. We overcome the complexity barrier by optimizing a structurally constrained encoder. This challenging optimization task is solved via a probabilistic approach, based on deterministic annealing, which overcomes problems of shallow local minima that trap simpler descent methods. We demonstrate the successful application of our method to the coding of noisy sources.
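As a rough illustration of the deterministic-annealing idea invoked in the abstract (not the paper's specific GVQ encoder design), the sketch below designs an ordinary VQ codebook with temperature-controlled soft assignments: at high temperature every data point is shared among all codevectors, and as the temperature is lowered the assignments harden toward a nearest-neighbor partition, which helps avoid the shallow local minima that trap hard Lloyd-style descent. Function name, parameters, and schedule are illustrative assumptions.

```python
import numpy as np

def da_vector_quantize(X, n_codes=4, T0=10.0, Tmin=0.01, cooling=0.8,
                       iters=30, seed=0):
    """Generic deterministic-annealing codebook design (illustrative sketch,
    not the authors' GVQ algorithm).

    At temperature T, each point x is softly assigned to codevector c_j with
    probability p(j|x) proportional to exp(-||x - c_j||^2 / T); codevectors
    are then updated as the p-weighted means. T is lowered geometrically.
    """
    rng = np.random.default_rng(seed)
    # Start all codevectors near the data mean, slightly perturbed so the
    # symmetry can break as the temperature drops through phase transitions.
    codebook = X.mean(axis=0) + 0.01 * rng.standard_normal((n_codes, X.shape[1]))
    T = T0
    while T > Tmin:
        for _ in range(iters):
            # Squared distances, shape (n_points, n_codes).
            d2 = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
            # Soft (Gibbs) assignments, computed stably in log space.
            logp = -d2 / T
            logp -= logp.max(axis=1, keepdims=True)
            p = np.exp(logp)
            p /= p.sum(axis=1, keepdims=True)
            # Centroid update: weighted mean of the data under p(j|x).
            w = p.sum(axis=0)
            codebook = (p.T @ X) / np.maximum(w[:, None], 1e-12)
        T *= cooling  # cool toward a hard nearest-neighbor partition
    return codebook
```

For example, run on two well-separated clusters with `n_codes=2`; after cooling, the two codevectors settle near the two cluster centers rather than both collapsing onto the global mean.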