This paper presents a reduced-complexity deterministic annealing (DA) approach to vector quantizer (VQ) design that uses soft information processing with simplified assignment measures. Low-complexity distributions are designed to mimic the Gibbs distribution, the optimal distribution used in the standard DA method. These low-complexity distributions are simple enough to allow fast computation, yet they approximate the Gibbs distribution closely enough to yield near-optimal performance. We also derive the theoretical performance loss incurred at a given system entropy by using the simple soft measures instead of the optimal Gibbs measure, and we use this result to obtain optimal annealing schedules for the simple soft measures that approximate the annealing schedule of the optimal Gibbs distribution. The proposed reduced-complexity DA algorithms significantly improve the quality of the final codebooks compared to the generalized Lloyd algorithm and standard stochastic relaxation techniques, both with and without pairwise nearest neighbor (PNN) codebook initialization. The proposed algorithms are able to evade local minima, and the results show that they are not sensitive to the choice of the initial codebook. Compared to the standard DA approach, the reduced-complexity DA algorithms can run over 100 times faster with negligible performance difference. For example, in designing a 16-dimensional vector quantizer with a rate of 0.4375 bit/sample for a Gaussian source, the standard DA algorithm achieved 3.60 dB performance in 16,483 CPU seconds, whereas the reduced-complexity DA algorithm achieved the same performance in 136 CPU seconds. Beyond VQ design, the DA techniques are applicable to problems such as classification, clustering, and resource allocation.
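The standard DA iteration that the abstract takes as its baseline can be sketched as follows. This is a minimal illustration using the full Gibbs assignment measure, not the paper's reduced-complexity approximations; the synthetic data, cooling schedule, and perturbation size are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def da_vq(data, num_codevectors, temps, iters_per_temp=30):
    # Deterministic annealing for VQ design: soft (Gibbs) assignments
    # replace the hard nearest-neighbor rule of the Lloyd algorithm,
    # and the temperature T is lowered according to a cooling schedule.
    codebook = np.tile(data.mean(axis=0), (num_codevectors, 1))
    for T in temps:
        # Small perturbation so coincident codevectors can split
        # once T drops below a critical temperature.
        codebook = codebook + 1e-2 * rng.standard_normal(codebook.shape)
        for _ in range(iters_per_temp):
            # Squared distance from every sample to every codevector.
            d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            # Gibbs association probabilities at temperature T
            # (min-shifted per row for numerical stability).
            p = np.exp(-(d2 - d2.min(axis=1, keepdims=True)) / T)
            p /= p.sum(axis=1, keepdims=True)
            # Each codevector moves to its probability-weighted centroid.
            w = p.sum(axis=0)[:, None]
            codebook = (p.T @ data) / np.maximum(w, 1e-12)
    return codebook

data = rng.standard_normal((500, 2))
codebook = da_vq(data, num_codevectors=4,
                 temps=np.geomspace(2.0, 0.01, 15))
# At the final low temperature the assignment is effectively hard,
# so this measures ordinary nearest-neighbor quantization distortion.
d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
mse = d2.min(axis=1).mean()
```

The per-sample cost of computing the exponential Gibbs weights over all codevectors is what the paper's simplified assignment measures are designed to reduce.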