International Journal of Artificial Intelligence and Soft Computing
This paper presents the development and evaluation of a new approach to the design of optimised codebooks for vector quantisation (VQ). A reinforced learning (RL) strategy is proposed which exploits the advantages offered by fuzzy clustering algorithms, competitive learning, and knowledge of the training-vector and codevector configurations. RL is used as a pre-process before a conventional VQ algorithm such as the generalised Lloyd algorithm (GLA) or the fuzzy k-means (FKM) algorithm. At each RL iteration, codevectors move deliberately toward an improved codebook design; this is distinct from simulated annealing (SA) and genetic algorithm (GA) techniques, which introduce random variation into the movement of the codevectors. The new strategy reduces the likelihood that, in the final design, codevectors will be overcrowded in high-density regions of the training-vector space while too few settle in low-density regions. Experiments demonstrate that the codevectors consequently represent the training vectors more effectively and that the final codebook is nearer to the optimal solution in applications such as image compression. Both GLA and FKM have been found to yield improved codebook quality in this application when RL is used as a pre-process. The investigations also indicate that RL is insensitive to the selection of both the initial codebook and the learning-rate control parameter, which is the only parameter RL introduces beyond those of the standard FKM.
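The RL strategy itself is not detailed in the abstract, but the conventional baseline it precedes, the generalised Lloyd algorithm (GLA), can be sketched. The following is a minimal illustration, not the paper's method: it alternates a nearest-codevector assignment step with a centroid-update step on assumed 2-D training vectors, with the data, codebook size, and iteration count chosen purely for demonstration.

```python
import random

def gla(train, codebook, iters=20):
    """Refine a VQ codebook with the generalised Lloyd algorithm (GLA).

    train:    list of training vectors (tuples of floats)
    codebook: list of initial codevectors (tuples of floats)
    """
    for _ in range(iters):
        # Assignment step: partition training vectors by nearest codevector
        # (squared Euclidean distance).
        cells = [[] for _ in codebook]
        for v in train:
            j = min(range(len(codebook)),
                    key=lambda i: sum((a - b) ** 2
                                      for a, b in zip(v, codebook[i])))
            cells[j].append(v)
        # Update step: move each codevector to the centroid of its cell;
        # an empty cell leaves its codevector unchanged.
        codebook = [
            tuple(sum(coord) / len(cell) for coord in zip(*cell)) if cell else cb
            for cell, cb in zip(cells, codebook)
        ]
    return codebook

random.seed(0)
# Two well-separated clusters of 2-D training vectors (illustrative data).
train = ([(random.gauss(0, 0.1), random.gauss(0, 0.1)) for _ in range(50)] +
         [(random.gauss(5, 0.1), random.gauss(5, 0.1)) for _ in range(50)])
codebook = gla(train, [(1.0, 1.0), (4.0, 4.0)])
```

As the abstract notes, GLA's final codebook depends on the initial codevectors; the proposed RL pre-process is intended to supply a better starting configuration before such an iteration is run.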