In this work we propose a recurrent multivalued network, a generalization of Hopfield's model, that can be interpreted as a vector quantizer. We describe the model and establish a relation between vector quantization and sum-of-squares clustering. To test the efficiency of this model as a vector quantizer, we apply the new technique to image compression. Two well-known images are used as benchmarks, allowing us to compare our model with standard competitive learning. In our simulations, the new technique clearly outperforms the classical vector quantization algorithm, achieving not only lower distortion but also a drastic reduction in computational time.
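To make the comparison concrete, the following is a minimal sketch of the classical baseline the abstract refers to: vector quantization as sum-of-squares clustering, learned here with batch k-means over flattened image blocks. The function name, parameters, and block layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def vq_codebook(blocks, k, iters=20, seed=0):
    """Learn a VQ codebook by batch k-means (sum-of-squares clustering).

    blocks: (n, d) array of flattened image blocks.
    Returns the (k, d) codebook and the total squared-error distortion.
    (Illustrative sketch, not the paper's recurrent multivalued network.)
    """
    rng = np.random.default_rng(seed)
    # Initialize codewords with k distinct training blocks.
    codebook = blocks[rng.choice(len(blocks), k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each block to its nearest codeword (squared distance).
        d2 = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # Move each codeword to the centroid of its assigned blocks.
        for j in range(k):
            members = blocks[labels == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    # Final distortion: sum of squared errors to the nearest codeword.
    d2 = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return codebook, d2.min(axis=1).sum()
```

Compressing an image then amounts to storing, for each block, only the index of its nearest codeword plus the codebook itself; distortion here is exactly the sum-of-squares clustering objective mentioned above.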