An important goal in neural map learning, which can conveniently be accomplished by magnification control, is to achieve information-optimal coding in the sense of information theory. In the present contribution we consider the winner-relaxing approach for the neural gas network. Originally, winner-relaxing learning is a slight modification of the self-organizing map learning rule that allows the magnification behavior to be adjusted by an a priori chosen control parameter. We transfer this approach to the neural gas algorithm. The magnification exponent can be calculated analytically for arbitrary dimension from a continuum theory, and the entropy of the resulting map is studied numerically, confirming the theoretical prediction. The influence of a diagonal term, which can be added without affecting the magnification, is also studied numerically. This approach to maps of maximal mutual information is attractive for applications, as the winner-relaxing term adds only computational cost of the same order and is easy to implement. In particular, it is not necessary to estimate the generally unknown data probability density, as required by other magnification-control approaches.
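To make the description concrete, the sketch below implements one update step of a winner-relaxing neural gas in NumPy. The abstract does not give the exact update rule, so the form of the relaxing term and of the optional diagonal term, as well as the names wrng_step, eps, lam, mu, and kappa, are assumptions for illustration; only the general structure (a rank-based neural gas step plus a winner-relaxing correction steered by a single a priori control parameter) follows the text.

```python
import numpy as np

def wrng_step(W, x, eps=0.05, lam=1.0, mu=0.1, kappa=0.0):
    """One winner-relaxing neural gas update step (illustrative sketch).

    The relaxing and diagonal terms below follow one plausible reading
    of the approach described in the abstract; the exact rule is not
    stated there.

    W     : (N, d) array of codebook vectors
    x     : (d,)   data sample
    eps   : learning rate
    lam   : range of the rank-based neighborhood kernel h(k) = exp(-k/lam)
    mu    : winner-relaxing control parameter (steers the magnification)
    kappa : diagonal-term weight (per the abstract, such a term can be
            added without affecting the magnification)
    """
    dists = np.linalg.norm(W - x, axis=1)
    ranks = np.argsort(np.argsort(dists))   # k_i: distance rank of unit i
    h = np.exp(-ranks / lam)                # neighborhood function h(k_i)
    winner = np.argmin(dists)

    # standard neural gas step: each unit moves toward x, weighted by rank
    delta = eps * h[:, None] * (x - W)

    # assumed winner-relaxing term: the winner is additionally pulled
    # opposite to the summed updates of all other units, scaled by mu
    others = np.arange(len(W)) != winner
    delta[winner] -= eps * mu * (h[others, None] * (x - W[others])).sum(axis=0)

    # assumed diagonal term acting on the winner itself
    delta[winner] += eps * kappa * (x - W[winner])
    return W + delta
```

Setting mu = 0 and kappa = 0 recovers the plain neural gas step, which gives a quick sanity check of the sketch; note that the extra terms only require sums over quantities the neural gas step already computes, so the added cost is of the same order as the base rule.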