Neural gas (NG) constitutes a very robust clustering algorithm that can be derived as stochastic gradient descent on a cost function closely connected to the quantization error. In the limit, an NG network samples the underlying data distribution. This connection is not linear; rather, it follows a power law whose magnification exponent differs from the information-theoretically optimal one for adaptive map formation. A couple of schemes exist to explicitly control the exponent, such as local learning, which requires only a small change to the NG learning algorithm. Batch NG constitutes a fast alternative optimization scheme for NG vector quantizers; it has been derived from the same cost function and constitutes a fast Newton optimization scheme. It possesses the same magnification factor (different from 1) as standard online NG. In this paper, we propose a method to integrate magnification control by local learning into batch NG. The key observation is a link between local learning and an underlying cost function, which opens the way towards alternative, e.g. batch, optimization schemes. We validate the learning rule derived from this altered cost function in an artificial experimental setting, and we demonstrate the benefit of magnification control for sampling rare events in a real data set.
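To make the batch scheme concrete, the following is a minimal sketch of batch NG in NumPy. It is not the authors' implementation: the function name, the rank-based neighborhood h(k) = exp(-k/lambda), the annealing schedule, and the optional per-sample weight vector (a stand-in for the density-dependent local-learning factor that controls magnification) are all assumptions for illustration.

```python
import numpy as np

def batch_ng(data, n_prototypes=10, n_iter=50,
             lambda_start=5.0, lambda_end=0.1,
             sample_weights=None, seed=0):
    """Batch neural gas sketch.

    Each iteration recomputes every prototype as a neighborhood-weighted
    mean of all samples, where a sample's weight for prototype i depends
    on the rank of i in the sorted distances to that sample.
    `sample_weights` (hypothetical) emulates local-learning magnification
    control by re-weighting samples, e.g. by an estimated density power.
    """
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    # Initialize prototypes on randomly chosen data points.
    W = data[rng.choice(len(data), n_prototypes, replace=False)].copy()
    w_x = np.ones(len(data)) if sample_weights is None else np.asarray(sample_weights, float)
    for t in range(n_iter):
        # Exponentially anneal the neighborhood range lambda.
        lam = lambda_start * (lambda_end / lambda_start) ** (t / max(1, n_iter - 1))
        # Pairwise distances: shape (n_samples, n_prototypes).
        d = np.linalg.norm(data[:, None, :] - W[None, :, :], axis=2)
        # Rank of each prototype per sample (0 = closest).
        ranks = np.argsort(np.argsort(d, axis=1), axis=1)
        # Neighborhood cooperation times the optional local-learning weight.
        h = np.exp(-ranks / lam) * w_x[:, None]
        # Batch update: weighted mean of the data for every prototype.
        W = (h.T @ data) / h.sum(axis=0)[:, None]
    return W
```

With uniform `sample_weights` this reduces to plain batch NG; plugging in weights proportional to a density estimate raised to some power m is one way to realize the local-learning idea in batch form, under the assumptions stated above.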