In this paper, we propose a new self-supervised learning method for competitive learning and self-organizing maps. In this model, a network enhances its own state, and this enhanced state is then imitated by another state of the same network. Concretely, we set up an enhanced state and a relaxed state, and the relaxed state tries to imitate the enhanced state as closely as possible by minimizing the free energy. To demonstrate the effectiveness of this method, we apply information enhancement learning to the SOM. For this purpose, we introduce collectiveness, in which all neurons collectively respond to input patterns, into the enhanced state. This enhanced and collective state is then imitated by the other, non-enhanced and relaxed state. We applied the method to an artificial data set and to three data sets from the well-known machine learning database. The experimental results showed that the resulting U-matrices were very similar to those produced by the conventional SOM, while better performance was obtained in terms of quantization and topographic errors. These results suggest that self-supervised learning can be applied to many other neural network models.
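The paper's exact free-energy formulation is not reproduced above, but the core idea of a relaxed state imitating an enhanced state can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes Gaussian-style competitive responses whose sharpness is controlled by an inverse-temperature parameter `beta`, and it measures imitation by the KL divergence from the enhanced (sharp, collective) response distribution to the relaxed (soft) one. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

def responses(X, W, beta):
    """Competitive responses p(j|x): a softmax over negative squared
    distances between inputs X (n, d) and neuron weights W (m, d),
    sharpened or relaxed by the inverse temperature beta."""
    d2 = ((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2)  # (n, m)
    logits = -beta * d2
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

def imitation_cost(X, W, beta_enhanced=4.0, beta_relaxed=1.0):
    """KL divergence from the enhanced state to the relaxed state,
    averaged over inputs -- the quantity the relaxed state would
    drive toward zero during learning."""
    p_enh = responses(X, W, beta_enhanced)
    p_rel = responses(X, W, beta_relaxed)
    eps = 1e-12  # avoid log(0)
    kl = (p_enh * (np.log(p_enh + eps) - np.log(p_rel + eps))).sum(axis=1)
    return kl.mean()
```

In this reading, training would adjust the relaxed state (or shared weights) so that `imitation_cost` decreases; when both states use the same temperature the cost vanishes, which matches the intuition that a fully imitated enhanced state incurs no free-energy penalty.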