In this paper, we propose a new learning method called "self-enhancement learning." In this method, a network enhances its own state, and this enhanced state is then imitated by another state of the network. The word "target" in our model denotes a target that is created spontaneously by the network, which must then try to attain it. Enhancement is realized by changing the Gaussian width, or enhancement parameter. With different enhancement parameters, we can set up different states of a network. In particular, we set up an enhanced state and a relaxed state, and the relaxed state tries to imitate the enhanced state as closely as possible. To demonstrate the effectiveness of this method, we apply self-enhancement learning to the SOM. For this purpose, we introduce collectiveness into the enhanced state, in which all neurons collectively respond to input patterns. This enhanced and collective state is then imitated by the non-enhanced, relaxed state. We applied the method to the Iris problem. Experimental results showed that the U-matrices obtained were very similar to those produced by the conventional SOM, but much better performance was obtained in terms of quantization and topographic errors. These results suggest that self-enhancement learning can be applied to many different neural network models.
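The core mechanism described above — two network states produced by different Gaussian widths, with the relaxed state imitating the enhanced one — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the specific width values, and the use of a KL divergence as the imitation measure are all assumptions introduced here for clarity.

```python
import numpy as np

def gaussian_activations(x, weights, sigma):
    """Normalized competitive activations of SOM neurons for input x.

    A small sigma yields a sharp (winner-dominated) response; a large
    sigma yields a collective response in which many neurons fire.
    """
    d2 = np.sum((weights - x) ** 2, axis=1)   # squared distance per neuron
    p = np.exp(-d2 / (2.0 * sigma ** 2))      # Gaussian response
    return p / p.sum()                        # normalize to a distribution

def imitation_divergence(x, weights, sigma_enhanced, sigma_relaxed):
    """KL divergence between the enhanced and relaxed network states.

    Minimizing this quantity over the weights would drive the relaxed
    state to imitate the enhanced (collective) state, in the spirit of
    self-enhancement learning.  (Hypothetical objective, for illustration.)
    """
    p = gaussian_activations(x, weights, sigma_enhanced)
    q = gaussian_activations(x, weights, sigma_relaxed)
    return float(np.sum(p * np.log(p / q)))

# Toy usage: three neurons on a line, one input pattern.
weights = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
x = np.array([0.0, 0.0])
div = imitation_divergence(x, weights, sigma_enhanced=2.0, sigma_relaxed=0.5)
```

The divergence is always non-negative and shrinks as the two states agree; in a training loop one would decrease it by gradient descent on the weights, alongside the usual SOM update.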