SMC'09 Proceedings of the 2009 IEEE international conference on Systems, Man and Cybernetics
In this paper, we propose a new information-theoretic approach to self-organizing maps. We have previously proposed mutual information maximization as a way to realize competitive learning. However, its computational complexity and its lack of fidelity to input patterns become serious problems when we try to apply it to self-organizing maps. To overcome these shortcomings, we introduce a free energy similar to that of statistical mechanics. With this free energy, we need not compute mutual information directly, which greatly simplifies the computational procedures. In addition, errors between targets and outputs are naturally built into the free energy. This property solves the fidelity problem of mutual information maximization. Minimizing the free energy lets us increase mutual information while paying due attention to the errors between targets and connection weights. To demonstrate the performance of our free energy method, we applied it to the well-known Iris problem. Experimental results showed that the feature maps obtained by free energy minimization were very similar to those produced by the conventional SOM.
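The abstract does not give the authors' exact formulation, but the idea it describes follows the standard free-energy (deterministic-annealing) scheme for competitive learning: a free energy F = -T Σ_x log Σ_j exp(-||x - w_j||² / T) whose minimization yields soft competitive assignments and weight updates that track the errors between inputs and connection weights. The following sketch is an illustration of that generic scheme, not the paper's actual algorithm; all function names, the temperature value, and the toy data are assumptions.

```python
import numpy as np

# Illustrative sketch (NOT the authors' method): free-energy minimization
# for soft competitive learning, in the deterministic-annealing style.
# F = -T * sum_x log sum_j exp(-||x - w_j||^2 / T)

def free_energy(X, W, T):
    """Free energy of inputs X (n, d) given weight vectors W (k, d)."""
    d2 = ((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2)  # (n, k)
    return -T * np.sum(np.log(np.exp(-d2 / T).sum(axis=1)))

def update_weights(X, W, T):
    """One fixed-point step: soft assignments, then weighted-mean update."""
    d2 = ((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2)
    logits = -d2 / T
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)             # soft assignment p(j|x)
    # each weight moves to the p-weighted mean of the inputs it attracts,
    # directly reducing the input/weight errors built into the free energy
    return (p.T @ X) / p.sum(axis=0)[:, None]

# Toy data: two well-separated Gaussian clusters (assumed for illustration).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 0.3, (30, 2)),
               rng.normal(2.0, 0.3, (30, 2))])
W0 = rng.normal(0.0, 1.0, (2, 2))

W = W0.copy()
for _ in range(50):
    W = update_weights(X, W, T=0.5)
```

Each fixed-point step is an EM-style update for this energy, so the free energy is non-increasing over iterations; after convergence the two weight vectors sit near the two cluster centers.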