Information-Theoretic Competitive Learning with Inverse Euclidean Distance Output Units
Neural Processing Letters
In this paper, we propose free energy-based competitive learning and its computational method, called minimum information production learning. The free energy is introduced to overcome a fundamental problem of information-theoretic competitive learning, namely, fidelity to input patterns. The mutual information maximization developed so far for competitive learning is unconstrained maximization, which means that the final connection weights are not always faithful to the input patterns. The free energy, with its built-in cost function, has proved very useful for obtaining faithful representations. However, with the free energy there are cases where mutual information degrades in the later stages of learning. The new computational method of minimum information production learning is introduced to stabilize learning in these later stages. We applied the method to the well-known Iris problem and a student survey. In both cases, we succeeded in improving performance in terms of training and generalization errors. In addition, we found that when information could not be increased, minimum information production learning made it possible to stabilize the learning process.
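The abstract does not state the free energy objective explicitly. A common formulation in free energy-based competitive learning assigns inputs to competitive units with a softmax (Gibbs) distribution over negative squared distances and minimizes the resulting free energy. The sketch below is a minimal illustration under that assumption only; the function name, the temperature parameter `T`, and the EM-style weight update are ours for illustration, not the paper's method.

```python
import numpy as np

def free_energy_competitive_learning(X, n_units=3, T=1.0, epochs=100, seed=0):
    """Hypothetical sketch: minimize the free energy
    F = -T * sum_n log sum_j exp(-||x_n - w_j||^2 / T)
    over unit weights w_j, using soft (Gibbs) assignments of inputs to units."""
    rng = np.random.default_rng(seed)
    # initialize unit weights at randomly chosen input patterns
    W = X[rng.choice(len(X), n_units, replace=False)].copy()
    for _ in range(epochs):
        # squared Euclidean distances, shape (n_samples, n_units)
        d2 = ((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2)
        # softmax responsibilities p(j | x_n): softer for larger T
        logits = -d2 / T
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)
        # EM-style update: move each weight to the responsibility-weighted
        # mean of the inputs, which keeps weights faithful to the data
        W = (p.T @ X) / p.sum(axis=0)[:, None]
    # free energy of the final configuration
    F = -T * np.logaddexp.reduce(-d2 / T, axis=1).sum()
    return W, p, F
```

As `T` shrinks, the responsibilities approach hard winner-take-all competition; the free energy term is what constrains the weights to remain close to the input patterns, which is the fidelity property the abstract emphasizes.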