In this paper, we introduce costs into the framework of information maximization and aim to maximize the ratio of information to its associated cost. Competitive learning can be realized by maximizing the mutual information between input patterns and competitive units. One shortcoming of this method is that maximizing information does not necessarily produce representations faithful to the input patterns: information maximization focuses primarily on those parts of the input patterns that serve to distinguish patterns from one another. We therefore introduce a cost, defined as the distance between input patterns and connection weights, and maximize the ratio of information to this cost. By maximizing the ratio, the final connection weights come to reflect the input patterns well. We applied the unsupervised method to a voting-attitude problem and the supervised method to a chemical data analysis. Experimental results confirmed that maximizing the ratio decreases the cost while improving generalization performance.
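The idea of the ratio objective can be sketched numerically. The following is a minimal illustration, not the paper's actual algorithm: it assumes a softmax activation rule over negative squared input-weight distances for the competitive units (the paper's exact activation form may differ), estimates the mutual information between inputs and units, defines the cost as the expected squared input-weight distance, and performs naive finite-difference gradient ascent on the information-to-cost ratio. All function names and parameter values (`beta`, the learning rate, the toy two-cluster data) are illustrative choices, not taken from the source.

```python
import numpy as np

def competitive_outputs(X, W, beta=2.0):
    # p(j|x): softmax over negative squared distances between input
    # patterns X (N, D) and connection weights W (M, D).
    # This activation rule is an assumption for illustration.
    d2 = ((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=-1)   # (N, M)
    e = np.exp(-beta * d2)
    return e / e.sum(axis=1, keepdims=True), d2

def info_cost_ratio(X, W, beta=2.0, eps=1e-12):
    # Mutual information I(X; J) between inputs and competitive units,
    # divided by the expected input-weight distance (the "cost").
    P, d2 = competitive_outputs(X, W, beta)
    pj = P.mean(axis=0)                                  # marginal p(j)
    H_marg = -(pj * np.log(pj + eps)).sum()              # H(J)
    H_cond = -(P * np.log(P + eps)).sum(axis=1).mean()   # H(J|X)
    I = H_marg - H_cond                                  # I(X; J)
    cost = (P * d2).sum(axis=1).mean()                   # expected distortion
    return I / (cost + eps)

# Toy demo: two Gaussian clusters in 2-D, two competitive units.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 0.2, (50, 2)),
               rng.normal(+1.0, 0.2, (50, 2))])
W = rng.normal(0.0, 0.5, (2, 2))

r0 = info_cost_ratio(X, W)

# Naive finite-difference gradient ascent on the ratio (illustrative only;
# a practical method would use analytic gradients).
h, lr = 1e-4, 0.1
for _ in range(200):
    g = np.zeros_like(W)
    for idx in np.ndindex(W.shape):
        Wp = W.copy()
        Wp[idx] += h
        g[idx] = (info_cost_ratio(X, Wp) - info_cost_ratio(X, W)) / h
    W += lr * g

r1 = info_cost_ratio(X, W)
```

After ascent, the weights move toward the cluster centers: the expected distortion (cost) drops while the units specialize, so the information-to-cost ratio rises, which is the qualitative behavior the abstract describes.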