Automatic inference of cabinet approval ratings by information-theoretic competitive learning
ICONIP'06 Proceedings of the 13th international conference on Neural Information Processing - Volume Part II
In this paper, we propose a new information-theoretic competitive learning method. We first construct the method for single-layered networks and then extend it to supervised multi-layered networks. Competitive unit outputs are computed as the inverse of the Euclidean distance between input patterns and connection weights: the smaller the distance, the stronger the output. Competition is realized neither by the winner-take-all algorithm nor by lateral inhibition; instead, the method maximizes the mutual information between input patterns and competitive units. In maximizing mutual information, the entropy of the competitive units is increased as much as possible, which means that all competitive units are used equally, so no under-utilized or dead neurons are generated. With multi-layered networks, noise tolerance can be improved by unifying information maximization and minimization. We applied the single-layered version to a simple artificial data problem and an actual road classification problem. In both cases, experimental results confirmed that the method produces final solutions almost independently of initial conditions and that classification performance is significantly improved. We then applied the multi-layered version to a character recognition problem and a political data analysis. In these problems, noise tolerance was improved by reducing the information content of the input patterns to an appropriate level.
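The two ingredients described above, inverse-distance unit outputs and the mutual information objective, can be sketched as follows. This is a minimal NumPy illustration under stated assumptions (a uniform prior over input patterns and normalized outputs interpreted as firing probabilities), not the authors' implementation; the function names are ours.

```python
import numpy as np

def competitive_outputs(X, W, eps=1e-8):
    """Firing probabilities p(j|s) of competitive units.

    X: (S, D) input patterns; W: (M, D) connection weights.
    Each unit's raw output is the inverse Euclidean distance
    between the input pattern and its weight vector, so the
    smaller the distance, the stronger the output.
    """
    d = np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2)  # (S, M)
    v = 1.0 / (d + eps)                                        # inverse distance
    return v / v.sum(axis=1, keepdims=True)                    # normalize per pattern

def mutual_information(P):
    """I(units; inputs) = H(units) - H(units | inputs).

    P[s, j] = p(j|s); input patterns are assumed equiprobable,
    p(s) = 1/S. Maximizing I pushes the unit entropy H(units) up,
    so all units are used equally and no dead neurons arise.
    """
    S = P.shape[0]
    pj = P.mean(axis=0)                                 # marginal p(j)
    h_units = -np.sum(pj * np.log(pj + 1e-12))          # entropy of units
    h_cond = -np.sum(P * np.log(P + 1e-12)) / S         # conditional entropy
    return h_units - h_cond
```

In a full learning loop, the weights `W` would be updated by gradient ascent on this mutual information; the sketch only shows how the objective is evaluated from the inverse-distance outputs.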