In this paper, we propose a new information-theoretic approach to competitive learning and self-organizing maps. We use several information-theoretic measures, such as conditional information and information loss, to extract the main features of input patterns. For each competitive unit, the conditional information content measures how much information about the input patterns that unit contains. In addition, to detect the importance of each input variable, we introduce information losses. The information loss of a variable is defined as the difference between the information obtained with all input units and the information obtained when that input unit is deleted. We applied the information loss to conventional competitive learning and showed that it extracts distinctive features. We then analyzed self-organizing maps using the conditional information and the information loss. Experimental results showed that the main features of the input patterns were detected more clearly.
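The abstract does not give concrete formulas, but the information-loss idea can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes that a competitive unit's firing probability p(j|x) is a softmax over negative squared distances to its weight vector, that patterns have a uniform prior, and that "deleting" an input unit means masking that variable out of the distance computation. All function names (`unit_probs`, `mutual_information`, `information_loss`) are illustrative.

```python
import numpy as np

def unit_probs(X, W, mask=None):
    """p(j|x): softmax over negative squared distances between each
    pattern x and each competitive unit's weight vector w_j.
    `mask` selects which input variables participate, which lets us
    recompute the probabilities with one input unit deleted."""
    if mask is None:
        mask = np.ones(X.shape[1], dtype=bool)
    d2 = ((X[:, None, mask] - W[None, :, mask]) ** 2).sum(axis=2)
    e = np.exp(-d2)
    return e / e.sum(axis=1, keepdims=True)

def mutual_information(P):
    """I(units; patterns) under a uniform prior over the S patterns,
    where P[s, j] = p(j | pattern s)."""
    ps = 1.0 / P.shape[0]
    pj = P.mean(axis=0)  # marginal firing probability p(j)
    return ps * np.sum(P * np.log(P / pj + 1e-12))

def information_loss(X, W, k):
    """Information with all input units minus information with
    input unit k deleted."""
    mask = np.ones(X.shape[1], dtype=bool)
    mask[k] = False
    return (mutual_information(unit_probs(X, W))
            - mutual_information(unit_probs(X, W, mask)))
```

A variable whose deletion sharply reduces the mutual information between patterns and competitive units would, under this reading, be judged important for the features the network has extracted.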