Feature Discovery by Enhancement and Relaxation of Competitive Units
IDEAL '08 Proceedings of the 9th International Conference on Intelligent Data Engineering and Automated Learning
In this paper, we propose a new information-theoretic method called "enhancement learning" to interpret the configuration of competitive networks. When applied to self-organizing maps, the method aims to make clusters in the data visible at different levels of detail. In enhancement learning, connection weights are actively modified to enhance competitive units for better interpretation, in the extreme case at the expense of quantization error, because error minimization is not the main objective. Once the connection weights have been modified, enhancement learning can generate many network configurations simply by varying the enhancement parameter. A useful way to combine the information in these configurations is to extract features common to all of them as well as features specific to some of them. In addition, we propose relative information, namely, mutual information that takes into account the corresponding errors between input patterns and connection weights. Relative information provides a guideline for deciding which network configuration, among the many possibilities, deserves the closest attention. We applied the method to an artificial data set, the well-known Iris problem, the Haberman survival data and a cancer data set. In all of these problems, the experimental results confirmed that, as the enhancement parameter was increased, multiple configurations were generated in which the number of boundaries visible in the U-matrices and component planes increased. In addition, relative information proved effective in suggesting the appropriate number of clusters.
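The abstract gives no update equations, but it describes a distance-based competitive scheme whose unit activations are sharpened by the enhancement parameter, with quantization error deliberately sacrificed. The following is a minimal sketch of that idea, assuming softmax firing probabilities over negative squared input-weight distances and a probability-weighted batch update; the function names and the exact functional forms are illustrative assumptions on my part, not the authors' equations.

```python
import numpy as np

def enhanced_firing_probs(X, W, alpha):
    """Firing probabilities p(j|s) of competitive units.

    Assumed form: a softmax over negative squared input-weight
    distances, sharpened by the enhancement parameter `alpha`
    (larger alpha -> more selective, more strongly enhanced units).
    X: (S, D) input patterns; W: (J, D) connection weights.
    """
    # squared Euclidean distances between every pattern and every unit
    d2 = ((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2)  # (S, J)
    logits = -alpha * d2
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

def enhancement_step(X, W, alpha):
    """One batch update: move each unit to the probability-weighted
    mean of the inputs. Error minimization is not the objective, so
    large alpha trades quantization error for interpretability."""
    P = enhanced_firing_probs(X, W, alpha)   # (S, J)
    mass = P.sum(axis=0)[:, None]            # soft assignment mass per unit
    return (P.T @ X) / np.maximum(mass, 1e-12)
```

Sweeping the enhancement parameter (say, alpha in {0.5, 1.0, 5.0}) would then yield the family of network configurations the abstract describes, whose U-matrices and component planes can be compared for common and configuration-specific features.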
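Similarly, the abstract characterizes relative information as mutual information that takes the errors between input patterns and connection weights into account. A minimal sketch, assuming those errors enter through the same distance-based firing probabilities as above and a uniform distribution over input patterns (both assumptions, not the paper's stated definition):

```python
def relative_information(X, W, alpha):
    """Mutual information I(units; patterns) computed from the
    distance-based firing probabilities, so that input-weight
    errors shape the estimate. Assumes uniform p(s) = 1/S."""
    P = enhanced_firing_probs(X, W, alpha)   # p(j|s), shape (S, J)
    pj = P.mean(axis=0)                      # p(j) = (1/S) * sum_s p(j|s)
    eps = 1e-12
    # I = sum_s p(s) sum_j p(j|s) * log( p(j|s) / p(j) )
    return (P * (np.log(P + eps) - np.log(pj + eps)[None, :])).sum(axis=1).mean()
```

Scanning this quantity across enhancement parameter values would give the kind of guideline the abstract mentions: configurations where relative information behaves distinctively are candidates for closer inspection, which is how the appropriate number of clusters might be suggested.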