In this paper, we propose a new information-theoretic method, called "selective information enhancement learning," to explicitly interpret the final representations created by learning. More specifically, we aim to make the class boundaries obtained by learning as overt as possible by selecting a small number of important variables. Variable selection is performed by information enhancement, in which mutual information between input patterns and competitive units is measured while focusing on a specific input variable. The larger this information is, the higher the importance of the variable. With the selected important variables, the network is retrained by free energy minimization, which yields connection weights that reflect the importance of those variables. We applied the method to an artificial data problem, the Senate problem, and the voting attitude problem, all of which are easily available for reproduction. Experimental results for all three problems showed that clear class boundaries could be obtained with a smaller number of variables. In addition, we observed that a small number of input variables tended to carry the majority of the information on the input patterns, a tendency that became more pronounced when the network size was large.
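The core variable-ranking step described above can be sketched in a few lines. The sketch below is an illustration, not the paper's implementation: competitive units respond to input patterns through a softmax over negative squared distances, "enhancement" of a variable is modeled here as amplifying that variable's contribution to the distance (the `alpha` knob is an assumption), and the mutual information between patterns and units is then compared across enhanced variables. A class-separating variable should yield higher enhancement information than a noise variable.

```python
import numpy as np

def competitive_outputs(X, W, focus=None, alpha=4.0):
    """p(j|x): softmax over negative squared distances to unit weights W.
    When `focus` is a variable index, that dimension's distance term is
    amplified by `alpha` (a hypothetical way to model information
    enhancement of one variable)."""
    scale = np.ones(X.shape[1])
    if focus is not None:
        scale[focus] = alpha
    # squared weighted distance from every pattern to every unit
    d2 = ((X[:, None, :] - W[None, :, :]) ** 2 * scale).sum(axis=2)
    e = np.exp(-d2)
    return e / e.sum(axis=1, keepdims=True)

def enhancement_information(X, W, focus=None):
    """Mutual information between input patterns and competitive units,
    I = (1/N) sum_x sum_j p(j|x) log(p(j|x)/p(j))."""
    P = competitive_outputs(X, W, focus)   # p(j|x), shape (N, M)
    pj = P.mean(axis=0)                    # marginal firing rate p(j)
    return (P * np.log(P / pj)).mean(axis=0).sum()

rng = np.random.default_rng(0)
# toy data: variable 0 separates two clusters, variable 1 is pure noise
X = np.vstack([rng.normal([-1.0, 0.0], 0.3, (50, 2)),
               rng.normal([+1.0, 0.0], 0.3, (50, 2))])
W = np.array([[-1.0, 0.0], [1.0, 0.0]])    # two competitive units

scores = [enhancement_information(X, W, focus=k) for k in range(2)]
# ranking the variables by `scores` flags variable 0 as the important one
```

In this toy setting, enhancing the class-separating variable sharpens the units' responses and raises the mutual information, while enhancing the noise variable does not; the retraining stage of the method would then keep only the highly ranked variables.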