In this paper, we propose a new information-theoretic method to explicitly interpret the final representations created by learning. The new method, called "selective enhancement learning," aims to produce explicit representations with fewer input variables. Variable selection is performed by information enhancement, in which mutual information is measured while a specific variable is enhanced; the larger this information grows, the more important the variable is judged to be. With the selected important variables, the network is retrained by free energy minimization, which yields connection weights that take the importance of the specific variables into account. When we applied the method to the Senate problem, experimental results showed that clear representations could be obtained with a smaller number of variables, and this tendency was more pronounced for larger networks.
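The abstract does not give implementation details, but the variable-selection step it describes amounts to scoring each input variable by mutual information and keeping the most informative ones. The sketch below is a rough illustration under assumptions not stated in the abstract: mutual information is estimated by histogram discretization, and the function names (`mutual_information`, `select_variables`) are hypothetical, not the authors' API.

```python
import numpy as np

def mutual_information(x, y, bins=10):
    """Estimate I(X; Y) between a continuous input variable x and
    discrete class labels y via histogram discretization (an assumed
    estimator; the paper's enhancement-based measure may differ)."""
    # Discretize x into histogram bins.
    edges = np.histogram_bin_edges(x, bins=bins)
    x_binned = np.digitize(x, edges)          # values in 0 .. bins+1
    classes = {c: i for i, c in enumerate(np.unique(y))}
    # Build the joint distribution p(x_bin, y).
    joint = np.zeros((bins + 2, len(classes)))
    for xb, yc in zip(x_binned, y):
        joint[xb, classes[yc]] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)     # marginal p(x_bin)
    py = joint.sum(axis=0, keepdims=True)     # marginal p(y)
    nz = joint > 0                            # avoid log(0)
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

def select_variables(X, y, k):
    """Rank the columns of X by mutual information with the labels
    and return the indices of the k most informative variables."""
    scores = [mutual_information(X[:, j], y) for j in range(X.shape[1])]
    return np.argsort(scores)[::-1][:k]
```

In this reading, a network would then be retrained on only the selected columns; the free-energy-minimization retraining itself is specific to the paper and is not reproduced here.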