A neural network ensemble can significantly improve the generalization ability of neural-network-based systems. However, its comprehensibility is even worse than that of a single neural network, because it comprises a collection of individual networks. In this paper, an approach named REFNE is proposed to improve the comprehensibility of trained neural network ensembles that perform classification tasks. REFNE uses the trained ensemble to generate instances and then extracts symbolic rules from those instances. It gracefully breaks ties among the predictions of the individual networks, and it employs a specific discretization scheme, rule form, and fidelity evaluation mechanism. Experiments show that, depending on its configuration, REFNE can extract either rules with high fidelity that explain the function of the trained ensemble well, or rules with strong generalization ability that predict even better than the trained ensemble itself.
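The core loop the abstract describes (query the trained ensemble as an oracle to label generated instances, then measure how faithfully an extracted rule reproduces the ensemble's behavior) can be sketched as follows. This is a minimal illustration, not the REFNE algorithm itself: the three threshold "members", the majority vote, and the hand-written candidate rule are all assumptions standing in for trained neural networks and a real rule-search procedure.

```python
import random

# Toy "ensemble members": hypothetical stand-ins for trained neural
# networks (an assumption for illustration only).
def member_a(x): return int(x[0] > 0.5)
def member_b(x): return int(x[0] + x[1] > 0.9)
def member_c(x): return int(x[1] > 0.4)

def ensemble_predict(x):
    """Majority vote over the three members (no ties with an odd count)."""
    votes = member_a(x) + member_b(x) + member_c(x)
    return int(votes >= 2)

def generate_labeled_instances(n, seed=0):
    """Sample random instances and label them with the ensemble,
    mirroring the use of the trained ensemble as a labeling oracle."""
    rng = random.Random(seed)
    return [((rng.random(), rng.random()),) for _ in range(n)] and [
        (x, ensemble_predict(x))
        for x in [(rng.random(), rng.random()) for _ in range(n)]
    ]

def fidelity(rule, data):
    """Fraction of ensemble labels that the extracted rule reproduces."""
    agree = sum(1 for x, y in data if rule(x) == y)
    return agree / len(data)

data = generate_labeled_instances(1000)
# A hand-written candidate rule; a real extractor would search for rules
# over discretized attributes rather than fixing one in advance.
candidate_rule = lambda x: int(x[0] > 0.5)
print(f"fidelity: {fidelity(candidate_rule, data):.2f}")
```

A rule with fidelity close to 1.0 explains the ensemble's function well; a rule evaluated against true labels instead of ensemble labels would measure generalization, which is the second usage the abstract mentions.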