A neural network ensemble can significantly improve the generalization ability of neural-network-based systems. However, its comprehensibility is even worse than that of a single neural network, because it comprises a collection of individual networks. This paper proposes an approach named REFNE to improve the comprehensibility of trained neural network ensembles that perform classification tasks. REFNE uses the trained ensemble to generate instances and then extracts symbolic rules from those instances. It gracefully breaks ties among the predictions of the individual networks, and it employs a specific discretization scheme, rule form, and fidelity evaluation mechanism. Experiments show that, depending on the configuration, REFNE can extract either rules with high fidelity that explain well the function of the trained ensemble, or rules with generalization ability even stronger than that of the trained ensemble itself.
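The pipeline the abstract describes (train an ensemble, use it to label freshly generated instances, then learn symbolic rules from those instances and measure fidelity) can be sketched roughly as follows. This is a minimal illustration, not the authors' REFNE implementation: it assumes scikit-learn, uses a bagged-MLP ensemble as the neural network ensemble, and stands in a shallow decision tree for REFNE's rule learner.

```python
# Hypothetical sketch of rule extraction from a trained ensemble
# (NOT the authors' REFNE algorithm; a decision tree stands in for
# REFNE's symbolic rule learner).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=300, n_features=4, random_state=0)

# 1. Train a neural network ensemble (here: bagged MLPs); majority
#    voting inside the bagging wrapper combines the members.
ensemble = BaggingClassifier(
    MLPClassifier(hidden_layer_sizes=(8,), max_iter=500, random_state=0),
    n_estimators=5, random_state=0).fit(X, y)

# 2. Generate new instances and label them with the ensemble's
#    predictions, so the rule learner sees the ensemble's function.
X_new = rng.uniform(X.min(axis=0), X.max(axis=0), size=(1000, X.shape[1]))
y_new = ensemble.predict(X_new)

# 3. Learn comprehensible rules from the ensemble-labelled instances.
rules = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_new, y_new)
print(export_text(rules))

# 4. Fidelity: how often the extracted rules agree with the ensemble
#    on the generated instances.
fidelity = (rules.predict(X_new) == y_new).mean()
print(f"fidelity on generated instances: {fidelity:.2f}")
```

The printed tree can be read as if-then rules over the input features; the fidelity score quantifies how faithfully those rules reproduce the ensemble's behaviour, mirroring the fidelity evaluation the abstract mentions.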