The objective of this study is to build a neural network classifier that is not only reliable but also, unlike most presently available neural networks, logically interpretable in a human-plausible manner. Most current studies of rule extraction from trained neural networks focus on extracting rules from existing network models that were designed without rule extraction in mind; after training, such models are effectively used as black boxes, which makes rule extraction a hard task. In this study we construct a neural network ensemble designed from the outset with rule extraction in mind, so that the function of the ensemble can be readily interpreted to generate logical rules that are understandable to humans. We believe that the interpretability of neural networks improves their reliability and usability when they are applied to critical real-world problems.
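To make the idea of interpreting a trained unit as a logical rule concrete, here is a minimal illustrative sketch (not the paper's ensemble method): a single trained threshold unit with assumed weights and bias is checked, by exhaustive enumeration of its binary inputs, against candidate "at least M of N inputs active" rules until an exactly equivalent one is found.

```python
# Illustrative sketch only: extracting an M-of-N rule from a single
# trained threshold unit over binary inputs. Weights and bias are
# assumed example values, not taken from the study.
from itertools import product

weights = [1.2, 1.1, 0.9, 1.0]   # roughly equal positive weights
bias = -2.5                      # unit fires when weighted sum + bias > 0

def unit(x):
    """Output of the threshold unit for a binary input vector x."""
    return sum(w * xi for w, xi in zip(weights, x)) + bias > 0

def matches_m_of_n(m):
    """True if 'at least m of the 4 inputs are on' reproduces the unit
    on every one of the 2^4 binary input vectors."""
    return all((sum(x) >= m) == unit(x) for x in product([0, 1], repeat=4))

# Smallest m whose M-of-N rule is exactly equivalent to the unit.
m = next(m for m in range(5) if matches_m_of_n(m))
print(f"extracted rule: at least {m} of 4 inputs active")
```

Because the weights are nearly equal, the unit's behaviour collapses to a simple count-based rule; with these assumed values the search recovers "at least 3 of 4 inputs active". When no exact M-of-N rule exists, such enumeration fails, which is one reason networks built without rule extraction in mind are hard to interpret.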