Before symbolic rules are extracted from a trained neural network, the network is usually pruned so as to obtain more concise rules. Typical pruning algorithms require retraining the network, which incurs additional cost. This paper presents FERNN, a fast method for extracting rules from trained neural networks without network retraining. Given a fully connected trained feedforward network with a single hidden layer, FERNN first identifies the relevant hidden units by computing their information gains. For each relevant hidden unit, its activation values are divided into two subintervals such that the information gain is maximized. FERNN then finds the set of relevant network connections from the input units to this hidden unit by checking the magnitudes of their weights; connections with large weights are identified as relevant. Finally, FERNN generates rules that distinguish the two subintervals of the hidden activation values in terms of the network inputs. Experimental results show that the size and predictive accuracy of the tree generated are comparable to those of trees extracted by another method that prunes and retrains the network.
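The two core steps described above, splitting a hidden unit's activation values at the threshold that maximizes information gain, and keeping only input connections with large weight magnitudes, can be sketched as follows. This is a minimal illustration of the idea, not FERNN itself; the `fraction` cutoff in `relevant_inputs` is an assumed stand-in for the paper's exact magnitude criterion.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a sequence of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_split(activations, labels):
    """Find the threshold on one hidden unit's activation values that
    maximizes information gain over the training labels.
    Returns (threshold, information_gain)."""
    pairs = sorted(zip(activations, labels))
    base = entropy(labels)
    n = len(pairs)
    best_t, best_gain = None, -1.0
    for i in range(1, n):
        if pairs[i - 1][0] == pairs[i][0]:
            continue  # no valid cut point between equal activation values
        left = [y for _, y in pairs[:i]]
        right = [y for _, y in pairs[i:]]
        gain = (base
                - (len(left) / n) * entropy(left)
                - (len(right) / n) * entropy(right))
        if gain > best_gain:
            best_gain = gain
            # place the threshold midway between the two activations
            best_t = (pairs[i - 1][0] + pairs[i][0]) / 2
    return best_t, best_gain

def relevant_inputs(weights, fraction=0.5):
    """Keep input connections whose |weight| is at least `fraction` of the
    largest magnitude (the cutoff rule here is an assumption, used only
    to illustrate the magnitude check)."""
    top = max(abs(w) for w in weights)
    return [i for i, w in enumerate(weights) if abs(w) >= fraction * top]
```

For example, activations `[0.1, 0.2, 0.8, 0.9]` with labels `[0, 0, 1, 1]` split cleanly at 0.5 with a gain of 1 bit, marking the unit as relevant; rules would then be generated to predict, from the network inputs, on which side of 0.5 the unit's activation falls.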