Advances in Artificial Neural Systems
In the design of neural networks, choosing the proper size of a network for a given task is an important and practical problem. One popular approach is to start with an oversized network and prune it to a smaller one, so as to reduce computational complexity and improve generalization. This paper presents a pruning technique that uses a quantified sensitivity measure to remove as many of the least relevant neurons as possible from the hidden layer of a multilayer perceptron (MLP). The sensitivity of an individual neuron is defined as the expectation of its output deviation due to expected input deviation, taken over all inputs from a continuous interval, and the relevance of the neuron is defined as the product of its sensitivity value and the sum of the absolute values of its outgoing weights. The rationale for this relevance measure is that a neuron with lower relevance has less effect on its succeeding neurons and thus contributes less to the entire network. Pruning proceeds by iteratively training the network to a given performance criterion and then removing the hidden neuron with the lowest relevance value, until no more neurons can be removed. The technique is novel in both its quantified sensitivity measure and its relevance measure. Experimental results demonstrate the effectiveness of the pruning technique.
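The relevance computation described above can be sketched as follows. This is only an illustrative approximation, not the paper's exact formulation: the expectation over input deviations is estimated here by Monte Carlo sampling of uniform perturbations, the activation is assumed to be `tanh`, and all function and parameter names (`hidden_relevance`, `prune_least_relevant`, `delta`, `n_samples`) are hypothetical.

```python
import numpy as np

def hidden_relevance(W_in, b_in, W_out, X, delta=0.1, n_samples=200, seed=0):
    """Estimate the relevance of each hidden neuron in a one-hidden-layer MLP.

    Sensitivity of a hidden neuron is approximated (by Monte Carlo sampling,
    an assumption of this sketch) as the expected absolute deviation of its
    output when each input is perturbed by a deviation drawn uniformly from
    [-delta, delta].  Relevance is then sensitivity times the sum of the
    absolute outgoing weights, as described in the abstract.
    """
    rng = np.random.default_rng(seed)
    act = np.tanh                                  # assumed differentiable activation
    H = act(X @ W_in + b_in)                       # unperturbed hidden outputs
    sens = np.zeros(W_in.shape[1])
    for _ in range(n_samples):
        dX = rng.uniform(-delta, delta, size=X.shape)
        Hp = act((X + dX) @ W_in + b_in)           # perturbed hidden outputs
        sens += np.mean(np.abs(Hp - H), axis=0)    # accumulate output deviation
    sens /= n_samples                              # sensitivity per hidden neuron
    return sens * np.abs(W_out).sum(axis=1)        # relevance per hidden neuron

def prune_least_relevant(W_in, b_in, W_out, X):
    """Remove the hidden neuron with the lowest relevance value."""
    r = hidden_relevance(W_in, b_in, W_out, X)
    keep = np.arange(W_in.shape[1]) != int(np.argmin(r))
    return W_in[:, keep], b_in[keep], W_out[keep, :]
```

In the full procedure, `prune_least_relevant` would be alternated with retraining to the performance criterion until no further neuron can be removed without violating it.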