Hidden neuron pruning of multilayer perceptrons using a quantified sensitivity measure

  • Authors:
  • Xiaoqin Zeng; Daniel S. Yeung

  • Affiliations:
  • Department of Computer Science and Engineering, Hohai University, Nanjing, China; Department of Computing, Hong Kong Polytechnic University, Hong Kong

  • Venue:
  • Neurocomputing
  • Year:
  • 2006

Abstract

In the design of neural networks, choosing the proper size of a network for a given task is an important and practical problem. One popular approach is to start with an oversized network and then prune it to a smaller size, so as to achieve lower computational complexity and better generalization. This paper presents a pruning technique that uses a quantified sensitivity measure to remove as many hidden neurons as possible, namely those with the least relevance, from the hidden layer of a multilayer perceptron (MLP). The sensitivity of an individual neuron is defined as the expectation of its output deviation due to an expected input deviation, taken over all inputs from a continuous interval, and the relevance of the neuron is defined as the product of its sensitivity value and the sum of the absolute values of its outgoing weights. The idea behind this relevance measure is that a neuron with lower relevance should have less effect on its succeeding neurons and thus contribute less to the entire network. Pruning is performed by iteratively training the network to a given performance criterion and then removing the hidden neuron with the lowest relevance value, until no further neuron can be removed. The technique is novel in both its quantified sensitivity measure and its relevance measure. Experimental results demonstrate the effectiveness of the pruning technique.
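To make the procedure concrete, the following minimal Python sketch (not the authors' code) implements the relevance measure relevance_j = sensitivity_j × Σ_k |w_jk| and the train-then-prune loop for a single-hidden-layer MLP with sigmoid activations. The paper derives the sensitivity expectation analytically over a continuous input interval; here it is merely approximated by averaging output deviations over a batch of sampled inputs with a constant perturbation `input_dev`. The helpers `train_fn` and `meets_criterion`, the weight layout, and the perturbation scheme are all illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def estimate_sensitivity(w_in, inputs, input_dev=0.05):
    """Monte Carlo stand-in for the paper's sensitivity expectation:
    mean absolute output deviation of one hidden neuron when every
    input is perturbed by input_dev (bias omitted for brevity)."""
    base = sigmoid(inputs @ w_in)
    perturbed = sigmoid((inputs + input_dev) @ w_in)
    return np.mean(np.abs(perturbed - base))

def relevance(w_in, w_out, inputs, input_dev=0.05):
    """Relevance = sensitivity * sum of |outgoing weights|."""
    return estimate_sensitivity(w_in, inputs, input_dev) * np.sum(np.abs(w_out))

def prune_hidden_layer(W_in, W_out, inputs, train_fn, meets_criterion,
                       input_dev=0.05):
    """Iteratively remove the least relevant hidden neuron.

    W_in:  (n_inputs, n_hidden) input-to-hidden weights
    W_out: (n_hidden, n_outputs) hidden-to-output weights
    train_fn(W_in, W_out) -> trained (W_in, W_out)
    meets_criterion(W_in, W_out) -> bool (performance criterion reached)
    """
    # Train the initial oversized network to the performance criterion.
    W_in, W_out = train_fn(W_in, W_out)
    while W_in.shape[1] > 1:
        # Rank hidden neurons by relevance and pick the least relevant.
        rel = [relevance(W_in[:, j], W_out[j], inputs, input_dev)
               for j in range(W_in.shape[1])]
        j_min = int(np.argmin(rel))
        # Tentatively remove neuron j_min and retrain the smaller network.
        trial_in = np.delete(W_in, j_min, axis=1)
        trial_out = np.delete(W_out, j_min, axis=0)
        trial_in, trial_out = train_fn(trial_in, trial_out)
        if not meets_criterion(trial_in, trial_out):
            break  # removal breaks the criterion: keep the current network
        W_in, W_out = trial_in, trial_out
    return W_in, W_out
```

The stopping rule mirrors the abstract: a neuron is removed only if the retrained, smaller network still satisfies the performance criterion, so the loop halts as soon as no further neuron can be removed.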