Trainable fusion rules. II. Small sample-size effects

  • Authors:
  • Šarūnas Raudys

  • Affiliations:
  • Institute of Mathematics and Informatics, Akademijos 4, Vilnius 08663, Lithuania

  • Venue:
  • Neural Networks
  • Year:
  • 2006

Abstract

A detailed theoretical analysis of the small-sample properties of trainable fusion rules is performed to determine in which situations neural network ensembles can improve or degrade classification results. We consider small-sample effects, specific to multiple-classifier system design, in the two-category case for two important fusion rules: (1) the linear weighted average (weighted voting), realized either by the standard Fisher classifier or by a single-layer perceptron, and (2) the non-linear Behavior-Knowledge-Space method. The small-sample effects include: (i) training bias, i.e. the influence of learning-sample size on the generalization error of the base experts or of the fusion rule; (ii) optimistically biased outputs of the experts (the self-boasting effect); and (iii) the impact of sample size on determining the optimal complexity of the fusion rule. Correction terms developed to reduce the self-boasting effect are studied. It is shown that small learning sets increase the classification error of the expert classifiers and distort the correlation structure between their outputs. If the learning sets used to develop the expert classifiers are too small, non-trainable fusion rules can outperform more sophisticated trainable ones. A practical technique for fighting sample-size problems is noise injection, which reduces the fusion rule's complexity and diminishes the experts' boasting bias.
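
To make the first fusion rule concrete, the following is a minimal sketch, not the paper's experimental code: two base experts are trained on a small synthetic Gaussian sample, and their outputs are combined (a) by a non-trainable simple average and (b) by a trainable linear rule. LDA stands in for the standard Fisher classifier and logistic regression for the single-layer perceptron; the data generator, sample sizes, and feature split are illustrative assumptions.

```python
# Sketch of trainable linear fusion (weighted voting) vs. simple averaging.
# All dataset sizes and model choices are illustrative, not the paper's setup.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

def make_data(n):
    # Two Gaussian classes; small n mimics the small-sample regime studied.
    X0 = rng.normal(0.0, 1.0, size=(n, 8))
    X1 = rng.normal(0.7, 1.0, size=(n, 8))
    return np.vstack([X0, X1]), np.r_[np.zeros(n), np.ones(n)]

X_e, y_e = make_data(15)      # small expert training set
X_f, y_f = make_data(15)      # small fusion-rule training set
X_t, y_t = make_data(2000)    # large test set

# Two base experts, each seeing a disjoint half of the features
# (LDA as a stand-in for the Fisher classifier).
experts = [LinearDiscriminantAnalysis().fit(X_e[:, :4], y_e),
           LinearDiscriminantAnalysis().fit(X_e[:, 4:], y_e)]

def expert_outputs(X):
    # One P(class 1) column per expert: the inputs to the fusion rule.
    return np.column_stack([experts[0].predict_proba(X[:, :4])[:, 1],
                            experts[1].predict_proba(X[:, 4:])[:, 1]])

# (1) Non-trainable fusion: simple average of the expert outputs.
avg_pred = (expert_outputs(X_t).mean(axis=1) > 0.5).astype(float)

# (2) Trainable fusion: a linear weighting of expert outputs learned on X_f.
fusion = LogisticRegression().fit(expert_outputs(X_f), y_f)
fused_pred = fusion.predict(expert_outputs(X_t))

print("simple average error:", np.mean(avg_pred != y_t))
print("trained fusion error:", np.mean(fused_pred != y_t))
```

Note that the fusion rule is trained on a set (X_f) the experts never saw; if it were trained on X_e instead, the expert outputs there would be optimistically biased, which is the self-boasting effect the abstract refers to.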
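
The second fusion rule, Behavior-Knowledge-Space (BKS), can be sketched as a lookup table over the experts' crisp decisions. This is a minimal illustration under assumed crisp labels; the helper names and the majority-vote fallback for unseen cells are my own choices.

```python
# Sketch of the Behavior-Knowledge-Space (BKS) fusion rule over crisp labels.
from collections import Counter, defaultdict
import numpy as np

def bks_fit(decisions, y):
    # decisions: (n_samples, n_experts) array of crisp expert labels.
    # Each distinct decision tuple is a BKS cell; store its majority class.
    cells = defaultdict(Counter)
    for row, label in zip(map(tuple, decisions), y):
        cells[row][label] += 1
    return {row: c.most_common(1)[0][0] for row, c in cells.items()}

def bks_predict(table, decisions):
    out = []
    for row in map(tuple, decisions):
        if row in table:
            out.append(table[row])
        else:
            # Unseen cell: fall back to a plain majority vote of the experts
            # (ties resolved arbitrarily in this sketch).
            out.append(Counter(row).most_common(1)[0][0])
    return np.array(out)
```

With L classes and k experts the table has L^k cells, so a small learning set leaves many cells empty or noisily estimated, which is exactly the kind of small-sample sensitivity of the trainable rule that the paper analyzes.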
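
Finally, the noise injection idea can be sketched as replicating each fusion-training vector of expert outputs with small Gaussian perturbations before training the fusion rule; the noise scale and replica count below are illustrative assumptions, not values from the paper.

```python
# Sketch of noise injection for fusion-rule training: replicated, perturbed
# training vectors act as a regularizer on the trained fusion rule.
import numpy as np

def inject_noise(F, y, n_copies=10, sigma=0.05, seed=0):
    # F: (n_samples, n_experts) expert outputs; y: class labels.
    rng = np.random.default_rng(seed)
    F_rep = np.repeat(F, n_copies, axis=0)
    y_rep = np.repeat(y, n_copies)
    return F_rep + rng.normal(0.0, sigma, size=F_rep.shape), y_rep
```

Training the fusion rule on the perturbed copies smooths its decision boundary, lowering its effective complexity and blunting the optimistic bias of the expert outputs, in line with the abstract's claims for the technique.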