Multiple Comparison Procedures for Determining the Optimal Complexity of a Model

  • Authors:
  • Pedro L. Galindo; Joaquín Pizarro Junquera; Elisa Guerrero

  • Venue:
  • Proceedings of the Joint IAPR International Workshops on Advances in Pattern Recognition
  • Year:
  • 2000

Abstract

We aim to determine which of a set of competing models performs best on average, in a statistically sound way. One way to define "on average" is to consider the performance of each model averaged over all the training sets that might be drawn from the underlying distribution. When comparing more than two means, an ANOVA F-test indicates whether the means differ significantly, but not which means differ from each other. A simple approach is to test each possible pairwise difference with a paired t-test; however, the probability of making at least one Type I error increases with the number of tests performed. Multiple comparison procedures provide different solutions to this problem. We discuss these techniques and apply the well-known Bonferroni method to determine the optimal degree in polynomial fitting and the optimal number of hidden neurons in feedforward neural networks.
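
As an illustration of the kind of procedure the abstract describes, the following minimal sketch (not the authors' code) runs Bonferroni-corrected pairwise paired t-tests on per-training-set error estimates of competing models. The array layout, the function name `bonferroni_pairwise`, and the use of SciPy's `ttest_rel` are assumptions made for illustration only.

```python
# Minimal sketch (assumed, not from the paper): Bonferroni-corrected
# pairwise paired t-tests on errors of competing models.
# `errors` is assumed to be an (n_models, n_trials) array where each
# column holds the test errors of all models on the same resampled
# training set, so the comparisons are paired.
from itertools import combinations

import numpy as np
from scipy.stats import ttest_rel


def bonferroni_pairwise(errors, alpha=0.05):
    """Return model pairs whose mean errors differ significantly
    after a Bonferroni correction for multiple comparisons."""
    errors = np.asarray(errors)
    n_models = errors.shape[0]
    pairs = list(combinations(range(n_models), 2))
    # Bonferroni: divide the overall significance level by the number of tests.
    corrected_alpha = alpha / len(pairs)
    significant = []
    for i, j in pairs:
        # Paired t-test on matched trials (same training set for both models).
        _, p_value = ttest_rel(errors[i], errors[j])
        if p_value < corrected_alpha:
            significant.append((i, j, p_value))
    return significant


# Usage idea: errors[k] could hold the test error of a polynomial of
# degree k, or of a feedforward network with k hidden neurons, over
# repeated train/test resamplings drawn from the same distribution.
```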