Monitoring MLP's free parameters for generalization

  • Authors:
  • Hung Han Chen

  • Affiliations:
  • Graphion LLC, Jacksonville, FL

  • Venue:
  • AIKED'09 Proceedings of the 8th WSEAS International Conference on Artificial Intelligence, Knowledge Engineering and Data Bases
  • Year:
  • 2009

Abstract

Generalization is one of the major concerns in neural network training. In common practice, the number of weights in an MLP network is taken to be the number of free parameters. This assumption leads to the conclusion that large MLP networks will generalize poorly if their size exceeds the necessary capacity. However, an individual weight in an MLP network may not remain a free parameter, since the operating conditions of the hidden neurons change during the course of training. Studies have shown that larger networks appear to generalize as well as smaller networks, and sometimes even better. This paper therefore constructs a new perspective on an MLP's free parameters to address the issue of generalization.
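
To make the argument concrete, the sketch below estimates how many weights in a small tanh MLP still behave as free parameters by checking which hidden neurons have saturated on the training data. This is an illustrative example under assumed conventions, not the paper's method; the saturation thresholds, network sizes, and variable names are all hypothetical.

  # Minimal sketch (not the paper's method): count MLP weights that still act
  # as "free parameters" by checking whether each hidden neuron is saturated
  # on the training set. Thresholds and sizes are illustrative assumptions.
  import numpy as np

  rng = np.random.default_rng(0)

  # Toy data and a single-hidden-layer tanh MLP with random weights.
  X = rng.normal(size=(200, 5))                    # 200 samples, 5 inputs
  n_hidden = 20
  W1 = rng.normal(scale=2.0, size=(5, n_hidden))   # input -> hidden weights
  b1 = rng.normal(scale=2.0, size=n_hidden)        # hidden biases
  W2 = rng.normal(size=(n_hidden, 1))              # hidden -> output weights

  H = np.tanh(X @ W1 + b1)                         # hidden activations (200, 20)

  # A hidden unit whose activation magnitude stays near 1 for almost every
  # sample is saturated: its incoming weights barely affect the output, so
  # they no longer behave as free parameters.
  saturation = np.mean(np.abs(H) > 0.95, axis=0)   # per-unit saturation rate
  frozen_units = saturation > 0.9                  # assumed cutoff

  weights_per_unit = W1.shape[0] + 1               # incoming weights + bias
  total_params = n_hidden * weights_per_unit + W2.size
  frozen_params = frozen_units.sum() * weights_per_unit
  print("nominal parameters :", total_params)
  print("effectively frozen :", frozen_params)
  print("estimated free     :", total_params - frozen_params)

Monitoring such a count during training, rather than relying on the fixed weight count, is one way to interpret why oversized networks can still generalize well.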