Lessons in neural network training: overfitting may be harder than expected
AAAI'97/IAAI'97 Proceedings of the fourteenth national conference on artificial intelligence and ninth conference on Innovative applications of artificial intelligence
Fast training MLP networks with Lo-Shu data sampling
AIKED'09 Proceedings of the 8th WSEAS international conference on Artificial intelligence, knowledge engineering and data bases
Generalization is one of the major concerns in neural network training. In common practice, the number of weights in an MLP network is assumed to equal the number of free parameters. This assumption leads to the conclusion that large MLP networks will generalize poorly if their size exceeds the necessary capacity. However, an individual weight in an MLP network may not remain a free parameter, since the operating conditions of the hidden neurons change during the course of training. Studies have shown that larger networks appear to generalize as well as smaller ones, and sometimes better. This paper therefore constructs a new perspective on an MLP's free parameters to address the issue of generalization.
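The common practice described above counts every weight and bias as a free parameter. A minimal sketch of that count for a fully-connected MLP with one hidden layer (layer sizes here are illustrative, not taken from the paper):

```python
# Conventional parameter count for a one-hidden-layer MLP.
# Sizes below are hypothetical examples, not from the paper.
def mlp_param_count(n_in, n_hidden, n_out):
    # Each hidden neuron has n_in incoming weights plus a bias;
    # each output neuron has n_hidden incoming weights plus a bias.
    return (n_in + 1) * n_hidden + (n_hidden + 1) * n_out

print(mlp_param_count(4, 8, 3))  # → 67
```

The paper's point is that this nominal count can overstate the effective number of free parameters, since the roles of hidden-layer weights shift as training proceeds.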