Engineering Applications of Artificial Intelligence
In this paper, we evaluate different Early Stopping Rules (ESR), and combinations thereof, for halting the training of Multi-Layer Perceptrons (MLP) trained with stochastic gradient descent, also known as online error backpropagation, before a predefined maximum number of epochs is reached. We focus our evaluation on classification tasks, since most work uses MLPs for classification rather than regression. Early stopping is important for two reasons: on the one hand it prevents overfitting, and on the other hand it can dramatically reduce training time. Today there is a growing number of applications involving unsupervised and automatic training, e.g. in ensemble learning, where automatic stopping rules are necessary to keep training time low. The current literature is not specific about which rule to use, when to use it, or how robust it is; this paper therefore revisits the issue. We test on PROBEN1, a collection of UCI databases, and on MNIST.
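To make the idea of an automatic stopping rule concrete, the sketch below implements one well-known criterion, the generalization loss rule GL_alpha from the PROBEN1 report: stop once the validation error exceeds the best value seen so far by more than alpha percent. This is offered only as an illustrative example of an ESR; the specific rules and thresholds evaluated in the paper may differ, and the error sequence used here is a hypothetical placeholder, not experimental data.

```python
def generalization_loss(val_error, best_val_error):
    """GL(t) = 100 * (E_va(t) / E_opt(t) - 1), the relative increase
    of the current validation error over the best one so far, in percent."""
    return 100.0 * (val_error / best_val_error - 1.0)


def stopping_epoch(val_errors, alpha=5.0):
    """Scan per-epoch validation errors and return the stopping epoch.

    val_errors: validation error measured after each training epoch
                (stands in for a real MLP training loop).
    alpha:      stop once the generalization loss exceeds alpha percent.
    """
    best = float("inf")
    for epoch, err in enumerate(val_errors):
        if err < best:
            best = err  # new optimum on the validation set
        elif generalization_loss(err, best) > alpha:
            return epoch  # rule fires before the predefined max epochs
    return len(val_errors) - 1  # rule never fired; ran to the maximum


# Hypothetical run: validation error falls, then rises again (overfitting).
errors = [0.50, 0.40, 0.32, 0.30, 0.31, 0.345, 0.40]
stop = stopping_epoch(errors, alpha=5.0)  # GL exceeds 5% at epoch 5
```

Such a rule trades off training time against the risk of stopping at a local dip in the validation curve, which is precisely why comparing rules, and combinations of rules, is of practical interest.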