Bayesian methods for adaptive models
Neural networks and the bias/variance dilemma
Neural Computation
Bayesian Learning for Neural Networks
Neural Networks: Tricks of the Trade (an outgrowth of a 1996 NIPS workshop)
Asymptotic statistical theory of overtraining and cross-validation
IEEE Transactions on Neural Networks
Several issues in the generalization of ANN training are investigated through experiments on synthetic and real-world time series. A commonly accepted view holds that overfitting will not occur when the ratio of training sample size to the number of weights exceeds 30; however, the experiments show that overfitting still occurs even at ratios above 30. In cross-validated early stopping, the ratio of cross-validation set size to training set size has no significant impact on the test error; for stationary time series, 10% is a practical choice. Both the Bayesian regularization method and cross-validated early stopping are helpful when the ratio of training sample size to the number of weights is less than 20, but the performance of early stopping is highly variable. The Bayesian method outperforms early stopping in most cases, and when the training set is large it sometimes even outperforms training without stopping.
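The cross-validated early stopping procedure described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a plain gradient-descent fit of a linear model, a 10% validation holdout (the fraction the abstract suggests for stationary series), and a hypothetical `patience` parameter controlling how many epochs without validation improvement are tolerated before stopping.

```python
import numpy as np

def early_stopping_fit(X, y, val_frac=0.1, lr=0.01, max_epochs=1000, patience=20):
    """Gradient-descent linear fit with cross-validated early stopping.

    Holds out `val_frac` of the data as a validation set and stops
    training once validation MSE has not improved for `patience` epochs,
    returning the weights that achieved the best validation error.
    """
    rng = np.random.default_rng(0)          # fixed seed for a deterministic split
    n = len(X)
    idx = rng.permutation(n)
    n_val = max(1, int(n * val_frac))
    val_idx, tr_idx = idx[:n_val], idx[n_val:]
    Xtr, ytr = X[tr_idx], y[tr_idx]
    Xval, yval = X[val_idx], y[val_idx]

    w = np.zeros(X.shape[1])
    best_w, best_err, wait = w.copy(), np.inf, 0
    for epoch in range(max_epochs):
        # gradient of mean squared error on the training split
        grad = 2.0 * Xtr.T @ (Xtr @ w - ytr) / len(Xtr)
        w -= lr * grad
        val_err = np.mean((Xval @ w - yval) ** 2)
        if val_err < best_err:
            best_w, best_err, wait = w.copy(), val_err, 0
        else:
            wait += 1
            if wait >= patience:             # early stop: validation error plateaued
                break
    return best_w, best_err
```

Usage on synthetic data: generate a noisy linear target, fit with early stopping, and inspect the recovered weights and best validation error. The same monitoring loop applies unchanged to a neural network; only the model and its gradient change.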