A neural network ensemble method with jittered training data for time series forecasting
Information Sciences: an International Journal
Direct identification of structural parameters from dynamic responses with neural networks
Engineering Applications of Artificial Intelligence
The role of chaotic resonance in cerebellar learning
Neural Networks
Performance of deterministic learning in noisy environments
Neurocomputing
The possibility of improving the generalization capability of a neural network by introducing additive noise to the training samples is discussed. The network considered is a feedforward layered neural network trained with the back-propagation algorithm. Back-propagation training is viewed as nonlinear least-squares regression, and the additive noise is interpreted as generating a kernel estimate of the probability density that describes the training vector distribution. Two specific application types are considered: pattern classifier networks and estimation of a nonstochastic mapping from data corrupted by measurement errors. It is not proved that introducing additive noise to the training vectors always improves network generalization; however, the analysis suggests mathematically justified rules for choosing the characteristics of the noise when it is used in training. Results from mathematical statistics are used to establish various asymptotic consistency properties of the proposed method, and numerical simulations support the applicability of the training method.
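The idea described above can be sketched in code. The following is a minimal illustration, not the paper's implementation: a small feedforward network is trained by gradient descent on a toy regression task, and fresh additive Gaussian noise (jitter) is drawn for the training inputs at every epoch. Sampling a training point and then adding Gaussian noise is equivalent to sampling from a Gaussian-kernel density estimate centred on the training data, which is the interpretation the abstract refers to. The network size, learning rate, and noise standard deviation `sigma` are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: noisy samples of a smooth target function.
X = rng.uniform(-1, 1, size=(64, 1))
y = np.sin(np.pi * X) + 0.05 * rng.standard_normal(X.shape)

# One-hidden-layer network, trained with plain full-batch gradient descent.
W1 = rng.standard_normal((1, 16)) * 0.5
b1 = np.zeros(16)
W2 = rng.standard_normal((16, 1)) * 0.5
b2 = np.zeros(1)

lr = 0.05      # learning rate (illustrative)
sigma = 0.1    # std of the jitter added to the training inputs (illustrative)

for epoch in range(500):
    # Jitter: fresh additive Gaussian noise on the inputs each epoch.
    # Equivalent to drawing inputs from a Gaussian-kernel density
    # estimate centred on the training points.
    Xj = X + sigma * rng.standard_normal(X.shape)

    # Forward pass.
    h = np.tanh(Xj @ W1 + b1)
    pred = h @ W2 + b2

    # Backward pass for the squared-error loss.
    err = pred - y
    gW2 = h.T @ err / len(X)
    gb2 = err.mean(axis=0)
    gh = (err @ W2.T) * (1.0 - h**2)
    gW1 = Xj.T @ gh / len(X)
    gb1 = gh.mean(axis=0)

    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Evaluate on the clean (un-jittered) inputs.
h = np.tanh(X @ W1 + b1)
mse = float(np.mean((h @ W2 + b2 - y) ** 2))
```

Because the noise is redrawn every epoch, the network never sees the same perturbed sample twice, which is what smooths the fitted mapping rather than letting the network memorize the training points.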