The generalization ability of feedforward neural networks (NNs) depends on the size of the training set and the features of the training patterns. In theory, the best classification performance is obtained when all possible patterns are used to train the network, which is impossible in practice. In this paper a new noise injection technique is proposed: noise injection into the hidden neurons at the summation level. Assuming that the test patterns are drawn from the same population used to generate the training set, we show that noise injection into hidden neurons is equivalent to training with noisy input patterns (i.e., a larger training set). The simulation results indicate that networks trained with the proposed technique and networks trained with noisy input patterns have almost the same generalization and fault tolerance abilities. The learning time required by the proposed method is considerably less than that required by training with noisy input patterns, and it is almost the same as that required by standard backpropagation with normal input patterns.
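The proposed technique can be illustrated with a minimal sketch. Assuming a one-hidden-layer tanh network (the network shape, weight names, and noise level below are illustrative, not taken from the paper), the only change relative to a standard forward pass is that zero-mean Gaussian noise is added to the hidden summations before the activation function is applied:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, W1, b1, W2, b2, noise_std=0.0):
    """One forward pass of a one-hidden-layer MLP.

    When noise_std > 0, zero-mean Gaussian noise is injected into the
    hidden neurons at the summation level (pre-activation), as the
    abstract describes; with noise_std = 0 this is the ordinary pass.
    """
    s = x @ W1 + b1                                   # hidden summations
    if noise_std > 0.0:
        s = s + rng.normal(0.0, noise_std, size=s.shape)
    h = np.tanh(s)                                    # hidden activations
    return h @ W2 + b2                                # linear output layer

# Hypothetical tiny network: 3 inputs, 4 hidden units, 2 outputs.
W1 = rng.normal(size=(3, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 2)); b2 = np.zeros(2)
x = rng.normal(size=(5, 3))                           # batch of 5 patterns

clean = forward(x, W1, b1, W2, b2)                    # standard pass
noisy = forward(x, W1, b1, W2, b2, noise_std=0.1)     # noise-injected pass
```

During training, each pattern is presented with a fresh noise sample, so the network effectively sees a perturbed hidden representation on every epoch without the cost of generating and iterating over an enlarged noisy input set.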