In the last two decades, many online fault/noise injection algorithms have been developed to obtain fault-tolerant neural networks. However, little theoretical work on their convergence and objective functions has been reported. This paper studies six common fault/noise-injection-based online learning algorithms for radial basis function (RBF) networks: 1) injecting additive input noise, 2) injecting additive/multiplicative weight noise, 3) injecting multiplicative node noise, 4) injecting multiweight fault (random disconnection of weights), 5) injecting multinode fault during training, and 6) weight decay combined with injecting multinode fault. Based on the Gladyshev theorem, we show that these six online algorithms converge almost surely. Moreover, the true objective functions they minimize are derived. For injecting additive input noise during training, the objective function is identical to that of the Tikhonov regularizer approach. For injecting additive/multiplicative weight noise during training, the objective function is simply the mean square training error; hence injecting additive/multiplicative weight noise during training cannot improve the fault tolerance of an RBF network. Similar to injecting additive input noise, the objective functions of the other fault/noise-injection-based online algorithms contain a mean square error term plus a specialized regularization term.
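To make the first of these equivalences concrete, here is a minimal sketch of online training of a Gaussian RBF network with additive input noise injected at each step. This is not the paper's exact algorithm: the network size, learning rate, noise level, and target function are illustrative assumptions, and only the linear output weights are trained.

```python
# Sketch (assumed setup, not the paper's implementation): online LMS training
# of a Gaussian RBF network with additive input noise injection.
import numpy as np

rng = np.random.default_rng(0)

# Fixed RBF layer: M Gaussian basis functions on scalar inputs in [0, 1].
M, sigma, noise_std, lr = 10, 0.2, 0.1, 0.05
centers = np.linspace(0.0, 1.0, M)
w = np.zeros(M)                              # output weights, learned online

def phi(x):
    """Gaussian RBF activations for a scalar input x."""
    return np.exp(-(x - centers) ** 2 / (2.0 * sigma ** 2))

# Online learning loop: draw a sample, corrupt the input with additive
# Gaussian noise, then take a stochastic-gradient (LMS) step on the
# squared error evaluated at the noisy input.
for t in range(5000):
    x = rng.uniform(0.0, 1.0)
    y = np.sin(2.0 * np.pi * x)              # toy target, for illustration
    x_noisy = x + noise_std * rng.normal()   # additive input noise injection
    h = phi(x_noisy)
    err = y - w @ h
    w += lr * err * h
```

Averaging this update over the injected noise shows that, to second order in the noise level, the expected objective is the mean square error plus a noise-variance-weighted derivative penalty, which is the Tikhonov-regularization equivalence stated in the abstract.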