Recursive least squares (RLS)-based algorithms are a class of fast online training algorithms for feedforward multilayered neural networks (FMNNs). Although the standard RLS algorithm has an implicit weight decay term in its energy function, this weight decay effect decreases linearly as the number of learning epochs increases, so the regularization it provides fades as training progresses. In this paper, we derive two modified RLS algorithms to tackle this problem. The first, the true weight decay RLS (TWDRLS) algorithm, minimizes a modified energy function in which the weight decay effect remains constant regardless of the number of learning epochs. The second, the input perturbation RLS (IPRLS) algorithm, is derived by requiring the prediction performance to be robust against input perturbations. Simulation results show that both algorithms improve the generalization capability of the trained network.
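To make the diminishing-decay point concrete: in standard RLS, the only weight decay comes from the initialization of the inverse correlation matrix P(0) = I/delta, so the fixed delta*||w||^2 penalty is gradually swamped by the growing sum of squared errors. The sketch below shows standard RLS for a plain linear model. It is an illustration under our own assumptions (NumPy, a scalar-output linear model, a hypothetical helper named rls_fit), not code from the paper; the paper's actual TWDRLS and IPRLS derivations apply to multilayer networks and involve linearizing the network around the current weights.

```python
import numpy as np

def rls_fit(X, y, delta=1.0):
    """Standard RLS for a linear model y ~ w @ x.

    After t samples, w minimises  sum_{i<=t} (y_i - w @ x_i)**2 + delta * ||w||**2.
    The decay term is implicit in the initialisation P(0) = I / delta; because
    delta stays fixed while the squared-error sum keeps growing, the relative
    weight-decay effect shrinks as training progresses, which is the problem
    the paper's TWDRLS algorithm is designed to remove.
    """
    d = X.shape[1]
    w = np.zeros(d)
    P = np.eye(d) / delta               # inverse of the regularised Gram matrix
    for x, target in zip(X, y):
        g = P @ x / (1.0 + x @ P @ x)   # gain vector
        w = w + g * (target - w @ x)    # correct w using the a priori error
        P = P - np.outer(g, x @ P)      # Sherman-Morrison rank-1 downdate
    return w

# Toy usage: recover a known weight vector from noisy observations.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=500)
print(np.round(rls_fit(X, y), 2), np.round(w_true, 2))
```

TWDRLS, by contrast, modifies the energy function so that the decay term keeps pace with the accumulated error sum and its effect stays constant across epochs, while IPRLS obtains a comparable regularizing effect by optimizing for robustness to perturbed inputs; the recursive updates for either variant follow the paper's derivation rather than the plain update shown above.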