We present two highly efficient second-order algorithms for training multilayer feedforward neural networks. The algorithms are based on iterations of the form employed in the Levenberg-Marquardt (LM) method for nonlinear least squares problems, augmented with an adaptive momentum term that arises from formulating the training task as a constrained optimization problem. Their implementation requires minimal additional computation compared to a standard LM iteration. Simulations on large-scale classical neural-network benchmarks demonstrate that the two methods obtain solutions to difficult problems in which other standard second-order techniques, including LM, fail to converge.