The improved computation presented in this paper aims to optimize the learning process of neural networks trained with the Levenberg–Marquardt (LM) algorithm. The quasi-Hessian matrix and gradient vector are computed directly, without multiplying or storing the Jacobian matrix, which removes the memory bottleneck of LM training. Because the quasi-Hessian matrix is symmetric, only the elements of its upper (or lower) triangular part need to be calculated. Training speed therefore improves significantly, both because a smaller array is kept in memory and because fewer operations are required to build the quasi-Hessian matrix. The gains in memory and time are especially pronounced when training on large numbers of patterns.
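The accumulation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a single-output network where each training pattern p contributes one Jacobian row j_p and one scalar error e_p, so the quasi-Hessian Q = Σ j_pᵀ j_p and gradient g = Σ j_pᵀ e_p are built row by row, with only the upper triangle of Q ever computed. The function name and interface are hypothetical.

```python
import numpy as np

def accumulate_quasi_hessian(jacobian_rows, errors):
    """Accumulate quasi-Hessian Q and gradient g one pattern at a time,
    without ever forming or storing the full Jacobian matrix.

    jacobian_rows: iterable yielding one Jacobian row j_p (length n)
                   per training pattern (hypothetical interface).
    errors:        iterable of the corresponding scalar errors e_p.
    """
    Q = g = None
    for j, e in zip(jacobian_rows, errors):
        j = np.asarray(j, dtype=float)
        if Q is None:
            n = j.size
            Q = np.zeros((n, n))
            g = np.zeros(n)
        # Symmetry: compute only the upper-triangular elements of j^T j.
        for i in range(n):
            Q[i, i:] += j[i] * j[i:]
        g += j * e
    # Mirror the strict upper triangle to recover the full symmetric matrix.
    Q = Q + np.triu(Q, 1).T
    return Q, g
```

With Q and g in hand, one LM step would solve (Q + mu*I) dw = g for the weight update, but that part is independent of the accumulation shown here.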