A mathematical framework for the convergence analysis of the well-known Quickprop method is described. Furthermore, we propose a modification of this method that exhibits improved convergence speed and stability while reducing the reliance on heuristically tuned learning parameters. Simulations are conducted to compare and evaluate the performance of the modified Quickprop algorithm against several popular training algorithms. The experimental results indicate that the increased convergence rates achieved by the proposed algorithm in no way compromise its generalization capability or stability.
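For context, the baseline Quickprop update (Fahlman's secant rule) on which the convergence analysis builds can be sketched as follows. This is a minimal illustrative implementation of the standard method, not of the modified algorithm proposed in the paper; the parameter names `lr` (fallback learning rate) and `mu` (maximum growth factor, commonly 1.75) are assumptions, and some practical variants add an extra gradient term that is omitted here for brevity.

```python
import numpy as np

def quickprop_step(w, grad, prev_grad, prev_step, lr=0.1, mu=1.75):
    """One Quickprop weight update (after Fahlman, 1988).

    For each weight, the error surface is approximated by a parabola
    fitted to the current and previous gradients, and the weight jumps
    toward the parabola's minimum. `mu` caps the growth of successive
    steps; `lr` drives a plain gradient-descent fallback.
    """
    step = np.zeros_like(w, dtype=float)
    for i in range(w.size):
        if prev_step.flat[i] != 0.0:
            denom = prev_grad.flat[i] - grad.flat[i]
            if denom != 0.0:
                # Secant estimate of the parabola's minimum.
                s = grad.flat[i] / denom * prev_step.flat[i]
            else:
                # Degenerate parabola: take the maximum allowed step.
                s = mu * prev_step.flat[i]
            # Growth limit: never exceed mu times the previous step.
            if abs(s) > mu * abs(prev_step.flat[i]):
                s = mu * abs(prev_step.flat[i]) * np.sign(s)
            step.flat[i] = s
        else:
            # First iteration or stalled weight: gradient descent.
            step.flat[i] = -lr * grad.flat[i]
    return w + step, step
```

The heuristic constants `mu` and `lr` are exactly the kind of learning parameters whose hand-tuning the proposed modification aims to reduce.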