Improved sign-based learning algorithm derived by the composite nonlinear Jacobi process
Journal of Computational and Applied Mathematics, Special Issue: The International Conference on Computational Methods in Sciences and Engineering 2004
This paper introduces a new class of sign-based training algorithms for neural networks that combine the sign-based weight updates of the Rprop algorithm with the composite nonlinear Jacobi method. The theoretical foundations of the class are described, and a heuristic Rprop-based Jacobi algorithm is investigated empirically through simulation experiments on benchmark pattern-classification problems. Numerical evidence shows that this modification of Rprop exhibits improved learning speed in all cases tested and compares favorably against both the original Rprop and a recently proposed modification, the improved Rprop.
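The paper's specific coupling with the composite nonlinear Jacobi process is not reproduced here, but the sign-based Rprop update it builds on can be sketched as follows. This is a minimal illustrative implementation, not the authors' algorithm; the function name `rprop_step` and the iRprop--style handling of sign flips are choices made for this sketch, while the defaults η⁺ = 1.2 and η⁻ = 0.5 are the standard Rprop values.

```python
import numpy as np

def rprop_step(w, grad, prev_grad, step,
               eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    """One sign-based Rprop update on a weight vector.

    Only the sign of each partial derivative is used; the per-weight
    step size grows while the sign is stable and shrinks when it flips.
    Illustrative sketch only, not the paper's Jacobi-based variant.
    """
    same = grad * prev_grad  # >0: sign kept, <0: sign flipped
    step = np.where(same > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(same < 0, np.maximum(step * eta_minus, step_min), step)
    # On a sign flip, suppress this weight's update (iRprop- simplification).
    grad = np.where(same < 0, 0.0, grad)
    w = w - np.sign(grad) * step
    return w, step, grad  # returned grad serves as prev_grad on the next call
```

Because each weight carries its own adaptive step size and only gradient signs enter the update, the method is insensitive to the magnitude of the error gradient, which is the property the Rprop family exploits.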