In this paper, a new globally convergent modification of the Resilient Propagation (Rprop) algorithm is presented. This new addition to the Rprop family of methods builds on a mathematical framework for convergence analysis, which ensures that the adaptive local learning rates of Rprop's schedule generate a descent search direction at each iteration. Simulation results on six problems from the PROBEN1 benchmark collection show that the globally convergent modification of Rprop exhibits improved learning speed and compares favorably with the original Rprop and the Improved Rprop, a recently proposed Rprop modification.
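To make the abstract's terminology concrete, the sketch below shows the core of the standard Rprop schedule that the paper modifies: each weight keeps its own local step size, which grows when successive partial derivatives agree in sign and shrinks when they flip. This is a minimal illustration of plain Rprop (with the iRprop-style treatment of sign flips), not the paper's globally convergent variant; the function name and all parameter defaults are assumptions for the example.

```python
import numpy as np

def rprop_minimize(grad_fn, w0, n_iters=100,
                   eta_plus=1.2, eta_minus=0.5,
                   delta0=0.1, delta_min=1e-6, delta_max=50.0):
    """Minimal Rprop sketch (hypothetical helper, not the paper's method).

    Each weight has its own adaptive step size `delta`, updated from the
    sign of successive partial derivatives only; gradient magnitudes are
    ignored, as in the original Rprop.
    """
    w = np.asarray(w0, dtype=float).copy()
    delta = np.full_like(w, delta0)
    prev_grad = np.zeros_like(w)
    for _ in range(n_iters):
        g = grad_fn(w)
        agreement = prev_grad * g
        # Same sign as last step: grow the local step size (bounded above).
        delta = np.where(agreement > 0,
                         np.minimum(delta * eta_plus, delta_max), delta)
        # Sign flip: we overshot a minimum, so shrink the step (bounded below)
        # and zero the gradient so no update is taken this iteration.
        delta = np.where(agreement < 0,
                         np.maximum(delta * eta_minus, delta_min), delta)
        g = np.where(agreement < 0, 0.0, g)
        # Move each weight by its own step size, in the descent direction.
        w -= np.sign(g) * delta
        prev_grad = g
    return w

# Usage: minimize f(w) = ||w - 3||^2, whose gradient is 2*(w - 3).
w_opt = rprop_minimize(lambda w: 2.0 * (w - 3.0), np.zeros(2))
```

The point of the paper's modification is precisely that this sign-based schedule, as written, does not by itself guarantee a descent direction at every iteration; the globally convergent variant constrains the local learning rates so that it does.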