In this paper, we propose an adaptive BFGS method that uses a self-adaptive scaling factor for the Hessian matrix and is equipped with a nonmonotone strategy. Our experimental evaluation on different recurrent network architectures provides evidence that the proposed approach successfully trains recurrent networks of various architectures, inheriting the benefits of BFGS while, at the same time, alleviating some of its limitations.
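To illustrate the two ingredients named above, the following is a minimal numpy sketch of a self-scaled BFGS iteration combined with a nonmonotone line search. The specific choices here are assumptions, not the authors' exact method: the scaling factor uses the classical Oren–Luenberger ratio sᵀy / yᵀy, and the nonmonotone condition is a Grippo-style Armijo test against the maximum of recent function values; all function names and parameter defaults are illustrative.

```python
import numpy as np

def nonmonotone_armijo(f, x, fx_window, g, d, beta=0.5, sigma=1e-4, max_iter=30):
    """Backtracking Armijo search against the max of recent f-values
    (Grippo-style nonmonotone acceptance condition)."""
    f_ref = max(fx_window)  # reference value over a sliding window
    t = 1.0
    for _ in range(max_iter):
        if f(x + t * d) <= f_ref + sigma * t * g.dot(d):
            return t
        t *= beta
    return t

def scaled_bfgs(f, grad, x0, window=5, tol=1e-8, max_iter=200):
    """Hypothetical sketch: inverse-Hessian BFGS update with a
    self-scaling factor (Oren-Luenberger s'y/y'y, an assumption)
    and a nonmonotone line search."""
    n = x0.size
    x = x0.copy()
    H = np.eye(n)           # inverse Hessian approximation
    g = grad(x)
    fx_hist = [f(x)]
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g          # quasi-Newton search direction
        t = nonmonotone_armijo(f, x, fx_hist[-window:], g, d)
        x_new = x + t * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s.dot(y)
        if sy > 1e-12:      # curvature condition guards the update
            tau = sy / y.dot(y)          # self-scaling factor (assumption)
            rho = 1.0 / sy
            V = np.eye(n) - rho * np.outer(s, y)
            H = V @ (tau * H) @ V.T + rho * np.outer(s, s)
        x, g = x_new, g_new
        fx_hist.append(f(x))
    return x, f(x)
```

In a training context, `f` would be the network error over the data set and `grad` the gradient obtained by backpropagation (through time, for recurrent networks); the nonmonotone window lets the iteration accept occasional increases in error, which the abstract credits with alleviating some limitations of plain BFGS.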