In this paper, we present a new class of quasi-Newton methods for effective learning in large multilayer perceptron (MLP) networks. The algorithms introduced in this work, named LQN, utilize an iterative scheme of a generalized BFGS-type method involving a suitable family of matrix algebras L. The main advantage of these methods is that they require only O(n log n) operations per step and O(n) memory allocations. Numerical experiments, performed on a set of standard MLP-network benchmarks, show the competitiveness of the LQN methods, especially for large values of n.
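To make the complexity claims concrete, here is a minimal Python sketch of the structured quasi-Newton idea. It assumes, purely for illustration, that L is the algebra of circulant matrices diagonalized by the unitary FFT (the LQN literature also considers other trigonometric algebras, such as Hartley-type ones), and it realizes the update as the Frobenius-norm projection of the standard BFGS formula onto that algebra. The function names `lqn_direction` and `lqn_update` are hypothetical, and this is one plausible instance of the scheme, not the authors' exact algorithm.

```python
import numpy as np

def unitary_fft(x):
    # Unitary DFT: F @ x with F = DFT / sqrt(n), so that F is unitary.
    return np.fft.fft(x) / np.sqrt(len(x))

def unitary_ifft(x_hat):
    # Inverse of the unitary DFT.
    return np.fft.ifft(x_hat) * np.sqrt(len(x_hat))

def lqn_direction(d, g):
    # Search direction p = -B^{-1} g for B = F^H diag(d) F in the
    # circulant algebra: O(n log n) flops, and B is stored via its
    # n eigenvalues d only (O(n) memory).
    g_hat = unitary_fft(g)
    return np.real(unitary_ifft(-g_hat / d))

def lqn_update(d, s, y, eps=1e-10):
    # Frobenius-norm projection onto the algebra of the BFGS update
    #   B+ = B - (B s)(B s)^T / (s^T B s) + y y^T / (y^T s),
    # which touches only the n eigenvalues d.
    s_hat = unitary_fft(s)
    y_hat = unitary_fft(y)
    sBs = float(np.sum(d * np.abs(s_hat) ** 2))
    ys = float(y @ s)
    if ys <= eps or sBs <= eps:      # skip if curvature condition fails
        return d
    d_new = d - np.abs(d * s_hat) ** 2 / sBs + np.abs(y_hat) ** 2 / ys
    return np.maximum(d_new, eps)    # keep B positive definite
```

In a full training loop one would initialize d = np.ones(n), combine `lqn_direction` with a line search on the network error, and call `lqn_update` with the step s and gradient difference y. The point of the sketch is that both routines cost O(n log n) time and O(n) memory, matching the bounds stated in the abstract.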