In this paper, a general class of fast learning algorithms for feedforward neural networks is introduced. The approach exploits the separability of each layer into a linear and a nonlinear block and consists of two steps. The first step is the descent of the error functional in the space of the outputs of the linear blocks (descent in the neuron space), which can be performed with any preferred optimization strategy. In the second step, each linear block is optimized separately by a least squares (LS) criterion. To demonstrate the effectiveness of the new approach, a detailed treatment of gradient descent in the neuron space is given. The main properties of this approach are faster convergence than methods that employ ordinary gradient descent in the weight space, such as backpropagation (BP); better numerical conditioning; and lower computational cost than techniques based on the Hessian matrix. Numerical stability is ensured by the use of robust LS linear-system solvers operating directly on the input data of each layer. Experimental results on three test problems confirm the effectiveness of the new method.
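
To make the two-step scheme concrete, the following is a minimal NumPy sketch, not the authors' implementation: it assumes a two-layer tanh network, a fixed step size eta for the neuron-space descent, and numpy.linalg.lstsq as the robust LS solver; the toy data and all variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy regression data: N samples, 3 inputs, 1 target
X = rng.standard_normal((200, 3))
T = np.sin(X @ rng.standard_normal((3, 1)))

# Two-layer network: hidden tanh layer, linear output layer
W1 = 0.1 * rng.standard_normal((3, 8))
W2 = 0.1 * rng.standard_normal((8, 1))

eta = 0.1  # assumed step size for the neuron-space descent

for epoch in range(200):
    # Forward pass, keeping the linear-block outputs S1 and S2
    S1 = X @ W1          # linear block of layer 1
    H = np.tanh(S1)      # nonlinear block of layer 1
    S2 = H @ W2          # linear block of the (linear) output layer
    R = S2 - T           # residual of the quadratic error functional

    # Step 1: gradient descent in the neuron space, i.e. on S2 and S1:
    # dE/dS2 = R,  dE/dS1 = (R @ W2.T) * tanh'(S1)
    S2_target = S2 - eta * R
    S1_target = S1 - eta * (R @ W2.T) * (1.0 - H**2)

    # Step 2: refit each linear block separately by LS, operating
    # directly on that layer's input data
    W1 = np.linalg.lstsq(X, S1_target, rcond=None)[0]
    W2 = np.linalg.lstsq(np.tanh(X @ W1), S2_target, rcond=None)[0]

print("final MSE:", np.mean((np.tanh(X @ W1) @ W2 - T) ** 2))
```

Each epoch first moves the linear-block outputs downhill in the neuron space and then recovers the weights of every layer by a separate LS fit against that layer's inputs; this per-layer LS solve is what replaces the weight-space gradient step of BP in this sketch.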