Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1: Foundations.
A Layer-by-Layer Least Squares Based Recurrent Networks Training Algorithm: Stalling and Escape. Neural Processing Letters.
Single-Iteration Training Algorithm for Multi-Layer Feed-Forward Neural Networks. Neural Processing Letters.
Efficient Block Training of Multilayer Perceptrons. Neural Computation.
Parameter by Parameter Algorithm for Multilayer Perceptrons. Neural Processing Letters.
A Very Fast Learning Method for Neural Networks Based on Sensitivity Analysis. The Journal of Machine Learning Research.
Nonnegative Least Squares Learning for the Random Neural Network. ICANN '08: Proceedings of the 18th International Conference on Artificial Neural Networks, Part I.
Linear Least-Squares Based Methods for Neural Networks Learning. ICANN/ICONIP '03: Proceedings of the 2003 Joint International Conference on Artificial Neural Networks and Neural Information Processing.
A Fast Semi-Linear Backpropagation Learning Algorithm. ICANN '07: Proceedings of the 17th International Conference on Artificial Neural Networks.
A Linear Learning Method for Multilayer Perceptrons Using Least-Squares. IDEAL '07: Proceedings of the 8th International Conference on Intelligent Data Engineering and Automated Learning.
An algorithm for training multilayer neural networks based solely on linear-algebraic methods is presented. Up to a certain level of learning accuracy, its convergence speed is orders of magnitude faster than that of classical backpropagation. Furthermore, its learning ability increases with the number of hidden nodes in the network, contrary to backpropagation. In particular, if the network contains a hidden layer with more nodes than the number of training examples, and the number of nodes in the succeeding layers decreases monotonically, the algorithm generally finds an exact solution.
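The exact-solution claim for an overcomplete hidden layer can be illustrated with a minimal NumPy sketch. This is not the paper's algorithm; it only demonstrates the underlying linear-algebra fact: if a hidden layer maps the inputs to more dimensions than there are training examples, the hidden activation matrix almost surely has full row rank, so the output weights can be solved exactly by linear least squares. The random hidden weights and `tanh` nonlinearity here are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 20 examples, 5 inputs, 2 target outputs.
X = rng.standard_normal((20, 5))
Y = rng.standard_normal((20, 2))

# Hidden layer with more nodes (50) than training examples (20).
# Random weights are an illustrative assumption, not the paper's method.
W_hidden = rng.standard_normal((5, 50))
H = np.tanh(X @ W_hidden)          # 20 x 50 hidden activation matrix

# Output weights via linear least squares -- the purely linear-algebraic step.
W_out, *_ = np.linalg.lstsq(H, Y, rcond=None)

# With 50 hidden nodes for 20 examples, H has full row rank almost surely,
# so the fit is exact up to floating-point error.
residual = np.linalg.norm(H @ W_out - Y)
print(residual)
```

The printed residual is at floating-point noise level, matching the abstract's observation that such networks can interpolate the training set exactly when the hidden layer is wider than the number of examples.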