This paper considers on-line training of feedforward neural networks, where training examples are available only by sampling from a certain, possibly infinite, distribution. To make the learning process autonomous, one can employ the Extended Kalman Filter or stochastic steepest descent with adaptively adjusted step-sizes; the latter is considered here. A scheme for determining step-sizes is introduced that satisfies the following requirements: (i) it needs no auxiliary problem-dependent parameters, (ii) it assumes no particular loss function that the training process is intended to minimize, and (iii) it keeps the learning process stable and efficient. An experimental study on several approximation problems is presented, in which the proposed approach is compared with the Extended Kalman Filter and LFI, with satisfactory results.
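The abstract states the requirements on the step-size scheme but not the adaptation rule itself. Purely as an illustrative stand-in, the following is a minimal NumPy sketch of on-line stochastic steepest descent for a one-hidden-layer network, with per-weight step-sizes adapted by a sign-agreement heuristic in the spirit of delta-bar-delta; the network size, the increase/decrease factors kappa and phi, and the sampled regression target are all assumptions, not the paper's scheme.

```python
import numpy as np

# Minimal illustrative sketch (NOT the paper's scheme): on-line training of a
# one-hidden-layer network by stochastic steepest descent, with per-weight
# step-sizes adapted by a sign-agreement heuristic in the spirit of
# delta-bar-delta. Network size, factors, and target function are assumptions.

rng = np.random.default_rng(0)

n_in, n_hid = 1, 20
W1 = rng.normal(0.0, 1.0, (n_hid, n_in + 1))   # hidden layer weights (+ bias)
W2 = rng.normal(0.0, 0.1, (1, n_hid + 1))      # output layer weights (+ bias)

eta1 = np.full_like(W1, 1e-2)                  # per-weight step-sizes
eta2 = np.full_like(W2, 1e-2)
g1_prev = np.zeros_like(W1)
g2_prev = np.zeros_like(W2)
kappa, phi = 1.05, 0.5                         # grow/shrink factors (assumed)

for t in range(100_000):
    # examples arrive only by sampling from a (possibly infinite) distribution
    x = rng.uniform(-2.0, 2.0, n_in)
    target = np.sin(3.0 * x[0])                # illustrative regression target

    # forward pass
    h = np.tanh(W1 @ np.append(x, 1.0))
    y = W2 @ np.append(h, 1.0)
    err = y - target                           # d(0.5 * err^2) / dy

    # backpropagation
    g2 = np.outer(err, np.append(h, 1.0))
    dh = (W2[:, :n_hid].T @ err) * (1.0 - h ** 2)
    g1 = np.outer(dh, np.append(x, 1.0))

    # grow a step-size while successive gradients agree in sign, shrink it
    # when they disagree; clip to keep the process stable
    eta1 = np.clip(eta1 * np.where(g1 * g1_prev > 0, kappa,
                                   np.where(g1 * g1_prev < 0, phi, 1.0)),
                   1e-6, 1e-1)
    eta2 = np.clip(eta2 * np.where(g2 * g2_prev > 0, kappa,
                                   np.where(g2 * g2_prev < 0, phi, 1.0)),
                   1e-6, 1e-1)
    g1_prev, g2_prev = g1, g2

    W1 -= eta1 * g1
    W2 -= eta2 * g2

    if (t + 1) % 20_000 == 0:
        print(f"step {t + 1}: squared error {err[0] ** 2:.5f}")
```

The adaptation loop stands in for the property the abstract asks for: no hand-tuned, problem-dependent step-size. Note that kappa and phi in this sketch would themselves need tuning, which is precisely the kind of auxiliary parameter the paper's scheme is designed to avoid.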