We define online algorithms for neural network training based on the construction of multiple copies of the network, which are trained on different data blocks. It is shown that suitable training algorithms can be defined in such a way that the disagreement between the different copies of the network is asymptotically reduced, and convergence toward stationary points of the global error function can be guaranteed. Relevant features of the proposed approach are that the learning rate need not be forced to zero and that real-time learning is permitted.
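To make the idea concrete, the following is a minimal sketch, not the paper's algorithm: several copies of a weight vector are each updated on their own data block with a constant learning rate, and a simple averaging step pulls the copies toward agreement. The linear least-squares model, the block partition, the averaging-based consensus step, and the parameter lr are all illustrative assumptions introduced here.

    import numpy as np

    # Illustrative model: a linear "network" y = X @ w trained by
    # several copies of the weight vector, one per data block.
    rng = np.random.default_rng(0)
    n_samples, n_features, n_copies = 120, 5, 4
    X = rng.normal(size=(n_samples, n_features))
    w_true = rng.normal(size=n_features)
    y = X @ w_true + 0.01 * rng.normal(size=n_samples)

    # Partition the data into one block per copy (illustrative choice).
    blocks = np.array_split(np.arange(n_samples), n_copies)
    W = np.tile(rng.normal(size=n_features), (n_copies, 1))  # one copy per block
    lr = 0.05  # constant learning rate: not driven to zero

    for epoch in range(300):
        # Each copy takes a gradient step on its own data block only.
        for k, idx in enumerate(blocks):
            Xk, yk = X[idx], y[idx]
            grad = Xk.T @ (Xk @ W[k] - yk) / len(idx)
            W[k] -= lr * grad
        # Consensus step (an assumption here): pull every copy toward the
        # mean so the disagreement between copies shrinks over time.
        W = 0.5 * W + 0.5 * W.mean(axis=0)

    w_hat = W.mean(axis=0)
    disagreement = np.max(np.linalg.norm(W - w_hat, axis=1))
    print(f"max disagreement between copies: {disagreement:.2e}")
    print(f"distance to w_true: {np.linalg.norm(w_hat - w_true):.2e}")

Averaging is only one simple way to enforce agreement among the copies; the paper's actual mechanism for reducing the disagreement and its convergence conditions may differ from this sketch.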