Online gradient methods are widely used for training feedforward neural networks. In this paper we prove a convergence theorem for an online gradient method with variable step size applied to backpropagation (BP) neural networks with one hidden layer. Unlike most existing convergence results, which are probabilistic and nonmonotone in nature, the result established here is deterministic and monotone.
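To make the setting concrete, the following is a minimal sketch of online (sample-by-sample) gradient training for a one-hidden-layer sigmoid network with squared error. The diminishing step size eta_t = eta0 / sqrt(t) is one illustrative choice of a variable step-size schedule, not necessarily the schedule assumed by the theorem; all function and variable names here are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_online(X, y, n_hidden=4, eta0=1.0, epochs=500, seed=0):
    """Online gradient descent for a one-hidden-layer sigmoid network
    with squared error. Weights are updated after each sample, and the
    step size eta_t = eta0 / sqrt(t) shrinks as training proceeds
    (an illustrative variable step-size schedule)."""
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    V = rng.normal(scale=0.5, size=(n_hidden, n_in))  # input -> hidden weights
    w = rng.normal(scale=0.5, size=n_hidden)          # hidden -> output weights
    t = 1
    for _ in range(epochs):
        for x, target in zip(X, y):
            eta = eta0 / np.sqrt(t)          # variable (diminishing) step size
            h = sigmoid(V @ x)               # hidden-layer activations
            out = sigmoid(w @ h)             # network output
            err = out - target
            delta_out = err * out * (1.0 - out)          # output-layer delta
            grad_w = delta_out * h                        # dE/dw
            grad_V = np.outer(delta_out * w * h * (1.0 - h), x)  # dE/dV
            w -= eta * grad_w                # per-sample gradient step
            V -= eta * grad_V
            t += 1
    return V, w

def mse(X, y, V, w):
    preds = np.array([sigmoid(w @ sigmoid(V @ x)) for x in X])
    return float(np.mean((preds - y) ** 2))

# Usage: learn the logical AND of two inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)
V0 = np.random.default_rng(0).normal(scale=0.5, size=(4, 2))
V, w = train_online(X, y, n_hidden=4, eta0=1.0, epochs=500)
```

The monotone flavor of the convergence result corresponds to the error decreasing along the trajectory rather than only in expectation, which is why a deterministic (fixed sample order) update loop like the one above is the natural object of analysis.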