This work presents two novel approaches, backpropagation (BP) with a magnified gradient function (MGFPROP) and deterministic weight modification (DWM), to speed up the convergence rate and improve the global convergence capability of the standard BP learning algorithm. MGFPROP increases the convergence rate by magnifying the gradient of the activation function, while DWM reduces the system error by changing the weights of a multilayered feedforward neural network in a deterministic way. Simulation results show that both approaches outperform standard BP and other modified BP algorithms on a number of learning problems. Moreover, integrating the two approaches into a new algorithm, called MDPROP, further improves on MGFPROP and DWM alone: in the reported simulations, MDPROP consistently outperforms BP and other modified BP algorithms in terms of both convergence rate and global convergence capability.
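As a rough illustration of the gradient-magnification idea, the sketch below trains a one-hidden-layer sigmoid network in which the usual derivative factor o(1 - o) is raised to the power 1/S with S >= 1, so the error signal is enlarged when a neuron saturates. This is a minimal sketch, not the paper's implementation: the network size, learning rate, epoch count, and the value of S are illustrative assumptions, and the helper names (train_mgf, sigmoid) are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_mgf(X, t, hidden=4, S=2.0, lr=0.5, epochs=5000, seed=0):
    """Train a one-hidden-layer sigmoid network with a magnified
    gradient: the derivative factor o*(1-o) is raised to 1/S (S >= 1),
    which keeps the error signal large when neurons saturate."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    for _ in range(epochs):
        h = sigmoid(X @ W1)   # hidden activations
        o = sigmoid(h @ W2)   # network output
        # Magnified deltas: (o*(1-o))**(1/S) replaces the usual o*(1-o).
        d_out = (t - o) * (o * (1.0 - o)) ** (1.0 / S)
        d_hid = (d_out @ W2.T) * (h * (1.0 - h)) ** (1.0 / S)
        W2 += lr * h.T @ d_out
        W1 += lr * X.T @ d_hid
    return W1, W2

# Usage: learn XOR, a standard benchmark for BP variants.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)
W1, W2 = train_mgf(X, t)
```

Setting S = 1 recovers the standard BP update, so S directly controls how strongly gradients of saturated neurons are amplified. The deterministic weight modification step is not sketched here, since the abstract specifies only its goal (reducing the system error by deterministic weight changes), not its update rule.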