The conventional back-propagation algorithm is basically a gradient-descent method, so it suffers from local minima and slow convergence. A new generalized back-propagation algorithm is introduced that can effectively speed up the convergence rate and reduce the chance of becoming trapped in local minima. The new algorithm changes the derivative of the activation function so as to magnify the backward propagated error signal; this accelerates convergence and helps training escape local minima. In this letter, we also investigate the convergence of the generalized back-propagation algorithm with a constant learning rate. The weight sequences produced by generalized back-propagation can be approximated by a certain ordinary differential equation (ODE): when the learning rate tends to zero, the interpolated weight sequences of generalized back-propagation converge weakly to the solution of the associated ODE.
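To make the idea concrete, the sketch below shows ordinary back-propagation for a one-hidden-layer sigmoid network in which the activation-function derivative is replaced by a magnified version during the backward pass. The specific magnification used here (raising the sigmoid derivative to a power below one) and all function and parameter names are illustrative assumptions, not the exact formulation analyzed in the letter.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def magnified_deriv(y, power=0.5):
    # The standard sigmoid derivative y * (1 - y) vanishes as y approaches
    # 0 or 1, which weakens the backward error signal and slows training.
    # Raising it to a power below one keeps the propagated error larger in
    # those saturated regions.  The exponent is an illustrative choice.
    return (y * (1.0 - y)) ** power

def train_step(x, t, W1, W2, lr=0.1, power=0.5):
    # Forward pass through a single-hidden-layer perceptron.
    h = sigmoid(W1 @ x)
    y = sigmoid(W2 @ h)

    # Backward pass: the only change from plain back-propagation is the
    # magnified derivative used when propagating the error signal.
    delta_out = (y - t) * magnified_deriv(y, power)
    delta_hid = (W2.T @ delta_out) * magnified_deriv(h, power)

    # Constant-learning-rate gradient-descent update of the weights.
    W2 -= lr * np.outer(delta_out, h)
    W1 -= lr * np.outer(delta_hid, x)
    return W1, W2

# Example: one update with arbitrary sizes and random weights (hypothetical).
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 2))
W2 = rng.normal(size=(1, 3))
W1, W2 = train_step(np.array([1.0, 0.0]), np.array([1.0]), W1, W2)
```

With a small constant learning rate, repeated calls to an update of this kind generate the weight sequence whose interpolation the letter relates to the solution of the associated ODE.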