The least squares based training algorithm suffers from two main limitations: the stalling problem, and the evaluation error introduced by the transformation matrix, which can lead to an unacceptable solution. This paper presents a new approach to recurrent network training based on the Layer-by-Layer Least Squares algorithm to overcome these problems. In the proposed algorithm, all weights are evaluated by the least squares method without evaluating the transformation matrix, which speeds up the rate of convergence. A probabilistic mechanism, based on modified weight-update equations, is introduced to eliminate the stalling problem experienced by pure least squares computation. As a result, the proposed algorithm is able to escape from local minima and reach a good optimal solution while still maintaining fast convergence.
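The two ingredients described above, solving for a layer's weights directly by least squares and applying a probabilistic perturbation when the solution stalls, can be illustrated with a minimal sketch. This is not the authors' layer-by-layer algorithm: it handles only a single linear layer, uses plain normal equations in place of the paper's recurrent formulation, and the function names, stall tolerance, and Gaussian noise model are illustrative assumptions.

```python
import random

def solve_least_squares(X, Y):
    """Solve the normal equations (X^T X) w = X^T Y by Gaussian elimination."""
    n = len(X[0])
    # Build X^T X and X^T Y explicitly (fine for small illustrative problems).
    A = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(n)]
         for i in range(n)]
    b = [sum(X[k][i] * Y[k] for k in range(len(X))) for i in range(n)]
    # Forward elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution.
    w = [0.0] * n
    for i in reversed(range(n)):
        w[i] = (b[i] - sum(A[i][j] * w[j] for j in range(i + 1, n))) / A[i][i]
    return w

def train_layer(X, Y, stall_tol=1e-9, noise=0.01, rng=random.Random(0)):
    """One least squares pass for a linear layer; if the residual error has
    not reached the tolerance (a stand-in for the stalling condition), add
    small random noise to the weights as a probabilistic escape step."""
    w = solve_least_squares(X, Y)
    err = sum((sum(x[i] * w[i] for i in range(len(w))) - y) ** 2
              for x, y in zip(X, Y))
    if err < stall_tol:
        return w, err  # converged; no perturbation needed
    # Probabilistic escape: perturb weights so the next pass can leave
    # the stalled point instead of repeating the same solution.
    w = [wi + rng.gauss(0.0, noise) for wi in w]
    return w, err
```

For exactly linear data, e.g. inputs `[[x, 1.0] for x in (0, 1, 2, 3)]` with targets `y = 2x + 1`, a single pass recovers the weights `[2.0, 1.0]` with zero residual, so no perturbation is applied; on inconsistent data the residual stays above the tolerance and the noisy escape step fires.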