This paper presents a parameter-by-parameter (PBP) algorithm for speeding up the training of multilayer perceptrons (MLPs). The new algorithm takes an approach similar to that of the layer-by-layer (LBL) algorithm, accounting for the input errors of both the output layer and the hidden layer. Unlike LBL, however, the proposed PBP algorithm does not need to compute the gradient of the error function. In each iteration step, the weights or thresholds are optimized directly, one at a time, with all other variables held fixed. Four classes of solution equations for the network parameters are derived. The effectiveness of the PBP algorithm is demonstrated on two benchmarks: in comparison with the backpropagation algorithm with momentum (BPM) and the conventional LBL algorithm, PBP achieves faster convergence and better simulation performance.
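For illustration, the coordinate-wise idea described in the abstract can be sketched as follows. This is a minimal sketch, not the paper's PBP algorithm: PBP derives closed-form solution equations for each class of parameter, whereas this sketch substitutes a generic one-dimensional numeric minimization (scipy.optimize.minimize_scalar) to show what "optimizing one weight or threshold with all other variables fixed" looks like for a small one-hidden-layer MLP. All function and variable names below are illustrative, not from the paper.

```python
# Sketch of parameter-by-parameter (coordinate-wise) MLP training.
# The paper's PBP algorithm solves each parameter in closed form; here a
# generic 1-D minimizer stands in for those solution equations.
import numpy as np
from scipy.optimize import minimize_scalar

def mlp_forward(params, X, n_hidden):
    """One-hidden-layer MLP: tanh hidden units, linear output."""
    n_in = X.shape[1]
    W1 = params[: n_in * n_hidden].reshape(n_in, n_hidden)
    b1 = params[n_in * n_hidden : n_in * n_hidden + n_hidden]
    W2 = params[n_in * n_hidden + n_hidden : -1].reshape(n_hidden, 1)
    b2 = params[-1]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def sse(params, X, y, n_hidden):
    """Sum-of-squared-errors cost over the training set."""
    return float(np.sum((mlp_forward(params, X, n_hidden) - y) ** 2))

def train_pbp_like(X, y, n_hidden=4, sweeps=50, seed=0):
    rng = np.random.default_rng(seed)
    n_params = X.shape[1] * n_hidden + n_hidden + n_hidden + 1
    params = rng.normal(scale=0.5, size=n_params)
    for _ in range(sweeps):
        for i in range(n_params):          # one parameter at a time
            def err_of(v, i=i):
                p = params.copy()
                p[i] = v                   # vary only coordinate i
                return sse(p, X, y, n_hidden)
            # All other parameters stay fixed; no gradient is computed.
            params[i] = minimize_scalar(err_of).x
    return params

# Toy usage: the XOR benchmark often used for MLP training comparisons.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])
params = train_pbp_like(X, y)
print(np.round(mlp_forward(params, X, 4), 2))
```

On a toy task such as XOR, repeated sweeps of this kind typically drive the squared error down without any explicit gradient computation, which is the qualitative behavior the abstract claims for PBP; the paper's closed-form per-parameter updates would replace the numeric line search.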