Improvements to the conventional layer-by-layer BP algorithm
ICIC'05 Proceedings of the 2005 international conference on Advances in Intelligent Computing - Volume Part II
A faster learning algorithm for adjusting the weights of a multilayer feedforward neural network is proposed. In this algorithm, the weight matrix (W2) of the output layer and the output vector (Y) of the previous layer are treated as two sets of variables, and an optimal pair (W2*, YP*) is found that minimizes the sum of squared errors over the input patterns. YP* is then used as the desired output of the previous layer, and the optimal weight matrix and layer output vector of each hidden layer are found by the same method used for the output layer. In addition, a dynamic forgetting-factor method makes the proposed algorithm even more effective for dynamic system identification. Computer simulations show that the new algorithm outperforms other learning algorithms in both convergence speed and required computation time.
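The layer-by-layer idea in the abstract can be sketched as alternating linear least-squares solves: fix the hidden output and solve for the output weights, then compute a desired hidden output and fit the hidden weights to it. The sketch below is a minimal illustration under several assumptions not stated in the abstract (one hidden layer, sigmoid activation, a pseudoinverse for the desired hidden output, clipping before the inverse sigmoid); all function and variable names are illustrative, not from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logit(p):
    # Inverse of the sigmoid; assumes p is strictly inside (0, 1).
    return np.log(p / (1.0 - p))

def train_layerwise(X, T, n_hidden=8, n_iter=5, seed=0):
    """Layer-by-layer least-squares training of a one-hidden-layer MLP.

    Illustrative sketch of the abstract's scheme: alternately solve for
    the output weights W2 and a desired hidden output Y*, then fit the
    hidden weights W1 to reproduce Y*.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    Xa = np.hstack([X, np.ones((n, 1))])          # inputs with a bias column
    W1 = rng.normal(scale=0.5, size=(d + 1, n_hidden))
    W2 = rng.normal(scale=0.5, size=(n_hidden, T.shape[1]))
    best = (np.inf, W1, W2)
    for _ in range(n_iter):
        H = sigmoid(Xa @ W1)                      # hidden-layer output Y
        # 1) Optimal W2 for the current hidden output (linear least squares).
        W2 = np.linalg.lstsq(H, T, rcond=None)[0]
        mse = np.mean((H @ W2 - T) ** 2)
        if mse < best[0]:
            best = (mse, W1.copy(), W2.copy())
        # 2) Desired hidden output Y*: least-squares solution of Y* W2 = T.
        H_star = T @ np.linalg.pinv(W2)
        H_star = np.clip(H_star, 1e-3, 1 - 1e-3)  # keep logit well-defined
        # 3) Fit W1 so that sigmoid(Xa @ W1) approximates Y*.
        W1 = np.linalg.lstsq(Xa, logit(H_star), rcond=None)[0]
    return best  # (mse, W1, W2) with the lowest error seen

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.uniform(-1, 1, size=(200, 2))
    T = np.sin(X[:, :1]) + X[:, 1:] ** 2          # toy regression target
    mse0 = np.mean(T ** 2)                        # error of predicting zero
    mse, W1, W2 = train_layerwise(X, T)
    print(f"baseline MSE {mse0:.4f} -> trained MSE {mse:.4f}")
```

Because step 1 solves an exact linear least-squares problem, the recorded error can never exceed that of any fixed W2 for the same hidden output; the paper's dynamic forgetting factors for system identification are not modeled here.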