IEEE Transactions on Neural Networks
An accelerated learning algorithm for multilayer perceptrons: optimization layer by layer
The layer-by-layer (LBL) algorithm is a well-known training algorithm for multilayer perceptrons: it converges quickly and has low computational complexity. Unfortunately, LBL computes the desired hidden targets by solving a set of linear equations. If the determinant of the coefficient matrix is zero, the solution is not unique, which causes the stalling problem. Furthermore, inverting the sigmoid function introduces a truncation error. Based on the goal programming technique, this paper proposes a new method for computing the hidden targets: a satisfactory solution is obtained from a goal programming model, and the truncation error is avoided efficiently by assigning a higher priority to the constraint that keeps the variables within their domain. The effectiveness of the proposed method is demonstrated by computer simulation on a mushroom classification problem.
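The truncation error mentioned in the abstract can be illustrated with a minimal sketch (not the paper's method, just the underlying mechanism): inverting the sigmoid y = 1/(1 + exp(-x)) gives x = ln(y/(1-y)), which is undefined for targets at 0 or 1, so implementations must clip targets into an open interval (eps, 1-eps), truncating them. The function name and eps value below are illustrative assumptions.

```python
import numpy as np

def inverse_sigmoid(y, eps=1e-6):
    """Invert y = 1/(1+exp(-x)), i.e. return x = ln(y / (1-y)).

    Targets at or beyond 0 or 1 would make the logarithm diverge,
    so they are clipped into (eps, 1-eps) first -- this clipping is
    the source of the truncation error discussed in the abstract.
    """
    y = np.clip(np.asarray(y, dtype=float), eps, 1.0 - eps)
    return np.log(y / (1.0 - y))

# Targets of exactly 0 and 1 are common in classification, yet their
# true preimages are -inf/+inf; clipping forces them to finite values.
targets = np.array([0.0, 0.5, 1.0])
print(inverse_sigmoid(targets))
```

For a target of 0.5 the inverse is exactly 0; for targets of 0 and 1 the clipped result is finite but differs from the true (infinite) preimage, which is precisely the error the proposed goal programming formulation avoids by prioritizing the variable-domain constraint.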