A novel learning algorithm for feedforward neural networks, called BPWA, is presented; it adjusts the weights during both the forward and the backward phase. In the forward pass it computes the minimum-norm least-squares solution for the weights between the hidden layer and the output layer, while the backward pass adjusts the weights connecting the input layer to the hidden layer by error gradient descent. The algorithm is compared with the Extreme Learning Machine (ELM), the BP algorithm, and the LMBP algorithm on function approximation and classification tasks. The experimental results demonstrate that the proposed algorithm performs well.
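The hybrid update described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the single-hidden-layer shape, sigmoid activation, learning rate, and the helper name `bpwa_epoch` are all assumptions; only the two-phase scheme (pseudoinverse output weights in the forward pass, gradient descent on the hidden-layer weights in the backward pass) comes from the abstract.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bpwa_epoch(X, T, W1, lr=0.01):
    """One BPWA-style epoch (sketch, assumed architecture).

    Forward pass: solve the hidden-to-output weights W2 as the
    minimum-norm least-squares solution via the pseudoinverse.
    Backward pass: update the input-to-hidden weights W1 by
    gradient descent on the mean squared error.
    X: (n_samples, n_in), T: (n_samples, n_out), W1: (n_in, n_hidden).
    """
    H = sigmoid(X @ W1)            # hidden-layer activations
    W2 = np.linalg.pinv(H) @ T     # minimum-norm least-squares output weights
    Y = H @ W2                     # network output
    E = Y - T                      # output error
    # Backpropagate through the linear output layer and sigmoid hidden layer.
    delta_H = (E @ W2.T) * H * (1.0 - H)
    W1 = W1 - lr * (X.T @ delta_H) / len(X)
    mse = float(np.mean(E ** 2))
    return W1, W2, mse

# Usage sketch: approximate y = sin(x) on [-pi, pi].
rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
X = np.hstack([x, np.ones_like(x)])    # append a bias input
T = np.sin(x)
W1 = rng.standard_normal((2, 20))      # 20 hidden units (assumed)
for _ in range(200):
    W1, W2, mse = bpwa_epoch(X, T, W1)
```

Because the output weights are re-solved exactly in every forward pass, only the hidden-layer weights need iterative tuning, which is the sense in which BPWA combines ELM-style least-squares fitting with BP-style gradient descent.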