In this paper, an effective batch training algorithm is developed for feed-forward networks such as the multilayer perceptron. First, the effects of input transforms are reviewed and explained using the concept of equivalent networks. Next, a non-singular diagonal transform matrix for the inputs is proposed; applying this transform is equivalent to altering the input gains of the network. Newton's method is used to solve for the input gains and for an optimal learning factor. Several examples show that the final algorithm is a reasonable compromise between first-order training methods and Levenberg-Marquardt.
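As a rough illustration of the idea only (not the paper's actual algorithm), the sketch below trains a one-hidden-layer MLP in batch mode, picks the learning factor with a one-dimensional Newton step along the steepest-descent direction, and applies scalar Newton updates to per-input gains, which is equivalent to a diagonal input transform. The network sizes, the toy data, and the finite-difference derivatives are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: three inputs with very different scales,
# the situation that input-gain adaptation targets (illustrative only).
X = rng.normal(size=(200, 3)) * np.array([1.0, 50.0, 0.02])
y = (np.sin(X[:, 0]) + 0.01 * X[:, 1] + 10.0 * X[:, 2]).reshape(-1, 1)

# Single-hidden-layer MLP; sizes are arbitrary, not from the paper.
params = [rng.normal(scale=0.1, size=(3, 8)), np.zeros(8),
          rng.normal(scale=0.1, size=(8, 1)), np.zeros(1)]
g = np.ones(3)  # diagonal input gains (the diagonal transform)

def mse(p, gains):
    W1, b1, W2, b2 = p
    h = np.tanh((X * gains) @ W1 + b1)
    return float(np.mean((h @ W2 + b2 - y) ** 2))

def grads(p, gains):
    W1, b1, W2, b2 = p
    Z = X * gains
    h = np.tanh(Z @ W1 + b1)
    e = 2.0 * (h @ W2 + b2 - y) / len(X)   # d(MSE)/d(output)
    dh = (e @ W2.T) * (1.0 - h ** 2)       # backprop through tanh
    return Z.T @ dh, dh.sum(0), h.T @ e, e.sum(0)

def newton_lf(p, gains, direction, eps=1e-4):
    # 1-D Newton step for the learning factor z along `direction`:
    # z* = -E'(0) / E''(0), derivatives by central differences.
    def E(z):
        return mse([a + z * d for a, d in zip(p, direction)], gains)
    d1 = (E(eps) - E(-eps)) / (2 * eps)
    d2 = (E(eps) - 2 * E(0.0) + E(-eps)) / eps ** 2
    return -d1 / d2 if d2 > 0 else 1e-2

e0 = mse(params, g)
for _ in range(50):
    d = [-a for a in grads(params, g)]     # steepest-descent direction
    z = newton_lf(params, g, d)            # optimal learning factor
    trial = [p + z * di for p, di in zip(params, d)]
    if mse(trial, g) < mse(params, g):     # accept only improving steps
        params = trial
    # Scalar Newton update of each input gain (diagonal Hessian
    # by finite differences), mimicking the input-gain idea.
    for k in range(3):
        def Ek(t, k=k):
            gg = g.copy(); gg[k] = t
            return mse(params, gg)
        t0, eps = g[k], 1e-4
        d1 = (Ek(t0 + eps) - Ek(t0 - eps)) / (2 * eps)
        d2 = (Ek(t0 + eps) - 2 * Ek(t0) + Ek(t0 - eps)) / eps ** 2
        if d2 > 0 and Ek(t0 - d1 / d2) < Ek(t0):
            g[k] = t0 - d1 / d2
e1 = mse(params, g)
print(e0, e1)
```

Because every weight step and every gain step is accepted only when it lowers the batch error, the loop is monotone; the Newton-chosen learning factor replaces a hand-tuned constant rate, which is the compromise between first-order descent and full Levenberg-Marquardt that the abstract describes.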