A new learning algorithm that trains neural networks with rational weight functions is proposed in this paper. Unlike traditional learning algorithms (such as BP, RBF, and so on), which produce constant weights, the new algorithm constructs rational weight functions via reciprocal differences on a simple two-layer network topology. The procedure for obtaining the rational-weight-function network from the sample interpolation points is given. Numerical simulations show that the rational weight functions can reveal useful information inherent in the data source, and that the new algorithm achieves high approximation accuracy, fast learning, and good generalization.
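The abstract does not spell out the construction, but the classical way to build a rational interpolant from sample points with reciprocal differences is Thiele's continued fraction. The sketch below is a generic illustration of that step, not the paper's actual algorithm; the function names (`reciprocal_differences`, `thiele_interpolant`) are hypothetical.

```python
def reciprocal_differences(x, f):
    """Build the table of reciprocal differences: rho[k][i] is the
    order-k reciprocal difference over nodes x[i..i+k]."""
    n = len(x)
    rho = [list(f)]  # order 0: the sample values themselves
    for k in range(1, n):
        prev = rho[k - 1]
        row = []
        for i in range(n - k):
            val = (x[i] - x[i + k]) / (prev[i] - prev[i + 1])
            if k >= 2:
                val += rho[k - 2][i + 1]  # standard correction term
            row.append(val)
        rho.append(row)
    return rho

def thiele_interpolant(x, f):
    """Return a callable rational interpolant through the points (x, f),
    evaluated as a Thiele continued fraction."""
    rho = reciprocal_differences(x, f)
    # Continued-fraction coefficients a_k = rho_k - rho_{k-2} (rho_{-1} = 0).
    a = [rho[0][0]]
    for k in range(1, len(x)):
        a.append(rho[k][0] - (rho[k - 2][0] if k >= 2 else 0.0))

    def r(t):
        # Evaluate the continued fraction bottom-up.
        val = a[-1]
        for k in range(len(a) - 2, -1, -1):
            val = a[k] + (t - x[k]) / val
        return val

    return r

# Example: three samples of the rational function 1/(1+x) are
# reproduced exactly between the nodes.
xs = [0.0, 1.0, 2.0]
fs = [1.0, 0.5, 1.0 / 3.0]
r = thiele_interpolant(xs, fs)
print(r(0.5))  # 1/(1+0.5) = 2/3
```

Note that reciprocal differences can blow up when the data come from a rational function of lower order than the node count (a difference in the table becomes zero); a practical implementation would guard against that degenerate case.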