Improved Learning Algorithms of SLFN for Approximating Periodic Function
ICIC '08 Proceedings of the 4th international conference on Intelligent Computing: Advanced Intelligent Computing Theories and Applications - with Aspects of Artificial Intelligence
In this paper, a new algorithm for function approximation is proposed that achieves better generalization performance and a faster convergence rate. The algorithm incorporates architectural constraints, derived from a priori information about the function approximation problem, into the Extreme Learning Machine (ELM). On one hand, in accordance with Taylor's theorem, the activation functions of the hidden neurons are polynomial functions. On the other hand, ELM is adopted to analytically determine the output weights of the single-hidden-layer feedforward neural network (SLFN). In theory, the new algorithm tends to provide the best generalization at an extremely fast learning speed. Finally, several experimental results are given to verify the efficiency and effectiveness of the proposed learning algorithm.
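The abstract does not spell out the architecture, but the core idea, polynomial hidden activations combined with ELM's analytic solution for the output weights, can be sketched as follows. This is a minimal illustration, not the paper's method: it assumes each hidden neuron i applies the monomial activation x^i (motivated by Taylor's theorem), so the hidden-layer output matrix is a Vandermonde matrix, and the output weights are obtained in one least-squares step rather than by iterative training. The function names `elm_poly_fit` and `elm_poly_predict` are hypothetical.

```python
import numpy as np

def elm_poly_fit(x, y, degree=7):
    """Fit an ELM-style SLFN whose hidden neuron i uses the
    monomial activation x**i, for i = 0..degree (assumed form).
    The output weights beta are determined analytically by
    least squares, as in ELM, with no iterative training."""
    # Hidden-layer output matrix: column i holds x**i (Vandermonde).
    H = np.vander(x, degree + 1, increasing=True)
    # Analytic output weights: minimum-norm least-squares solution.
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return beta

def elm_poly_predict(x, beta):
    """Evaluate the trained network: H(x) @ beta."""
    H = np.vander(x, len(beta), increasing=True)
    return H @ beta

# Example: approximate sin(pi * x) on [-1, 1].
x = np.linspace(-1.0, 1.0, 50)
y = np.sin(np.pi * x)
beta = elm_poly_fit(x, y, degree=7)
err = np.max(np.abs(elm_poly_predict(x, beta) - y))
```

Because the hidden layer is fixed by the polynomial constraint, the only trainable parameters are the output weights, and the single `lstsq` call replaces gradient-based training entirely, which is the source of the "extremely fast learning speed" claimed for ELM-based methods.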