Regularized neural networks: some convergence rate results
Neural Computation
Regularization theory provides a sound framework for solving supervised learning problems. However, regularization networks have one hidden unit per training sample, so their size grows with the size of the training data. In this work we study the relationship between network complexity, i.e. the number of hidden units, and approximation and generalization ability. We propose an incremental hybrid learning algorithm that produces smaller networks with performance comparable to that of the original regularization networks.
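The size issue the abstract refers to can be illustrated with a minimal NumPy sketch of a classical regularization network (this is a generic textbook construction, not the authors' incremental algorithm; the Gaussian kernel width and regularization parameter are illustrative assumptions):

```python
import numpy as np

def gaussian_kernel(X, Z, width=1.0):
    # Pairwise Gaussian RBF kernel between rows of X and rows of Z.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_regularization_network(X, y, lam=1e-4, width=1.0):
    # A regularization network places one kernel (hidden) unit on each
    # training sample; the output weights c solve (K + lam*N*I) c = y.
    N = X.shape[0]
    K = gaussian_kernel(X, X, width)
    return np.linalg.solve(K + lam * N * np.eye(N), y)

def predict(X_train, c, X_new, width=1.0):
    # Network output: weighted sum over all N kernel units.
    return gaussian_kernel(X_new, X_train, width) @ c

# Toy 1-D regression: the resulting network has exactly len(X) units.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(40)
c = fit_regularization_network(X, y)
print(len(c))  # one weight (and one hidden unit) per training sample
```

The weight vector `c` has the same length as the training set, which is exactly the complexity problem the proposed incremental hybrid algorithm aims to reduce.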