In a recent paper, Poggio and Girosi (1990) proposed a class of neural networks obtained from the theory of regularization. Regularized networks are capable of approximating arbitrarily well any continuous function on a compactum. In this paper we consider in detail the learning problem for the one-dimensional case. We show that in the case of output data observed with noise, regularized networks are capable of learning and approximating (on compacta) elements of certain classes of Sobolev spaces, known as reproducing kernel Hilbert spaces (RKHS), at a nonparametric rate that optimally exploits the smoothness properties of the unknown mapping. In particular, we show that the total squared error, given by the sum of the squared bias and the variance, will approach zero at a rate of n^(-2m/(2m+1)), where m denotes the order of differentiability of the true unknown function. On the other hand, if the unknown mapping is a continuous function but does not belong to an RKHS, then there still exists a unique regularized solution, but this is no longer guaranteed to converge in mean square to a well-defined limit. Further, even if such a solution converges, the total squared error is bounded away from zero for all n sufficiently large.
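The bias-variance behavior described above can be illustrated empirically with a small kernel ridge regression, a standard instance of a regularization network. This is a minimal sketch, not the paper's construction: the Gaussian kernel, bandwidth, regularization level, and target function below are illustrative choices made here, and only the qualitative point (error shrinking as the sample size n grows) is being demonstrated.

```python
import numpy as np

def kernel_ridge_fit(x, y, lam, width):
    """Solve (K + n*lam*I) alpha = y for a Gaussian kernel Gram matrix K."""
    K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * width ** 2))
    n = len(x)
    return np.linalg.solve(K + n * lam * np.eye(n), y)

def kernel_ridge_predict(x_train, alpha, x_new, width):
    """Evaluate the regularized solution at new points."""
    K = np.exp(-(x_new[:, None] - x_train[None, :]) ** 2 / (2 * width ** 2))
    return K @ alpha

rng = np.random.default_rng(0)
f = lambda x: np.sin(2 * np.pi * x)  # smooth (hence in-RKHS-like) target

errs = []
for n in (50, 200, 800):
    x = rng.uniform(0.0, 1.0, n)
    y = f(x) + 0.1 * rng.standard_normal(n)  # noisy output observations
    alpha = kernel_ridge_fit(x, y, lam=1e-3, width=0.1)
    xg = np.linspace(0.0, 1.0, 500)
    # total squared error against the true function on a fine grid
    errs.append(np.mean((kernel_ridge_predict(x, alpha, xg, 0.1) - f(xg)) ** 2))

print(errs)
```

With the regularization parameter held fixed in the n-scaled formulation, the squared bias stays roughly constant while the variance shrinks with n, so the measured error decreases across the three sample sizes; verifying the exact n^(-2m/(2m+1)) exponent would require a careful sweep of n and lam.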