We apply the method of complexity regularization to derive estimation bounds for nonlinear function estimation using a single-hidden-layer radial basis function network. Our approach differs from previous complexity-regularization neural-network function learning schemes in that we operate with random covering numbers and l1 metric entropy, which makes it possible to consider much broader families of activation functions, namely functions of bounded variation. Some constraints previously imposed on the network parameters are also eliminated in this way. The network is trained by means of complexity regularization involving empirical risk minimization. Bounds on the expected risk in terms of the sample size are obtained for a large class of loss functions, and rates of convergence to the optimal loss are also derived.
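The abstract gives no pseudocode; purely as an illustration, a minimal sketch of complexity-regularized empirical risk minimization over single-hidden-layer Gaussian RBF networks might look like the following. The penalty term c * sqrt(k * log(n) / n), the random choice of centers, and the fixed kernel width are placeholder assumptions for the sketch, not the covering-number-based penalty or the parameterization analyzed in the paper.

```python
import numpy as np

def rbf_design(X, centers, width):
    # Gaussian radial basis design matrix: Phi[i, j] = exp(-||x_i - c_j||^2 / (2 * width^2)).
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_rbf(X, y, k, width=1.0, rng=None):
    # Pick k centers at random from the data and solve the output weights by least squares.
    rng = rng or np.random.default_rng(0)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    Phi = rbf_design(X, centers, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return centers, w

def empirical_risk(X, y, centers, w, width=1.0):
    # Squared-error empirical risk of the fitted network (one of many admissible losses).
    pred = rbf_design(X, centers, width) @ w
    return np.mean((y - pred) ** 2)

def select_by_complexity_regularization(X, y, ks, width=1.0, c=1.0):
    # For each candidate network size k, minimize the empirical risk and add a
    # complexity penalty; return the candidate with the smallest penalized risk.
    # The penalty form below is a hypothetical stand-in for the paper's bound.
    n = len(X)
    best = None
    for k in ks:
        centers, w = fit_rbf(X, y, k, width)
        score = empirical_risk(X, y, centers, w, width) + c * np.sqrt(k * np.log(n) / n)
        if best is None or score < best[0]:
            best = (score, k, centers, w)
    return best

if __name__ == "__main__":
    # Toy usage: recover a noisy sine from 200 samples, choosing the network size automatically.
    rng = np.random.default_rng(1)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
    score, k, centers, w = select_by_complexity_regularization(X, y, ks=[2, 4, 8, 16, 32])
    print(f"selected k = {k}, penalized empirical risk = {score:.4f}")
```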