Learning from data with generalization capability is studied in the framework of minimization of regularized empirical error functionals over nested families of hypothesis sets of increasing model complexity. For Tikhonov regularization with kernel stabilizers, minimization is investigated over restricted hypothesis sets that, for a fixed integer n, contain only linear combinations of n-tuples of kernel functions. Upper bounds are derived on the rate of convergence of suboptimal solutions from such sets to the optimal solution achievable without restrictions on model complexity. The bounds are of the form 1/√n multiplied by a term that depends on the size of the sample of empirical data, the vector of output data, the Gram matrix of the kernel with respect to the input data, and the regularization parameter.
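As a hedged sketch of the setup (the notation below — the sample size m, the kernel K, the parameter γ, and the set span_n G_K — is introduced here for illustration and is not quoted from the paper): given a sample ((x_1, y_1), ..., (x_m, y_m)), a kernel K, and a regularization parameter γ > 0, the regularized empirical error functional and the restricted hypothesis sets of model complexity n can be written, under one common normalization, as

\[
\mathcal{E}_{\gamma}(f) \;=\; \frac{1}{m}\sum_{i=1}^{m}\bigl(f(x_i)-y_i\bigr)^{2} \;+\; \gamma\,\|f\|_{K}^{2},
\qquad
\operatorname{span}_n G_K \;=\; \Bigl\{\,\sum_{j=1}^{n} c_j\,K(\cdot,u_j)\;:\; c_j \in \mathbb{R},\; u_j \in X \Bigr\}.
\]

Under this normalization the representer theorem gives the unrestricted minimizer the form \(f^{o} = \sum_{i=1}^{m} c_i\,K(\cdot,x_i)\) with coefficients solving \((\mathcal{K} + \gamma m\,I)\,c = y\), where \(\mathcal{K}\) is the Gram matrix of K at the input data; the bounds described above compare the regularized error of suboptimal solutions from \(\operatorname{span}_n G_K\) with \(\mathcal{E}_{\gamma}(f^{o})\), with the gap decaying proportionally to \(1/\sqrt{n}\) up to a factor depending on m, y, \(\mathcal{K}\), and γ.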