Learning from data with generalization capability is studied in the framework of minimizing regularized empirical error functionals over nested families of hypothesis sets of increasing model complexity. For Tikhonov regularization with kernel stabilizers, minimization is investigated over restricted hypothesis sets that, for a fixed integer n, contain only linear combinations of n-tuples of kernel functions. Upper bounds are derived on the rate of convergence of suboptimal solutions from such sets to the optimal solution achievable without any restriction on model complexity. The bounds have the form 1/n multiplied by a term that depends on the sample size, the vector of output data, the Gram matrix of the kernel with respect to the input data, and the regularization parameter.
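The setting above can be illustrated with a minimal numerical sketch. It assumes a Gaussian kernel and squared loss (the paper covers general kernels); all variable names and the choice of centers are illustrative. The unrestricted minimizer of the Tikhonov functional (1/m)Σᵢ(f(xᵢ) − yᵢ)² + γ‖f‖²_K over the span of all m kernel functions is f = Σᵢ cᵢ K(·, xᵢ) with c = (K + γmI)⁻¹y, where K is the Gram matrix. The restricted problem keeps only n of the kernel functions, and the gap between the two regularized errors shrinks as n grows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Gaussian kernel Gram matrix (an assumed kernel choice for illustration)
def gram(X, Z, width=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

m = 60                                            # sample size
X = rng.uniform(-1, 1, size=(m, 1))               # input data
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(m)  # output data

gamma = 1e-2                                      # regularization parameter
K = gram(X, X)                                    # Gram matrix w.r.t. the inputs

# Unrestricted minimizer: f = sum_i c_i K(., x_i), with c = (K + gamma*m*I)^{-1} y
c_full = np.linalg.solve(K + gamma * m * np.eye(m), y)

def reg_error(coef, idx):
    """Regularized empirical error of f = sum_j coef_j K(., x_{idx[j]})."""
    emp = ((K[:, idx] @ coef - y) ** 2).mean()         # empirical squared error
    stab = coef @ K[np.ix_(idx, idx)] @ coef           # kernel-norm stabilizer
    return emp + gamma * stab

best = reg_error(c_full, np.arange(m))   # optimum over the whole kernel span

# Suboptimal solutions over n-tuples of kernel functions (here: the first n
# sample points as centers, a simple nested choice for illustration)
gaps = []
for n in (5, 10, 20, 40):
    idx = np.arange(n)
    Kn, Knn = K[:, idx], K[np.ix_(idx, idx)]
    # minimize (1/m)||Kn a - y||^2 + gamma a^T Knn a  =>  normal equations
    a = np.linalg.solve(Kn.T @ Kn / m + gamma * Knn, Kn.T @ y / m)
    gaps.append(reg_error(a, idx) - best)
    print(f"n={n:2d}  gap={gaps[-1]:.3e}")
```

Because the chosen n-tuples are nested, the gap is nonincreasing in n; the paper's bounds quantify such decay as 1/n times a data-dependent term.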