An equivalence between sparse approximation and support vector machines
Neural Computation
Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond
Machine Learning
Approximation of functions over redundant dictionaries using coherence
SODA '03 Proceedings of the Fourteenth Annual ACM-SIAM Symposium on Discrete Algorithms
Sparse Greedy Matrix Approximation for Machine Learning
ICML '00 Proceedings of the Seventeenth International Conference on Machine Learning
Approximation bounds for some sparse kernel regression algorithms
Neural Computation
A few notes on statistical learning theory
Advanced lectures on machine learning
The Journal of Machine Learning Research
Rademacher and Gaussian complexities: risk bounds and structural results
The Journal of Machine Learning Research
Kernel Methods for Pattern Analysis
Error Estimates for Approximate Optimization by the Extended Ritz Method
SIAM Journal on Optimization
Learning from Examples as an Inverse Problem
The Journal of Machine Learning Research
On the Nyström Method for Approximating a Gram Matrix for Improved Kernel-Based Learning
The Journal of Machine Learning Research
Algorithms for subset selection in linear regression
STOC '08 Proceedings of the Fortieth Annual ACM Symposium on Theory of Computing
Elastic-net regularization in learning theory
Journal of Complexity
Fixed-Point Continuation for $\ell_1$-Minimization: Methodology and Convergence
SIAM Journal on Optimization
Learning with generalization capability by kernel methods of bounded complexity
Journal of Complexity
Bounds on rates of variable-basis and neural-network approximation
IEEE Transactions on Information Theory
Comparison of worst case errors in linear and neural network approximation
IEEE Transactions on Information Theory
Sequential greedy approximation for certain convex optimization problems
IEEE Transactions on Information Theory
Greed is good: algorithmic results for sparse approximation
IEEE Transactions on Information Theory
On the exponential convergence of matching pursuits in quasi-incoherent dictionaries
IEEE Transactions on Information Theory
IEEE Transactions on Neural Networks
Weight-decay regularization in reproducing Kernel Hilbert spaces by variable-basis schemes
WSEAS Transactions on Mathematics
On spectral windows in supervised learning from data
Information Processing Letters
Learning with boundary conditions
Neural Computation
A theoretical framework for supervised learning from regions
Neurocomputing
Approximation and Estimation Bounds for Subsets of Reproducing Kernel Kreĭn Spaces
Neural Processing Letters
Various regularization techniques for supervised learning from data are investigated. Theoretical properties of the associated optimization problems are studied, and sparse suboptimal solutions are sought. Rates of approximate optimization are estimated for sequences of suboptimal solutions formed by linear combinations of n-tuples of computational units, and statistical learning bounds are derived. Reproducing kernel Hilbert spaces and their subsets are considered as hypothesis sets.
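The setting described in the abstract can be illustrated by a minimal sketch: Tikhonov-regularized least squares in an RKHS induced by a Gaussian kernel, and a greedy construction of a suboptimal solution supported on n kernel centers. The function names, the choice of kernel, and the greedy selection rule below are illustrative assumptions, not the algorithms of the paper.

```python
import numpy as np

def gaussian_kernel(X, Z, width=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between rows of X and Z.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def kernel_ridge(X, y, lam=1e-2, width=1.0):
    # Tikhonov regularization in the RKHS: solve (K + lam*m*I) c = y,
    # giving a dense solution using all m training points as centers.
    m = len(y)
    K = gaussian_kernel(X, X, width)
    return np.linalg.solve(K + lam * m * np.eye(m), y)

def greedy_sparse_fit(X, y, n, lam=1e-2, width=1.0):
    # Sparse suboptimal solution as a linear combination of n kernel
    # units: greedily add the center most correlated with the current
    # residual, then refit regularized least squares on the chosen set.
    m = len(y)
    K = gaussian_kernel(X, X, width)
    chosen, residual = [], y.copy()
    for _ in range(n):
        scores = np.abs(K.T @ residual)
        scores[chosen] = -np.inf          # never pick a center twice
        chosen.append(int(np.argmax(scores)))
        Ks = K[:, chosen]
        c = np.linalg.solve(Ks.T @ Ks + lam * m * np.eye(len(chosen)),
                            Ks.T @ y)
        residual = y - Ks @ c
    return chosen, c

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(40, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(40)
centers, coef = greedy_sparse_fit(X, y, n=8)
```

As n grows, the residual of such greedy n-term approximants shrinks at rates of the kind bounded in the references on greedy and variable-basis approximation.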