An equivalence between sparse approximation and support vector machines
Neural Computation
In the first part of this paper we show a similarity between the principle of Structural Risk Minimization (SRM) (Vapnik, 1982) and the idea of Sparse Approximation, as defined by Chen, Donoho, and Saunders (1995) and Olshausen and Field (1996). We then focus on two specific (approximate) implementations of SRM and Sparse Approximation that have been used to solve the problem of function approximation. For SRM we consider the Support Vector Machine technique proposed by V. Vapnik and his team at AT&T Bell Labs, and for Sparse Approximation we consider a modification of the Basis Pursuit De-Noising algorithm proposed by Chen, Donoho, and Saunders (1995). We show that, under certain conditions, these two techniques are equivalent: they give the same solution and require the solution of the same quadratic programming problem.
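The shared quadratic program can be illustrated numerically. The sketch below is not the authors' implementation; it solves a bias-free support vector regression dual in which the epsilon-insensitive tube contributes exactly the l1 penalty term of Basis Pursuit De-Noising, so the same objective can be read either way. The toy data (a sinc target), the Gaussian kernel, and all parameter values (`eps`, `C`, `sigma`) are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Toy 1-D regression problem (illustrative choice, not from the paper)
X = np.linspace(-1.0, 1.0, 20)
y = np.sinc(3.0 * X)

def kernel(a, b, sigma=0.5):
    """Gaussian kernel; its columns play the role of the BPDN dictionary."""
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2.0 * sigma ** 2))

K = kernel(X, X)
n = len(X)
eps, C = 0.05, 10.0  # tube width and box constraint (assumed values)

# Dual variables split as beta = a_plus - a_minus with a_plus, a_minus >= 0,
# so the non-smooth l1 term eps*||beta||_1 becomes the smooth eps*sum(a_+ + a_-).
# Objective: 0.5 * beta^T K beta - y^T beta + eps * ||beta||_1 -- the same
# quadratic program whether read as the SVM dual or as (modified) BPDN.
def objective(z):
    a_plus, a_minus = z[:n], z[n:]
    beta = a_plus - a_minus
    return 0.5 * beta @ K @ beta - y @ beta + eps * (a_plus + a_minus).sum()

res = minimize(objective, np.zeros(2 * n),
               bounds=[(0.0, C)] * (2 * n), method="L-BFGS-B")
beta = res.x[:n] - res.x[n:]
f = K @ beta  # reconstructed function at the data points

n_sv = int((np.abs(beta) > 1e-3).sum())
print("support vectors:", n_sv, "of", n)
```

Points lying strictly inside the epsilon-tube receive exactly zero coefficients, so the expansion is sparse: only a few "support vectors" (equivalently, a few dictionary atoms) survive, which is the phenomenon the equivalence explains.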