Based on the study of a generalized form of the representer theorem and a specific trick for constructing kernels, a generic learning model is proposed and applied to support vector machines. The resulting algorithm naturally generalizes the bias term of the SVM. Unlike the standard SVM solution, which consists of a linear expansion of kernel functions plus a bias term, the generalized algorithm also maps a set of predefined features into the Hilbert space and gives them special treatment by leaving the corresponding part of the space unregularized when seeking a solution. Empirical evaluations on classification tasks confirm the effectiveness of this generalization.
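The idea of leaving a predefined feature subspace unregularized can be illustrated with a minimal sketch. This is not the paper's exact algorithm: it uses a squared loss (semiparametric kernel ridge regression) rather than the SVM hinge loss, and the feature map `phi(x) = [1, x]` is an illustrative choice that generalizes the usual constant bias. The semiparametric representer form `f(x) = sum_i a_i k(x_i, x) + phi(x) @ b`, with the `b` part unpenalized, leads to the saddle-point system `[[K + lam*I, Phi], [Phi.T, 0]] [a; b] = [y; 0]`, whose second block row enforces the orthogonality condition `Phi.T @ a = 0`.

```python
import numpy as np

def rbf_kernel(x, z, gamma=1.0):
    # Gaussian RBF kernel between 1-D sample vectors x and z.
    return np.exp(-gamma * (x[:, None] - z[None, :]) ** 2)

def fit_semiparametric(x, y, lam=1e-2, gamma=1.0):
    """Kernel expansion plus an unregularized span of predefined features.

    Solves  min_{a,b} ||K a + Phi b - y||^2 + lam * a^T K a,
    where the coefficients b of Phi (here [1, x], a generalized bias)
    carry no penalty.  Stationarity gives the linear system
        [[K + lam*I, Phi], [Phi.T, 0]] [a; b] = [y; 0].
    """
    n = len(x)
    K = rbf_kernel(x, x, gamma)
    Phi = np.column_stack([np.ones(n), x])       # unregularized features
    m = Phi.shape[1]
    A = np.block([[K + lam * np.eye(n), Phi],
                  [Phi.T, np.zeros((m, m))]])
    sol = np.linalg.solve(A, np.concatenate([y, np.zeros(m)]))
    a, b = sol[:n], sol[n:]

    def predict(xq):
        Phi_q = np.column_stack([np.ones(len(xq)), xq])
        return rbf_kernel(xq, x, gamma) @ a + Phi_q @ b

    return a, b, predict

rng = np.random.default_rng(0)
x = rng.uniform(-2.0, 2.0, 40)
y = 0.5 * x + np.sin(3.0 * x) + 0.05 * rng.standard_normal(40)
a, b, predict = fit_semiparametric(x, y)

# The kernel coefficients are orthogonal to the unregularized features,
# mirroring the sum-to-zero constraint on dual coefficients in SVMs with bias.
print(np.abs(np.column_stack([np.ones(len(x)), x]).T @ a).max())
```

Setting `Phi` to a single column of ones recovers the ordinary SVM-style constant bias as a special case; richer feature sets (e.g. polynomial terms) extend the unregularized subspace in the same way.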