Kernel methods such as support vector machines (SVMs) and relevance vector machines (RVMs) are widely used as data-mining tools. The Bayesian learning approach exploited in the RVM leads to automatic relevance determination (ARD), which provides sparsity in the resulting decision rules and sets all regularization coefficients without resorting to computationally expensive cross-validation. In this paper we suggest an extension of the Bayesian maximal evidence framework that allows selecting the kernel function most appropriate for the task at hand. We propose a local evidence estimation method that establishes a compromise between the accuracy and the stability of the algorithm. We first briefly describe the maximal evidence principle, then present our model of kernel algorithms together with our approximations for evidence estimation, and finally report the results of an experimental evaluation. Both classification and regression settings are considered.
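For orientation, the evidence (marginal likelihood) at the heart of this framework has a closed form in the standard sparse Bayesian regression model. The following is the conventional RVM formulation in the sense of Tipping's sparse Bayesian learning, stated here as background; the notation $\mathbf{t}, \Phi, \mathbf{w}, \boldsymbol{\alpha}, \beta$ is the usual one and is not taken from this paper:

```latex
% Evidence of the targets t, with the weights w of the kernel expansion
% y(x) = \sum_i w_i k(x, x_i) integrated out under the ARD prior:
\[
p(\mathbf{t}\mid\boldsymbol{\alpha},\beta)
  = \int \mathcal{N}(\mathbf{t}\mid\Phi\mathbf{w},\,\beta^{-1}\mathbf{I})
    \prod_i \mathcal{N}(w_i\mid 0,\,\alpha_i^{-1})\,\mathrm{d}\mathbf{w}
  = \mathcal{N}\!\bigl(\mathbf{t}\mid\mathbf{0},\,
    \beta^{-1}\mathbf{I} + \Phi\mathbf{A}^{-1}\Phi^{\top}\bigr),
\qquad \mathbf{A} = \operatorname{diag}(\alpha_1,\dots,\alpha_N).
\]
```

Maximizing this quantity over the individual precisions $\alpha_i$ drives many of them to infinity, pruning the corresponding weights; this is what yields sparsity and fixes the regularization coefficients without cross-validation. The same quantity can, in principle, be compared across kernels. As a rough illustration of that general idea (a minimal sketch only, with an isotropic prior $\mathbf{A} = \alpha\mathbf{I}$ and fixed $\alpha, \beta$ chosen for the toy data; it is not the local evidence estimation method proposed in the paper), an RBF kernel width can be selected by direct evidence maximization:

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.stats import multivariate_normal

def rbf_kernel(X, Y, gamma):
    """RBF kernel matrix, k(x, y) = exp(-gamma * ||x - y||^2)."""
    return np.exp(-gamma * cdist(X, Y, "sqeuclidean"))

def log_evidence(X, t, gamma, alpha=1.0, beta=25.0):
    """Log evidence log N(t | 0, beta^{-1} I + Phi A^{-1} Phi^T) with the
    kernel design matrix Phi = K and the isotropic prior A = alpha * I."""
    K = rbf_kernel(X, X, gamma)
    C = np.eye(len(t)) / beta + (K @ K.T) / alpha
    return multivariate_normal.logpdf(t, mean=np.zeros(len(t)), cov=C)

# Toy regression data: a noisy sine wave.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(60, 1))
t = np.sin(X).ravel() + 0.2 * rng.standard_normal(60)

# Choose the kernel width that maximizes the evidence -- no cross-validation.
gammas = np.logspace(-2, 2, 30)
best = max(gammas, key=lambda g: log_evidence(X, t, g))
print(f"gamma selected by evidence maximization: {best:.3f}")
```

In this sketch the hyperparameters $\alpha$ and $\beta$ are held fixed so that only the kernel is compared; in the full framework they too are set by evidence maximization.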