AI'06 Proceedings of the 19th Australian joint conference on Artificial Intelligence: advances in Artificial Intelligence
We study the class of hypotheses composed of linear functionals superimposed with smooth feature maps. We show that for a "typical" smooth feature map, pointwise convergence of hypotheses implies convergence of standard performance metrics, such as the error rate or the area under the ROC curve, with probability 1 over the selection of the test sample from a (Lebesgue-measurable) probability density. The proofs use transversality theory. The crux is to show that for every "typical", sufficiently smooth feature map into a finite-dimensional vector space, the preimage of every affine hyperplane has Lebesgue measure 0. The results extend to every real analytic, in particular polynomial, feature map, provided its domain is connected and the limit hypothesis is non-constant. In the process we give an elementary proof of the fundamental lemma that the zero set of a real analytic function on a connected domain either fills the whole space or has Lebesgue measure 0.
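The mechanism behind the main claim can be illustrated numerically: if hypotheses h_n converge pointwise to h, and the test sample is drawn from a continuous density (so the decision boundary {h = 0} has probability 0), then the error rate of sign(h_n) against the labels induced by h converges to 0. The sketch below is a hypothetical instance, not the paper's construction: the feature map `phi`, the weight vector `w`, and the perturbation `1/n` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical smooth feature map phi: R^2 -> R^3 (chosen for illustration).
def phi(x):
    return np.stack([x[:, 0], x[:, 1], x[:, 0] * x[:, 1]], axis=1)

# Arbitrary limit linear functional; h(x) = <w, phi(x)> is the limit hypothesis.
w = np.array([1.0, -0.5, 0.25])

def h(x):
    return phi(x) @ w

# A sequence converging pointwise to h: h_n = h + 1/n.
def h_n(x, n):
    return h(x) + 1.0 / n

# Test sample from a continuous (Gaussian) density: the boundary {h = 0}
# is a measure-zero set, so it is hit with probability 0.
X = rng.standard_normal((100_000, 2))
y = np.sign(h(X))  # labels induced by the limit hypothesis

def error_rate(n):
    """Fraction of test points where sign(h_n) disagrees with sign(h)."""
    return np.mean(np.sign(h_n(X, n)) != y)

for n in (1, 10, 100, 1000):
    print(n, error_rate(n))
```

As n grows, the disagreement region shrinks to a thin slab around the boundary {h = 0}, which carries vanishing probability under the sampling density, so the printed error rates decrease toward 0. If the density instead put positive mass on the boundary (e.g. a discrete distribution concentrated there), this convergence could fail, which is exactly what the "typicality" and measure-zero preimage conditions rule out.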