The Support Vector Machine (SVM) is a powerful classification technique based on the principle of structural risk minimization. The central idea is to separate the classes with a surface that maximizes the margin between them, and the key property that makes this work in non-linear settings is the kernel. However, the appropriate kernel function for a given problem depends on the specific dataset, and there is no generally accepted method for choosing one. In this paper, the choice of kernel function is studied empirically and optimal results are achieved. The performance of the SVM is illustrated by extensive experimental results, which indicate that with a suitable kernel and suitable parameters, a better classification rate, error rate, number of support vectors, and support vector percentage can be obtained. The experimental results on three datasets show that picking a kernel at random, although it is often the default choice, is not always the best way to achieve high generalization.
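The kind of empirical kernel comparison described above can be sketched as follows. This is not the paper's code: it assumes scikit-learn and uses the Iris dataset purely as a stand-in for the paper's three datasets, comparing cross-validated accuracy and support vector counts across the common kernel choices.

```python
# Hedged sketch of an empirical kernel comparison for SVM classification.
# Assumes scikit-learn; the Iris dataset here is an illustrative stand-in,
# not one of the datasets used in the paper.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

results = {}
for kernel in ("linear", "poly", "rbf", "sigmoid"):
    clf = SVC(kernel=kernel, gamma="scale")
    # Mean 5-fold cross-validated classification rate for this kernel.
    acc = cross_val_score(clf, X, y, cv=5).mean()
    # Fit on the full data to count the resulting support vectors.
    clf.fit(X, y)
    n_sv = clf.support_vectors_.shape[0]
    results[kernel] = (acc, n_sv)
    print(f"{kernel:8s} accuracy={acc:.3f} support vectors={n_sv} "
          f"({100.0 * n_sv / len(X):.1f}%)")
```

On a given dataset the ranking of kernels (and the fraction of training points retained as support vectors) can differ substantially, which is the paper's point: the default kernel is not always the one that generalizes best.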