On the importance of parameter tuning in text categorization
PSI'06 Proceedings of the 6th International Andrei Ershov Memorial Conference on Perspectives of Systems Informatics
Support Vector Machines (SVMs) are widely considered the best algorithm for text classification because they rest on a well-founded theory, Structural Risk Minimization (SRM): in the separable case the SVM provides the best result possible for a given set of separation functions, and therefore requires no tuning. In this paper we scrutinize these suppositions and encounter some paradoxes. A large-scale experiment shows that, even in the separable case, SVM's extension to non-separable data may give a better result by minimizing the confidence interval of the risk. Using this extension, however, requires tuning the complexity constant. Moreover, using SVM to optimize precision and recall through the F measure requires tuning the threshold found by SVM, yet the tuned classifier does not generalize well. Finally, we give a more precise definition of the notion of training errors.
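The threshold tuning mentioned in the abstract can be illustrated with a small sketch (not the paper's code; the scores and labels below are made up for illustration). An SVM classifies by the sign of its decision function, i.e. with a default threshold of 0; picking instead the validation-set threshold that maximizes F1 typically improves F1 there, though, as the abstract warns, the tuned threshold may not generalize:

```python
def f1_at_threshold(scores, labels, threshold):
    """F1 of the rule 'positive iff decision score > threshold'."""
    tp = sum(1 for s, y in zip(scores, labels) if s > threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s > threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s <= threshold and y == 1)
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def tune_threshold(scores, labels):
    """Pick the threshold (a midpoint between consecutive distinct
    scores, or one below the minimum) that maximizes F1."""
    cuts = sorted(set(scores))
    candidates = [cuts[0] - 1.0] + [(a + b) / 2 for a, b in zip(cuts, cuts[1:])]
    return max(candidates, key=lambda t: f1_at_threshold(scores, labels, t))

# Hypothetical SVM decision scores f(x) and gold labels on a validation set:
scores = [-2.1, -1.3, -0.4, -0.1, 0.2, 0.9, 1.5]
labels = [0, 0, 1, 1, 0, 1, 1]

default_f1 = f1_at_threshold(scores, labels, 0.0)   # SVM's built-in threshold
best_t = tune_threshold(scores, labels)
tuned_f1 = f1_at_threshold(scores, labels, best_t)  # tuned for F1
```

On this toy data the tuned threshold lowers the decision boundary below 0, trading one false positive for two recovered positives and raising F1 from 4/7 to 8/9 on the tuning set itself.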