An Introduction to Support Vector Machines and Other Kernel-based Learning Methods
Pattern Classification (2nd Edition)
In Defense of One-Vs-All Classification
The Journal of Machine Learning Research
Authorship verification as a one-class classification problem
ICML '04 Proceedings of the twenty-first international conference on Machine learning
Linguistic correlates of style: authorship classification with deep linguistic analysis features
COLING '04 Proceedings of the 20th international conference on Computational Linguistics
EMNLP '06 Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing
Gene extraction for cancer diagnosis by support vector machines: an improvement
Artificial Intelligence in Medicine
Classification of Anti-learnable Biological and Synthetic Data
PKDD 2007 Proceedings of the 11th European conference on Principles and Practice of Knowledge Discovery in Databases
A Hilbert Space Embedding for Distributions
ALT '07 Proceedings of the 18th international conference on Algorithmic Learning Theory
Continuity of Performance Metrics for Thin Feature Maps
ALT '07 Proceedings of the 18th international conference on Algorithmic Learning Theory
Microarray Design Using the Hilbert-Schmidt Independence Criterion
PRIB '08 Proceedings of the Third IAPR International Conference on Pattern Recognition in Bioinformatics
A General Framework for Analyzing Data from Two Short Time-Series Microarray Experiments
IEEE/ACM Transactions on Computational Biology and Bioinformatics (TCBB)
Feature selection via dependence maximization
The Journal of Machine Learning Research
Support vector machines for anti-pattern detection
Proceedings of the 27th IEEE/ACM International Conference on Automated Software Engineering
The SVM-based Recursive Feature Elimination (RFE-SVM) algorithm is a popular technique for feature selection, used in natural language processing and bioinformatics. It was recently demonstrated that a small regularisation constant C can considerably improve the performance of RFE-SVM on microarray datasets. In this paper we show that further improvements are possible if the explicitly computable limit C → 0 is used. We prove that in this limit most forms of SVM and ridge regression classifiers, scaled by the factor $\frac{1}{C}$, converge to a centroid classifier. As this classifier can be used directly for feature ranking, in the limit we can avoid the computationally demanding recursion and convex optimisation of RFE-SVM. Comparisons on two text-based author verification tasks and on three genomic microarray classification tasks indicate that this straightforward method surprisingly obtains comparable (at times superior) performance and is about an order of magnitude faster.
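The key computational point of the abstract is that in the C → 0 limit the (scaled) SVM weight vector reduces to the difference of class centroids, so features can be ranked in a single pass over the data instead of by RFE's repeated retraining. A minimal sketch of such centroid-based feature ranking is shown below; the function name and toy data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def centroid_feature_ranking(X, y):
    """Rank features by the centroid-difference weight vector.

    Illustrative sketch: in the C -> 0 limit described in the abstract,
    the scaled SVM weight vector converges to the difference between
    the two class centroids, so |w_j| can serve directly as a feature
    score, with no recursion or convex optimisation required.
    """
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    # weight vector = centroid of positive class minus centroid of negative class
    w = X[y == 1].mean(axis=0) - X[y == -1].mean(axis=0)
    # features sorted from most to least discriminative
    ranking = np.argsort(-np.abs(w))
    return w, ranking

# toy data: feature 0 separates the classes, feature 1 is noise
X = [[2.0, 0.1], [1.8, -0.2], [-2.1, 0.0], [-1.9, 0.1]]
y = [1, 1, -1, -1]
w, ranking = centroid_feature_ranking(np.array(X), np.array(y))
```

On this toy data the centroid difference along feature 0 dominates, so feature 0 is ranked first; a full RFE-SVM run on the same data would require solving an SVM at every elimination step to reach the same ordering.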