Identifying a good feature subset that contributes most to the performance of Lp-norm Support Vector Machines (Lp-SVMs, with p = 1 or p = 2) is an important task. We observe that the Lp-SVMs do not comprehensively account for irrelevant and redundant features, because they treat all n features of the full set as important for training while skipping the other 2^n − 1 possible feature subsets. In previous work, we studied the L1-norm SVM and applied it to the feature selection problem. In this paper, we extend that research to the L2-norm SVM and propose to generalize the Lp-SVMs into one general Lp-norm Support Vector Machine (GLp-SVM) that takes into account all 2^n possible feature subsets. We represent the GLp-SVM as a mixed 0-1 nonlinear programming problem (M01NLP). We prove that solving the proposed M01NLP optimization problem results in a smaller error penalty and enlarges the margin between the two support vector hyperplanes, thus potentially giving SVMs a better generalization capability than the traditional Lp-SVMs. Moreover, under the new formulation the sparsity of the GLp-SVM can easily be controlled by adding a linear constraint to the M01NLP problem. To reduce the computational complexity of solving the M01NLP problem directly, we propose to transform it equivalently into a mixed 0-1 linear programming (M01LP) problem if p = 1, or into a mixed 0-1 quadratic programming (M01QP) problem if p = 2. The M01LP and M01QP problems are then solved using the branch-and-bound algorithm. Experimental results on the UCI, LIBSVM, UNM and MIT Lincoln Lab datasets show that the proposed GLp-SVM outperforms the traditional Lp-SVMs, improving classification accuracy by more than 13.49%.
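By way of illustration, the combinatorial search space the abstract refers to can be sketched as a scan over all 2^n − 1 nonempty feature subsets, each scored by a simple, hypothetical criterion (nearest-centroid training error plus a sparsity penalty). This is only a toy to make the 2^n subset space concrete; it is not the paper's GLp-SVM, its M01LP/M01QP formulation, or its branch-and-bound procedure, and the scoring function here is an assumption for demonstration.

```python
# Toy illustration of searching all 2^n - 1 nonempty feature subsets.
# The subset score (nearest-centroid training error + sparsity penalty)
# is a hypothetical stand-in, NOT the paper's GLp-SVM objective.
from itertools import combinations

def subset_cost(X, y, subset, lam=0.1):
    """Score a feature subset: training error of a nearest-centroid
    classifier restricted to `subset`, plus lam * |subset|."""
    if not subset:
        return 1.0
    def proj(x):
        return [x[i] for i in subset]
    classes = sorted(set(y))
    centroids = {}
    for c in classes:
        pts = [proj(x) for x, t in zip(X, y) if t == c]
        centroids[c] = [sum(col) / len(pts) for col in zip(*pts)]
    errors = 0
    for x, t in zip(X, y):
        p = proj(x)
        pred = min(classes, key=lambda c: sum(
            (a - b) ** 2 for a, b in zip(p, centroids[c])))
        errors += (pred != t)
    return errors / len(y) + lam * len(subset)

def best_subset(X, y, n):
    """Exhaustively evaluate every nonempty subset of the n features."""
    best, best_cost = (), float('inf')
    for k in range(1, n + 1):
        for s in combinations(range(n), k):
            c = subset_cost(X, y, s)
            if c < best_cost:
                best, best_cost = s, c
    return best, best_cost
```

Exhaustive enumeration like this is only feasible for small n; the point of the paper's M01LP/M01QP reformulation is precisely to avoid evaluating the 2^n subsets one by one.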