The nature of statistical learning theory
Machine Learning
Making large-scale support vector machine learning practical
Advances in kernel methods
Provably fast training algorithms for support vector machines
ICDM '01 Proceedings of the 2001 IEEE International Conference on Data Mining
Shrinkage estimator generalizations of Proximal Support Vector Machines
Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining
Classifying large data sets using SVMs with hierarchical clusters
Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining
LIBSVM: A library for support vector machines
ACM Transactions on Intelligent Systems and Technology (TIST)
The support vector machine (SVM) is a well-known method for pattern recognition and machine learning. However, training an SVM is very costly in both time and memory when the data set is large. Fortunately, the SVM decision function is fully determined by a small subset of the training data, called the support vectors. Removing training samples that do not become support vectors therefore has no effect on the resulting decision function. In this paper, an effective hybrid method is proposed to remove from the training set the data that is irrelevant to the final decision function; the number of vectors used for SVM training thus becomes small, and the training time can be greatly reduced. Experimental results show that a significant amount of training data can be discarded by our method without compromising the generalization capability of the SVM.
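The idea described above can be illustrated with a minimal sketch: cluster each class, train a rough SVM on the cluster centroids, and keep only the samples that lie close to that approximate decision boundary before training the final model. This is not the paper's exact hierarchical-clustering algorithm; the function name, cluster counts, and keep fraction below are illustrative choices.

```python
# Hedged sketch of training-set reduction for SVMs: approximate the decision
# boundary with a centroid-trained SVM, then discard samples far from it.
# All parameter values here are assumptions, not the paper's settings.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.svm import SVC

def reduce_training_set(X, y, n_clusters=20, keep_frac=0.3, seed=0):
    """Keep only the samples nearest an approximate decision boundary."""
    centroids, labels = [], []
    for cls in np.unique(y):
        # Summarize each class by cluster centers (a flat stand-in for the
        # paper's hierarchical clusters).
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
        km.fit(X[y == cls])
        centroids.append(km.cluster_centers_)
        labels.extend([cls] * n_clusters)
    rough = SVC(kernel="linear").fit(np.vstack(centroids), labels)
    # Small |decision_function| means the sample is near the boundary and
    # likely relevant to the support vectors of the full problem.
    margin = np.abs(rough.decision_function(X))
    keep = np.argsort(margin)[: int(len(X) * keep_frac)]
    return X[keep], y[keep]

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
Xr, yr = reduce_training_set(X, y)
full = SVC(kernel="linear").fit(X, y)
reduced = SVC(kernel="linear").fit(Xr, yr)
print(len(Xr), full.score(X, y), reduced.score(X, y))
```

With these toy settings the reduced set holds 30% of the samples, yet the SVM trained on it typically scores close to the one trained on the full data, which is the effect the abstract reports.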