The nature of statistical learning theory.
Fast training of support vector machines using sequential minimal optimization. In: Advances in Kernel Methods.
Towards scalable support vector machines using squashing. In: Proceedings of the Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
Proximal support vector machine classifiers. In: Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
Clustering Algorithms.
Less is More: Active Learning with Support Vector Machines. In: ICML '00: Proceedings of the Seventeenth International Conference on Machine Learning.
Lagrangian support vector machines. The Journal of Machine Learning Research.
Support vector machine active learning with applications to text classification. The Journal of Machine Learning Research.
Classifying large data sets using SVMs with hierarchical clusters. In: Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
Learning concepts from large scale imbalanced data sets using support cluster machines. In: MULTIMEDIA '06: Proceedings of the 14th Annual ACM International Conference on Multimedia.
Building Support Vector Machines with Reduced Classifier Complexity. The Journal of Machine Learning Research.
Proceedings of the 24th International Conference on Machine Learning.
Provably Fast Training Algorithms for Support Vector Machines. Theory of Computing Systems.
Fast pattern selection for support vector classifiers. In: PAKDD '03: Proceedings of the 7th Pacific-Asia Conference on Advances in Knowledge Discovery and Data Mining.
Condensed vector machines: learning fast machine for large data. IEEE Transactions on Neural Networks.
LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST).
Generalized Core Vector Machines. IEEE Transactions on Neural Networks.
One of the main drawbacks of Support Vector Machines (SVMs) is their high computational cost on large data sets. We propose using the Leader algorithm as a preprocessing step for SVMs on large data sets, so that the obtained leaders serve as the training set for the SVM. The result is an algorithm in which the Leader algorithm constructs a sample of the data set whose granularity and computational cost are controlled by a threshold parameter. Despite its apparent simplicity, the proposed model obtains accuracy similar to standard LIBSVM with fewer support vectors and shorter execution times.
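The pipeline described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: it uses a one-pass Leader algorithm with Euclidean distance, carries over the label of the point that founded each cluster, and trains scikit-learn's SVC (an SVM built on LIBSVM) on the leaders in place of LIBSVM directly; the paper's exact distance metric, labeling rule, and any per-class handling may differ, and the synthetic data is purely for demonstration.

```python
import numpy as np
from sklearn.svm import SVC

def leader_clustering(X, y, threshold):
    """One-pass Leader algorithm: a point joins the first cluster whose
    leader lies within `threshold` (Euclidean distance); otherwise the
    point becomes a new leader. Returns the leaders and their labels.
    Note: a leader may absorb points of another class near a boundary;
    how the paper handles mixed-class clusters is not specified here."""
    leaders = [X[0]]
    labels = [y[0]]
    for xi, yi in zip(X[1:], y[1:]):
        dists = np.linalg.norm(np.asarray(leaders) - xi, axis=1)
        if dists.min() > threshold:  # no leader close enough: new cluster
            leaders.append(xi)
            labels.append(yi)
    return np.asarray(leaders), np.asarray(labels)

# Illustrative synthetic data: two well-separated Gaussian blobs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.5, (200, 2)),
               rng.normal(5.0, 0.5, (200, 2))])
y = np.repeat([0, 1], 200)

# A larger threshold yields a coarser (and cheaper) sample of the data.
L, yl = leader_clustering(X, y, threshold=1.0)

# Train the SVM on the leaders only, instead of the full data set.
clf = SVC(kernel="rbf").fit(L, yl)
```

The single pass over the data makes the preprocessing cost linear in the number of points, and the threshold directly trades sample size (hence SVM training time) against the fidelity of the reduced training set.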