A training algorithm for optimal margin classifiers. COLT '92: Proceedings of the Fifth Annual Workshop on Computational Learning Theory.
A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery.
SVM-KM: speeding SVMs learning with a priori cluster selection and k-means. SBRN '00: Proceedings of the VI Brazilian Symposium on Neural Networks.
Sample selection via clustering to construct support vector-like classifiers. IEEE Transactions on Neural Networks.
Reducing examples to accelerate support vector regression. Pattern Recognition Letters.
Response modeling with support vector machines. Expert Systems with Applications.
Fast pattern selection for support vector classifiers. PAKDD '03: Proceedings of the 7th Pacific-Asia Conference on Advances in Knowledge Discovery and Data Mining.
ϵ-Tube based pattern selection for support vector machines. PAKDD '06: Proceedings of the 10th Pacific-Asia Conference on Advances in Knowledge Discovery and Data Mining.
Benchmarking local classification methods. Computational Statistics.
SVMs tend to take a very long time to train on large data sets. If "redundant" patterns are identified and deleted in a pre-processing step, training time can be reduced significantly. We propose a k-nearest-neighbors (k-NN) based pattern selection method that selects the patterns that lie near the decision boundary and are correctly labeled. Simulations on synthetic data sets showed promising results: (1) by converting a non-separable problem into a separable one, the search for an optimal error-tolerance parameter becomes unnecessary; (2) SVM training time decreased by two orders of magnitude with no loss of accuracy; (3) the number of redundant support vectors was substantially reduced.
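The selection criterion described above can be sketched as follows. This is a minimal, illustrative interpretation, not the authors' exact algorithm: `knn_select` and its two heuristics (a pattern is "near the boundary" if its k-neighborhood contains more than one class, and "correctly labeled" if its own label matches the local majority) are assumptions made for this example.

```python
import math
from collections import Counter

def knn_select(points, labels, k=5):
    """Hypothetical k-NN pattern selection sketch: keep patterns whose
    neighborhood is class-mixed (near the boundary) and whose own label
    agrees with the local majority (likely correctly labeled)."""
    selected = []
    for i, (p, y) in enumerate(zip(points, labels)):
        # indices of the k nearest neighbors of p, excluding p itself
        nn = sorted((j for j in range(len(points)) if j != i),
                    key=lambda j: math.dist(p, points[j]))[:k]
        neigh_labels = [labels[j] for j in nn]
        counts = Counter(neigh_labels)
        mixed = len(counts) > 1                    # boundary proximity
        majority = counts.most_common(1)[0][0]     # local majority label
        if mixed and y == majority:
            selected.append(i)
    return selected
```

On a toy 1-D set with class 0 at x = 0, 1, 2 and class 1 at x = 3, 4, 5, only patterns adjacent to the class boundary have mixed neighborhoods; interior patterns are treated as redundant and dropped before SVM training.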