In this brief, we describe the FCNN-SVM classifier, which combines the support vector machine (SVM) approach with the fast nearest neighbor condensation rule (FCNN) in order to make SVMs practical on large data collections. As the main contribution, we show experimentally that, on very large and high-dimensional data sets, the FCNN-SVM is one to two orders of magnitude faster than the standard SVM, and that its number of support vectors (SVs) is more than halved. The FCNN-SVM thus achieves a drastic reduction of both training and testing time, at the expense of a small loss of accuracy. We propose the FCNN-SVM as a viable alternative to the standard SVM in applications where a fast response time is a fundamental requirement.
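The idea of the two-stage pipeline can be sketched in a few lines: condense the training set with a nearest neighbor condensation pass, then train the SVM only on the condensed subset. The sketch below uses scikit-learn (an assumption; the brief does not prescribe a library), and the condensation step is a simplified Hart-style condensed nearest neighbor pass rather than the exact FCNN rule, which uses a more efficient Voronoi-cell-based update.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

def condense(X, y):
    """Return indices of a training-set-consistent subset: every training
    point is correctly classified by its 1-NN in the subset.
    (Simplified Hart-style pass, not the exact FCNN rule.)"""
    keep = [0]            # seed the subset with an arbitrary point
    changed = True
    while changed:        # repeat until no point is misclassified
        changed = False
        for i in range(len(X)):
            # 1-NN classification of point i against the current subset
            d = np.linalg.norm(X[keep] - X[i], axis=1)
            if y[keep[int(np.argmin(d))]] != y[i]:
                keep.append(i)   # absorb the misclassified point
                changed = True
    return np.array(keep)

# Synthetic data stands in for a large collection.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
idx = condense(X, y)
svm = SVC(kernel="rbf").fit(X[idx], y[idx])  # SVM sees only the condensed set
print(f"condensed {len(X)} points down to {len(idx)}")
```

The speedup reported in the brief comes from the second line of the pipeline: SVM training cost grows superlinearly with the number of training points, so shrinking the input before training dominates the cost of the condensation pass itself.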