Scalability is one of the main challenges for kernel-based methods such as support vector machines (SVMs). The quadratic memory requirement for storing the kernel matrix makes training on million-example datasets infeasible. Sophisticated decomposition algorithms have been proposed to train SVMs efficiently using only the important examples, which ideally are the final support vectors (SVs). However, decomposition methods remain limited in large-scale applications where even the number of SVs exceeds a computer's capacity. Moreover, a large number of SVs slows SVMs down in the testing phase, making them impractical for many applications. In this paper, we integrate a vector combination scheme that simplifies the SVM solution into an incremental working set selection procedure for SVM training. The main objective of this integration is to keep the number of final SVs minimal, yielding low resource demands and faster training. Consequently, the resulting learning machines are more compact and run faster thanks to the small number of vectors in their solution. Experimental results on large benchmark datasets show that the proposed condensed SVMs achieve both training and testing efficiency while maintaining generalization ability equivalent to that of standard SVMs.
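The abstract states the core idea only at a high level: train incrementally while repeatedly combining support vectors so that the solution never grows beyond a small budget. The sketch below is a minimal illustration of that idea under stated assumptions, not the paper's exact SMO-based procedure; the class name, the hyperparameters (budget, gamma, eta), the perceptron-style update, and the nearest-same-sign-pair merge rule are all simplifications chosen for brevity. The merge step uses a standard Gaussian-kernel reduced-set style construction, replacing two same-sign SVs with one constructed vector on the segment between them.

```python
import numpy as np

class CondensedKernelMachine:
    """Minimal sketch of a budgeted kernel machine (an illustration
    of the condensing idea, not the paper's SMO-based algorithm).
    Whenever the SV budget is exceeded, the two closest SVs with
    same-sign coefficients are merged into a single constructed
    vector, mimicking a vector combination scheme."""

    def __init__(self, budget=50, gamma=1.0, eta=1.0):
        self.budget, self.gamma, self.eta = budget, gamma, eta
        self.vectors = []  # support vectors (np.ndarray each)
        self.alphas = []   # signed coefficients

    def _k(self, x, z):
        # Gaussian (RBF) kernel
        d = x - z
        return np.exp(-self.gamma * np.dot(d, d))

    def decision(self, x):
        return sum(a * self._k(v, x)
                   for v, a in zip(self.vectors, self.alphas))

    def _merge_closest_pair(self):
        # Find the closest pair of SVs whose coefficients share a sign.
        best, bi, bj = np.inf, -1, -1
        for i in range(len(self.vectors)):
            for j in range(i + 1, len(self.vectors)):
                if self.alphas[i] * self.alphas[j] > 0:
                    d = np.linalg.norm(self.vectors[i] - self.vectors[j])
                    if d < best:
                        best, bi, bj = d, i, j
        if bi < 0:
            return  # no same-sign pair available
        xi, ai = self.vectors[bi], self.alphas[bi]
        xj, aj = self.vectors[bj], self.alphas[bj]
        # Constructed vector on the segment between xi and xj, with a
        # coefficient preserving the pair's contribution at z itself.
        k = ai / (ai + aj)
        z = k * xi + (1.0 - k) * xj
        az = ai * self._k(xi, z) + aj * self._k(xj, z)
        for idx in sorted((bi, bj), reverse=True):
            del self.vectors[idx]
            del self.alphas[idx]
        self.vectors.append(z)
        self.alphas.append(az)

    def fit(self, X, y, epochs=1):
        for _ in range(epochs):
            for x, t in zip(X, y):            # labels t in {-1, +1}
                if t * self.decision(x) <= 0:  # mistake-driven update
                    self.vectors.append(np.asarray(x, float))
                    self.alphas.append(self.eta * t)
                    if len(self.vectors) > self.budget:
                        self._merge_closest_pair()
        return self

# Toy usage: two Gaussian blobs with labels in {-1, +1}.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y = np.array([-1] * 200 + [1] * 200)
clf = CondensedKernelMachine(budget=20, gamma=0.5).fit(X, y, epochs=2)
acc = np.mean([np.sign(clf.decision(x)) == t for x, t in zip(X, y)])
print(f"SVs: {len(clf.vectors)}, train accuracy: {acc:.2f}")
```

Restricting merges to same-sign pairs keeps the interpolation weight k inside (0, 1), so the constructed vector stays between the two SVs it replaces; this mirrors bottom-up simplification schemes for SVM solutions, where pairs contributing to the same class are the natural candidates for combination.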