Traditional decomposition-based solvers for Support Vector Machines (SVMs) suffer from the well-known scalability problem. For example, on a training set of one million examples, SVMLight takes about six days to run on a Pentium 4 server with 8 GB of memory. In this paper, we propose an incremental algorithm, which performs approximate matrix-factorization operations, to speed up SVM training. Two approximate factorization schemes, Kronecker and incomplete Cholesky, are used within the primal-dual interior-point method (IPM) to solve the quadratic optimization problem in SVMs directly. We found that a coarse approximation enjoys good speedup but may suffer from poor training accuracy. Conversely, a fine-grained approximation enjoys good training quality but may require long training time. We therefore propose an incremental training algorithm, which uses the approximate IPM solution of a coarse factorization to initialize the IPM on a fine-grained factorization. Extensive empirical studies show that our proposed incremental algorithm with approximate factorizations substantially speeds up SVM training while maintaining high training accuracy. In addition, we show that our proposed algorithm is highly parallelizable on an Intel dual-core processor.
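To illustrate the incomplete Cholesky scheme mentioned in the abstract, the sketch below computes a pivoted incomplete Cholesky factorization of an RBF kernel matrix, K ≈ G Gᵀ with G of rank r. This is a standard formulation of the technique, not necessarily the exact routine used in the paper; the rank `r` plays the role of the coarse/fine knob, since a small rank gives a cheap but rough approximation and a larger rank gives a tighter one.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Dense RBF (Gaussian) kernel matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X ** 2, axis=1)
    dists = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * dists)

def incomplete_cholesky(K, rank, tol=1e-12):
    """Pivoted incomplete Cholesky: returns G (n x r, r <= rank) with K ~ G @ G.T.

    At each step the pivot is the largest remaining residual diagonal entry,
    so the trace of the residual K - G @ G.T shrinks monotonically with rank.
    """
    n = K.shape[0]
    G = np.zeros((n, rank))
    d = np.diag(K).copy()            # residual diagonal of K - G @ G.T
    for j in range(rank):
        i = int(np.argmax(d))        # greedy pivot selection
        if d[i] < tol:               # residual negligible: truncate early
            return G[:, :j]
        # New column: (K[:, i] - contribution of previous columns) / sqrt(pivot).
        # Columns j..rank-1 of G are still zero, so the full product is safe.
        G[:, j] = (K[:, i] - G @ G[i, :]) / np.sqrt(d[i])
        d -= G[:, j] ** 2            # update residual diagonal
    return G

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((80, 3))
    K = rbf_kernel(X, gamma=0.5)
    for r in (5, 20, 40):            # coarse -> fine approximations
        G = incomplete_cholesky(K, r)
        print(r, np.trace(K - G @ G.T))
```

In the incremental scheme described above, one would first run the IPM against a coarse factorization (small `r`) and reuse its solution to initialize the IPM on a finer factorization (larger `r`), trading a little extra factorization cost for far fewer expensive IPM iterations at high accuracy.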