Support Vector Machines are a family of data analysis algorithms based on convex Quadratic Programming. We focus on their use for classification: in that case, the SVM algorithms work by maximizing the margin of a classifying hyperplane in a feature space. The feature space is handled by means of kernels, and the problems are formulated in dual form. Random sampling techniques, successfully used for similar problems, are studied here. The main contribution is a randomized algorithm for training SVMs for which we can formally prove an upper bound on the expected running time that is quasilinear in the number of data points. To our knowledge, this is the first algorithm for training SVMs in dual formulation and with kernels for which such a quasilinear time bound has been formally proved.
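To make the dual, kernelized formulation concrete, here is a minimal sketch (not the paper's randomized algorithm) that maximizes the dual SVM objective W(α) = Σα_i − ½ ΣΣ α_i α_j y_i y_j K(x_i, x_j) by projected gradient ascent onto the box 0 ≤ α_i ≤ C. The bias term is omitted so the equality constraint Σα_i y_i = 0 drops out; kernel choice, learning rate, and epoch count are illustrative assumptions.

```python
import numpy as np

def rbf(X, Z, gamma=1.0):
    """RBF kernel matrix between row sets X and Z."""
    d = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def train_dual_svm(X, y, C=1.0, gamma=1.0, lr=0.01, epochs=500):
    """Projected gradient ascent on the (bias-free) dual SVM objective.

    Maximizes W(alpha) = sum(alpha) - 0.5 * alpha' Q alpha
    with Q_ij = y_i y_j K(x_i, x_j), subject to 0 <= alpha_i <= C.
    A didactic sketch, not the quasilinear randomized algorithm.
    """
    K = rbf(X, X, gamma)
    Q = (y[:, None] * y[None, :]) * K
    alpha = np.zeros(len(y))
    for _ in range(epochs):
        grad = 1.0 - Q @ alpha                       # gradient of W
        alpha = np.clip(alpha + lr * grad, 0.0, C)   # project onto the box
    return alpha

def predict(X_train, y_train, alpha, X_test, gamma=1.0):
    """Decision rule sign(sum_j alpha_j y_j K(x_j, x))."""
    return np.sign(rbf(X_test, X_train, gamma) @ (alpha * y_train))
```

On two well-separated Gaussian clusters this recovers a perfect training-set classifier; production solvers (e.g. SMO-based ones) instead exploit the sparsity of the support vectors rather than dense gradient steps.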