Trained support vector machines (SVMs) classify slowly at runtime when the classification problem is noisy and the training set is large. Approximating the SVM by a sparser function has been proposed to address this problem. In this study, different variants of such approximation algorithms are compared empirically. It is shown that gradient descent using the improved Rprop (iRprop) algorithm is more robust than fixed-point iteration. Three heuristics for selecting the support vectors used in constructing the sparse approximation are proposed; none turns out to be superior to random selection. The effect of a final gradient descent over all parameters of the sparse approximation is also studied.
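To make the two compared update schemes concrete, the sketch below (not the authors' implementation) fits a single reduced vector z that approximates a Gaussian-kernel SVM expansion sum_i alpha_i k(x_i, .) in feature space. It contrasts the classical fixed-point update with an iRprop- gradient step; the names gamma, alphas, and X are illustrative placeholders, and the objective is the standard reduced-set distance ||sum_i alpha_i phi(x_i) - beta phi(z)||^2.

```python
# Hedged sketch: reduced-set approximation of a Gaussian-kernel SVM
# expansion. All data (X, alphas, gamma) are hypothetical placeholders.
import numpy as np

def gauss_kernel(a, b, gamma):
    """k(a, b) = exp(-gamma * ||a - b||^2)."""
    return np.exp(-gamma * np.sum((a - b) ** 2, axis=-1))

def objective_grad(z, beta, X, alphas, gamma):
    """Gradient w.r.t. z of ||sum_i alpha_i phi(x_i) - beta phi(z)||^2.
    For the Gaussian kernel k(z, z) = 1, so only the cross term
    -2 beta sum_i alpha_i k(x_i, z) depends on z."""
    k = gauss_kernel(X, z, gamma)                  # k(x_i, z) for all i
    return -4.0 * beta * gamma * np.sum(
        (alphas * k)[:, None] * (X - z), axis=0)

def fixed_point_step(z, X, alphas, gamma):
    """One step of the classical fixed-point iteration:
    z <- sum_i alpha_i k(x_i, z) x_i / sum_i alpha_i k(x_i, z).
    The denominator can approach zero, which is one source of the
    non-robustness that gradient descent avoids."""
    w = alphas * gauss_kernel(X, z, gamma)
    return (w[:, None] * X).sum(axis=0) / w.sum()

def irprop_minus(z0, grad_fn, steps=200, eta_plus=1.2, eta_minus=0.5,
                 d0=0.1, d_min=1e-6, d_max=1.0):
    """iRprop-: per-coordinate sign-based step sizes; on a gradient sign
    change the step size shrinks and that gradient entry is zeroed."""
    z = z0.copy()
    delta = np.full_like(z, d0)
    g_prev = np.zeros_like(z)
    for _ in range(steps):
        g = grad_fn(z)
        s = g * g_prev                             # sign agreement test
        delta = np.where(s > 0, np.minimum(delta * eta_plus, d_max),
                np.where(s < 0, np.maximum(delta * eta_minus, d_min),
                         delta))
        g = np.where(s < 0, 0.0, g)                # iRprop- reset
        z -= np.sign(g) * delta
        g_prev = g
    return z

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 2))       # hypothetical support vectors
    alphas = rng.normal(size=50)       # hypothetical SVM coefficients
    gamma, beta = 0.5, 1.0
    z = irprop_minus(X.mean(axis=0),
                     lambda z: objective_grad(z, beta, X, alphas, gamma))
```

In a full reduced-set method one would alternate such position updates with re-solving for the optimal coefficients (here, beta* = sum_i alpha_i k(x_i, z) since k(z, z) = 1) and repeat for as many reduced vectors as the sparsity budget allows; this sketch only illustrates the single-vector step the abstract's comparison revolves around.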