Making large-scale support vector machine learning practical. In: Advances in Kernel Methods.
Fast training of support vector machines using sequential minimal optimization. In: Advances in Kernel Methods.
A parallel mixture of SVMs for very large scale problems. Neural Computation.
The Bayesian Committee Support Vector Machine. In: ICANN '01, Proceedings of the International Conference on Artificial Neural Networks.
Training Support Vector Machines: an Application to Face Detection. In: CVPR '97, Proceedings of the 1997 Conference on Computer Vision and Pattern Recognition.
Pattern Classification (2nd Edition).
Support Vector Data Description. Machine Learning.
Inverse System Identification of Nonlinear Systems Using LSSVM Based on Clustering. In: ISNN '08, Proceedings of the 5th International Symposium on Neural Networks: Advances in Neural Networks.
K-farthest-neighbors-based concept boundary determination for support vector data description. In: CIKM '10, Proceedings of the 19th ACM International Conference on Information and Knowledge Management.
Support Vector Data Description (SVDD) has a limitation when dealing with large data sets: the computational load increases drastically as the training data grow. To handle this problem, we propose a fast SVDD method based on K-means clustering. Our method uses a divide-and-conquer strategy: it trains each decomposed sub-problem to obtain support vectors, then retrains on those support vectors to find a global data description of the whole target class. The proposed method yields a description similar to that of the original SVDD while reducing computational cost, and our experiments demonstrate its efficiency.
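The divide-and-conquer scheme described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes an RBF kernel, uses scikit-learn's KMeans for the decomposition, and substitutes OneClassSVM (a closely related one-class description) for SVDD, since SVDD itself is not in scikit-learn. The function name `fast_svdd` and the parameter values are illustrative choices.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import OneClassSVM

def fast_svdd(X, n_clusters=4, nu=0.1, gamma=0.5):
    """Divide-and-conquer one-class description (illustrative sketch).

    1. Decompose the target class into clusters with K-means.
    2. Train a one-class model on each cluster; keep only its support vectors.
    3. Retrain on the pooled support vectors to get a global description.
    """
    # Step 1: decompose the target class with K-means.
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)

    # Step 2: solve each sub-problem and collect its support vectors.
    sv_list = []
    for k in range(n_clusters):
        Xk = X[labels == k]
        sub = OneClassSVM(kernel="rbf", nu=nu, gamma=gamma).fit(Xk)
        sv_list.append(sub.support_vectors_)

    # Step 3: retrain on the (much smaller) set of pooled support vectors.
    sv = np.vstack(sv_list)
    return OneClassSVM(kernel="rbf", nu=nu, gamma=gamma).fit(sv)

# Usage on synthetic target-class data: each sub-problem is small,
# so the quadratic-programming cost per fit stays low.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 2))
model = fast_svdd(X)
print(model.predict(X[:5]))  # +1 for points inside the description, -1 outside
```

The cost saving comes from the final retraining seeing only the pooled support vectors rather than the full training set, mirroring the paper's strategy of combining per-cluster descriptions into one global boundary.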