This paper proposes a training-point selection method for one-class support vector machines. It exploits a property of a trained one-class SVM: only points lying on the exterior of the data distribution become support vectors. The proposed training-set reduction method therefore selects the so-called extreme points, those sitting on the boundary of the data distribution, using local geometry and k-nearest neighbours. Experimental results demonstrate that the method reduces the training set considerably, while the resulting model retains the generalization capability of a model trained on the full training set, uses fewer support vectors, and trains faster.
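The abstract does not spell out the selection rule, but the idea of flagging boundary points via k-nearest neighbours and local geometry can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact algorithm: the resultant-unit-vector heuristic, the `k` value, and the `threshold` are assumptions. For an interior point, unit vectors to its neighbours point in all directions and largely cancel; for a boundary point, the neighbours lie mostly on one side, so the resultant is large.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import OneClassSVM

def select_boundary_points(X, k=10, threshold=0.5):
    """Heuristic boundary ("extreme") point detector via k-NN geometry.

    Flags a point when the mean of the unit vectors toward its k nearest
    neighbours has a large norm, i.e. the neighbours are one-sided.
    """
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)                  # idx[:, 0] is the point itself
    mask = np.zeros(len(X), dtype=bool)
    for i, neigh in enumerate(idx):
        diffs = X[neigh[1:]] - X[i]            # skip self
        norms = np.linalg.norm(diffs, axis=1)
        units = diffs / np.maximum(norms[:, None], 1e-12)
        mask[i] = np.linalg.norm(units.mean(axis=0)) > threshold
    return mask

# Train a one-class SVM on the reduced (boundary-only) training set.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
mask = select_boundary_points(X, k=10, threshold=0.5)
ocsvm = OneClassSVM(nu=0.1, gamma="scale").fit(X[mask])
```

With Gaussian data the selected points concentrate on the periphery, so the reduced set is much smaller than the full set while still covering the region where support vectors of a one-class SVM tend to lie.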