The support vector domain description (SVDD) is a popular kernel method for outlier detection that fits a sphere around a class of data and uses a few target objects, the support vectors, to define its decision boundary. The problem is that, even with a flexible Gaussian kernel, the SVDD can generate a decision boundary so loose that its discrimination ability becomes poor; a computationally intensive procedure called kernel whitening is therefore often required to improve performance. In this paper, we propose a simple post-processing method that modifies the SVDD boundary to achieve a tight data description without kernel whitening. By deriving the distance between an object and its nearest boundary point in input space, the proposed method can efficiently construct a new decision boundary from the SVDD boundary. The improvement is demonstrated on synthetic and real-world datasets. The results show that the proposed decision boundary closely fits the shape of the synthetic data distributions and achieves better or comparable classification performance on the real-world datasets.
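To make the SVDD idea concrete, the following is a minimal sketch of spherical data description in kernel feature space. It is a simplification, not the paper's method: instead of solving the SVDD quadratic program (which gives a few support vectors their own weights), every training object is given equal weight, so the sphere is centred on the feature-space centroid. With a Gaussian kernel, where K(z, z) = 1, the squared feature-space distance of a point z to that centre is d²(z) = 1 − (2/N) Σᵢ K(z, xᵢ) + (1/N²) Σᵢⱼ K(xᵢ, xⱼ). All function names and parameter values below are illustrative choices, not from the paper.

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix between the rows of A and the rows of B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

def centroid_distance_sq(X_train, Z, gamma=0.5):
    """Squared feature-space distance of each row of Z to the equal-weight
    centre of X_train (a simplified stand-in for the SVDD sphere centre)."""
    n = len(X_train)
    const = rbf(X_train, X_train, gamma).sum() / n**2    # (1/N^2) sum_ij K(x_i, x_j)
    cross = rbf(Z, X_train, gamma).sum(axis=1) * 2 / n   # (2/N) sum_i K(z, x_i)
    return 1.0 - cross + const                           # K(z, z) = 1 for the RBF kernel

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))        # target class
outliers = rng.normal(6.0, 0.5, size=(20, 2))  # clearly separated outliers

# Choose the squared radius so that about 95% of training targets fall inside.
r2 = np.quantile(centroid_distance_sq(X, X), 0.95)
rejected = float(np.mean(centroid_distance_sq(X, outliers) > r2))
print(rejected)  # fraction of outliers outside the sphere
```

Objects whose feature-space distance exceeds the radius are flagged as outliers; the paper's contribution concerns tightening this boundary in input space, which this sketch does not attempt.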