One-class and cost-sensitive support vector machines (SVMs) are state-of-the-art machine learning methods for estimating density level sets and solving weighted classification problems, respectively. However, the solutions of these SVMs do not necessarily produce set estimates that are nested as the parameters controlling the density level or cost asymmetry are continuously varied. Such nesting not only reflects the true sets being estimated, but is also desirable for applications requiring the simultaneous estimation of multiple sets, including clustering, anomaly detection, and ranking. We propose new quadratic programs whose solutions give rise to nested versions of one-class and cost-sensitive SVMs. Furthermore, like conventional SVMs, the solution paths in our construction are piecewise linear in the control parameters, although here the number of breakpoints is directly controlled by the user. We also describe decomposition algorithms to solve the quadratic programs. These methods are compared to conventional (non-nested) SVMs on synthetic and benchmark data sets, and are shown to exhibit more stable rankings and decreased sensitivity to parameter settings.
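As an illustration of the non-nesting behavior the abstract refers to, the short sketch below fits conventional (non-nested) one-class SVMs at several values of the nu parameter and counts grid points where the resulting level-set estimates fail to be nested. It uses scikit-learn's OneClassSVM on synthetic data; the data, kernel bandwidth, and nu values are arbitrary illustrative choices, and this is not the nested quadratic program proposed in the paper.

import numpy as np
from sklearn.svm import OneClassSVM

# Toy training data and an evaluation grid (illustrative choices only).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))                # training sample
grid = rng.uniform(-3.0, 3.0, size=(2000, 2))    # points on which the estimated sets are compared

# Larger nu should (ideally) yield a smaller estimated support region,
# so each set should be contained in the set fit with the next-smaller nu.
nus = [0.1, 0.3, 0.5, 0.7]
sets = []
for nu in nus:
    clf = OneClassSVM(kernel="rbf", gamma=0.5, nu=nu).fit(X)
    sets.append(clf.predict(grid) == 1)          # True where a grid point lies inside the estimated set

for i in range(len(nus) - 1):
    inside_lo, inside_hi = sets[i], sets[i + 1]  # smaller-nu set, larger-nu set
    violations = int(np.sum(inside_hi & ~inside_lo))  # points in the larger-nu set but not the smaller-nu set
    print(f"nu={nus[i]} vs nu={nus[i + 1]}: {violations} grid points violate nesting")

Any nonzero violation count illustrates that conventional one-class SVMs carry no nesting guarantee across parameter values, which is the property the proposed quadratic programs are designed to enforce by construction.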