Many applications require the ability to identify data that are anomalous with respect to a target group of observations. One natural approach to this problem is one-class classification, owing to its ability to reject outliers; alternatively, standard multiclass classification can exploit the negative data to infer a description of the target class. In this paper, we propose a modified Maximum Margin One-Class SVM as a discriminative framework for multiclass problems. To this end, we present a Maximum Margin One-Class SVM coupled with the constraints of binary SVM detection (OC²). For each class, we define a closed boundary around the target class such that the corresponding domain includes as many of the target class elements as possible while minimizing the chance of accepting outliers and objects from the other classes. Because the decision boundaries are closed, our method can also detect data belonging to potentially new clusters. Within the framework of decomposition methods, and to handle large data sets, we introduce a fast algorithm for optimizing OC² that uses an efficient heuristic for selecting the working set. Our gradient-based algorithm relies on the analytical recursive computation of the objective function, the gradient, and the solution; the optimal step size is also obtained analytically. The algorithm is evaluated on simulated and benchmark data.
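To illustrate the general idea of multiclass classification with closed per-class boundaries and outlier rejection, the following is a minimal sketch using one standard one-class SVM per class (scikit-learn's `OneClassSVM`). It is an assumption-laden stand-in, not the paper's OC² formulation or its decomposition-based solver: a test point is assigned to the class whose one-class score is largest, and rejected as a potential outlier or new cluster if every score is negative.

```python
# Sketch only: one OneClassSVM per class with rejection, NOT the OC^2 algorithm.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Two well-separated 2-D clusters serve as the target classes.
X0 = rng.normal(loc=(0.0, 0.0), scale=0.3, size=(100, 2))
X1 = rng.normal(loc=(5.0, 5.0), scale=0.3, size=(100, 2))

# Fit one closed-boundary (one-class) model per class.
models = {}
for label, X in ((0, X0), (1, X1)):
    m = OneClassSVM(kernel="rbf", gamma=2.0, nu=0.05)
    m.fit(X)
    models[label] = m

def classify(x, reject_label=-1):
    """Assign x to the class whose boundary encloses it with the largest
    score; return reject_label if every one-class score is negative,
    i.e. x lies outside all closed decision boundaries."""
    scores = {label: m.decision_function(x.reshape(1, -1))[0]
              for label, m in models.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else reject_label

print(classify(np.array([0.1, -0.1])))    # point near class 0
print(classify(np.array([4.9, 5.1])))     # point near class 1
print(classify(np.array([20.0, -20.0])))  # far from both classes
```

Because each class is described by its own closed boundary rather than by half-spaces separating class pairs, points far from all training clusters fall outside every boundary and are rejected, which is what permits detecting potentially new clusters.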