This paper focuses on support vector data description (SVDD) combined with a kernel-based possibilistic c-means (PCM) algorithm for solving multi-class classification problems. We propose a weighted SVDD multi-class classification method that addresses the outlier-sensitivity problem of traditional multi-class classifiers. The proposed method is a robust version of SVDD: each data point is assigned a weight representing its fuzzy membership degree in its cluster, computed by the kernel-based PCM algorithm. We present the multi-class classification algorithm and derive a simple classification rule that satisfies Bayesian optimal decision theory. Experimental results show that, with this simple rule, the proposed method reduces the effect of outliers and lowers the classification error rate.
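As a rough illustration of the two ingredients the abstract describes, the sketch below shows (1) a PCM-style membership weight that shrinks toward zero for outlying points, and (2) a minimum-relative-distance decision rule over per-class spheres. All function names, the toy sphere parameters, and the plain Euclidean distance are my own simplifying assumptions, not the paper's kernelized formulation; the actual method computes weights with a kernel-based PCM and fits a weighted SVDD sphere per class.

```python
import math

def pcm_weight(dist_sq, eta, m=2.0):
    """PCM-style membership (assumed form): points close to the cluster
    prototype get weight near 1, far points (outliers) get weight near 0.
    eta is a scale parameter, m the fuzzifier."""
    return 1.0 / (1.0 + (dist_sq / eta) ** (1.0 / (m - 1.0)))

def classify(x, spheres):
    """Assign x to the class whose sphere it fits best: minimize the
    distance to the class center relative to that class's radius.
    spheres is a list of (center, radius) pairs, one per class."""
    def rel_dist(k):
        center, radius = spheres[k]
        return math.dist(x, center) / radius
    return min(range(len(spheres)), key=rel_dist)

# Toy usage with made-up spheres for two classes.
spheres = [((0.0, 0.0), 1.0), ((4.0, 0.0), 2.0)]
print(classify((1.0, 0.0), spheres))                          # -> 0
print(pcm_weight(0.25, eta=1.0) > pcm_weight(9.0, eta=1.0))   # -> True
```

In the toy call, the point (1.0, 0.0) lies on the first sphere's boundary (relative distance 1.0) but well outside the second sphere's relative radius (1.5), so it is assigned to class 0; likewise, a point nine squared-distance units from its prototype receives a much smaller PCM weight than one a quarter unit away, which is how the weighting suppresses outliers during training.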