An efficient multi-label support vector machine with a zero label
Expert Systems with Applications: An International Journal
Exploiting label dependencies for improved sample complexity
Machine Learning
For multi-label classification, problem transformation algorithms have received much attention owing to their good performance and low computational complexity, but speeding up the training and test procedures remains a challenging issue. In this paper, a one-by-one data decomposition trick is adopted to divide a k-label problem into k sub-problems, where each sub-problem consists only of the instances bearing a specific label. Each sub-classifier is trained with support vector data description (SVDD), which learns the smallest hypersphere enclosing the majority of that label's training instances, and the k sub-classifiers are then integrated into a complete multi-label classifier using both pseudo posterior probabilities and linear ridge regression. The new method has the lowest time complexity among existing problem transformation support vector machines for multi-label classification. Experimental results on the Yeast dataset show that the algorithm outperforms several state-of-the-art methods.
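The per-label decomposition described in the abstract can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: scikit-learn's `OneClassSVM` (an RBF-kernel one-class SVM, closely related to SVDD) stands in for the SVDD sub-classifiers, and the per-label decision scores are mapped to label indicators with plain ridge regression; the dataset here is random synthetic data, not Yeast.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
k = 3                                           # number of labels
X = rng.normal(size=(60, 5))                    # 60 instances, 5 features
Y = (rng.random((60, k)) < 0.4).astype(float)   # synthetic multi-label targets
Y[:k, :k] = np.eye(k)                           # ensure each label has a positive

# One sub-classifier per label, trained only on that label's instances
# (a stand-in for the SVDD hypersphere around each class).
models = []
for j in range(k):
    positives = X[Y[:, j] == 1]
    models.append(OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(positives))

# Stack the k per-label decision scores as features.
S = np.column_stack([m.decision_function(X) for m in models])

# Linear ridge regression integrates the scores into label predictions;
# thresholding at 0.5 yields the final binary label matrix.
ridge = Ridge(alpha=1.0).fit(S, Y)
Y_hat = (ridge.predict(S) >= 0.5).astype(float)
print(Y_hat.shape)
```

Because each one-class model sees only one label's instances, the k sub-problems are small and independent, which is the source of the training-time savings the abstract claims.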