BoosTexter: A Boosting-based System for Text Categorization
Machine Learning - Special issue on information retrieval
Empirical Studies on Multi-label Classification
ICTAI '06 Proceedings of the 18th IEEE International Conference on Tools with Artificial Intelligence
ML-KNN: A lazy learning approach to multi-label learning
Pattern Recognition
Multi-label learning by exploiting label dependency
Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
Random k-Labelsets for Multilabel Classification
IEEE Transactions on Knowledge and Data Engineering
MULAN: A Java Library for Multi-Label Learning
The Journal of Machine Learning Research
Classifier chains for multi-label classification
Machine Learning
Multi-label classification arises in many real-world applications. This paper empirically studies the performance of a variety of multi-label classification algorithms, some based on problem transformation and others on algorithm adaptation. Our experimental results show that the adaptive Multi-Label k-Nearest Neighbor (ML-KNN) performs the best, followed by Random k-Label Set, then Classifier Chain and Binary Relevance; AdaBoost.MH performs the worst, followed by Pruned Problem Transformation. Our results also strengthen the evidence that correlations exist among labels. These insights shed light on future research directions in multi-label classification.
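To make the adaptation-based approach concrete, the following is a minimal pure-Python sketch of ML-KNN, the best-performing algorithm in the study: for each label it estimates a prior from label frequencies and a posterior over how many of an instance's k nearest neighbors carry that label (with Laplace smoothing), then predicts by maximum a posteriori. All function names and the toy data layout here are illustrative choices, not part of the original paper or of any particular library.

```python
import math

def knn_indices(X, x, k, exclude=None):
    """Indices of the k nearest training points to x (Euclidean distance)."""
    dists = sorted((math.dist(xi, x), i) for i, xi in enumerate(X) if i != exclude)
    return [i for _, i in dists[:k]]

def mlknn_fit(X, Y, k=3, s=1.0):
    """Estimate ML-KNN priors and neighbor-count posteriors.

    X: list of feature vectors; Y: list of sets of label indices;
    s: Laplace smoothing constant.
    """
    m = len(X)
    labels = sorted({l for y in Y for l in y})
    # Precompute each training instance's neighbors (excluding itself).
    neigh = [knn_indices(X, X[i], k, exclude=i) for i in range(m)]
    prior, post1, post0 = {}, {}, {}
    for l in labels:
        prior[l] = (s + sum(l in y for y in Y)) / (2 * s + m)
        c1 = [0] * (k + 1)  # instances WITH label l, by neighbor count
        c0 = [0] * (k + 1)  # instances WITHOUT label l, by neighbor count
        for i in range(m):
            c = sum(l in Y[j] for j in neigh[i])
            (c1 if l in Y[i] else c0)[c] += 1
        # P(c neighbors have l | instance has / lacks l), smoothed.
        post1[l] = [(s + c1[c]) / (s * (k + 1) + sum(c1)) for c in range(k + 1)]
        post0[l] = [(s + c0[c]) / (s * (k + 1) + sum(c0)) for c in range(k + 1)]
    return dict(X=X, Y=Y, k=k, labels=labels, prior=prior, post1=post1, post0=post0)

def mlknn_predict(model, x):
    """MAP decision per label for a new instance x."""
    neigh = knn_indices(model['X'], x, model['k'])
    out = set()
    for l in model['labels']:
        c = sum(l in model['Y'][j] for j in neigh)
        if model['prior'][l] * model['post1'][l][c] > \
           (1 - model['prior'][l]) * model['post0'][l][c]:
            out.add(l)
    return out
```

On a toy 2-D dataset with four clusters labeled {0,1}, {0}, {1}, and {} respectively, `mlknn_fit` followed by `mlknn_predict` recovers the cluster label sets for nearby test points; a full implementation (e.g. in the MULAN library cited above) would add efficient neighbor search and ranking-based evaluation measures.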