Multi-label learning deals with the problem where each training example is represented by a single instance while associated with a set of class labels. For an unseen example, existing approaches determine the membership of each possible class label based on an identical feature set, i.e., the same instance representation of the unseen example is employed in the discrimination processes of all labels. However, this commonly used strategy might be suboptimal, as different class labels usually carry specific characteristics of their own, and it could be beneficial to exploit different feature sets for the discrimination of different labels. Based on this observation, we propose a new strategy for multi-label learning that leverages label-specific features, realized in a simple yet effective algorithm named LIFT. Briefly, LIFT constructs features specific to each label by conducting clustering analysis on that label's positive and negative instances, and then performs training and testing by querying the clustering results. Extensive experiments across sixteen diversified data sets clearly validate the superiority of LIFT over other well-established multi-label learning algorithms.
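The procedure described above can be sketched as follows. This is a minimal illustration of the label-specific-feature idea, not the authors' implementation: the choice of k-means for the clustering step, the cluster-count ratio `ratio`, the distance-based mapping, and the linear SVM as the per-label classifier are all assumptions made for the sake of a runnable example.

```python
# Sketch of LIFT-style label-specific features (illustrative assumptions:
# k-means clustering, distance features, linear SVM per label).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def lift_mapper(X, y_label, ratio=0.3, random_state=0):
    """Build the label-specific feature mapping for one label.

    Clusters the label's positive and negative instances separately; an
    instance is then represented by its distances to all cluster centers.
    """
    pos, neg = X[y_label == 1], X[y_label == 0]
    m = max(1, int(np.ceil(ratio * min(len(pos), len(neg)))))  # clusters per side
    km_pos = KMeans(n_clusters=m, n_init=10, random_state=random_state).fit(pos)
    km_neg = KMeans(n_clusters=m, n_init=10, random_state=random_state).fit(neg)
    centers = np.vstack([km_pos.cluster_centers_, km_neg.cluster_centers_])

    def mapper(Z):
        # distance of each instance to every positive/negative cluster center
        return np.linalg.norm(Z[:, None, :] - centers[None, :, :], axis=2)

    return mapper

# Toy multi-label data: two labels over 2-D instances.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2))
Y = np.stack([(X[:, 0] > 0).astype(int), (X[:, 1] > 0).astype(int)], axis=1)

# One label-specific mapping plus one binary classifier per label.
models = []
for q in range(Y.shape[1]):
    mapper = lift_mapper(X, Y[:, q])
    clf = LinearSVC(random_state=0).fit(mapper(X), Y[:, q])
    models.append((mapper, clf))

preds = np.stack([clf.predict(mapper(X)) for mapper, clf in models], axis=1)
print("training accuracy per label:", (preds == Y).mean(axis=0))
```

At test time, the same per-label mapper transforms the unseen instance before its classifier is queried, so each label is discriminated in its own feature space rather than the shared original one.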