A Comparison of Multi-label Feature Selection Methods using the Problem Transformation Approach
Electronic Notes in Theoretical Computer Science (ENTCS)
Multilabel classification was introduced as an extension of multi-class classification to cope with complex learning tasks in application fields such as text categorization, video or music tagging, and biomedical labeling of gene functions or diseases. The aim is to predict a set of classes (called labels in this context) instead of a single one. In this paper we deal with the problem of feature selection in multilabel classification. We use a graphical model to represent the relationships among labels and features; the topology of the graph can be characterized in terms of relevance, in the sense used in feature selection tasks. In this framework, we compare two strategies implemented with different multilabel learners. The strategy that considers the set of all labels simultaneously outperforms the one that considers each label separately.
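The abstract does not give the scoring procedure, but the gap between the two strategies can be illustrated with a minimal sketch. The code below is an assumption for illustration only (the function names and the use of empirical mutual information as the relevance score are not taken from the paper): it scores a discrete feature either against each label separately (binary-relevance style) or against the whole label combination at once (label-powerset style).

```python
import math
from collections import Counter

def mutual_info(xs, ys):
    """Empirical mutual information I(X;Y) between two discrete sequences."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        pj = c / n
        # Sum p(x,y) * log( p(x,y) / (p(x) p(y)) ) over observed pairs.
        mi += pj * math.log(pj / ((px[x] / n) * (py[y] / n)))
    return mi

def score_per_label(feature, label_matrix):
    """Per-label strategy: score the feature against each label column
    separately and keep the best single-label score."""
    n_labels = len(label_matrix[0])
    return max(mutual_info(feature, [row[j] for row in label_matrix])
               for j in range(n_labels))

def score_all_labels(feature, label_matrix):
    """Joint strategy: score the feature against the full label set,
    treating each label combination as one value."""
    return mutual_info(feature, [tuple(row) for row in label_matrix])
```

On a toy dataset where a feature equals the XOR of two balanced labels, the per-label score is zero (the feature looks irrelevant to every individual label), while the joint score is log 2: a feature relevant only through label interactions is invisible to the separate-label strategy, which is consistent with the comparison the abstract reports.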