While multi-label classification is widely applicable to problems where multiple classes can be assigned to an object, its effectiveness may suffer from the curse of dimensionality in the feature space and from sparseness in the label space. It also incurs high computational cost when the number of dimensions is large, and lower accuracy when there are many noisy examples. As a solution, this paper presents two alternative methods, Dependent Dual Space Reduction and Independent Dual Space Reduction, which reduce the dimensionality of the dual spaces, i.e., the feature and label spaces, using Singular Value Decomposition (SVD). The first approach constructs a covariance matrix representing the dependency between features and labels, projects both into a single reduced space, and then performs prediction in that reduced space. The second approach handles the feature space and the label space separately: it constructs a covariance matrix for each space to represent feature dependency and label dependency, applies SVD to each space's dependency profile to reduce dimensionality and eliminate noise, and then predicts using the reduced dimensions. A series of experiments shows that prediction in the reduced spaces, under both the dependent and independent reduction approaches, achieves better classification performance as well as faster computation than prediction in the original spaces. The dependent approach saves more computational time, while the independent approach tends to achieve better classification performance.
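The two reduction schemes described above can be sketched roughly as follows. This is a minimal NumPy illustration on synthetic data, not the authors' exact formulation: the matrix shapes, the centering step, and the choices of reduced dimension k are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))              # feature matrix (examples x features)
Y = (rng.random((50, 8)) > 0.7).astype(float)  # binary label matrix (examples x labels)

def dependent_reduction(X, Y, k):
    """Dependent Dual Space Reduction (sketch): SVD of the feature-label
    cross-covariance matrix yields one shared k-dimensional space for
    both features and labels."""
    Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
    C = Xc.T @ Yc                              # cross-covariance (features x labels)
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    return X @ U[:, :k], Y @ Vt[:k].T          # both spaces projected to k dims

def independent_reduction(X, Y, k_x, k_y):
    """Independent Dual Space Reduction (sketch): a separate SVD of each
    space's own covariance matrix reduces features and labels independently,
    discarding the trailing (noisy) singular directions."""
    def reduce(M, k):
        Mc = M - M.mean(axis=0)
        C = Mc.T @ Mc                          # within-space covariance
        U, s, _ = np.linalg.svd(C)
        return M @ U[:, :k]                    # keep the top-k directions
    return reduce(X, k_x), reduce(Y, k_y)

Xd, Yd = dependent_reduction(X, Y, k=4)
Xi, Yi = independent_reduction(X, Y, k_x=6, k_y=4)
```

A classifier would then be trained on the reduced feature representation against the reduced label representation; mapping label-space predictions back to the original labels (e.g., through the transpose of the label projection) is a further step whose exact form in the paper is not reproduced here.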