Boosting Multi-Task Weak Learners with Applications to Textual and Social Data
Proceedings of the 2010 Ninth International Conference on Machine Learning and Applications (ICMLA '10)
We address the problem of multi-task learning with no label correspondence among tasks. Learning multiple related tasks simultaneously, exploiting their shared knowledge, can improve predictive performance on every task. We develop a multi-task AdaBoost framework (MT-Adaboost) that uses Multi-Task Decision Trees as weak classifiers. We first adapt the well-known decision tree learning algorithm to the multi-task setting by revising the information gain rule, yielding a novel criterion for learning Multi-Task Decision Trees. The criterion guides tree construction by learning decision rules from the data of different tasks while representing different degrees of task relatedness. We then modify MT-Adaboost to combine Multi-Task Decision Trees as weak learners. We experimentally validate the advantage of the new technique on several multi-task datasets, including the Enron email set and a spam filtering collection.
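The abstract does not spell out the revised information gain rule, so the following is only a minimal sketch of one plausible multi-task criterion: compute the classical information gain of a shared split separately on each task's data, then aggregate the per-task gains (a simple average here, which is an illustrative assumption, not the paper's exact formula). All function names are hypothetical.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a label sequence."""
    n = len(labels)
    if n == 0:
        return 0.0
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(labels, values, threshold):
    """Classical information gain of splitting one task at `threshold`."""
    left = [y for y, v in zip(labels, values) if v <= threshold]
    right = [y for y, v in zip(labels, values) if v > threshold]
    n = len(labels)
    return (entropy(labels)
            - (len(left) / n) * entropy(left)
            - (len(right) / n) * entropy(right))

def multi_task_info_gain(tasks, threshold):
    """Score a shared split across tasks; `tasks` is a list of
    (labels, feature_values) pairs, one per task.  Label sets may differ
    between tasks, since each gain is computed on that task's own labels.
    Averaging the per-task gains is an illustrative choice."""
    gains = [info_gain(y, x, threshold) for y, x in tasks]
    return sum(gains) / len(gains)

# Two toy tasks with different label sets sharing one feature;
# a split at 0.5 separates both tasks perfectly.
task_a = ([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9])
task_b = (["spam", "spam", "ham", "ham"], [0.3, 0.4, 0.6, 0.7])
print(multi_task_info_gain([task_a, task_b], threshold=0.5))  # 1.0
```

Because each task contributes its own entropy terms, the criterion needs no label correspondence across tasks, matching the setting described above; only the split feature and threshold are shared.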