Multi-task learning (MTL) has become popular owing to its theoretical advances and empirical successes. The key idea of MTL is to exploit the hidden relationships among multiple tasks to enhance learning performance. Many MTL algorithms have recently been developed and applied to problems such as feature selection and kernel learning. However, most existing methods rely heavily on specific assumptions about the task relationships. For instance, several works assume that there is a major task group plus a few outlier tasks, and use a decomposition approach to identify the group structure and the outlier tasks simultaneously. In this paper, we adopt a more general MTL formulation that makes no specific structural assumptions. Instead of performing model decomposition, we directly impose an elastic-net regularization that mixes the structure and outlier penalties, and formulate the objective as an unconstrained convex problem. To compute the optimal solution efficiently, we propose an Iteratively Reweighted Least Squares (IRLS) method with a preconditioned conjugate gradient solver, which is computationally affordable for high-dimensional data. Extensive experiments on both synthetic and real data, with comparisons against several state-of-the-art algorithms, clearly show the superior performance of the proposed method.
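The abstract does not spell out the solver, so the following is only a rough single-task illustration of the IRLS-with-preconditioned-CG idea: a smoothed l1-penalized least-squares problem whose reweighted subproblems are solved by conjugate gradient with a Jacobi (diagonal) preconditioner. The function name, the smoothing constant `eps`, and the choice of preconditioner are my assumptions, not details from the paper.

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

def irls_pcg(X, y, lam=0.1, eps=1e-6, n_iter=20, tol=1e-8):
    """Illustrative IRLS for min_w 0.5*||Xw - y||^2 + lam * sum_i sqrt(w_i^2 + eps).

    Each outer iteration solves the reweighted normal equations
        (X^T X + lam * D) w = X^T y,   D = diag(1 / sqrt(w_i^2 + eps)),
    by conjugate gradient with a Jacobi (diagonal) preconditioner.
    """
    n, d = X.shape
    XtX = X.T @ X
    Xty = X.T @ y
    w = np.zeros(d)
    for _ in range(n_iter):
        # Reweighting diagonal from the current iterate (eps avoids division by zero).
        dvec = lam / np.sqrt(w ** 2 + eps)
        A = XtX + np.diag(dvec)          # symmetric positive definite
        diag_A = np.diag(A)
        # Jacobi preconditioner: approximate A^{-1} by the inverse of its diagonal.
        M = LinearOperator((d, d), matvec=lambda v, da=diag_A: v / da)
        w_new, _ = cg(A, Xty, x0=w, M=M)
        if np.linalg.norm(w_new - w) < tol:
            w = w_new
            break
        w = w_new
    return w
```

The diagonal preconditioner keeps each inner solve cheap (one extra elementwise division per CG iteration), which is what makes the method affordable when `d` is large; the paper's actual preconditioner for the multi-task objective may differ.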