Multi-task learning (MTL) aims to improve generalization performance by exploiting the intrinsic relationships among multiple related tasks. A key assumption in most MTL algorithms is that all tasks are related, which, however, may not hold in many real-world applications. In this paper, we propose a robust multi-task learning (RMTL) algorithm that learns multiple tasks simultaneously while identifying irrelevant (outlier) tasks. Specifically, the proposed RMTL algorithm captures the task relationships using a low-rank structure and simultaneously identifies the outlier tasks using a group-sparse structure. RMTL is formulated as a non-smooth convex (unconstrained) optimization problem, which we solve with the accelerated proximal method (APM). The key component of APM is the computation of the proximal operator, which can be shown to admit an analytic solution. We also theoretically analyze the effectiveness of the RMTL algorithm. In particular, we derive a key property of the optimal solution to RMTL; moreover, based on this property, we establish a theoretical bound characterizing the learning performance of RMTL. Our experimental results on benchmark data sets demonstrate the effectiveness and efficiency of the proposed algorithm.
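The abstract states that the proximal operator in APM admits an analytic solution. A minimal sketch of what such operators typically look like, assuming (as the abstract suggests but does not spell out) that the weight matrix is decomposed as W = L + S, with a trace-norm penalty on the low-rank component L and an ℓ2,1 (column-group-sparse) penalty on the outlier component S; the function names and the exact penalty formulation below are illustrative, not taken from the paper:

```python
import numpy as np

def prox_trace_norm(L, tau):
    """Proximal operator of tau * ||L||_* via singular value thresholding:
    shrink each singular value of L toward zero by tau."""
    U, s, Vt = np.linalg.svd(L, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt

def prox_group_sparse(S, tau):
    """Proximal operator of tau * ||S||_{2,1} (sum of column norms):
    shrink each column's Euclidean norm by tau, zeroing small columns.
    Columns whose norm falls below tau are set exactly to zero, which is
    how outlier tasks would be identified."""
    norms = np.linalg.norm(S, axis=0, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return S * scale
```

Because both operators are closed-form (no inner iteration), each APM step costs one SVD plus elementwise work, which is consistent with the efficiency the abstract claims.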