We consider the problem of learning incoherent sparse and low-rank patterns from multiple tasks. Our approach is based on a linear multi-task learning formulation in which the sparse and low-rank patterns are induced by a cardinality regularization term and a low-rank constraint, respectively. This formulation is non-convex; we convert it into a convex surrogate, which can be routinely solved via semidefinite programming for small-scale problems. We propose to employ the general projected gradient scheme to solve the convex surrogate efficiently; however, in this optimization formulation the objective function is non-differentiable and the feasible domain is non-trivial. We present procedures for computing the projected gradient and for ensuring the global convergence of the projected gradient scheme. The computation of the projected gradient involves a constrained optimization problem; we show that its optimal solution can be obtained by solving an unconstrained optimization subproblem and a Euclidean projection subproblem. In addition, we present two projected gradient algorithms and analyze their rates of convergence. Experimental results on benchmark data sets demonstrate the effectiveness of the proposed multi-task learning formulation and the efficiency of the proposed projected gradient algorithms.
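The per-iteration computation described above can be illustrated with a minimal NumPy sketch. It assumes the convex surrogate takes the common sparse-plus-low-rank form min_{P,Q} ||X(P+Q) - Y||_F^2 + gamma*||P||_1 subject to ||Q||_* <= tau, where the l1 term relaxes the cardinality penalty and the trace-norm ball relaxes the rank constraint; the function names, parameters (`gamma`, `tau`, `step`), and the specific least-squares loss are illustrative assumptions, not the paper's exact formulation. The unconstrained subproblem reduces to soft-thresholding, and the Euclidean projection subproblem reduces to projecting the singular values onto a simplex.

```python
import numpy as np

def soft_threshold(A, lam):
    # Closed-form solution of the unconstrained l1-regularized subproblem.
    return np.sign(A) * np.maximum(np.abs(A) - lam, 0.0)

def project_trace_norm_ball(A, tau):
    # Euclidean projection onto {Q : ||Q||_* <= tau}: project the
    # singular values onto the simplex of radius tau, keep U and V.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    if s.sum() <= tau:
        return A
    cssv = np.cumsum(s)  # svd returns s in descending order
    rho = np.nonzero(s * np.arange(1, len(s) + 1) > (cssv - tau))[0][-1]
    theta = (cssv[rho] - tau) / (rho + 1.0)
    return U @ np.diag(np.maximum(s - theta, 0.0)) @ Vt

def projected_gradient(X, Y, gamma=0.01, tau=1.0, n_iter=200):
    # Sketch of the projected gradient scheme on the joint variable (P, Q):
    # a gradient step on the smooth loss, then the two subproblems above.
    d, m = X.shape[1], Y.shape[1]
    P, Q = np.zeros((d, m)), np.zeros((d, m))
    # Conservative step size 1/L for the joint Lipschitz constant.
    step = 1.0 / (4.0 * np.linalg.norm(X, 2) ** 2)
    for _ in range(n_iter):
        G = 2.0 * X.T @ (X @ (P + Q) - Y)  # gradient of ||X(P+Q) - Y||_F^2
        P = soft_threshold(P - step * G, step * gamma)
        Q = project_trace_norm_ball(Q - step * G, tau)
    return P, Q
```

Splitting the update this way mirrors the decomposition in the text: the sparse component is handled by a cheap elementwise operation, while the low-rank component requires one SVD per iteration.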