We consider the problem of learning incoherent sparse and low-rank patterns from multiple tasks. Our approach is based on a linear multitask learning formulation in which the sparse and low-rank patterns are induced by a cardinality regularization term and a low-rank constraint, respectively. This formulation is nonconvex; we convert it into a convex surrogate, which can be routinely solved via semidefinite programming for small problems. To solve the convex surrogate efficiently, we propose employing a general projected gradient scheme; the difficulty is that the objective function is nondifferentiable and the feasible domain is nontrivial. We present procedures for computing the projected gradient and for ensuring the global convergence of the scheme. Computing the projected gradient involves a constrained optimization problem; we show that its optimal solution can be obtained by solving an unconstrained optimization subproblem and a Euclidean projection subproblem. We also present two projected gradient algorithms and analyze their convergence rates in detail. In addition, we illustrate the use of the presented projected gradient algorithms for the proposed multitask learning formulation with the least squares loss. Experimental results on a collection of real-world data sets demonstrate the effectiveness of the proposed multitask learning formulation and the efficiency of the proposed projected gradient algorithms.
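To make the scheme above concrete, the following minimal sketch instantiates a projected gradient iteration for a least squares loss with an additive sparse-plus-low-rank weight split. The gradient step is followed by the two subproblems the abstract mentions: an unconstrained proximal step (elementwise soft-thresholding, the convex surrogate of the cardinality term) for the sparse component, and a Euclidean projection onto a trace-norm ball for the low-rank component. The function names, the split `W = P + Q`, and the conservative fixed step size are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def soft_threshold(X, t):
    # Proximal operator of t*||X||_1: elementwise soft-thresholding.
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def project_l1_ball(v, tau):
    # Euclidean projection of a nonnegative vector v onto {x : sum(x) <= tau, x >= 0}.
    if v.sum() <= tau:
        return v
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u - (css - tau) / (np.arange(len(u)) + 1.0) > 0)[0][-1]
    theta = (css[rho] - tau) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def project_trace_norm_ball(Q, tau):
    # Euclidean projection onto {Q : ||Q||_* <= tau}: project the
    # singular values onto the l1 ball of radius tau.
    U, s, Vt = np.linalg.svd(Q, full_matrices=False)
    return (U * project_l1_ball(s, tau)) @ Vt

def projected_gradient(X, Y, gamma, tau, iters=300):
    # Sketch: minimize 0.5*||X(P+Q) - Y||_F^2 + gamma*||P||_1
    # subject to ||Q||_* <= tau.
    d, k = X.shape[1], Y.shape[1]
    P, Q = np.zeros((d, k)), np.zeros((d, k))
    step = 1.0 / (2.0 * np.linalg.norm(X, 2) ** 2)  # 1/L for the joint smooth loss
    for _ in range(iters):
        G = X.T @ (X @ (P + Q) - Y)  # gradient of the smooth loss, shared by P and Q
        P = soft_threshold(P - step * G, step * gamma)
        Q = project_trace_norm_ball(Q - step * G, tau)
    return P, Q
```

The accelerated variant discussed in the paper would add a momentum (extrapolation) sequence around the same two subproblems; the per-iteration projection and thresholding steps are unchanged.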