A multitask learning framework is developed for discriminative classification and regression in which multiple large-margin linear classifiers are estimated for different prediction problems. These classifiers operate in a common input space but are coupled because they jointly recover an unknown shared representation. A maximum entropy discrimination (MED) framework is used to derive the multitask algorithm, which involves only convex optimization problems that are straightforward to implement. Three multitask scenarios are described. The first method produces multiple support vector machines that learn a shared sparse feature selection over the input space. The second produces multiple support vector machines that learn a shared conic kernel combination. The third produces a pooled classifier as well as adaptively specialized individual classifiers. Furthermore, extensions to regression, graphical model structure estimation, and other sparse methods are discussed. The maximum entropy optimization problems are implemented via a sequential quadratic programming method that leverages recent progress in fast SVM solvers. Fast monotonic convergence is guaranteed by bounding the MED sparsifying cost function with a quadratic function, incurring only a constant-factor runtime increase over standard independent SVM solvers. Results are shown on multitask data sets and favor multitask learning over single-task or tabula rasa methods.
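The abstract's key computational idea, upper-bounding a non-smooth sparsifying penalty with a quadratic so each iteration reduces to a standard solver, can be illustrated outside the MED setting. The sketch below (our own toy construction, not the paper's algorithm) applies majorize-minimize to multitask least squares with a shared l2,1 penalty, the squared-loss analogue of shared sparse feature selection across tasks: each iteration bounds the column-norm penalty by a quadratic and solves an ordinary reweighted ridge regression per task.

```python
import numpy as np

def multitask_joint_sparse(Xs, ys, lam=1.0, n_iter=50, eps=1e-6):
    """Toy multitask least squares with a shared l2,1 sparsity penalty.

    Objective: sum_t ||y_t - X_t w_t||^2 + lam * sum_j ||W[:, j]||_2,
    where row t of W is task t's weight vector. Each iteration
    majorizes the non-smooth penalty with a quadratic
    (sqrt(s) <= s / (2 sqrt(s0)) + sqrt(s0) / 2), so the subproblem
    is a closed-form weighted ridge regression for each task.
    """
    T, d = len(Xs), Xs[0].shape[1]
    W = np.zeros((T, d))
    for _ in range(n_iter):
        # Quadratic bound: per-feature weights from current column norms,
        # shared across tasks (this is what couples the tasks).
        col_norms = np.sqrt((W ** 2).sum(axis=0)) + eps
        D = lam / col_norms
        for t in range(T):
            X, y = Xs[t], ys[t]
            # Weighted ridge: (X'X + diag(D)) w = X'y
            W[t] = np.linalg.solve(X.T @ X + np.diag(D), X.T @ y)
    return W

# Toy usage: two tasks that depend on the same two of five features.
rng = np.random.default_rng(0)
X1, X2 = rng.standard_normal((40, 5)), rng.standard_normal((40, 5))
y1 = X1[:, 0] - 2.0 * X1[:, 1]
y2 = 3.0 * X2[:, 0] + X2[:, 1]
W = multitask_joint_sparse([X1, X2], [y1, y2], lam=5.0)
shared = np.sqrt((W ** 2).sum(axis=0))  # per-feature shared magnitude
```

The per-feature reweighting `D` is the multitask coupling: features that no task uses are shrunk toward zero for all tasks at once. The paper's MED version replaces the squared loss with large-margin SVM subproblems, which is why a fast SVM solver can be reused at each iteration with only a constant-factor overhead.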