We study the problem of online learning of multiple tasks in parallel. On each online round, the algorithm receives an instance and makes a prediction for each of the parallel tasks. We consider the case where these tasks all contribute toward a common goal, and we capture the relationship between them by using a single global loss function to evaluate the quality of the multiple predictions made on each round. Specifically, each individual prediction incurs its own individual loss, and these loss values are then combined using a global loss function. We present several families of online algorithms that can use any absolute norm as the global loss function, and we prove worst-case relative loss bounds for all of them.
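
To make the setup concrete, the following is a minimal sketch of one such parallel online round, assuming linear predictors, per-task hinge losses, and an L_p norm as the absolute norm defining the global loss. The class name, the perceptron-style update, and the learning rate are illustrative assumptions, not the paper's exact algorithms.

import numpy as np

def global_loss(individual_losses, p=2):
    # Combine the per-task losses with an absolute norm; an L_p norm
    # is one illustrative member of the family the abstract allows.
    return np.linalg.norm(individual_losses, ord=p)

class MultitaskOnlineLearner:
    # One linear predictor per task; hypothetical class, not from the paper.
    def __init__(self, k, dim, lr=1.0):
        self.W = np.zeros((k, dim))  # row t is the weight vector of task t
        self.lr = lr

    def round(self, X, y, p=2):
        # X: (k, dim), one instance per task; y: (k,), labels in {-1, +1}.
        margins = np.einsum('td,td->t', self.W, X) * y
        losses = np.maximum(0.0, 1.0 - margins)  # per-task hinge losses
        g = global_loss(losses, p=p)             # single global loss value
        # Perceptron-style additive step on the tasks that suffered a loss;
        # a stand-in for the paper's norm-coupled updates.
        hit = losses > 0.0
        self.W[hit] += self.lr * (y[hit, None] * X[hit])
        return g

# Toy usage: three tasks, five features, labels driven by one shared feature.
rng = np.random.default_rng(0)
learner = MultitaskOnlineLearner(k=3, dim=5)
for _ in range(100):
    X = rng.normal(size=(3, 5))
    y = np.where(X[:, 0] > 0.0, 1.0, -1.0)
    learner.round(X, y, p=2)

The key point the sketch illustrates is the two-level loss structure: each task first incurs its own loss, and only then are those k values collapsed into one scalar by a norm, so the global loss couples the tasks even though the predictors themselves are separate.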