Learning Gaussian processes from multiple tasks
Proceedings of the 22nd International Conference on Machine Learning (ICML '05)
Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning)
A Unifying View of Sparse Approximate Gaussian Process Regression
The Journal of Machine Learning Research
A reproducing kernel Hilbert space framework for pairwise time series distances
Proceedings of the 25th International Conference on Machine Learning
Fast algorithms for nonparametric population modeling of large data sets
Automatica (Journal of IFAC)
Bayesian Online Multitask Learning of Gaussian Processes
IEEE Transactions on Pattern Analysis and Machine Intelligence
Shift-invariant grouped multi-task learning for Gaussian processes
Proceedings of the 2010 European Conference on Machine Learning and Knowledge Discovery in Databases (ECML PKDD '10), Part III
Gaussian Processes for Machine Learning (GPML) Toolbox
The Journal of Machine Learning Research
Computationally Efficient Convolved Multiple Output Gaussian Processes
The Journal of Machine Learning Research
Multi-task learning models using Gaussian processes (GPs) have recently been developed and successfully applied in various domains. The main difficulty with this approach is the computational cost of inference over the union of examples from all tasks. This paper investigates the problem for the grouped mixed-effect GP model, where each individual response is given by a fixed-effect function, drawn from one of a set of unknown groups, plus a random individual-effect function that captures variation among individuals. Such models have been widely used in previous work, but no sparse solutions have been developed for them. The paper presents the first sparse solution for such problems, showing how the sparse approximation can be obtained by maximizing a variational lower bound on the marginal likelihood, generalizing ideas from single-task Gaussian processes to handle both the mixed-effect model and grouping. Experiments with artificial and real data validate the approach, showing that it recovers the performance of inference with the full sample, that it outperforms baseline methods, and that it outperforms state-of-the-art sparse solutions for other multi-task GP formulations.
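The generative structure of the grouped mixed-effect model described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the kernel choices, variances, and group assignments are illustrative assumptions, and each individual's response is simulated as a shared group-level fixed-effect function plus a smaller-variance individual random effect.

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between two 1-D input vectors."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def sample_gp(x, kernel, rng, jitter=1e-8):
    """Draw one function sample from a zero-mean GP at inputs x."""
    K = kernel(x, x) + jitter * np.eye(len(x))
    return rng.multivariate_normal(np.zeros(len(x)), K)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)

n_groups, n_individuals = 2, 6
# One shared fixed-effect function per group.
fixed = [sample_gp(x, rbf_kernel, rng) for _ in range(n_groups)]
# Each individual belongs to one group (unknown in the model; simulated here).
groups = rng.integers(0, n_groups, size=n_individuals)
# Response = group fixed effect + smaller-variance individual random effect.
responses = [
    fixed[groups[i]]
    + sample_gp(x, lambda a, b: rbf_kernel(a, b, variance=0.1), rng)
    for i in range(n_individuals)
]
```

Inference then has to recover the group assignments and both effect functions from the union of all individuals' observations, which is where the variational sparse approximation comes in.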