Standard single-task kernel methods have recently been extended to multitask learning within the framework of regularization theory. Experimental results, especially in biomedicine, show the benefit of the multitask approach over the single-task one. A possible drawback, however, is computational complexity: when regularization networks are used, complexity scales as the cube of the overall number of training data, which may be large when several tasks are involved. The aim of this paper is to derive an efficient computational scheme for an important class of multitask kernels. More precisely, a quadratic loss is assumed, and each task is modeled as the sum of a common term and a task-specific one. Within a Bayesian setting, a recursive online algorithm is obtained that updates both estimates and confidence intervals as new data become available. The algorithm is tested on two simulated problems and a real data set concerning the administration of xenobiotics in human patients.
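To make the two main ingredients concrete, the sketch below illustrates (i) a multitask kernel in which each task is the sum of a common component and a task-specific one, k((i,x),(j,x')) = k_c(x,x') + δ_ij k_s(x,x'), and (ii) a recursive online Gaussian-process update that extends a Cholesky factor by one row per new sample, so that estimates and confidence intervals can be refreshed at O(n²) cost rather than recomputed from scratch at O(n³). This is a minimal illustration, not the paper's exact scheme: the Gaussian kernel forms, hyperparameters, and all class/function names (`OnlineMultitaskGP`, `k_multitask`, etc.) are assumptions made for the example.

```python
import numpy as np
from scipy.linalg import solve_triangular

def k_common(x, xp, ell=1.0):
    """Common component shared by all tasks (Gaussian kernel, assumed form)."""
    return np.exp(-0.5 * (x - xp) ** 2 / ell ** 2)

def k_specific(x, xp, ell=0.5):
    """Task-specific component, active only when both samples belong to the same task."""
    return np.exp(-0.5 * (x - xp) ** 2 / ell ** 2)

def k_multitask(task_i, x_i, task_j, x_j):
    """Multitask kernel: k((i,x),(j,x')) = k_c(x,x') + delta_ij * k_s(x,x')."""
    k = k_common(x_i, x_j)
    if task_i == task_j:
        k += k_specific(x_i, x_j)
    return k

class OnlineMultitaskGP:
    """Recursive Bayesian estimator: the Cholesky factor of K + sigma^2 I is
    extended by one row per incoming sample (O(n^2) per update)."""

    def __init__(self, noise_var=0.1):
        self.noise_var = noise_var
        self.tasks, self.xs, self.ys = [], [], []
        self.L = np.zeros((0, 0))  # lower Cholesky factor of K + sigma^2 I

    def update(self, task, x, y):
        """Incorporate one new observation (task index, input, output)."""
        n = len(self.xs)
        k_new = np.array([k_multitask(task, x, t, xi)
                          for t, xi in zip(self.tasks, self.xs)])
        k_ss = k_multitask(task, x, task, x) + self.noise_var
        if n == 0:
            self.L = np.array([[np.sqrt(k_ss)]])
        else:
            # Extend the factor: L_new = [[L, 0], [l^T, d]] with L l = k_new.
            l = solve_triangular(self.L, k_new, lower=True)
            d = np.sqrt(k_ss - l @ l)
            self.L = np.block([[self.L, np.zeros((n, 1))],
                               [l[None, :], np.array([[d]])]])
        self.tasks.append(task)
        self.xs.append(x)
        self.ys.append(y)

    def predict(self, task, x):
        """Posterior mean and variance for the given task at input x."""
        k_star = np.array([k_multitask(task, x, t, xi)
                           for t, xi in zip(self.tasks, self.xs)])
        v = solve_triangular(self.L, k_star, lower=True)
        alpha = solve_triangular(
            self.L.T,
            solve_triangular(self.L, np.array(self.ys), lower=True),
            lower=False)
        mean = k_star @ alpha                                   # k*^T (K+s^2 I)^{-1} y
        var = k_multitask(task, x, task, x) - v @ v             # k** - v^T v
        return mean, var
```

Feeding samples one at a time, e.g. `gp.update(0, 0.3, 1.2)` followed by `gp.predict(1, 0.5)`, mimics the online setting of the paper: each prediction returns a posterior mean together with a variance from which confidence intervals can be formed, and the common kernel component lets data from one task inform predictions for another.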