Multi-task averaging deals with the problem of jointly estimating the means of a set of distributions. Its roots go back to the 1950s, when it was observed that leveraging data from related distributions can yield better performance than learning from each distribution independently. Stein's paradox showed that, in a mean squared error sense, it is better to estimate the means of T ≥ 3 Gaussian random variables jointly, using data sampled from all of them, than to estimate each mean separately. This phenomenon was largely disregarded for decades and has recently re-emerged in the field of multi-task learning. In this paper, we extend recent results for multi-task averaging to the n-dimensional case and propose a method to detect from data which tasks/distributions should be considered related. Our experimental results indicate that the proposed method compares favorably to the state of the art.
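The Stein phenomenon referenced above can be illustrated with a minimal sketch of the classical James-Stein estimator. This is not the paper's multi-task averaging method, only a self-contained demonstration (with assumed unit-variance Gaussian noise and simulated true means) that shrinking all estimates jointly beats the per-task maximum-likelihood estimate in average squared error:

```python
import numpy as np

# Illustration of Stein's paradox via James-Stein shrinkage toward zero.
# Assumed setup (not from the paper): d tasks, one unit-variance Gaussian
# observation per task, true means drawn from a standard normal.
rng = np.random.default_rng(0)

d = 50                        # number of task means (paradox requires d >= 3)
theta = rng.normal(0.0, 1.0, d)  # simulated true means
n_trials = 2000

mse_mle, mse_js = 0.0, 0.0
for _ in range(n_trials):
    x = theta + rng.normal(0.0, 1.0, d)  # one noisy observation per task
    # MLE: estimate each mean independently from its own observation.
    mle = x
    # James-Stein: shrink all estimates jointly toward zero.
    js = (1.0 - (d - 2) / np.sum(x ** 2)) * x
    mse_mle += np.mean((mle - theta) ** 2)
    mse_js += np.mean((js - theta) ** 2)

mse_mle /= n_trials
mse_js /= n_trials
print(f"MLE MSE: {mse_mle:.3f}  James-Stein MSE: {mse_js:.3f}")
```

In this simulation the joint (shrunken) estimator attains a visibly lower average squared error than the independent MLE, which is exactly the effect multi-task averaging exploits.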