Machine Learning - Special issue on inductive transfer
The success of regularized risk minimization approaches to classification with linear models depends crucially on selecting a regularization term that matches the learning task at hand. If the necessary domain expertise is rare or hard to formalize, it may be difficult to find a good regularizer. On the other hand, if plenty of related or similar data is available, a natural approach is to adjust the regularizer for the new learning problem based on the characteristics of the related data. In this paper, we study the problem of obtaining good parameter values for an l2-style regularizer with feature weights. We analytically investigate a moment-based method for obtaining good values and give uniform convergence bounds for the prediction error on the target learning task. An empirical study shows that the approach can improve predictive accuracy considerably in the application domain of text classification.
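The abstract's setting can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's method: it uses logistic loss, plain gradient descent, and one simple moment-based heuristic (per-feature penalties inversely proportional to the empirical second moment of that feature's weight across models trained on related tasks, so features that were consistently important on related data are penalized less). The function names `moment_feature_weights` and `fit_weighted_l2_logreg`, the estimator, and all hyperparameters are illustrative choices.

```python
import numpy as np

def moment_feature_weights(related_ws, eps=0.1):
    """Moment-based heuristic (an assumption, not the paper's exact
    estimator): penalty c[j] is inversely proportional to the empirical
    second moment of feature j's weight across related-task models."""
    second_moment = np.mean(np.asarray(related_ws) ** 2, axis=0)
    return 1.0 / (second_moment + eps)

def fit_weighted_l2_logreg(X, y, c, lr=0.05, n_iter=500):
    """Minimize mean logistic loss + sum_j c[j] * w[j]**2 (an l2-style
    regularizer with per-feature weights c) by gradient descent."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # predicted P(y = 1 | x)
        grad = X.T @ (p - y) / n + 2.0 * c * w  # loss gradient + penalty gradient
        w -= lr * grad
    return w

# Toy usage: two related-task models agree that feature 0 matters,
# so the target-task fit shrinks feature 1 more strongly.
related_ws = [np.array([1.0, 0.0]), np.array([0.9, 0.1])]
c = moment_feature_weights(related_ws)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(float)   # label depends only on feature 0
w = fit_weighted_l2_logreg(X, y, c)
```

Here the related data only sets the shape of the penalty; the target-task labels still drive the fit, which is the division of labor the abstract describes.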