Feature Space Interpretation of SVMs with Indefinite Kernels. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Discriminative learning for differing training and test distributions. Proceedings of the 24th International Conference on Machine Learning.
Information-theoretic metric learning. Proceedings of the 24th International Conference on Machine Learning.
Image and video indexing using networks of operators. Journal on Image and Video Processing.
Learning probabilistic models of tree edit distance. Pattern Recognition.
A theory of learning with similarity functions. Machine Learning.
Dataset Shift in Machine Learning.
Distance Metric Learning for Large Margin Nearest Neighbor Classification. The Journal of Machine Learning Research.
Domain adaptation with structural correspondence learning. Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing (EMNLP '06).
Domain adaptation via transfer component analysis. Proceedings of the 21st International Joint Conference on Artificial Intelligence (IJCAI '09).
A survey of graph edit distance. Pattern Analysis & Applications.
A theory of learning from different domains. Machine Learning.
Domain Adaptation Problems: A DASVM Classification Technique and a Circular Validation Strategy. IEEE Transactions on Pattern Analysis and Machine Intelligence.
IEEE Transactions on Knowledge and Data Engineering.
Cross validation framework to choose amongst models and datasets for transfer learning. Proceedings of the 2010 European Conference on Machine Learning and Knowledge Discovery in Databases (ECML PKDD '10), Part III.
Similarity functions are widely used in many machine learning and pattern recognition tasks. We consider a recent framework for binary classification, proposed by Balcan et al., that allows learning in a potentially non-geometric space from good similarity functions. This framework generalizes the notion of kernel used in support vector machines, in the sense that it admits similarity functions that need be neither positive semi-definite nor symmetric. The similarities are used to define an explicit projection space in which a linear classifier with good generalization guarantees can be learned. In this paper, we experimentally study the usefulness of similarity-based projection spaces for transfer learning. More precisely, we consider the problem of domain adaptation, where the distributions generating the learning data and the test data differ, and we assume that no information on the test labels is available. We show that a simple renormalization of a good similarity function that takes the test data into account allows us to learn classifiers that perform better on the target distribution for difficult adaptation problems. Moreover, this normalization consistently improves the model when we regularize the similarity-based projection space so as to bring the two distributions closer. We report experiments on a toy problem and on a real image annotation task.
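The explicit projection space mentioned above maps each sample x to its vector of similarities to a set of landmark points, after which any linear learner can be applied. The following is a minimal sketch of that construction; the Gaussian similarity, the landmark count, and the ridge-regularized least-squares classifier are illustrative assumptions, not the exact choices of the paper.

```python
import numpy as np

def similarity_projection(X, landmarks, sim):
    """Map each sample x to the vector (sim(x, l_1), ..., sim(x, l_d))."""
    return np.array([[sim(x, l) for l in landmarks] for x in X])

def gaussian_sim(x, y, gamma=1.0):
    # One convenient similarity; the Balcan et al. framework also admits
    # similarities that are neither PSD nor symmetric.
    return np.exp(-gamma * np.sum((x - y) ** 2))

# Toy 2-D data: two well-separated Gaussian blobs.
rng = np.random.default_rng(0)
X_pos = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(50, 2))
X_neg = rng.normal(loc=[-2.0, -2.0], scale=0.5, size=(50, 2))
X = np.vstack([X_pos, X_neg])
y = np.array([1] * 50 + [-1] * 50)

# Pick a few landmarks and project all samples into the similarity space.
landmarks = X[rng.choice(len(X), size=10, replace=False)]
Phi = similarity_projection(X, landmarks, gaussian_sim)

# Linear classifier in the projection space (ridge-regularized least squares).
w = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(Phi.shape[1]), Phi.T @ y)
acc = np.mean(np.sign(Phi @ w) == y)
print(f"training accuracy: {acc:.2f}")
```

In a domain-adaptation setting, the same projection would be applied to unlabeled target samples, and the renormalization discussed in the abstract would rescale the similarity using statistics of those target points before the linear classifier is learned.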