Bayesian regularization and pruning using a Laplace prior
Neural Computation
Discriminant Adaptive Nearest Neighbor Classification
IEEE Transactions on Pattern Analysis and Machine Intelligence
Machine Learning - Special issue on inductive transfer
Convex Optimization
Regularized multi-task learning
Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining
Locally linear metric adaptation for semi-supervised clustering
ICML '04 Proceedings of the twenty-first international conference on Machine learning
Personalized handwriting recognition via biased regularization
ICML '06 Proceedings of the 23rd international conference on Machine learning
Manifold Regularization: A Geometric Framework for Learning from Labeled and Unlabeled Examples
The Journal of Machine Learning Research
Information-theoretic metric learning
Proceedings of the 24th international conference on Machine learning
Nonlinear adaptive distance metric learning for clustering
Proceedings of the 13th ACM SIGKDD international conference on Knowledge discovery and data mining
Fast solvers and efficient implementations for distance metric learning
Proceedings of the 25th international conference on Machine learning
Structured metric learning for high dimensional problems
Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining
Transferred Dimensionality Reduction
ECML PKDD '08 Proceedings of the European conference on Machine Learning and Knowledge Discovery in Databases - Part II
Convex multi-task feature learning
Machine Learning
A majorization-minimization algorithm for (multiple) hyperparameter learning
ICML '09 Proceedings of the 26th Annual International Conference on Machine Learning
Learning instance specific distances using metric propagation
ICML '09 Proceedings of the 26th Annual International Conference on Machine Learning
Transfer learning via dimensionality reduction
AAAI'08 Proceedings of the 23rd national conference on Artificial intelligence - Volume 2
Domain adaptation via transfer component analysis
IJCAI'09 Proceedings of the 21st international joint conference on Artificial intelligence
Robust distance metric learning with auxiliary knowledge
IJCAI'09 Proceedings of the 21st international joint conference on Artificial intelligence
Semi-supervised distance metric learning for collaborative image retrieval and clustering
ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP)
Transfer metric learning by learning task relationships
Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining
IEEE Transactions on Knowledge and Data Engineering
A Kernel Approach for Semisupervised Metric Learning
IEEE Transactions on Neural Networks
Learning high-order task relationships in multi-task learning
IJCAI'13 Proceedings of the Twenty-Third international joint conference on Artificial Intelligence
Distance metric learning plays a crucial role in many data mining algorithms because their performance relies heavily on the choice of a good metric. However, the labeled data available in many applications is scarce, and hence the metrics learned are often unsatisfactory. In this article, we consider a transfer-learning setting in which some related source tasks with labeled data are available to help the learning of the target task. We first propose a convex formulation for multitask metric learning that models the task relationships in the form of a task covariance matrix. We then regard transfer learning as a special case of multitask learning and adapt the multitask formulation to the transfer-learning setting, yielding our method, transfer metric learning (TML). In TML, we learn the metric and the task covariances between the source tasks and the target task under a unified convex formulation. To solve the convex optimization problem, we use an alternating method in which each subproblem has an efficient solution. Moreover, in many applications some unlabeled data is also available in the target task, so we propose a semi-supervised extension of TML, called STML, which further improves generalization performance by exploiting the unlabeled data based on the manifold assumption. Experimental results on several commonly used transfer-learning applications demonstrate the effectiveness of our method.
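The alternating scheme the abstract describes — optimize per-task parameters with the task covariance fixed, then update the covariance in closed form — can be sketched generically. The sketch below is an illustration, not the paper's actual TML formulation: it uses linear regression per task as a stand-in for metric learning, a trace-regularizer `tr(W Omega^{-1} W^T)` coupling the tasks, and the closed-form covariance update `Omega = (W^T W)^{1/2} / tr((W^T W)^{1/2})` known from multi-task relationship learning; all function names and hyperparameters are illustrative.

```python
import numpy as np

def update_task_covariance(W):
    """Closed-form update for the m x m task covariance Omega given the
    d x m matrix W of stacked task parameters:
    Omega = (W^T W)^{1/2} / tr((W^T W)^{1/2})."""
    # Symmetric matrix square root via eigendecomposition of W^T W.
    vals, vecs = np.linalg.eigh(W.T @ W)
    root = vecs @ np.diag(np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T
    return root / np.trace(root)

def alternating_fit(tasks, d, lam=0.1, iters=20, lr=0.01):
    """Alternate between gradient steps on per-task parameters W and the
    closed-form update of the task covariance Omega.
    `tasks` is a list of (X, y) pairs, one per task; X is n_i x d."""
    m = len(tasks)
    W = np.random.default_rng(0).normal(scale=0.1, size=(d, m))
    Omega = np.eye(m) / m  # start from uncorrelated, equally weighted tasks
    for _ in range(iters):
        Oinv = np.linalg.pinv(Omega)
        for i, (X, y) in enumerate(tasks):
            # Squared-loss gradient plus the task-relationship regularizer,
            # whose gradient w.r.t. column i is (W Omega^{-1})[:, i].
            grad = X.T @ (X @ W[:, i] - y) / len(y) + lam * (W @ Oinv)[:, i]
            W[:, i] -= lr * grad
        Omega = update_task_covariance(W)
    return W, Omega
```

Each subproblem is cheap: the W-step is an ordinary regularized gradient step per task, and the Omega-step is a single eigendecomposition, which mirrors the abstract's claim that every subproblem in the alternating method has an efficient solution.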