Quantitative structure-activity relationships (QSARs) are regression models relating chemical structure to biological activity. Such models make it possible to predict toxicologically or pharmacologically relevant endpoints, which constitute the target outcomes of trials or experiments. The task is often tackled by instance-based methods (such as k-nearest neighbors), which all rely on a notion of chemical (dis)similarity. Our starting point is the observation by Raymond and Willett that the two major families of chemical distance measures, fingerprint-based and maximum common subgraph-based measures, provide orthogonal information about chemical similarity. This paper presents a novel method for finding suitable combinations of the two, called adapted transfer, which adapts a distance measure learned on another, related dataset to a given dataset. Adapted transfer thus combines distance learning and transfer learning in a novel manner. In a set of experiments, we compare adapted transfer with distance learning on the target dataset itself and with inductive transfer without adaptation. We visualize the performance of the methods by learning curves (i.e., as a function of training set size) and present a quantitative comparison at 10% and 100% of the maximum training set size.
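To make the idea of combining the two distance families concrete, the following is a minimal sketch (not the paper's actual algorithm): two precomputed pairwise distance matrices, a fingerprint-based one and a maximum-common-subgraph-based one, are mixed as a convex combination `alpha * D_fp + (1 - alpha) * D_mcs`, and the mixing weight is chosen by leave-one-out k-nearest-neighbor error. The function names, the grid of candidate weights, and the use of a simple convex combination are illustrative assumptions, not details from the paper.

```python
import numpy as np

def knn_predict(D, y, k=3):
    """Leave-one-out k-NN regression from a full pairwise distance matrix D."""
    n = len(y)
    preds = np.empty(n)
    for i in range(n):
        order = np.argsort(D[i])
        neighbors = [j for j in order if j != i][:k]  # exclude the query itself
        preds[i] = np.mean(y[neighbors])
    return preds

def select_alpha(D_fp, D_mcs, y, alphas=np.linspace(0.0, 1.0, 11), k=3):
    """Pick the mixing weight minimising leave-one-out squared error for the
    combined distance alpha * D_fp + (1 - alpha) * D_mcs.
    (Illustrative stand-in for a learned distance combination.)"""
    best_alpha, best_err = 0.0, np.inf
    for a in alphas:
        D = a * D_fp + (1.0 - a) * D_mcs
        err = np.mean((knn_predict(D, y, k) - y) ** 2)
        if err < best_err:
            best_alpha, best_err = a, err
    return best_alpha
```

Under a transfer setting, one could select `alpha` on a related source dataset and then refine it on the (smaller) target dataset; the sketch above only shows the single-dataset selection step.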