Learning to combine distances for complex representations. Proceedings of the 24th International Conference on Machine Learning.
On profiling blogs with representative entries. Proceedings of the Second Workshop on Analytics for Noisy Unstructured Text Data.
Higher-Order Logic Recommender System. WI-IAT '08: Proceedings of the 2008 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, Volume 03.
Clustering relational data based on randomized propositionalization. ILP '07: Proceedings of the 17th International Conference on Inductive Logic Programming.
Adaptive matching based kernels for labelled graphs. PAKDD '10: Proceedings of the 14th Pacific-Asia Conference on Advances in Knowledge Discovery and Data Mining, Volume Part II.
A new framework for dissimilarity and similarity learning. PAKDD '10: Proceedings of the 14th Pacific-Asia Conference on Advances in Knowledge Discovery and Data Mining, Volume Part II.
ECCV '12: Proceedings of the 12th European Conference on Computer Vision, Volume Part IV.
The main disadvantage of most existing set kernels is that they are based on averaging, which might be inappropriate for problems where only specific elements of the two sets should determine the overall similarity. In this paper we propose a class of kernels for sets of vectors that directly exploit set distance measures, thereby incorporating the semantics of these distances into set kernels and bringing the power of regularization to learning in structural domains where natural distance functions exist. These kernels fall into two groups: (i) kernels in the proximity space induced by set distances and (ii) set distance substitution kernels, which are not positive semi-definite (PSD) in general. We report experimental results showing that our kernels compare favorably with kernels based on averaging and achieve results comparable to other state-of-the-art methods, while systematically improving over the naive way of exploiting distances.
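The two constructions described in the abstract can be illustrated with a short sketch. The set distance (Hausdorff here), the prototype sets, and the RBF form of the substitution kernel are illustrative assumptions, not necessarily the exact choices made in the paper:

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two sets of vectors,
    given as arrays of shape (n, d) and (m, d)."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # pairwise element distances
    return max(D.min(axis=1).max(), D.min(axis=0).max())

def proximity_features(sets, prototypes, dist=hausdorff):
    """Construction (i): embed each set into the proximity space induced by
    the set distance; feature j of set S is dist(S, prototypes[j]).
    Any standard PSD kernel can then be applied to these feature vectors."""
    return np.array([[dist(S, P) for P in prototypes] for S in sets])

def distance_substitution_rbf(sets, gamma=1.0, dist=hausdorff):
    """Construction (ii): Gram matrix of the RBF distance-substitution kernel
    k(S, T) = exp(-gamma * dist(S, T)**2), which is not PSD for an
    arbitrary set distance."""
    n = len(sets)
    K = np.empty((n, n))
    for i in range(n):
        for j in range(i, n):
            K[i, j] = K[j, i] = np.exp(-gamma * dist(sets[i], sets[j]) ** 2)
    return K
```

Note the trade-off the abstract alludes to: the proximity-space route yields a PSD kernel by construction (a standard kernel on the distance features), whereas the substitution kernel plugs the set distance directly into the kernel form and may require learners tolerant of indefinite Gram matrices.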