Proximity captures the degree of similarity between examples and is therefore fundamental in learning. Learning from pairwise proximity data usually relies either on kernel methods, for specifically designed kernels, or on the nearest neighbor (NN) rule. Kernel methods are powerful, but often cannot handle arbitrary proximities without corrections. The NN rule can work well in such cases, but its decisions are purely local. The aim of this paper is to provide an explanation of, and insight into, two simple yet powerful alternatives for cases where neither conventional kernel methods nor the NN rule performs best. These strategies use two proximity-based representation spaces (RSs) in which accurate classifiers are trained on all training objects while requiring comparisons only to a small set of prototypes. They can handle all meaningful dissimilarity measures, including non-Euclidean and nonmetric ones. Practical examples illustrate that these RSs can be highly advantageous in supervised learning: simple classifiers built there tend to outperform the NN rule, and computational complexity can be controlled. Consequently, these approaches offer an appealing way to learn from proximity data for which kernel methods cannot be applied directly, or are too costly or impractical, and for which the NN rule yields noisy results.
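The core idea of a dissimilarity-based representation space can be sketched as follows: each object is represented by the vector of its dissimilarities to a small prototype set, and an ordinary classifier is then trained in that vector space. The sketch below is a minimal illustration under assumed choices (toy Gaussian data, Euclidean dissimilarities, first objects of each class as prototypes, a nearest-mean classifier); it is not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two well-separated Gaussian classes in 2-D.
# Any meaningful dissimilarity measure (even nonmetric) could
# replace the Euclidean one used below.
X0 = rng.normal(loc=0.0, scale=1.0, size=(50, 2))
X1 = rng.normal(loc=5.0, scale=1.0, size=(50, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

# Small prototype set R: here simply the first 3 objects per class
# (an illustrative assumption; prototype selection is a design choice).
R = np.vstack([X0[:3], X1[:3]])

def dissim(A, B):
    """Pairwise Euclidean dissimilarities between rows of A and B."""
    return np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)

# Dissimilarity-space representation: each object becomes the vector
# of its dissimilarities to the 6 prototypes.
D = dissim(X, R)  # shape (100, 6)

# A simple classifier trained on ALL training objects in this space:
# here, nearest-mean in the dissimilarity space.
m0 = D[y == 0].mean(axis=0)
m1 = D[y == 1].mean(axis=0)

# Classify new objects: only |R| = 6 dissimilarity computations each.
Xtest = np.vstack([rng.normal(0.0, 1.0, size=(20, 2)),
                   rng.normal(5.0, 1.0, size=(20, 2))])
ytest = np.array([0] * 20 + [1] * 20)
Dtest = dissim(Xtest, R)
pred = (np.linalg.norm(Dtest - m1, axis=1)
        < np.linalg.norm(Dtest - m0, axis=1)).astype(int)
accuracy = (pred == ytest).mean()
```

Note how the test-time cost is governed by the prototype set size, not by the number of training objects, which is how these representation spaces keep computational complexity under control.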