Nonlinear component analysis as a kernel eigenvalue problem
Neural Computation
Road sign classification using Laplace kernel classifier
Pattern Recognition Letters - Selected papers from the 11th Scandinavian conference on image analysis
Kernel Methods for Pattern Analysis
Feature Space Interpretation of SVMs with Indefinite Kernels
IEEE Transactions on Pattern Analysis and Machine Intelligence
Generalized Discriminant Analysis Using a Kernel Approach
Neural Computation
The Dissimilarity Representation for Pattern Recognition: Foundations And Applications (Machine Perception and Artificial Intelligence)
Edit distance-based kernel functions for structural pattern classification
Pattern Recognition
Kernel subclass discriminant analysis
Neurocomputing
Beyond Traditional Kernels: Classification in Two Dissimilarity-Based Representation Spaces
IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews
The aim of this paper is to answer the following question: what is the difference between dissimilarity-based classifications (DBCs) and other kernel-based classifications (KBCs)? In DBCs [11], classifiers among classes are not based on the feature measurements of individual objects, but rather on a suitable dissimilarity measure between them. In KBCs [15], on the other hand, classifiers are designed in a high-dimensional feature space obtained by transforming the original input feature space through a kernel, such as a Mercer kernel. The difference between the two approaches can thus be summarized as follows: the distance kernel of DBCs represents the discriminative information in a relative manner, i.e., through pairwise dissimilarity relations between objects, whereas the mapping kernel of KBCs represents the discriminative information uniformly, in a fixed way for all objects. In this paper, we report on an empirical evaluation of classifiers built in these two representation spaces: the dissimilarity space and the kernel space. Our experimental results, obtained on well-known benchmark databases, demonstrate that when the kernel parameters have not been chosen appropriately, DBCs consistently achieve better classification accuracies than KBCs.
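The contrast between the two representations can be made concrete with a small sketch. The code below is not the authors' implementation; it is a minimal illustration on toy data, assuming Euclidean distance as the dissimilarity measure for the DBC-style representation and an RBF (Gaussian) Mercer kernel for the KBC-style representation, with all training objects used as prototypes.

```python
import numpy as np

# Toy two-class data (for illustration only; not from the paper).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (20, 2)),   # class 0
               rng.normal(3.0, 1.0, (20, 2))])  # class 1
y = np.array([0] * 20 + [1] * 20)

def dissimilarity_space(A, prototypes):
    """DBC-style representation: each object is described by its
    pairwise Euclidean dissimilarities to a set of prototypes,
    yielding a vector in the 'dissimilarity space'."""
    return np.sqrt(((A[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1))

def rbf_kernel(A, B, gamma=0.5):
    """KBC-style representation: a fixed Mercer (RBF) kernel that
    implicitly maps all objects into the same high-dimensional
    feature space; gamma is the kernel parameter to be tuned."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

D = dissimilarity_space(X, X)  # relative, pairwise representation
K = rbf_kernel(X, X)           # fixed mapping for all objects

print(D.shape, K.shape)  # both (40, 40): one row per object
```

Any classifier operating on vectors (e.g., a linear discriminant) can then be trained directly on the rows of `D`, whereas `K` would typically be consumed by a kernel machine such as an SVM, whose behavior depends on the choice of `gamma`.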