A crucial issue in dissimilarity-based classification is the choice of the representation set. In the small-sample case, classifiers with good generalization ability, together with the injection of extra information, make it possible to overcome the limitations of the representation. In this paper, we present a new approach for enriching dissimilarity representations. It is based on the concept of feature lines and derives a generalized version of the original dissimilarity representation by using feature lines as prototypes. We use a linear normal density-based classifier and the nearest neighbor rule, as well as two methods for selecting prototypes: random choice and a length-based selection of the feature lines. An important observation is that only a few long feature lines are needed to obtain a significant improvement in performance over the other representation sets and classifiers. In general, the experiments show that this alternative representation is especially profitable for some correlated datasets.
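The geometric operation underlying feature-line prototypes can be sketched as follows: a feature line is the line spanned by a pair of same-class prototypes, and the dissimilarity of a sample to that line is the norm of the residual after orthogonal projection. The function names and the NumPy-based implementation below are illustrative assumptions, not the authors' code; the length-based criterion simply ranks candidate lines by the distance between their two generating prototypes.

```python
import numpy as np

def feature_line_distance(x, p1, p2):
    """Euclidean distance from sample x to the feature line through
    prototypes p1 and p2 (the infinite line spanned by the pair).
    Illustrative sketch, not the paper's implementation."""
    x, p1, p2 = (np.asarray(v, dtype=float) for v in (x, p1, p2))
    d = p2 - p1                            # direction of the feature line
    t = np.dot(x - p1, d) / np.dot(d, d)   # projection parameter along the line
    foot = p1 + t * d                      # foot of the perpendicular from x
    return np.linalg.norm(x - foot)

def feature_line_length(p1, p2):
    """Length of the prototype pair generating the line; used here as a
    stand-in for the length-based selection criterion."""
    return np.linalg.norm(np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float))
```

Under this sketch, a generalized dissimilarity representation replaces each column "distance to prototype p" with "distance to feature line (p_i, p_j)", and length-based selection keeps only the longest lines, consistent with the observation that a few long feature lines suffice.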