Prototype-based classification relies on the distances between the examples to be classified and a carefully chosen set of prototypes. A small prototype set is desirable to keep the computational cost low while maintaining high classification accuracy. We present an experimental study of old and new prototype optimisation techniques, in which the prototypes are either selected from or generated out of the given data. These condensing techniques are evaluated on real data, represented in vector spaces, by comparing their reduction rates and classification performance. The determination of prototypes is usually studied in relation to the nearest neighbour rule; we show that the use of more general dissimilarity-based classifiers can be more beneficial. An important point in our study is that the adaptive condensing schemes discussed here allow the user to choose the number of prototypes freely according to the application's needs. Combined with linear dissimilarity-based classifiers, they provide the best trade-off between small condensed sets and high classification accuracy.
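The pipeline sketched in the abstract can be illustrated in a few lines: represent each object by its vector of dissimilarities to a small prototype set, then train a linear classifier in that dissimilarity space and compare it with the nearest-neighbour rule on the same prototypes. The sketch below is a minimal illustration under assumed conditions (synthetic Gaussian blobs, per-class random prototype selection, Euclidean dissimilarity, regularised least-squares as the linear classifier); the study itself uses real datasets and more careful selection and generation schemes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data (assumption: Gaussian blobs stand in for the real datasets).
X0 = rng.normal(loc=0.0, scale=1.0, size=(100, 2))
X1 = rng.normal(loc=3.0, scale=1.0, size=(100, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

# Condensing step, crudely approximated: select 3 random prototypes per class.
# (Real condensing techniques select or generate these prototypes adaptively.)
idx = np.concatenate([rng.choice(100, 3, replace=False),
                      100 + rng.choice(100, 3, replace=False)])
R = X[idx]

def dissim(A, R):
    """Euclidean dissimilarity representation: D[i, j] = d(A[i], R[j])."""
    return np.linalg.norm(A[:, None, :] - R[None, :, :], axis=2)

D = dissim(X, R)  # each object becomes its distances to the prototypes

# Linear classifier on the dissimilarity space via regularised least squares.
Da = np.hstack([D, np.ones((len(D), 1))])        # append a bias column
t = np.where(y == 1, 1.0, -1.0)                  # +/-1 targets
w = np.linalg.solve(Da.T @ Da + 1e-3 * np.eye(Da.shape[1]), Da.T @ t)
pred_lin = (Da @ w > 0).astype(int)

# Baseline: 1-NN rule using only the condensed prototype set.
pred_1nn = y[idx][D.argmin(axis=1)]

print("linear-on-dissimilarities accuracy:", (pred_lin == y).mean())
print("1-NN-on-prototypes accuracy:", (pred_1nn == y).mean())
```

Note the trade-off the abstract refers to: the user fixes the number of prototypes (here 6), which bounds both the dissimilarity-matrix size and the classification cost, while the linear classifier exploits distances to all prototypes jointly rather than only the single nearest one.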