The dissimilarity representation as a tool for three-way data classification: a 2D measure
SSPR&SPR'10 Proceedings of the 2010 joint IAPR international conference on Structural, syntactic, and statistical pattern recognition
Classification of spectral data has raised a growing interest in many research areas. However, this type of data usually suffers from the curse of dimensionality, which causes most statistical methods and classifiers to perform poorly. A recently proposed alternative that can help avoid this problem is the Dissimilarity Representation, in which objects are represented by their dissimilarities to representative objects of each class. This approach, however, depends on the selection of a suitable dissimilarity measure. For spectra, incorporating information on their shape can be significant for good discrimination. In this paper, we study the benefit of using a measure that takes the shape of spectra into account. We show not only that the shape-based measure leads to better classification results, but also that a limited number of representative objects is enough to achieve them. The experiments are conducted on three one-dimensional data sets and a two-dimensional one.
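The core idea can be sketched in a few lines. In the example below (a minimal illustration, not the authors' implementation), a shape-based dissimilarity is approximated by the Euclidean distance between the first differences of two spectra, so that curves with similar shapes but different vertical offsets are considered close; each object is then represented by its vector of dissimilarities to a small set of prototypes, and classified by the nearest prototype. All function names and the toy spectra are hypothetical.

```python
import numpy as np

def shape_dissimilarity(x, y):
    # Hypothetical shape-based measure: Euclidean distance between the
    # first differences (discrete derivatives) of two spectra, so that
    # spectra with similar shapes but different offsets are close.
    return np.linalg.norm(np.diff(x) - np.diff(y))

def dissimilarity_representation(spectra, prototypes, measure):
    # Each object becomes a vector of its dissimilarities to the
    # representative objects (prototypes) of each class.
    return np.array([[measure(s, p) for p in prototypes] for s in spectra])

# Toy spectra: two distinct shapes (a peak and a slope).
grid = np.linspace(0.0, 1.0, 50)
peak = np.exp(-((grid - 0.5) ** 2) / 0.01)
slope = grid
prototypes = [peak, slope]

# Test objects: vertically shifted copies of each shape.
spectra = [peak + 0.3, slope - 0.2]
D = dissimilarity_representation(spectra, prototypes, shape_dissimilarity)
labels = D.argmin(axis=1)  # nearest-prototype classification
```

Because the measure compares derivatives rather than raw intensities, the shifted copies are assigned to the prototype with the matching shape, illustrating why shape information helps discriminate spectra.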