Exploiting generative models in discriminative classifiers. Proceedings of the 1998 Conference on Advances in Neural Information Processing Systems II.
Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. ICML '01: Proceedings of the Eighteenth International Conference on Machine Learning.
Kernel Methods for Pattern Analysis. The Journal of Machine Learning Research.
Edit distance-based kernel functions for structural pattern classification. Pattern Recognition.
2D Shape Classification Using Multifractional Brownian Motion. SSPR & SPR '08: Proceedings of the 2008 Joint IAPR International Workshop on Structural, Syntactic, and Statistical Pattern Recognition.
Component-based discriminative classification for hidden Markov models. Pattern Recognition.
Nonextensive Information Theoretic Kernels on Measures. The Journal of Machine Learning Research.
Clustering-Based Construction of Hidden Markov Models for Generative Kernels. EMMCVPR '09: Proceedings of the 7th International Conference on Energy Minimization Methods in Computer Vision and Pattern Recognition.
A New Generative Feature Set Based on Entropy Distance for Discriminative Classification. ICIAP '09: Proceedings of the 15th International Conference on Image Analysis and Processing.
Similarity-based classification of sequences using hidden Markov models. Pattern Recognition.
Face recognition based on multi-class mapping of Fisher scores. Pattern Recognition.
ECCV '06: Proceedings of the 9th European Conference on Computer Vision, Volume Part IV.
Renal cancer cell classification using generative embeddings and information theoretic kernels. PRIB '11: Proceedings of the 6th IAPR International Conference on Pattern Recognition in Bioinformatics.
Many approaches to learning classifiers for structured objects (e.g., shapes) use generative models in a Bayesian framework. However, state-of-the-art classifiers for vectorial data (e.g., support vector machines) are learned discriminatively. A generative embedding is a mapping from the object space into a fixed-dimensional feature space, induced by a generative model that is usually learned from data. The fixed dimensionality of these feature spaces permits the use of state-of-the-art discriminative machines based on vectorial representations, thus bringing together the best of the generative and discriminative paradigms. Using a generative embedding involves two steps: (i) defining and learning the generative model used to build the embedding; (ii) discriminatively learning a (possibly kernel-based) classifier on the resulting feature space. The literature on generative embeddings is essentially focused on step (i), usually adopting some standard off-the-shelf tool (e.g., an SVM with a linear or RBF kernel) for step (ii). In this paper, we follow a different route, combining several hidden Markov model (HMM)-based generative embeddings (including the classical Fisher score) with the recently proposed non-extensive information theoretic kernels. We test this methodology on a 2D shape recognition task, showing that the proposed method is competitive with the state of the art.
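To make the two-step pipeline described in the abstract concrete, the sketch below shows one simple way it could be wired up. It is not the paper's method: it uses a per-class HMM log-likelihood vector as the generative embedding (instead of the Fisher score) and a standard RBF-kernel SVM (instead of the non-extensive information theoretic kernels), and it assumes the hmmlearn and scikit-learn libraries, sequences given as (length x features) NumPy arrays, and illustrative function names (fit_class_hmms, embed, train_pipeline, predict).

```python
# Minimal sketch of a generative-embedding pipeline (assumptions noted above;
# this is an illustration, not the paper's exact method).
import numpy as np
from hmmlearn.hmm import GaussianHMM
from sklearn.svm import SVC


def fit_class_hmms(sequences, labels, n_states=3):
    """Step (i): learn one generative model (an HMM) per class."""
    hmms = {}
    for c in sorted(set(labels)):
        seqs = [s for s, y in zip(sequences, labels) if y == c]
        X = np.vstack(seqs)                # hmmlearn expects concatenated sequences
        lengths = [len(s) for s in seqs]   # plus the length of each sequence
        hmms[c] = GaussianHMM(n_components=n_states, covariance_type="diag",
                              n_iter=50).fit(X, lengths)
    return hmms


def embed(sequences, hmms):
    """Generative embedding: map each variable-length sequence to the
    fixed-dimensional vector of its per-class HMM log-likelihoods."""
    classes = sorted(hmms)
    return np.array([[hmms[c].score(s) / len(s) for c in classes]
                     for s in sequences])


def train_pipeline(train_seqs, train_labels):
    """Step (ii): discriminatively learn a kernel classifier on the embedded space.
    An RBF SVM stands in for the non-extensive information theoretic kernels."""
    hmms = fit_class_hmms(train_seqs, train_labels)
    clf = SVC(kernel="rbf", C=1.0).fit(embed(train_seqs, hmms), train_labels)
    return hmms, clf


def predict(test_seqs, hmms, clf):
    """Embed unseen sequences with the learned HMMs and classify them."""
    return clf.predict(embed(test_seqs, hmms))
```

Dividing each log-likelihood by the sequence length (a choice made only in this sketch) keeps the embedding comparable across shapes described by sequences of different lengths; any other HMM-based embedding, such as the Fisher score, could replace the embed step without changing the rest of the pipeline.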