Classical approaches to learning classifiers for structured objects (e.g., images, sequences) use generative models in a standard Bayesian framework. To exploit the state-of-the-art performance of discriminative learning, while also taking advantage of generative models of the data, generative embeddings have recently been proposed as a way of building hybrid discriminative/generative approaches. A generative embedding is a mapping, induced by a generative model (usually learned from data), from the object space into a fixed-dimensional space suitable for discriminative classifier learning. Generative embeddings have been shown to often outperform the classifiers obtained directly from the generative models upon which they are built. Using a generative embedding for classification involves two main steps: (i) defining and learning a generative model and using it to build the embedding; (ii) discriminatively learning a (possibly kernel-based) classifier on the embedded data. The literature on generative embeddings has focused essentially on step (i), usually taking some standard off-the-shelf tool for step (ii). Here, we adopt a different approach by also focusing on the discriminative learning step. Specifically, we exploit the probabilistic nature of generative embeddings by using kernels defined on probability measures; in particular, we investigate the use of a recent family of non-extensive information-theoretic kernels on top of different generative embeddings. We show, on several medical applications, that the approach yields state-of-the-art performance.
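The two-step pipeline above can be sketched in a minimal toy form. The sketch below is illustrative, not the paper's actual method: the "generative model" is a fixed one-dimensional Gaussian mixture (parameters hand-set rather than learned), the embedding maps each object to its vector of posterior component probabilities, and the kernel on probability measures is the Jensen-Shannon kernel — the extensive (q → 1) special case of the non-extensive Tsallis family the abstract refers to. The final classifier is a simple kernel nearest-neighbor rule standing in for a kernel machine.

```python
import numpy as np

def posterior_embedding(X, means, var=1.0):
    """Generative embedding: map each 1-D point to its vector of
    posterior probabilities under a fixed, equal-weight Gaussian
    mixture (a toy stand-in for a learned generative model)."""
    ll = -0.5 * (X[:, None] - means[None, :]) ** 2 / var
    ll -= ll.max(axis=1, keepdims=True)        # numerical stability
    p = np.exp(ll)
    return p / p.sum(axis=1, keepdims=True)    # rows are distributions

def js_kernel(P, Q):
    """Jensen-Shannon kernel k(p, q) = exp(-JS(p, q)) between rows of
    P and rows of Q; JS is the Jensen-Shannon divergence, i.e. the
    extensive limit of the non-extensive Tsallis kernel family."""
    def H(p):
        p = np.clip(p, 1e-12, 1.0)
        return -(p * np.log(p)).sum(axis=-1)
    M = 0.5 * (P[:, None, :] + Q[None, :, :])  # pairwise midpoints
    js = H(M) - 0.5 * (H(P)[:, None] + H(Q)[None, :])
    return np.exp(-js)

# Toy data: class 0 clusters near -2, class 1 near +2 (assumed values).
means = np.array([-2.0, 0.0, 2.0])             # mixture components
X_train = np.array([-2.1, -1.9, 1.9, 2.1])
y_train = np.array([0, 0, 1, 1])
X_test = np.array([-2.0, 2.0])

# Step (i): embed objects via the generative model.
P_train = posterior_embedding(X_train, means)
P_test = posterior_embedding(X_test, means)

# Step (ii): classify in the embedded space with a kernel on
# probability measures (1-NN in kernel similarity, for brevity).
K = js_kernel(P_test, P_train)
pred = y_train[K.argmax(axis=1)]
print(pred)                                     # one label per test point
```

In the paper's setting, step (i) would use a richer learned model (e.g., an HMM or a topic model) and step (ii) a kernel machine such as an SVM with the full non-extensive kernel family; the structure of the pipeline is the same.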