Generative models for vision and pattern recognition have been overshadowed in recent years by powerful non-parametric discriminative models. These discriminative models can learn arbitrary decision boundaries between classes and have proved very effective in classification and detection problems. However, unlike generative models, they do not lend themselves naturally to more general vision tasks such as rendering novel images, denoising, and inpainting. In this paper we introduce Complementary Kernel Density Estimation (CKDE), a new generative model that adopts many of the features of non-parametric discriminative models: (1) CKDE allows complex decision surfaces and arbitrary class-conditional distributions to be learned; (2) it is easy to train because the log likelihood of the model is concave, so it has no local maxima; and (3) its class-conditional distributions can be trained jointly to share information among the different classes. We first demonstrate that CKDE is more accurate on benchmark classification tasks than a purely discriminative method such as the SVM. We then show that it estimates the posterior probability of class labels more accurately than kernelized logistic regression. Our remaining results demonstrate that partial images can be accurately classified by marginalizing unobserved pixels out of the class-conditional distributions, and that missing parts of an image can be painted in using the learned generative model.
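The CKDE model itself is not reproduced here, but the classification-by-marginalization idea the abstract describes can be sketched with an ordinary Gaussian product-kernel KDE per class: because a product kernel factorizes over dimensions, marginalizing unobserved pixels amounts to simply dropping those dimensions from the kernel. All names, the equal-prior assumption, and the bandwidth choice below are illustrative, not the paper's actual method:

```python
import numpy as np

def kde_log_density(X_train, x, bandwidth=0.5, observed=None):
    """Log density of x under a Gaussian product-kernel KDE fit to X_train.

    If `observed` is a boolean mask over dimensions, unobserved dimensions
    are marginalized out -- for a product kernel this is exact and amounts
    to evaluating the kernel only on the observed dimensions.
    """
    if observed is None:
        observed = np.ones(X_train.shape[1], dtype=bool)
    Xo, xo = X_train[:, observed], x[observed]
    d = Xo.shape[1]
    # Squared distances to every training point, scaled by the bandwidth.
    sq = np.sum((Xo - xo) ** 2, axis=1) / (2.0 * bandwidth ** 2)
    # Gaussian normalization constant in log space (d observed dimensions).
    log_norm = -0.5 * d * np.log(2.0 * np.pi * bandwidth ** 2)
    # Average the kernels stably with log-sum-exp.
    return -np.log(len(Xo)) + np.logaddexp.reduce(log_norm - sq)

def classify(class_data, x, observed=None):
    """Pick the class whose KDE class-conditional assigns x the highest
    (marginal) likelihood; equal class priors are assumed for simplicity."""
    return max(class_data,
               key=lambda c: kde_log_density(class_data[c], x, observed=observed))
```

For example, with two toy classes clustered near the origin and near (5, 5), a point whose second coordinate is missing can still be classified from its first coordinate alone by passing `observed=np.array([True, False])`. CKDE's contribution beyond this baseline is that the class-conditional densities are trained jointly with a concave log likelihood rather than fit independently.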