Statistical and computational concerns have motivated parameter estimators based on various forms of likelihood, e.g., joint, conditional, and pseudolikelihood. In this paper, we present a unified framework for studying these estimators, which allows us to compare their relative (statistical) efficiencies. Our asymptotic analysis suggests that modeling more of the data tends to reduce variance, but at the cost of being more sensitive to model misspecification. We present experiments validating our analysis.
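To make the contrast between these estimators concrete, the sketch below fits the same toy model two ways: a joint-likelihood (generative) estimator via closed-form counts, and a conditional-likelihood (discriminative) estimator via gradient ascent (logistic regression). This is a minimal illustration, not the paper's experimental setup; the data-generating process and all function names are invented for the example.

```python
# Hedged sketch: joint- vs. conditional-likelihood estimation on a toy model.
# Model: one binary feature x, binary label y (illustrative, not from the paper).
import math
import random

random.seed(0)

def sample(n):
    """Synthetic data: p(y=1)=0.5, p(x=1|y=1)=0.8, p(x=1|y=0)=0.3."""
    data = []
    for _ in range(n):
        y = 1 if random.random() < 0.5 else 0
        p = 0.8 if y == 1 else 0.3
        data.append((1 if random.random() < p else 0, y))
    return data

def fit_joint(data):
    """Joint likelihood: estimate p(y) and p(x|y) by closed-form counts."""
    n1 = sum(y for _, y in data)
    p_y1 = n1 / len(data)
    p_x1_y1 = sum(x for x, y in data if y == 1) / max(n1, 1)
    p_x1_y0 = sum(x for x, y in data if y == 0) / max(len(data) - n1, 1)
    return p_y1, p_x1_y1, p_x1_y0

def predict_joint(params, x):
    p_y1, p_x1_y1, p_x1_y0 = params
    lik1 = p_y1 * (p_x1_y1 if x else 1 - p_x1_y1)
    lik0 = (1 - p_y1) * (p_x1_y0 if x else 1 - p_x1_y0)
    return 1 if lik1 > lik0 else 0

def fit_conditional(data, steps=2000, lr=0.5):
    """Conditional likelihood: logistic regression by batch gradient ascent."""
    w = b = 0.0
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in data:
            p = 1 / (1 + math.exp(-(w * x + b)))
            gw += (y - p) * x
            gb += (y - p)
        w += lr * gw / len(data)
        b += lr * gb / len(data)
    return w, b

def predict_conditional(params, x):
    w, b = params
    return 1 if w * x + b > 0 else 0

train, test = sample(2000), sample(1000)
joint, cond = fit_joint(train), fit_conditional(train)
acc_j = sum(predict_joint(joint, x) == y for x, y in test) / len(test)
acc_c = sum(predict_conditional(cond, x) == y for x, y in test) / len(test)
print(acc_j, acc_c)
```

On this well-specified toy model both estimators recover the Bayes rule (predict y=1 iff x=1) and achieve similar test accuracy; the joint estimator uses the full data distribution, which is exactly the tradeoff the abstract describes — lower variance when the model is correct, more sensitivity when it is not.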