Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data
ICML '01 Proceedings of the Eighteenth International Conference on Machine Learning
Maximum Entropy Markov Models for Information Extraction and Segmentation
ICML '00 Proceedings of the Seventeenth International Conference on Machine Learning
Machine learning for information extraction in informal domains
ICML '04 Proceedings of the twenty-first international conference on Machine learning
Feature-rich part-of-speech tagging with a cyclic dependency network
NAACL '03 Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1
Learning as search optimization: approximate large margin methods for structured prediction
ICML '05 Proceedings of the 22nd international conference on Machine learning
CONLL '03 Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003 - Volume 4
Accelerated training of conditional random fields with stochastic gradient methods
ICML '06 Proceedings of the 23rd international conference on Machine learning
EMNLP '06 Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing
Learning and inference over constrained output
IJCAI'05 Proceedings of the 19th international joint conference on Artificial intelligence
Structured machine learning: the next ten years
Machine Learning
Approximate Parameter Learning in Conditional Random Fields: An Empirical Investigation
Proceedings of the 30th DAGM symposium on Pattern Recognition
Multi-domain spoken language understanding with transfer learning
Speech Communication
Scaling conditional random fields by one-against-the-other decomposition
Journal of Computer Science and Technology
Uncertainty management in rule-based information extraction systems
Proceedings of the 2009 ACM SIGMOD International Conference on Management of data
Piecewise training for structured prediction
Machine Learning
Learning conditional random fields for classification of hyperspectral images
IEEE Transactions on Image Processing
Classification and Semantic Mapping of Urban Environments
International Journal of Robotics Research
Stochastic Composite Likelihood
The Journal of Machine Learning Research
Labelwise margin maximization for sequence labeling
CICLing'11 Proceedings of the 12th international conference on Computational linguistics and intelligent text processing - Volume Part I
Hierarchical conditional random fields for detection of gad-enhancing lesions in multiple sclerosis
MICCAI'12 Proceedings of the 15th international conference on Medical Image Computing and Computer-Assisted Intervention - Volume Part II
Discriminative training of graphical models can be expensive when the variables have large cardinality, even if the graphical structure is tractable. In such cases, pseudolikelihood is an attractive alternative because its running time is linear in the variable cardinality, but on some data its accuracy can be poor. Piecewise training (Sutton & McCallum, 2005) can have better accuracy but does not scale as well with the variable cardinality. In this paper, we introduce piecewise pseudolikelihood, which retains the computational efficiency of pseudolikelihood but can have much better accuracy. On several benchmark NLP data sets, piecewise pseudolikelihood is more accurate than standard pseudolikelihood, and in many cases its accuracy is nearly equivalent to that of maximum likelihood, while requiring five to ten times less training time than batch CRF training.
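To see why pseudolikelihood is linear in the variable cardinality, consider a linear-chain model: each variable is conditioned on its observed neighbors, so the per-variable normalizer sums over only the K candidate labels, rather than the O(K^2) pairwise sums of the exact forward recursion. The sketch below is illustrative, not the paper's implementation; the function name and the simple log-potential parameterization (one unary table, one shared pairwise table) are assumptions for the example.

```python
import numpy as np

def pseudolikelihood(unary, pairwise, labels):
    """Log-pseudolikelihood of a label sequence under a chain model.

    unary:    (T, K) array of log-potentials per position/label
    pairwise: (K, K) array of log-potentials for adjacent label pairs
    labels:   (T,) observed label sequence

    Each term conditions variable t on its *fixed* observed neighbors,
    so each local normalizer is a sum over K labels -- linear in the
    cardinality K, unlike the O(K^2) exact likelihood computation.
    """
    T, K = unary.shape
    total = 0.0
    for t in range(T):
        scores = unary[t].copy()                  # (K,) candidate scores at t
        if t > 0:
            scores += pairwise[labels[t - 1]]      # edge to fixed left neighbor
        if t < T - 1:
            scores += pairwise[:, labels[t + 1]]   # edge to fixed right neighbor
        log_z = np.logaddexp.reduce(scores)        # local normalizer over K labels
        total += scores[labels[t]] - log_z
    return total
```

In training, the gradient of this objective with respect to the potentials is likewise computed from the local conditionals, which is what makes pseudolikelihood (and its piecewise variant) cheap when K is large.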