Learning in Neural Networks: Theoretical Foundations
Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data
ICML '01 Proceedings of the Eighteenth International Conference on Machine Learning
Maximum Entropy Markov Models for Information Extraction and Segmentation
ICML '00 Proceedings of the Seventeenth International Conference on Machine Learning
Building a large annotated corpus of English: the Penn Treebank
Computational Linguistics - Special issue on using large corpora: II
EMNLP '02 Proceedings of the ACL-02 conference on Empirical methods in natural language processing - Volume 10
Semantic role labeling via integer linear programming inference
COLING '04 Proceedings of the 20th international conference on Computational Linguistics
Integer linear programming inference for conditional random fields
ICML '05 Proceedings of the 22nd international conference on Machine learning
Piecewise pseudolikelihood for efficient training of conditional random fields
Proceedings of the 24th international conference on Machine learning
Structure compilation: trading structure for features
Proceedings of the 25th international conference on Machine learning
Towards robust semantic role labeling
Computational Linguistics
PKDD 2007 Proceedings of the 11th European conference on Principles and Practice of Knowledge Discovery in Databases
Extracting article text from the web with maximum subsequence segmentation
Proceedings of the 18th international conference on World wide web
Scaling conditional random fields by one-against-the-other decomposition
Journal of Computer Science and Technology
Search-based structured prediction
Machine Learning
Minimally supervised model of early language acquisition
CoNLL '09 Proceedings of the Thirteenth Conference on Computational Natural Language Learning
Deriving a large scale taxonomy from Wikipedia
AAAI'07 Proceedings of the 22nd national conference on Artificial intelligence - Volume 2
Learning and inference with constraints
AAAI'08 Proceedings of the 23rd national conference on Artificial intelligence - Volume 3
Piecewise training for structured prediction
Machine Learning
Practical very large scale CRFs
ACL '10 Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics
Dual decomposition for parsing with non-projective head automata
EMNLP '10 Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing
Knowing what to believe (when you already know something)
COLING '10 Proceedings of the 23rd International Conference on Computational Linguistics
Punctuation: making a point in unsupervised dependency parsing
CoNLL '11 Proceedings of the Fifteenth Conference on Computational Natural Language Learning
Margin-based active learning for structured output spaces
ECML'06 Proceedings of the 17th European conference on Machine Learning
A joint model for extended semantic role labeling
EMNLP '11 Proceedings of the Conference on Empirical Methods in Natural Language Processing
Data Mining and Knowledge Discovery
Unsupervised learning on an approximate corpus
NAACL HLT '12 Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
We study learning structured outputs in a discriminative framework where the values of the output variables are estimated by local classifiers. In this framework, complex dependencies among the output variables are captured by constraints that dictate which global label assignments are feasible. We compare two strategies, learning independent classifiers and inference-based training, by observing their behavior under different conditions. Experiments and theoretical justification lead to the conclusion that inference-based learning is superior when the local classifiers are difficult to learn, but it may require many examples before any discernible difference can be observed.
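The framework described above can be sketched in a few lines: independently trained local classifiers score each output variable, and a global constraint restricts which joint assignments are admissible at inference time. The toy scores, the "at most one positive label" constraint, and all function names below are illustrative assumptions, not the paper's actual models or experimental setup.

```python
# Minimal sketch of local classifiers + constrained joint inference.
# The scores, the constraint, and all names here are illustrative
# assumptions, not the paper's experimental setup.
import itertools

def local_scores(x):
    """Toy stand-in for independently trained local classifiers:
    scores[i][y] is the score of assigning label y to variable i."""
    return [
        {0: 0.2, 1: 0.8},   # variable 0 locally prefers label 1
        {0: 0.6, 1: 0.4},   # variable 1 locally prefers label 0
        {0: 0.1, 1: 0.9},   # variable 2 locally prefers label 1
    ]

def satisfies_constraint(assignment):
    """Illustrative global constraint: at most one variable
    may take label 1."""
    return sum(assignment) <= 1

def constrained_inference(x):
    """Return the highest-scoring global assignment that the
    constraint allows (exhaustive search, fine for toy sizes)."""
    scores = local_scores(x)
    best, best_score = None, float("-inf")
    for assignment in itertools.product([0, 1], repeat=len(scores)):
        if not satisfies_constraint(assignment):
            continue
        total = sum(s[y] for s, y in zip(scores, assignment))
        if total > best_score:
            best, best_score = assignment, total
    return best

print(constrained_inference(None))  # (0, 0, 1)
```

Note how inference changes the prediction: taking each local classifier's argmax independently would yield (1, 0, 1), which violates the constraint, while constrained inference trades the weaker local preference away and returns (0, 0, 1). Inference-based training goes a step further by propagating this constrained decision back into the classifiers' updates during learning.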