We present a deterministic model that predicts all the phrase boundaries of a syntactic tree, including both base constituent boundaries and nested constituent boundaries. The model uses only word and part-of-speech (POS) information, whereas general parsers also use phrase-type information. Our model is divided into two stages, which are further decomposed into four classification sub-models. The F-score of our model is comparable to those of the Stanford parser's PCFG and factored models when tested on Penn Treebank Section 23 with gold-standard POS tags, showing that phrase boundaries can be identified without phrase labels while still achieving results comparable to the Stanford parser.
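The abstract does not spell out the feature templates, label scheme, or classifier type used by the sub-models, so the following is only a minimal sketch of what one boundary-classification sub-model might look like, assuming a log-linear classifier over word/POS window features; the window size, toy labels, and the scikit-learn components (`DictVectorizer`, `LogisticRegression`) are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch of one phrase-boundary classification sub-model:
# predict, for each token, whether a constituent boundary opens at that position,
# using only word and POS features (window size and labels are assumptions).
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline


def token_features(words, tags, i):
    """Word/POS features in a +/-1 window around position i (assumed template)."""
    return {
        "w0": words[i], "t0": tags[i],
        "w-1": words[i - 1] if i > 0 else "<S>",
        "t-1": tags[i - 1] if i > 0 else "<S>",
        "w+1": words[i + 1] if i + 1 < len(words) else "</S>",
        "t+1": tags[i + 1] if i + 1 < len(words) else "</S>",
    }


# Toy training data with hypothetical boundary labels (1 = a constituent opens here).
sentences = [
    (["The", "dog", "barked", "."], ["DT", "NN", "VBD", "."], [1, 0, 1, 0]),
    (["She", "saw", "the", "cat", "."], ["PRP", "VBD", "DT", "NN", "."], [1, 1, 1, 0, 0]),
]

X = [token_features(w, t, i) for w, t, _ in sentences for i in range(len(w))]
y = [label for _, _, labels in sentences for label in labels]

# One boundary classifier; the full two-stage model would combine several such sub-models.
clf = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(X, y)

words, tags = ["A", "bird", "sang", "."], ["DT", "NN", "VBD", "."]
print(clf.predict([token_features(words, tags, i) for i in range(len(words))]))
```

A full system along these lines would pair such an "open-boundary" classifier with a corresponding "close-boundary" classifier (and their nested-constituent counterparts), then assemble the predicted openings and closings into a tree; that assembly step is beyond this sketch.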