Context-sensitive statistics for improved grammatical language models
AAAI '94 Proceedings of the Twelfth National Conference on Artificial Intelligence (vol. 1)
Statistical Part-of-Speech Tagging for Classical Chinese
TSD '02 Proceedings of the 5th International Conference on Text, Speech and Dialogue
Three generative, lexicalised models for statistical parsing
ACL '97 Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics and Eighth Conference of the European Chapter of the Association for Computational Linguistics
Statistical decision-tree models for parsing
ACL '95 Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics
Towards history-based grammars: using richer models for probabilistic parsing
HLT '91 Proceedings of the Workshop on Speech and Natural Language
PCFG parsing for restricted classical Chinese texts
SIGHAN '02 Proceedings of the first SIGHAN workshop on Chinese language processing - Volume 18
Statistical parsing with a context-free grammar and word statistics
AAAI '97/IAAI '97 Proceedings of the Fourteenth National Conference on Artificial Intelligence and Ninth Conference on Innovative Applications of Artificial Intelligence
In this paper, we compare the performance of three probabilistic pseudo-context-sensitive models for parsing isolating languages. All three models are based on the conventional probabilistic context-free grammar (PCFG). The first is well known from statistical parsing of English, while the other two are novel models that condition on the siblings of an expanding nonterminal. We evaluate these models on Classical Chinese, a typical isolating language, and, quite surprisingly, with only a little additional conditioning the new models significantly outperform the first. Our work thus shows the impact of typological distinctions on parsing and provides two simple yet effective conditioning models for isolating languages.
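The idea of "a little additional conditioning" can be illustrated with a minimal sketch. The toy treebank, the rule representation, and the choice of conditioning on the parent's left sibling are all assumptions for illustration; the paper's actual models and estimation details are not reproduced here. The sketch contrasts relative-frequency estimates of a plain PCFG, P(rhs | parent), with a sibling-conditioned variant, P(rhs | parent, left sibling):

```python
from collections import defaultdict

def estimate(rules, condition):
    """Relative-frequency estimates of P(rhs | condition(rule))."""
    joint = defaultdict(int)      # counts of (context, rhs) pairs
    marginal = defaultdict(int)   # counts of each context
    for rule in rules:
        ctx = condition(rule)
        joint[(ctx, rule[-1])] += 1
        marginal[ctx] += 1
    return {k: count / marginal[k[0]] for k, count in joint.items()}

# Hypothetical toy rule instances: (parent, left_sibling_of_parent, rhs).
toy_rules = [
    ("VP", "NP",   ("V", "NP")),
    ("VP", "NP",   ("V", "NP")),
    ("VP", "ADVP", ("V",)),
    ("VP", "NP",   ("V",)),
]

# Plain PCFG: condition on the parent nonterminal only.
pcfg = estimate(toy_rules, lambda r: r[0])
# Sibling model: also condition on the parent's left sibling.
sib = estimate(toy_rules, lambda r: (r[0], r[1]))

print(pcfg[("VP", ("V", "NP"))])          # 0.5
print(sib[(("VP", "NP"), ("V", "NP"))])   # 0.666... — sharper estimate
```

On this toy data the extra sibling context shifts the distribution (2/3 vs. 1/2 for the transitive expansion), which is the kind of effect the abstract attributes to the sibling-conditioned models.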