The structure of a sentence can be seen as a spanning tree in a linguistically augmented graph of syntactic nodes. This paper presents an approach to unlabeled dependency parsing based on this view. The first step marks the chunks and chunk heads of a given sentence and identifies the intra-chunk dependency relations. The second step learns to identify the inter-chunk dependency relations. For this, we use an initialization technique based on a measure we call Normalized Conditional Mutual Information (NCMI), in addition to a few linguistic constraints. We present results for Hindi, achieving a precision of 80.83% on sentences shorter than 10 words and 66.71% overall. This is significantly better than a baseline that uses random initialization.
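To make the spanning-tree view concrete, the following is a minimal sketch (not the paper's method) of picking a dependency tree as the maximum-scoring spanning arborescence over a graph of tokens rooted at an artificial ROOT node. Real parsers use the Chu-Liu/Edmonds algorithm for this; here, for a toy three-word sentence, a brute-force search over head assignments suffices. The score matrix `S` is purely illustrative, standing in for arc scores such as the NCMI-based values the paper describes.

```python
from itertools import product

def is_arborescence(heads, n):
    # heads[d] = head of token d (tokens are 1..n, ROOT is 0);
    # valid iff every token reaches ROOT without hitting a cycle
    for d in range(1, n + 1):
        seen, cur = set(), d
        while cur != 0:
            if cur in seen:
                return False  # cycle detected
            seen.add(cur)
            cur = heads[cur]
    return True

def best_tree(scores, n):
    # brute-force maximum spanning arborescence: try every head
    # assignment, keep the highest-scoring one that forms a tree
    best, best_heads = float("-inf"), None
    for assignment in product(range(n + 1), repeat=n):
        heads = {d: assignment[d - 1] for d in range(1, n + 1)}
        if any(h == d for d, h in heads.items()):
            continue  # no self-loops
        if not is_arborescence(heads, n):
            continue
        total = sum(scores[heads[d]][d] for d in range(1, n + 1))
        if total > best:
            best, best_heads = total, heads
    return best_heads, best

# Illustrative arc scores for "John saw Mary" (1=John, 2=saw, 3=Mary);
# scores[h][d] is the score of the arc head h -> dependent d
S = [
    [0, 1, 10, 1],  # ROOT ->
    [0, 0, 2, 1],   # John ->
    [0, 9, 0, 9],   # saw ->
    [0, 1, 2, 0],   # Mary ->
]
heads, score = best_tree(S, 3)
# -> heads {1: 2, 2: 0, 3: 2}: "saw" governs "John" and "Mary"
```

For realistic sentence lengths the brute force is exponential; Chu-Liu/Edmonds finds the same maximum arborescence in polynomial time, which is why it underlies graph-based dependency parsers such as McDonald et al.'s.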