Towards an n-version dependency parser
TSD'10 Proceedings of the 13th international conference on Text, speech and dialogue
In recent years, dependency parsing has been carried out by machine learning–based systems that show high accuracy, though usually below 90% Labelled Attachment Score (LAS); MaltParser is one such system. Machine learning makes it possible to obtain a parser for any language that has an adequate training corpus. Since such systems generally cannot be modified, the following question arises: can we beat this 90% LAS by using better training corpora? Previous work suggests that high-level techniques are not sufficient for building more accurate training corpora. Thus, by analyzing the words that are most frequently attached or labelled incorrectly, we study the feasibility of some low-level techniques, based on n-version parsing models, for obtaining better parsing accuracy.
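As a point of reference for the 90% figure discussed above, the LAS metric can be illustrated with a short sketch (this is a generic illustration of the standard metric, not code from the paper): each token is scored correct only if both its predicted head and its predicted dependency label match the gold annotation.

```python
# Illustrative sketch: Labelled Attachment Score (LAS).
# Each token is represented as a (head_index, dep_label) pair;
# LAS is the fraction of tokens whose predicted head AND label both match gold.

def las(gold, pred):
    """gold, pred: per-token lists of (head_index, dep_label) pairs."""
    assert len(gold) == len(pred), "sentences must have the same length"
    correct = sum(1 for g, p in zip(gold, pred) if g == p)
    return correct / len(gold)

# Hypothetical 3-token sentence: token 3 gets the right head but the wrong label,
# so it counts as an unlabelled hit only, and LAS = 2/3.
gold = [(2, "nsubj"), (0, "root"), (2, "obj")]
pred = [(2, "nsubj"), (0, "root"), (2, "iobj")]
print(f"LAS = {las(gold, pred):.2f}")  # prints LAS = 0.67
```

Unlabelled Attachment Score (UAS) drops the label check, which is why LAS is the stricter of the two figures reported in shared-task evaluations.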