In this paper, we discuss an application of Maximum Entropy modeling to the acquisition of subject and object processing in Italian. The model learns from corpus data a set of experimentally and theoretically well-motivated linguistic constraints, together with their relative salience in the development and processing of Italian grammar. The model is also shown to acquire robust syntactic generalizations by relying only on the evidence provided by a small number of high-token-frequency verbs. These results are consistent with current research on the role of high-frequency verbs in allowing children to converge on the most salient constraints of the grammar.
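The modeling idea described above can be sketched as a small maximum-entropy (logistic) classifier: each argument is represented by binary constraint cues, and training assigns each constraint a weight whose magnitude reflects its relative salience. This is a minimal illustrative sketch, not the paper's implementation; the feature names (agreement, animacy, preverbal position), the toy data, and the training settings are all assumptions chosen for illustration.

```python
import math

# Hypothetical toy data (not from the paper): each argument is a vector of
# binary constraint cues [agrees_with_verb, is_animate, is_preverbal];
# label 1 = subject, 0 = object.
DATA = [
    ([1, 1, 1], 1), ([1, 1, 0], 1), ([1, 0, 1], 1), ([1, 1, 1], 1),
    ([0, 0, 0], 0), ([0, 1, 0], 0), ([0, 0, 1], 0), ([0, 0, 0], 0),
]

def predict(w, x):
    """Maximum-entropy (logistic) probability that argument x is a subject."""
    z = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

def train(data, epochs=200, lr=0.5):
    """Fit constraint weights by gradient ascent on the log-likelihood."""
    w = [0.0] * len(data[0][0])
    for _ in range(epochs):
        for x, y in data:
            err = y - predict(w, x)  # gradient of the per-example log-likelihood
            for i, xi in enumerate(x):
                w[i] += lr * err * xi
    return w

weights = train(DATA)
# Larger weights mark more salient constraints for subjecthood.
for name, wt in zip(["agreement", "animacy", "preverbal"], weights):
    print(f"{name}: {wt:+.2f}")
```

In this toy dataset, verb agreement perfectly predicts subjecthood while animacy and word order are only partially reliable, so training assigns agreement the largest weight; this mirrors, in miniature, how the model's learned weights express the relative salience of competing constraints.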