This paper proposes a novel method for learning probability models of verb subcategorization preference. We address the issues of case dependencies and noun class generalization in a uniform way by employing the maximum entropy modeling method. We also propose a new model selection algorithm that starts from the most general model and gradually examines more specific models. The experimental evaluation shows that both the case dependencies and the specific sense restrictions selected by the proposed method contribute to improving performance in subcategorization preference resolution.
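The maximum entropy modeling the abstract refers to can be sketched as a conditional log-linear model over binary features, fit by gradient ascent. The feature templates, class labels, and toy data below are illustrative assumptions for the sketch, not the paper's actual feature set or training procedure.

```python
import math
from collections import defaultdict

def extract_features(verb, case_slot, noun_class):
    # Hypothetical binary feature templates pairing case slots with
    # noun classes; the real model's features are defined in the paper.
    return {
        f"verb={verb}&case={case_slot}": 1.0,
        f"verb={verb}&class={noun_class}": 1.0,
        f"case={case_slot}&class={noun_class}": 1.0,
    }

def train(events, labels, classes, epochs=200, lr=0.5):
    # events: list of feature dicts; labels: observed class per event.
    # One weight per (class, feature) pair, defining
    # p(c | x) proportional to exp(sum_f w[c, f] * f(x)).
    weights = defaultdict(float)
    for _ in range(epochs):
        grad = defaultdict(float)
        for feats, label in zip(events, labels):
            scores = {c: sum(weights[(c, f)] * v for f, v in feats.items())
                      for c in classes}
            z = sum(math.exp(s) for s in scores.values())
            for c in classes:
                p = math.exp(scores[c]) / z
                for f, v in feats.items():
                    # Log-likelihood gradient: observed feature count
                    # minus the model's expected feature count.
                    grad[(c, f)] += ((1.0 if c == label else 0.0) - p) * v
        for key, g in grad.items():
            weights[key] += lr * g
    return weights

def predict(weights, feats, classes):
    # Return the class with the highest log-linear score.
    scores = {c: sum(weights[(c, f)] * v for f, v in feats.items())
              for c in classes}
    return max(scores, key=scores.get)

# Toy usage: judge plausibility of a noun class filling a verb's case slot.
classes = ["plausible", "implausible"]
data = [("eat", "obj", "food"), ("eat", "obj", "furniture"),
        ("drink", "obj", "liquid"), ("drink", "obj", "furniture")]
labels = ["plausible", "implausible", "plausible", "implausible"]
events = [extract_features(*x) for x in data]
weights = train(events, labels, classes)
```

Plain gradient ascent stands in here for the iterative scaling procedures commonly used to fit maximum entropy models; either optimizer targets the same conditional log-likelihood.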