A maximum entropy approach to natural language processing
Computational Linguistics
Using decision trees to construct a practical parser
COLING '98 Proceedings of the 17th international conference on Computational linguistics - Volume 1
A new statistical parser based on bigram lexical dependencies
ACL '96 Proceedings of the 34th annual meeting on Association for Computational Linguistics
Committee-based decision making in probabilistic partial parsing
COLING '00 Proceedings of the 18th conference on Computational linguistics - Volume 1
A hybrid Japanese parser with hand-crafted grammar and statistics
COLING '00 Proceedings of the 18th conference on Computational linguistics - Volume 1
Bunsetsu identification using category-exclusive rules
COLING '00 Proceedings of the 18th conference on Computational linguistics - Volume 1
Backward beam search algorithm for dependency analysis of Japanese
COLING '00 Proceedings of the 18th conference on Computational linguistics - Volume 2
Stochastic dependency parsing of spontaneous Japanese spoken language
COLING '02 Proceedings of the 19th international conference on Computational linguistics - Volume 1
Named entity extraction based on a maximum entropy model and transformation rules
ACL '00 Proceedings of the 38th Annual Meeting on Association for Computational Linguistics
Estimating satisfactoriness of selectional restriction from corpus without a thesaurus
ACM Transactions on Asian Language Information Processing (TALIP)
Japanese dependency structure analysis based on support vector machines
EMNLP '00 Proceedings of the 2000 Joint SIGDAT conference on Empirical methods in natural language processing and very large corpora: held in conjunction with the 38th Annual Meeting of the Association for Computational Linguistics - Volume 13
Japanese dependency analysis using cascaded chunking
COLING-02 proceedings of the 6th conference on Natural language learning - Volume 20
Dependency parsing of Japanese spoken monologue based on clause boundaries
ACL-44 Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics
Linear-time dependency analysis for Japanese
COLING '04 Proceedings of the 20th international conference on Computational Linguistics
Dependency structure analysis and sentence boundary detection in spontaneous Japanese
COLING '04 Proceedings of the 20th international conference on Computational Linguistics
Japanese dependency analysis based on improved SVM and KNN
SMO'07 Proceedings of the 7th WSEAS International Conference on Simulation, Modelling and Optimization
Japanese dependency parsing using sequential labeling for semi-spoken language
ACL '07 Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions
A unified single scan algorithm for Japanese base phrase chunking and dependency parsing
ACLShort '09 Proceedings of the ACL-IJCNLP 2009 Conference Short Papers
Dependency parsing and projection based on word-pair classification
ACL '10 Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics
Using smaller constituents rather than sentences in active learning for Japanese dependency parsing
ACL '10 Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics
Japanese dependency analysis based on parallel relation
ICCOMP'06 Proceedings of the 10th WSEAS international conference on Computers
Using a partially annotated corpus to build a dependency parser for Japanese
IJCNLP'05 Proceedings of the Second international joint conference on Natural Language Processing
Combine constituent and dependency parsing via reranking
IJCAI'13 Proceedings of the Twenty-Third international joint conference on Artificial Intelligence
This paper describes a dependency structure analysis of Japanese sentences based on maximum entropy models. Our model is built by learning feature weights from a training corpus so as to predict dependencies between bunsetsus (phrasal units). The dependency accuracy of our system is 87.2% on the Kyoto University corpus. We discuss the contribution of each feature set and the relationship between the amount of training data and the accuracy.
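The abstract describes scoring candidate bunsetsu dependencies with a maximum entropy model learned from corpus features. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' implementation: a binary maximum entropy (logistic regression) classifier over invented toy feature vectors for (modifier, candidate head) bunsetsu pairs, trained by gradient ascent on the log-likelihood.

```python
import numpy as np

# Hypothetical toy data: each row is a feature vector for a
# (modifier bunsetsu, candidate head) pair, e.g. binary indicators
# for head part of speech, particle type, and a distance bucket.
# Labels: 1 if the modifier actually depends on that candidate head.
X = np.array([
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
], dtype=float)
y = np.array([1, 0, 1, 0, 1, 0], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit the maximum entropy model: maximize the conditional
# log-likelihood of the labels by batch gradient ascent.
w = np.zeros(X.shape[1])
learning_rate = 0.5
for _ in range(500):
    p = sigmoid(X @ w)                      # P(depends | features)
    w += learning_rate * X.T @ (y - p) / len(y)

def dependency_prob(features):
    """Model probability that the bunsetsu pair is a true dependency."""
    return float(sigmoid(np.asarray(features, dtype=float) @ w))
```

In a real parser, these pairwise probabilities would then be combined over a whole sentence (e.g. with a beam search under the constraint that Japanese dependencies point rightward) to select the most likely dependency structure.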