A maximum entropy approach to natural language processing
Computational Linguistics
Inducing Features of Random Fields
IEEE Transactions on Pattern Analysis and Machine Intelligence
An Algorithm that Learns What's in a Name
Machine Learning - Special issue on natural language learning
The Hierarchical Hidden Markov Model: Analysis and Applications
Machine Learning
A maximum entropy approach to named entity recognition
EACL '99 Proceedings of the ninth conference on European chapter of the Association for Computational Linguistics
Improved source-channel models for Chinese word segmentation
ACL '03 Proceedings of the 41st Annual Meeting on Association for Computational Linguistics - Volume 1
Accurate unlexicalized parsing
ACL '03 Proceedings of the 41st Annual Meeting on Association for Computational Linguistics - Volume 1
Hierarchical hidden Markov models for information extraction
IJCAI'03 Proceedings of the 18th international joint conference on Artificial intelligence
The paper discusses two policies for recognizing NEs with complex structures using maximum entropy models. The first is to build cascaded MaxEnt models at different levels; the second is to design more detailed tags, informed by human knowledge, to represent complex structures directly. Experiments on Chinese organization name recognition show that layered structures yield more accurate models, whereas the extended tags do not deliver the expected gains. We empirically show that the {start, continue, end, unique, other} tag set is the best tag set for NE recognition with MaxEnt models.
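To make the {start, continue, end, unique, other} tag set concrete, here is a minimal sketch (not from the paper; the function name and span convention are illustrative assumptions) that assigns these positional tags to tokens given entity spans:

```python
def tag_tokens(tokens, entity_spans):
    """Assign {start, continue, end, unique, other} tags to tokens.

    entity_spans: list of (start, end) token indices, end exclusive
    (an assumed convention for this sketch, not from the paper).
    """
    tags = ["other"] * len(tokens)
    for start, end in entity_spans:
        if end - start == 1:
            # Single-token entity: tagged "unique" rather than "start"/"end".
            tags[start] = "unique"
        else:
            tags[start] = "start"
            for i in range(start + 1, end - 1):
                tags[i] = "continue"
            tags[end - 1] = "end"
    return tags

tokens = ["The", "Bank", "of", "China", "opened", "today"]
print(tag_tokens(tokens, [(1, 4)]))
# → ['other', 'start', 'continue', 'end', 'other', 'other']
```

Under this scheme each token receives exactly one tag, so a sequence classifier such as a MaxEnt model can recover entity boundaries by decoding the tag sequence.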