Learning Abbreviations from Chinese and English Terms by Modeling Non-Local Information
ACM Transactions on Asian Language Information Processing (TALIP)
This paper describes a robust approach to abbreviating terms. To incorporate non-local information into abbreviation generation, we present both an implicit and an explicit solution: a latent variable model, and a label-encoding approach that carries global information. Although the two approaches appear to compete with one another, we demonstrate that they are in fact complementary. By combining them, the proposed abbreviation generator achieved the best results for both Chinese and English in our experiments. Moreover, we apply the generator directly to a quite different task, abbreviation recognition. Experiments showed that the proposed model works robustly, outperforming five out of six state-of-the-art abbreviation recognizers.
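As a rough illustration of the underlying task formulation (a minimal sketch, not the authors' actual model), abbreviation generation can be framed as character-level sequence labeling: each character of the full term receives a keep/skip label, and the kept characters form the abbreviation. The function and example below are hypothetical; the paper's models learn such label sequences rather than computing them by rule.

```python
def generate_abbreviation(term, labels):
    """Apply a per-character keep/skip label sequence to a term.

    term   -- the full form, e.g. "conditional random fields"
    labels -- one boolean per character; True means keep that character
    """
    assert len(term) == len(labels), "one label per character"
    return "".join(ch for ch, keep in zip(term, labels) if keep)


term = "conditional random fields"
# A hand-written label sequence that keeps each word-initial character
# (a trained model would predict these labels instead).
labels = [i == 0 or term[i - 1] == " " for i in range(len(term))]
print(generate_abbreviation(term, labels).upper())  # CRF
```

Non-local information matters here because the choice to keep one character constrains the choices elsewhere (e.g., an abbreviation usually takes at most one character per word), which a purely local labeler cannot capture.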