Machine transliteration is an automatic method for converting a word in one language into a phonetically equivalent word in another language. Interest has been growing in using machine transliteration to assist machine translation and information retrieval. Three types of machine transliteration models---grapheme-based, phoneme-based, and hybrid---have been proposed. Surprisingly, few efforts have been reported to exploit the correspondence between source graphemes and source phonemes, even though this correspondence plays an important role in machine transliteration; likewise, little work has addressed handling source graphemes and phonemes dynamically. In this paper, we propose a transliteration model that dynamically uses both graphemes and phonemes, and in particular the correspondence between them. With this model, we achieve better performance than has been reported for other models: improvements of about 15 to 41% in English-to-Korean transliteration and about 16 to 44% in English-to-Japanese transliteration.
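To illustrate the intuition behind using grapheme-phoneme correspondence (this is a toy sketch, not the paper's actual model), the example below keys the transliteration table on aligned (grapheme, phoneme) pairs rather than on graphemes alone. The alignment, the ARPAbet-style phoneme labels, and the mapping table are all hypothetical illustrations: the same grapheme "a" is rendered differently depending on which phoneme it corresponds to.

```python
# Toy illustration of grapheme-phoneme correspondence in transliteration.
# The alignment and mapping table below are invented examples, not data
# or rules from the paper.

# Aligned (grapheme chunk, phoneme) pairs for the English word "data".
aligned = [("d", "D"), ("a", "EY"), ("t", "T"), ("a", "AH")]

# Correspondence-based table: the same grapheme "a" maps to different
# target graphemes depending on the phoneme it is aligned with.
mapping = {
    ("d", "D"): "d",
    ("a", "EY"): "ei",
    ("t", "T"): "t",
    ("a", "AH"): "a",
}

def transliterate(pairs, table):
    """Concatenate target graphemes looked up by (grapheme, phoneme) pair."""
    return "".join(table[p] for p in pairs)

print(transliterate(aligned, mapping))  # -> "deita"
```

A grapheme-only model would have to pick one output for "a" in both positions; conditioning on the corresponding phoneme resolves the ambiguity, which is the correspondence the abstract argues has been underused.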