BLEU: a method for automatic evaluation of machine translation
ACL '02 Proceedings of the 40th Annual Meeting on Association for Computational Linguistics
A multi-modal supporting tool for multi-lingual communication by inducing partner's reply
Proceedings of the 11th international conference on Intelligent user interfaces
A Thai speech translation system for medical dialogs
HLT-NAACL--Demonstrations '04 Demonstration Papers at HLT-NAACL 2004
MST '06 Proceedings of the Workshop on Medical Speech Translation
A review of publications by and about medical interpreters reveals a number of operational similarities and shared attitudes and beliefs with the medical coding and abstracting community as it existed a decade ago, in the mid-1990s. At that time, the first of what have since become several successful commercial products using Natural Language Processing (NLP) for automated coding and abstracting appeared. The initial reaction was that machines could never do what human coders and abstractors do, and anecdotal accounts illustrating the difficulty of the task proliferated. The claims of superior human capability and the accuracy of those anecdotes were, and remain, substantially true, but the machines proved more capable than they were initially given credit for, and the percentage of cases that can be handled automatically approximates the 80/20 rule. In this paper, we present an early-stage prototype medical interpreter system based on lessons learned in developing successful automated coding and abstracting systems, and on the core infrastructure and techniques used in those systems. Specific techniques include leveraging standards-based multilingual medical nomenclatures and clinical ontology systems, machine awareness of difficult situations, explanatory meta-knowledge, and an interactive environment that emphasizes the strengths of both the human and machine participants while mitigating the weaknesses of each.
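The interplay the abstract describes — a standards-based nomenclature doing the routine mapping while the machine flags difficult cases for the human interpreter — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the tiny nomenclature table, the concept identifiers (styled after UMLS-like codes), and the `interpret` function are all assumptions introduced here for demonstration.

```python
# Sketch of concept-mediated interpretation: map a source-language term to a
# language-neutral concept code, then render it in the target language.
# Ambiguous or unknown terms are flagged for the human interpreter — the
# "machine awareness of difficult situations" the paper describes.
# NOTE: the table contents and concept IDs below are illustrative stand-ins
# for a standards-based vocabulary (e.g., UMLS or SNOMED CT), not real lookups.

NOMENCLATURE = {
    # term -> list of (concept_id, gloss); multiple entries signal ambiguity
    "cold": [("C0000001", "common cold"), ("C0000002", "cold sensation")],
    "hypertension": [("C0000003", "high blood pressure")],
}

RENDERINGS = {
    # (concept_id, target language) -> surface form
    ("C0000003", "th"): "ความดันโลหิตสูง",
    ("C0000001", "th"): "โรคหวัด",
}

def interpret(term, target_lang):
    """Return (rendering, needs_review) for a source-language term."""
    senses = NOMENCLATURE.get(term.lower())
    if not senses:
        return None, True          # unknown term: defer to the human
    if len(senses) > 1:
        return None, True          # ambiguous: machine flags it, human decides
    concept_id, _ = senses[0]
    rendering = RENDERINGS.get((concept_id, target_lang))
    return rendering, rendering is None

print(interpret("hypertension", "th"))  # unambiguous: handled automatically
print(interpret("cold", "th"))          # ambiguous: routed to the human
```

The split mirrors the 80/20 observation above: unambiguous, in-vocabulary terms are automated, while the residue is surfaced with an explanation (here, a `needs_review` flag) rather than silently guessed.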