Interactive machine translation (IMT) is an increasingly popular paradigm for semi-automated machine translation, in which a human expert is integrated into the core of an automatic machine translation system. The expert interacts with the IMT system by partially correcting the errors in the system's output; the system then proposes a new solution, and this process is repeated until the output reaches the desired quality. In this scenario, interaction is typically performed with the keyboard and the mouse. However, speech is also an attractive input modality, since the user can dictate corrections without moving their hands away from the keyboard. In this work, we present a new approach to speech interaction in which the translation and speech inputs are tightly fused. The integration is performed early, in the speech recognition step, so that information from the translation models allows the speech recognizer to recover from errors that would otherwise be impossible to amend. In addition, this technique makes it possible to use currently available speech recognition technology. The proposed system achieves a significant improvement in performance over previous approaches.
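The interaction protocol described above (the user validates the longest correct prefix, types a correction, and the system proposes a new suffix) can be sketched as follows. This is a minimal illustration, not the paper's implementation: `suggest_suffix` is a hypothetical stand-in for the real decoder, here reduced to picking the highest-ranked candidate translation compatible with the validated prefix, and the simulated user corrects one character per round against a known reference.

```python
def suggest_suffix(source, prefix):
    # Hypothetical decoder stub: of a ranked list of candidate translations,
    # return the suffix of the best one compatible with the validated prefix.
    # A real IMT system would instead search its translation models.
    candidates = ["the home is small", "the house is small"]  # ranked by score
    for cand in candidates:
        if cand.startswith(prefix):
            return cand[len(prefix):]
    return ""

def imt_session(source, reference):
    """Simulate an IMT session: the user validates the longest correct
    prefix of each hypothesis and types one correcting character."""
    prefix, interactions = "", 0
    while True:
        hypothesis = prefix + suggest_suffix(source, prefix)
        if hypothesis == reference:
            return hypothesis, interactions
        # Find the first character where hypothesis and reference diverge.
        k = next((i for i, (a, b) in enumerate(zip(hypothesis, reference))
                  if a != b), min(len(hypothesis), len(reference)))
        prefix = reference[:k + 1]  # user keeps prefix and fixes one character
        interactions += 1

final, n = imt_session("la casa es pequeña", "the house is small")
```

Here a single keystroke ("u" after "the ho") is enough for the stub decoder to switch to the correct candidate, illustrating how each correction constrains the system's next proposal.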