Learning Subsequential Transducers for Pattern Recognition Interpretation Tasks
IEEE Transactions on Pattern Analysis and Machine Intelligence
Application of OSTIA to Machine Translation Tasks
ICGI '94 Proceedings of the Second International Colloquium on Grammatical Inference and Applications
Using domain information during the learning of a subsequential transducer
ICGI '96 Proceedings of the 3rd International Colloquium on Grammatical Inference: Learning Syntax from Sentences
The mathematics of statistical machine translation: parameter estimation
Computational Linguistics - Special issue on using large corpora: II
The EuTrans Spoken Language Translation System
Machine Translation
Learning dependency translation models as collections of finite-state head transducers
Computational Linguistics - Special issue on finite-state methods in NLP
Automatic acquisition of hierarchical transduction models for machine translation
COLING '98 Proceedings of the 17th international conference on Computational linguistics - Volume 1
Learning finite-state models for machine translation
Machine Learning
The full paper explores the use of Subsequential Transducers (SSTs), a class of finite-state models, in limited-domain translation tasks for both text and speech input. A distinctive advantage of SSTs is that they can be learned efficiently from sets of input-output examples by means of OSTIA, the Onward Subsequential Transducer Inference Algorithm (Oncina et al. 1993). This work proposes a technique that improves the performance of OSTIA by reducing the asynchrony between input and output sentences, explores the use of error-correcting parsing to increase the robustness of the learned models, and describes an integrated architecture for speech-input translation by means of SSTs.
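To make the underlying model concrete, the following is a minimal sketch of a subsequential transducer: a deterministic finite-state machine whose transitions emit output strings and whose accepting states append a final output string. The states, vocabulary, and the toy Spanish-to-English fragment below are purely illustrative assumptions, not the transducers learned by OSTIA in the paper.

```python
class SST:
    """Minimal subsequential transducer sketch (illustrative only)."""

    def __init__(self, transitions, final_outputs, start=0):
        # transitions: {(state, input_symbol): (next_state, output_string)}
        self.transitions = transitions
        # final_outputs: {state: string appended when the input ends there};
        # a state is accepting iff it appears in this dict
        self.final_outputs = final_outputs
        self.start = start

    def translate(self, symbols):
        state, out = self.start, []
        for sym in symbols:
            if (state, sym) not in self.transitions:
                return None  # input sequence not accepted
            state, emitted = self.transitions[(state, sym)]
            out.append(emitted)
        if state not in self.final_outputs:
            return None  # input ended in a non-accepting state
        out.append(self.final_outputs[state])
        return " ".join(w for w in out if w)

# Hypothetical toy fragment: "un cuadrado" -> "a square"
t = SST(
    transitions={
        (0, "un"): (1, "a"),
        (1, "cuadrado"): (2, "square"),
        (1, "circulo"): (2, "circle"),
    },
    final_outputs={2: ""},
)
print(t.translate(["un", "cuadrado"]))  # -> "a square"
```

Because the machine is deterministic, translation is a single left-to-right pass over the input; OSTIA exploits this determinism to infer such transducers from input-output pairs.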