Probabilistic generative models have been applied successfully in a wide range of applications, from speech recognition and part-of-speech tagging to machine translation and information retrieval. Traditionally, however, applications such as reasoning have been thought to fall outside the scope of the generative framework, for both theoretical and practical reasons. Theoretically, it is difficult to imagine, for example, what a reasonable generative story for first-order logic inference might look like. Practically, even if we could conceive of such a story, it is unclear how one would obtain sufficient amounts of training data. In this paper, we discuss how, by embracing a less restrictive notion of inference, one can build generative models of inference that can be trained on massive amounts of naturally occurring text, together with text-based deduction and abduction decoding algorithms.
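To make the generative framing concrete, the sketch below illustrates the standard noisy-channel decomposition that underlies such models: a hypothesis h is scored against an observed text t as P(t | h) · P(h), and decoding selects the highest-scoring hypothesis. This is a minimal toy illustration of the general noisy-channel idea, not the paper's system; all probability tables, strings, and the word-for-word channel model are invented for the example.

```python
import math

# Toy noisy-channel scorer: score(h | t) ∝ P(t | h) * P(h).
# Every number and string below is an illustrative assumption,
# not data from the paper; only the Bayes-rule decomposition matters.

# Source model: prior over candidate hypotheses P(h).
prior = {
    "dogs are animals": 0.6,
    "dogs are plants": 0.4,
}

# Channel model P(observed word | hypothesis word): a crude
# word-for-word "corruption" model over aligned positions.
channel = {
    ("animals", "mammals"): 0.5,
    ("animals", "plants"): 0.1,
    ("plants", "mammals"): 0.05,
}

def log_score(hypothesis: str, observed: str) -> float:
    """Return log P(observed | hypothesis) + log P(hypothesis)."""
    score = math.log(prior[hypothesis])
    for hw, ow in zip(hypothesis.split(), observed.split()):
        if hw == ow:
            continue  # identity mappings assumed (near-)certain
        score += math.log(channel.get((hw, ow), 1e-6))
    return score

def decode(observed: str, hypotheses) -> str:
    """Noisy-channel decoding: argmax over h of P(observed | h) * P(h)."""
    return max(hypotheses, key=lambda h: log_score(h, observed))
```

Under these assumed tables, `decode("dogs are mammals", prior)` prefers the hypothesis "dogs are animals", since the prior and the channel jointly favor it; real text-based inference would replace both toy tables with models estimated from large corpora.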