Artificial Intelligence - Special Volume on Natural Language Processing
Probabilistic Horn abduction and Bayesian networks. Artificial Intelligence.
Machine Learning
Efficient weight learning for Markov logic networks. PKDD '07: Proceedings of the 11th European Conference on Principles and Practice of Knowledge Discovery in Databases.
Joint unsupervised coreference resolution with Markov logic. EMNLP '08: Proceedings of the Conference on Empirical Methods in Natural Language Processing.
Max-margin weight learning for Markov logic networks. ECML PKDD '09: Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases: Part I.
Unsupervised learning of narrative schemas and their participants. ACL '09: Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2.
Probabilistic semantics for cost based abduction. AAAI '90: Proceedings of the Eighth National Conference on Artificial Intelligence - Volume 1.
A probabilistic model of plan recognition. AAAI '91: Proceedings of the Ninth National Conference on Artificial Intelligence - Volume 1.
Learning first-order Horn clauses from web text. EMNLP '10: Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing.
Unsupervised discovery of domain-specific knowledge from text. HLT '11: Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1.
Implementing weighted abduction in Markov logic. IWCS '11: Proceedings of the Ninth International Conference on Computational Semantics.
Abductive reasoning with a large knowledge base for discourse processing. IWCS '11: Proceedings of the Ninth International Conference on Computational Semantics.
Joint learning for coreference resolution with Markov logic. EMNLP-CoNLL '12: Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning.
JELIA '12: Proceedings of the 13th European Conference on Logics in Artificial Intelligence
Abduction is inference to the best explanation. It has long been studied in a wide range of contexts and is widely used to model artificial intelligence systems, such as diagnostic and plan recognition systems. Recent advances in automatic world-knowledge acquisition and inference techniques make it feasible to apply abduction with large knowledge bases to real-life problems. However, less attention has been paid to automatically learning score functions, which rank candidate explanations in order of plausibility. In this paper, we propose a novel approach for learning the score function of first-order logic-based weighted abduction [1] in a supervised manner. Because manually annotating full abductive explanations (i.e., the set of literals that explains the observations) is time-consuming in many cases, we propose a framework that learns the score function from partially annotated explanations (i.e., a subset of those literals). More specifically, we assume that abduction is applied to a specific task in which a subset of the best explanation is associated with output labels, while the remaining literals are treated as hidden variables. We then formulate the learning problem as discriminative structured learning with hidden variables. Our experiments on a plan recognition dataset show that our framework successfully reduces the loss in each iteration.
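The learning scheme described above can be sketched as a latent structured perceptron: the labeled subset of each explanation is constrained to the gold output labels, the rest of the literals act as hidden variables, and the weights are updated toward the best label-consistent explanation and away from the current argmax. The sketch below is a minimal illustration under these assumptions, not the paper's implementation; the abduction-specific machinery (candidate generation, feature extraction) is stubbed, and all names are hypothetical.

```python
def score(weights, features):
    """Linear score of a candidate explanation: dot product of weights and features."""
    return sum(weights.get(f, 0.0) * v for f, v in features.items())


def best_explanation(weights, candidates):
    """Unconstrained argmax over all candidate explanations."""
    return max(candidates, key=lambda c: score(weights, c["features"]))


def best_consistent(weights, candidates, gold_labels):
    """Argmax restricted to explanations consistent with the partial annotation:
    the labeled subset must match the gold output labels; the remaining
    literals are free (hidden variables)."""
    consistent = [c for c in candidates if c["labels"] == gold_labels]
    return max(consistent, key=lambda c: score(weights, c["features"]))


def train(data, epochs=10, lr=1.0):
    """Latent-perceptron-style training: when the unconstrained argmax
    disagrees with the gold labels, move weights toward the features of the
    best label-consistent explanation and away from the predicted one."""
    weights = {}
    for _ in range(epochs):
        for candidates, gold_labels in data:
            pred = best_explanation(weights, candidates)
            if pred["labels"] != gold_labels:
                target = best_consistent(weights, candidates, gold_labels)
                for f, v in target["features"].items():
                    weights[f] = weights.get(f, 0.0) + lr * v
                for f, v in pred["features"].items():
                    weights[f] = weights.get(f, 0.0) - lr * v
    return weights


# Toy plan-recognition-style instance: one observation with two candidate
# explanations, each carrying hypothetical axiom-indicator features.
data = [
    (
        [
            {"labels": ("travel",), "features": {"axiom_travel": 1.0, "bias": 1.0}},
            {"labels": ("pickup",), "features": {"axiom_pickup": 1.0, "bias": 1.0}},
        ],
        ("pickup",),  # partial annotation: only the output label is given
    )
]

weights = train(data)
```

After training on the toy instance, the explanation whose labeled subset matches the annotation becomes the highest-scoring candidate, which mirrors the loss reduction reported in the abstract at the level of a single update.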