Probabilistic graphical models for sequence data let us deal effectively with the inherent uncertainty of many real-world domains, but they operate at a mostly propositional level. Logic-based approaches, on the other hand, can compactly represent a wide variety of knowledge, especially in first-order logic, but treat uncertainty only in limited ways. Combining probability with first-order logic is therefore highly desirable for information extraction, which requires uncertainty modeling as well as the representation of dependencies and deeper knowledge. In this paper, we model segmentations of an observation sequence and relations among the resulting segments simultaneously in an integrated discriminative probabilistic framework. For inference, we employ Metropolis-Hastings, a Markov chain Monte Carlo (MCMC) algorithm for approximate Bayesian inference, to find the maximum a posteriori assignment of all the variables of this model. The integrated model has several advantages over previous probabilistic graphical models: it can extract implicit relations and discover new relations in relation extraction from encyclopedic documents, and it captures sub-structures within named entities for named entity recognition. We performed extensive experiments on these two well-established information extraction tasks, illustrating the feasibility and promise of our approach.
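To make the inference step concrete, the following is a minimal sketch of Metropolis-Hastings used as a MAP search over a discrete assignment, in the spirit described above. The scoring function `score` is a toy unnormalized log-posterior invented for illustration, not the paper's actual segmentation/relation model; the single-variable flip proposal is likewise an assumption.

```python
import math
import random

def score(state):
    # Toy unnormalized log-posterior: rewards agreement with a fixed
    # target pattern, loosely mimicking compatibility factors between
    # segmentation and relation variables. Hypothetical, for illustration.
    target = (1, 0, 1, 1, 0)
    return sum(1.0 for s, t in zip(state, target) if s == t)

def metropolis_hastings_map(n_vars=5, n_steps=5000, seed=0):
    """Run MH over binary assignments and track the best (MAP) state seen."""
    rng = random.Random(seed)
    state = tuple(rng.randint(0, 1) for _ in range(n_vars))
    best, best_score = state, score(state)
    for _ in range(n_steps):
        # Propose: flip one randomly chosen variable (a symmetric proposal,
        # so the acceptance ratio reduces to the posterior ratio).
        i = rng.randrange(n_vars)
        proposal = state[:i] + (1 - state[i],) + state[i + 1:]
        # Accept with probability min(1, exp(score(proposal) - score(state))).
        if math.log(rng.random() + 1e-300) < score(proposal) - score(state):
            state = proposal
        if score(state) > best_score:
            best, best_score = state, score(state)
    return best, best_score
```

Because the MAP estimate is read off as the highest-scoring state visited by the chain, the sampler doubles as an approximate maximizer without any extra machinery.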