Many real-world applications of AI require both probability and first-order logic to deal with uncertainty and structural complexity. Logical AI has focused mainly on handling complexity, and statistical AI on handling uncertainty. Markov Logic Networks (MLNs) are a powerful representation that combines Markov Networks (MNs) and first-order logic by attaching weights to first-order formulas and treating them as templates for features of MNs. State-of-the-art MLN structure learning algorithms maximize the likelihood of a relational database by performing a greedy search in the space of candidate structures. This can yield suboptimal results because these approaches are unable to escape local optima; moreover, the combinatorially explosive space of potential candidates makes them computationally prohibitive. We propose a novel algorithm for learning MLN structure, based on the Iterated Local Search (ILS) metaheuristic, which explores the space of structures through a biased sampling of the set of local optima. Rather than searching the full space of solutions, the algorithm focuses on the smaller subspace of solutions that are locally optimal for the underlying optimization engine. Experiments in two real-world domains show that the proposed approach improves both accuracy and learning time over existing state-of-the-art algorithms.
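The ILS scheme the abstract describes — climb to a local optimum, perturb it, climb again, and keep the better of the two optima — can be sketched generically. The sketch below is a minimal, hypothetical illustration on a toy bitstring objective (a stand-in for an actual MLN-structure score such as pseudo-likelihood); the function names `score`, `neighbors`, and `perturb` are illustrative assumptions, not the paper's actual implementation.

```python
import random

def iterated_local_search(score, init, perturb, neighbors, max_iters=50, seed=0):
    """Generic ILS: hill-climb to a local optimum, then repeatedly
    perturb the incumbent and re-climb, accepting only improvements.
    This biases sampling toward the set of local optima."""
    rng = random.Random(seed)

    def hill_climb(s):
        # First-improvement hill climbing until no neighbor scores higher.
        improved = True
        while improved:
            improved = False
            for n in neighbors(s):
                if score(n) > score(s):
                    s, improved = n, True
                    break
        return s

    best = hill_climb(init)
    for _ in range(max_iters):
        candidate = hill_climb(perturb(best, rng))  # escape the current optimum
        if score(candidate) > score(best):          # simple "better" acceptance
            best = candidate
    return best

# Toy objective: maximize the number of 1s in a bitstring.
def score(s):
    return sum(s)

def neighbors(s):
    # Single-bit flips define the local-search neighborhood.
    for i in range(len(s)):
        yield s[:i] + (1 - s[i],) + s[i + 1:]

def perturb(s, rng):
    # Random single-bit flip as a (deliberately weak) perturbation.
    i = rng.randrange(len(s))
    return s[:i] + (1 - s[i],) + s[i + 1:]

best = iterated_local_search(score, (0,) * 8, perturb, neighbors)
```

In the MLN setting, a "solution" would instead be a set of weighted clauses, the neighborhood would be clause refinements, and the score a likelihood-based measure over the relational database; the control flow of the metaheuristic is unchanged.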