Statistical relational models are state-of-the-art representation formalisms at the intersection of logical and statistical machine learning. One of the most promising is Markov Logic (ML), which combines Markov networks (MNs) and first-order logic by attaching weights to first-order formulas and using them as templates for features of MNs. MAP inference in ML is the task of finding the most likely state of a set of output variables given the state of the input variables; this problem is NP-hard. In this paper we present an algorithm for this inference task based on the Iterated Local Search (ILS) and Robust Tabu Search (RoTS) metaheuristics. The algorithm performs a biased sampling of the set of local optima, using RoTS as the local search procedure and repeatedly jumping through the search space via a perturbation operator; the search thus focuses not on the full space of solutions but on a smaller subspace defined by the solutions that are locally optimal for the optimization engine. Extensive experiments on real-world domains show that the algorithm improves over the state-of-the-art algorithm in both solution quality and inference time.
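The ILS scheme described above can be sketched in a few dozen lines. The following is a minimal, simplified illustration, not the authors' implementation: MAP inference is cast as weighted MAX-SAT over ground clauses, the inner local search is a basic tabu search with an aspiration criterion (standing in for full RoTS), and the perturbation operator flips a few random variables before the next local-search descent. Clause encoding, parameter values (`iters`, `tenure`, `strength`, `restarts`), and the improving-moves-only acceptance rule are illustrative assumptions.

```python
import random

def weighted_sat(assignment, clauses):
    """Total weight of satisfied clauses. Each clause is (weight, [literals]);
    literal i > 0 means variable i is True, i < 0 means variable -i is False."""
    return sum(w for w, lits in clauses
               if any(assignment[abs(l)] == (l > 0) for l in lits))

def tabu_search(assignment, clauses, n_vars, iters=100, tenure=5):
    """Simplified tabu local search: each step flips the best non-tabu variable;
    a tabu move is still allowed if it beats the best solution found (aspiration)."""
    current = dict(assignment)
    best, best_score = dict(current), weighted_sat(current, clauses)
    tabu = {}  # variable -> iteration until which flipping it is forbidden
    for t in range(iters):
        cur_score = weighted_sat(current, clauses)
        best_var, best_delta = None, float("-inf")
        for v in range(1, n_vars + 1):
            current[v] = not current[v]
            delta = weighted_sat(current, clauses) - cur_score
            current[v] = not current[v]
            if tabu.get(v, -1) >= t and cur_score + delta <= best_score:
                continue  # tabu and does not improve on the incumbent
            if delta > best_delta:
                best_var, best_delta = v, delta
        if best_var is None:
            break  # every move is tabu and non-improving
        current[best_var] = not current[best_var]
        tabu[best_var] = t + tenure
        if cur_score + best_delta > best_score:
            best, best_score = dict(current), cur_score + best_delta
    return best, best_score

def perturb(assignment, n_vars, strength=3, rng=random):
    """Jump in the search space by flipping a few randomly chosen variables."""
    new = dict(assignment)
    for v in rng.sample(range(1, n_vars + 1), min(strength, n_vars)):
        new[v] = not new[v]
    return new

def iterated_local_search(clauses, n_vars, restarts=20, seed=0):
    """ILS: biased sampling of local optima -- perturb the incumbent local
    optimum, re-run the local search, keep the result if it improves."""
    rng = random.Random(seed)
    start = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
    best, best_score = tabu_search(start, clauses, n_vars)
    for _ in range(restarts):
        cand, cand_score = tabu_search(perturb(best, n_vars, rng=rng),
                                       clauses, n_vars)
        if cand_score > best_score:
            best, best_score = cand, cand_score
    return best, best_score
```

Because every candidate returned to the acceptance test has already been driven to a local optimum by the inner search, the outer loop only ever moves between locally optimal assignments, which is the restriction to the "smaller subspace" the abstract refers to. A production version would replace the naive full re-evaluation of `weighted_sat` with incremental score deltas per flip.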