Markov Logic (ML) combines Markov networks (MNs) and first-order logic by attaching weights to first-order formulas and using them as templates for features of MNs. However, MAP and conditional inference in ML are computationally hard tasks. This paper presents two algorithms for these tasks based on the Iterated Robust Tabu Search (IRoTS) metaheuristic. The first algorithm performs MAP inference by carrying out a biased sampling of the set of local optima; extensive experiments show that it improves on the state-of-the-art algorithm in both solution quality and inference time. The second algorithm combines IRoTS with simulated annealing for conditional inference; experiments show that it is faster than the current state-of-the-art algorithm while maintaining the same inference quality.
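To make the IRoTS metaheuristic concrete, the following is a minimal sketch of its generic structure on a boolean assignment problem (e.g. weighted MAX-SAT, the form MAP inference in ML reduces to): a robust tabu search phase with randomized tabu tenure, alternated with a perturbation step and an acceptance criterion. All names, parameter values, and the `cost` interface here are illustrative assumptions, not the paper's actual implementation.

```python
import random

def irots(num_vars, cost, steps=100, restarts=10, tabu_tenure=5, seed=0):
    """Sketch of Iterated Robust Tabu Search. `cost` maps a tuple of
    booleans to a value to minimize (e.g. total weight of unsatisfied
    clauses); returns the best assignment found."""
    rng = random.Random(seed)

    def tabu_search(assign):
        # Robust tabu search: greedy variable flips, with a randomized
        # tabu tenure and an aspiration criterion.
        assign = assign[:]
        tabu = [0] * num_vars          # step until which a flip is tabu
        local_best = assign[:]
        for step in range(steps):
            candidates = []
            for v in range(num_vars):
                assign[v] = not assign[v]
                c = cost(tuple(assign))
                assign[v] = not assign[v]
                # aspiration: allow a tabu flip if it beats local_best
                if tabu[v] <= step or c < cost(tuple(local_best)):
                    candidates.append((c, v))
            if not candidates:
                break
            c, v = min(candidates)     # best admissible flip
            assign[v] = not assign[v]
            tabu[v] = step + tabu_tenure + rng.randint(0, 2)  # randomized tenure
            if c < cost(tuple(local_best)):
                local_best = assign[:]
        return local_best

    best = tabu_search([rng.random() < 0.5 for _ in range(num_vars)])
    for _ in range(restarts):
        # perturbation: flip a random subset of variables, then re-optimize
        pert = best[:]
        for v in rng.sample(range(num_vars), max(1, num_vars // 4)):
            pert[v] = not pert[v]
        cand = tabu_search(pert)
        if cost(tuple(cand)) < cost(tuple(best)):  # accept improvements only
            best = cand
    return best
```

The MAP variant described above samples the local optima visited by this loop in a biased way rather than keeping only the single best one, and the conditional-inference variant replaces the simple improvement-only acceptance with a simulated-annealing criterion.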