Markov Logic (ML) combines Markov networks (MNs) and first-order logic by attaching weights to first-order formulas and using these as templates for features of MNs. State-of-the-art structure learning algorithms for ML maximize the likelihood of a database by performing greedy search in the space of structures. This can lead to suboptimal results because these approaches are unable to escape local optima; moreover, the combinatorially explosive space of candidate structures makes them computationally prohibitive. We propose a novel algorithm for structure learning in ML based on the Iterated Local Search (ILS) metaheuristic, which explores the space of structures through a biased sampling of the set of local optima. We show through experiments on real-world data that the algorithm improves accuracy and learning time over state-of-the-art algorithms. MAP and conditional inference in ML are also hard computational tasks. This paper presents two algorithms for these tasks based on the Iterated Robust Tabu Search (IRoTS) metaheuristic. The first performs MAP inference; extensive experiments show that it improves over the state-of-the-art algorithm in both solution quality and inference time. The second combines IRoTS steps with simulated annealing steps for conditional inference; experiments show that it is faster than the current state-of-the-art algorithm while maintaining the same inference quality.
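The ILS strategy described above can be illustrated with a minimal, generic sketch. This is not the paper's structure learner: the `score`, `local_search`, and `perturb` functions below are hypothetical stand-ins (a toy bit-string objective replaces the likelihood of a database, and bit flips replace clause-level moves), intended only to show how ILS performs a biased sampling of local optima by perturbing the incumbent and restarting local search from there.

```python
import random

def iterated_local_search(initial, local_search, perturb, score, iters=50, seed=0):
    """Generic ILS skeleton: repeatedly perturb the current local optimum,
    re-run local search from the perturbed point, and keep the result only
    if it improves the score (a biased walk over the set of local optima)."""
    rng = random.Random(seed)
    best = local_search(initial)
    for _ in range(iters):
        candidate = local_search(perturb(best, rng))
        if score(candidate) > score(best):
            best = candidate
    return best

# --- Toy problem: maximize the number of 1-bits in a bit string. ---
# In the paper's setting, score would be the (pseudo-)likelihood of a
# candidate MLN structure; here it is a trivial hypothetical objective.
def score(bits):
    return sum(bits)

def local_search(bits):
    # Greedy hill climbing: apply any single-bit flip that improves the score.
    bits = list(bits)
    improved = True
    while improved:
        improved = False
        for i in range(len(bits)):
            if bits[i] == 0:  # flipping a 0 to 1 always improves this toy score
                bits[i] = 1
                improved = True
    return bits

def perturb(bits, rng, k=3):
    # Perturbation step: flip k random bits to escape the current optimum.
    bits = list(bits)
    for i in rng.sample(range(len(bits)), k):
        bits[i] ^= 1
    return bits

best = iterated_local_search([0] * 10, local_search, perturb, score, iters=5)
```

The key design choice, shared with the algorithm in the paper, is that perturbation is strong enough to leave the current basin of attraction but weak enough that local search does not degenerate into random restarts.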