Logic Programs with Annotated Disjunctions (LPADs) provide a simple and elegant framework for representing probabilistic knowledge in logic programming. In this paper we consider the problem of learning ground LPADs from a set of interpretations annotated with their probability. We present the system ALLPAD for solving this problem. ALLPAD extends the earlier system LLPAD to tackle real-world learning problems more effectively, searching for an approximate solution rather than an exact one. A number of experiments on real and artificial data have been performed to evaluate ALLPAD, showing the feasibility of the approach.
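To make the learning target concrete, the sketch below illustrates the semantics that connects a ground LPAD to the annotated interpretations mentioned above, under simplifying assumptions: clauses are ground with empty (always-true) bodies, so each clause independently selects one of its annotated head atoms, and the probability of an interpretation is the total probability of the selections that produce exactly that set of atoms. The two-clause program and all names here are hypothetical, not taken from the paper.

```python
from itertools import product

# Hypothetical ground LPAD with empty bodies: each clause is a list of
# (head atom, probability) alternatives whose probabilities sum to 1.
clauses = [
    [("heads(c1)", 0.5), ("tails(c1)", 0.5)],
    [("heads(c2)", 0.6), ("tails(c2)", 0.4)],
]

def interpretation_prob(clauses, interpretation):
    """Sum the probabilities of all head selections (one atom per clause)
    whose chosen atoms form exactly the given interpretation."""
    total = 0.0
    for selection in product(*clauses):
        atoms = frozenset(atom for atom, _ in selection)
        if atoms == frozenset(interpretation):
            p = 1.0
            for _, prob in selection:
                p *= prob
            total += p
    return total

print(interpretation_prob(clauses, {"heads(c1)", "tails(c2)"}))  # 0.5 * 0.4 = 0.2
```

A learner in the spirit of ALLPAD works in the opposite direction: given interpretations annotated with target probabilities, it searches for clause structures and head annotations that approximately reproduce those probabilities.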