The relational probabilistic conditional logic FO-PCL employs the principle of maximum entropy (ME). We show that parametric uniformity of an FO-PCL knowledge base $\mathcal{R}$ can be exploited to solve the optimization problem required for ME reasoning more efficiently: the original problem, which contains a large number of linear constraints, one for each ground instance of a conditional, can be replaced by a problem containing just one linear constraint per conditional. We prove that both optimization problems have the same ME distribution as their solution. An implementation employing Generalized Iterative Scaling illustrates the benefits of our approach.
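As a minimal sketch of the kind of computation involved, the following Python code applies Generalized Iterative Scaling (GIS) to find the maximum-entropy distribution over a tiny set of possible worlds subject to linear expectation constraints $E_p[f_i] = \varepsilon_i$. The worlds, features, and target expectations are illustrative assumptions, not taken from the paper, and the code does not model FO-PCL conditionals themselves.

```python
import numpy as np

# Hypothetical toy instance: four possible worlds over two atoms, two
# feature functions, and target expectations (the linear constraints).
worlds = [(a, b) for a in (0, 1) for b in (0, 1)]
features = [lambda w: float(w[0]),                        # f1: first atom holds
            lambda w: float(w[0] and w[1])]               # f2: both atoms hold
targets = np.array([0.6, 0.3])                            # constraint values eps_i

F = np.array([[f(w) for f in features] for w in worlds])  # feature matrix
# GIS requires a constant feature sum per world; add a slack feature.
C = F.sum(axis=1).max()
F = np.column_stack([F, C - F.sum(axis=1)])
targets = np.append(targets, C - targets.sum())

lam = np.zeros(F.shape[1])                                # feature weights
for _ in range(5000):
    p = np.exp(F @ lam)
    p /= p.sum()                                          # current distribution
    exp_f = F.T @ p                                       # current expectations
    lam += np.log(targets / exp_f) / C                    # GIS update step

print(dict(zip(worlds, np.round(p, 3))))
```

The point of the paper's result, in these terms, is that under parametric uniformity the constraint rows of `F` need not be enumerated per ground instance: one row per conditional suffices, which shrinks the optimization problem GIS has to solve.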