Logical foundations of artificial intelligence
Probabilistic reasoning in intelligent systems: networks of plausible inference
On the limited memory BFGS method for large scale optimization
Mathematical Programming: Series A and B
Inducing Features of Random Fields
IEEE Transactions on Pattern Analysis and Machine Intelligence
Inference in Hybrid Networks: Theoretical Limits and Practical Algorithms
UAI '01 Proceedings of the 17th Conference on Uncertainty in Artificial Intelligence
Inference and Learning in Hybrid Bayesian Networks
Learning the structure of Markov logic networks
ICML '05 Proceedings of the 22nd international conference on Machine learning
Machine Learning
Predicting Structured Data (Neural Information Processing)
Introduction to Statistical Relational Learning (Adaptive Computation and Machine Learning)
Towards efficient sampling: exploiting random walk strategies
AAAI'04 Proceedings of the 19th national conference on Artificial intelligence
Sound and efficient inference with probabilistic and deterministic dependencies
AAAI'06 Proceedings of the 21st national conference on Artificial intelligence - Volume 1
Relational object maps for mobile robots
IJCAI'05 Proceedings of the 19th international joint conference on Artificial intelligence
Just Add Weights: Markov Logic for the Semantic Web
Uncertainty Reasoning for the Semantic Web I
Growing a tree in the forest: constructing folksonomies by integrating structured metadata
Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining
Learning complex action models with quantifiers and logical implications
Artificial Intelligence
Neuro-symbolic representation of logic programs defining infinite sets
ICANN'10 Proceedings of the 20th international conference on Artificial neural networks: Part I
Semantic mapping with a probabilistic description logic
SBIA'10 Proceedings of the 20th Brazilian conference on Advances in artificial intelligence
A probabilistic approach for learning folksonomies from structured data
Proceedings of the fourth ACM international conference on Web search and data mining
Collective graph identification
Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining
Extending ProbLog with continuous distributions
ILP'10 Proceedings of the 20th international conference on Inductive logic programming
Gaussian logic for predictive classification
ECML PKDD'11 Proceedings of the 2011 European conference on Machine learning and knowledge discovery in databases - Volume Part II
IJCAI'11 Proceedings of the Twenty-Second international joint conference on Artificial Intelligence - Volume Two
Lifted relational Kalman filtering
IJCAI'11 Proceedings of the Twenty-Second international joint conference on Artificial Intelligence - Volume Three
Rethinking cognitive architecture via graphical models
Cognitive Systems Research
Event processing under uncertainty
Proceedings of the 6th ACM International Conference on Distributed Event-Based Systems
Location-based reasoning about complex multi-agent behavior
Journal of Artificial Intelligence Research
Inference in probabilistic logic programs with continuous random variables
Theory and Practice of Logic Programming
Model checking with probabilistic tabled logic programming
Theory and Practice of Logic Programming
Markov logic networks (MLNs) combine first-order logic and Markov networks, allowing us to handle the complexity and uncertainty of real-world problems in a single consistent framework. However, in MLNs all variables and features are discrete, while most real-world applications also contain continuous ones. In this paper we introduce hybrid MLNs, in which continuous properties (e.g., the distance between two objects) and functions over them can appear as features. Hybrid MLNs have all distributions in the exponential family as special cases (e.g., multivariate Gaussians), and allow much more compact modeling of non-i.i.d. data than propositional representations like hybrid Bayesian networks. We also introduce inference algorithms for hybrid MLNs, by extending the MaxWalkSAT and MC-SAT algorithms to continuous domains. Experiments in a mobile robot mapping domain, involving joint classification, clustering and regression, illustrate the power of hybrid MLNs as a modeling language, and the accuracy and efficiency of the inference algorithms.
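The abstract's core idea — a log-linear model whose features may be either truth values of logical formulas or real-valued functions of continuous properties — can be sketched in a few lines. The following is a minimal illustrative sketch, not the paper's implementation: the world representation, feature definitions, and weights below are all hypothetical, chosen only to show how a discrete logical feature and a continuous distance-based feature contribute to the same unnormalized log-probability.

```python
import math

def log_potential(world, weighted_features):
    """Unnormalized log-probability of a world in a log-linear model:
    the weighted sum w_i * f_i(world) over all (weight, feature) pairs."""
    return sum(w * f(world) for w, f in weighted_features)

# Hypothetical world: two objects with 2-D positions plus a discrete relation.
world = {
    "pos": {"a": (0.0, 0.0), "b": (3.0, 4.0)},
    "SameRoom": {("a", "b"): True},
}

def dist(w, x, y):
    (x1, y1), (x2, y2) = w["pos"][x], w["pos"][y]
    return math.hypot(x2 - x1, y2 - y1)

weighted_features = [
    # Discrete feature: 1 if the formula SameRoom(a, b) is true, else 0.
    (1.5, lambda w: 1.0 if w["SameRoom"][("a", "b")] else 0.0),
    # Continuous feature: negative squared distance between a and b;
    # weighting such a quadratic feature yields a Gaussian-style
    # (exponential-family) factor over the continuous positions.
    (0.1, lambda w: -dist(w, "a", "b") ** 2),
]

print(log_potential(world, weighted_features))  # 1.5*1 + 0.1*(-25) = -1.0
```

MAP inference in this setting amounts to maximizing `log_potential` jointly over the discrete truth assignments and the continuous positions, which is the role the abstract assigns to the continuous extensions of MaxWalkSAT and MC-SAT.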