In this paper we investigate methods for learning hybrid Bayesian networks from data. First we utilize a kernel density estimate of the data in order to translate the data into a mixture of truncated basis functions (MoTBF) representation using a convex optimization technique. When utilizing a kernel density representation of the data, the estimation method relies on the specification of a kernel bandwidth. We show that in most cases the method is robust with respect to the choice of bandwidth, but for certain data sets the bandwidth has a strong impact on the result. Based on this observation, we propose an alternative learning method that relies on the cumulative distribution function of the data. Empirical results demonstrate the usefulness of the approaches: even though the methods produce estimators that are slightly poorer than the state of the art (in terms of log-likelihood), they are significantly faster, indicating that the MoTBF framework can be used for inference and learning in reasonably sized domains. Furthermore, we show how a particular sub-class of MoTBF potentials (learnable by the proposed methods) can be exploited to significantly reduce complexity during inference.
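To illustrate the CDF-based idea, the following is a minimal sketch (not the paper's actual algorithm): it fits a small set of exponential basis terms, a common MoTBF choice, to the empirical CDF of a sample by ordinary least squares. Since the fit is linear in the coefficients, the optimization is trivially convex; the function names, the number of terms, and the basis scaling are all illustrative assumptions.

```python
import numpy as np

def fit_motbf_cdf(data, num_terms=4):
    """Fit an MoTBF-style expansion to the empirical CDF (illustrative sketch).

    Basis: exp(k * t) for k = 0..num_terms-1, with t the data rescaled
    to [0, 1]. The coefficients enter linearly, so least squares gives
    the (convex) optimum -- a simplified stand-in for the constrained
    convex program used in the paper.
    """
    a, b = float(data.min()), float(data.max())
    xs = np.sort(data)
    ecdf = np.arange(1, len(xs) + 1) / len(xs)   # empirical CDF at the sorted points
    t = (xs - a) / (b - a)                       # rescale support to [0, 1]
    basis = np.column_stack([np.exp(k * t) for k in range(num_terms)])
    coeffs, *_ = np.linalg.lstsq(basis, ecdf, rcond=None)
    fitted = basis @ coeffs
    rms = float(np.sqrt(np.mean((fitted - ecdf) ** 2)))
    return coeffs, (a, b), rms

rng = np.random.default_rng(0)
sample = rng.normal(size=500)
coeffs, (a, b), rms = fit_motbf_cdf(sample)
```

A density estimate would then be obtained by differentiating the fitted CDF expansion; the full method additionally enforces that the result is a valid (nonnegative, normalized) density, which plain least squares does not guarantee.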