On the robustness of Bayesian networks to learning from non-conjugate sampling

  • Authors:
  • J. Q. Smith; A. Daneshkhah

  • Affiliations:
  • Department of Statistics, University of Warwick, Coventry CV4 7AL, United Kingdom; Department of Statistics, Faculty of Mathematical Sciences and Computer, Shahid Chamran University, Ahvaz 6135714463, Iran

  • Venue:
  • International Journal of Approximate Reasoning
  • Year:
  • 2010

Abstract

Recent results concerning the instability of Bayes factor search over Bayesian networks (BNs) lead us to ask whether learning the parameters of a selected BN might also depend heavily on the often rather arbitrary choice of prior density. Robustness of inferences to misspecification of the prior density would at least ensure that a selected candidate model gives similar predictions of future data points under somewhat different priors, given a large training data set. In this paper we derive new explicit total variation bounds on the calculated posterior density as a function of the closeness of the genuine prior to the approximating one actually used and of certain summary statistics of the calculated posterior density. We show that the approximating posterior density often converges to the genuine one as the number of sample points increases, and our bounds allow us to identify when the posterior approximation might not converge. To prove our general results we needed to develop a new family of distance measures called local DeRobertis distances. These provide coarse non-parametric neighbourhoods and allow us to derive elegant explicit posterior bounds in total variation. The bounds can be routinely calculated for BNs even when the sample has systematically missing observations and no conjugate analysis is possible.
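For orientation, here is a hedged sketch of the two distances the abstract refers to, written in assumed (standard) notation modelled on DeRobertis-Hartigan-type density-ratio separations rather than taken verbatim from the paper. The local DeRobertis separation between a genuine prior density $f$ and an approximating prior $g$, restricted to a set $A$, is usually written

% Assumed notation; the paper's exact definition may differ in detail.
% Local DeRobertis separation of densities f and g over a set A:
\[
  d_A^{LDR}(f, g) \;=\; \sup_{\theta, \phi \in A}
    \log \frac{f(\theta)\, g(\phi)}{f(\phi)\, g(\theta)},
\]
% Total variation distance, in which the posterior bounds are stated:
\[
  d_V(f, g) \;=\; \tfrac{1}{2} \int \lvert f(\theta) - g(\theta) \rvert \, \mathrm{d}\theta .
\]

Read this way, the qualitative content of the bounds is plausibly that the total variation distance between the genuine and calculated posteriors is controlled by $d_A^{LDR}(f, g)$ over a neighbourhood $A$ carrying most of the posterior mass, which suggests why the approximation typically improves as the training sample grows.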