Aggregating learned probabilistic beliefs

  • Authors:
  • Pedrito Maynard-Reid; Urszula Chajewska

  • Affiliations:
  • Computer Science Department, Stanford University, Stanford, CA; Computer Science Department, Stanford University, Stanford, CA

  • Venue:
  • UAI'01: Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence
  • Year:
  • 2001

Abstract

We consider the task of aggregating the beliefs of several experts. We assume that these beliefs are represented as probability distributions. We argue that the evaluation of any aggregation technique depends on the semantic context of this task. We propose a framework in which we assume that nature generates samples from a 'true' distribution and different experts form their beliefs based on the subsets of the data they have a chance to observe. Naturally, the optimal aggregate distribution would be the one learned from the combined sample sets. This formulation leads to a natural way of measuring the accuracy of an aggregation mechanism. We show that the well-known aggregation operator LinOP is ideally suited for this task. We propose a LinOP-based learning algorithm, inspired by techniques developed for Bayesian learning, which aggregates the experts' distributions represented as Bayesian networks. We show experimentally that this algorithm performs well in practice.
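
As a rough illustration of the aggregation operator discussed in the abstract, the sketch below computes a linear opinion pool (LinOP), i.e., a convex combination of the experts' distributions over a common discrete event space. The function name, the choice of sample-size weights, and the toy numbers are illustrative assumptions, not details from the paper (which aggregates distributions represented as Bayesian networks).

```python
import numpy as np

def linop(expert_dists, weights=None):
    """Linear opinion pool (LinOP): a weighted average of the experts'
    probability distributions over a common discrete event space.

    expert_dists: array of shape (n_experts, n_outcomes), each row a distribution.
    weights: nonnegative weights; defaults to uniform. They are normalized to sum to 1.
    """
    expert_dists = np.asarray(expert_dists, dtype=float)
    n_experts = expert_dists.shape[0]
    if weights is None:
        weights = np.full(n_experts, 1.0 / n_experts)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return weights @ expert_dists  # convex combination of the rows

# Illustrative only: two experts whose beliefs were learned from samples of
# different sizes. Weighting by sample size reflects the intuition that the
# aggregate should approximate the distribution learned from the combined data.
p1, n1 = np.array([0.7, 0.2, 0.1]), 100   # expert 1's distribution, sample size
p2, n2 = np.array([0.5, 0.3, 0.2]), 300   # expert 2's distribution, sample size
aggregate = linop([p1, p2], weights=[n1, n2])
print(aggregate)  # [0.55, 0.275, 0.175]
```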