Have I seen you before? Principles of Bayesian predictive classification revisited

  • Authors:
  • Jukka Corander, Yaqiong Cui, Timo Koski, Jukka Sirén

  • Affiliations:
  • Jukka Corander: Department of Mathematics and Statistics, University of Helsinki, Helsinki, Finland 00014, and Department of Mathematics, Åbo Akademi University, Åbo, Finland 20500
  • Yaqiong Cui: Department of Mathematics and Statistics, University of Helsinki, Helsinki, Finland 00014
  • Timo Koski: Department of Mathematics, Royal Institute of Technology, Stockholm, Sweden 100 44
  • Jukka Sirén: Department of Mathematics and Statistics, University of Helsinki, Helsinki, Finland 00014

  • Venue:
  • Statistics and Computing
  • Year:
  • 2013

Abstract

A general inductive Bayesian classification framework is considered using a simultaneous predictive distribution for test items. We introduce a principle of generative supervised and semi-supervised classification based on marginalizing the joint posterior distribution of labels for all test items. The simultaneous and marginalized classifiers arise under different loss functions, while both jointly acknowledge all uncertainty about the labels of the test items and about the probability measures generating the classes. We illustrate, for data from multiple finite alphabets, that such classifiers achieve higher correct classification rates than a standard marginal predictive classifier, which labels all test items independently, when training data are sparse. In the supervised case for multiple finite alphabets, the simultaneous and marginal classifiers are proven to become equal under generalized exchangeability as the amount of training data increases. Hence, the marginal classifier can be interpreted as an asymptotic approximation to the simultaneous classifier for finite sets of training data. It is also shown that such convergence is not guaranteed in the semi-supervised setting, where the marginal classifier does not provide a consistent approximation.
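To make the distinction concrete, the following is a minimal toy sketch, not the authors' implementation: it contrasts a marginal predictive classifier (each test item labelled independently) with a simultaneous classifier (one joint labelling of all test items) under a symmetric Dirichlet-multinomial model over a finite alphabet. The alphabet, the prior hyperparameter `ALPHA`, and the function names are assumptions made for this illustration.

```python
import itertools
import math
from collections import Counter

ALPHABET = [0, 1]   # finite alphabet of symbols (assumed for this toy example)
ALPHA = 1.0         # symmetric Dirichlet prior hyperparameter (assumed)

def log_pred(items, train_items):
    """Log joint predictive probability of `items` given class training data
    `train_items`, under a Dirichlet-multinomial (Polya urn) model."""
    counts = Counter(train_items)
    n = sum(counts.values())
    logp = 0.0
    for x in items:
        logp += math.log((counts[x] + ALPHA) / (n + ALPHA * len(ALPHABET)))
        counts[x] += 1   # sequential update yields the exchangeable joint
        n += 1
    return logp

def marginal_classify(test_items, train):
    """Standard marginal classifier: label each test item independently by
    its own predictive probability under each class."""
    return [max(train, key=lambda c: log_pred([x], train[c]))
            for x in test_items]

def simultaneous_classify(test_items, train):
    """Simultaneous classifier: choose the joint labelling of all test items
    that maximizes their joint predictive probability (brute-force search)."""
    classes = list(train)
    best_labels, best_lp = None, -math.inf
    for labels in itertools.product(classes, repeat=len(test_items)):
        lp = sum(
            log_pred([x for x, lab in zip(test_items, labels) if lab == c],
                     train[c])
            for c in classes)
        if lp > best_lp:
            best_labels, best_lp = labels, lp
    return list(best_labels)
```

With sparse training data such as `train = {'A': [0], 'B': [1]}`, the two classifiers can be compared on a small test batch; the simultaneous search is exponential in the number of test items, so the brute-force enumeration here is purely illustrative of the principle, not of a practical algorithm.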