Optimal classifiers with minimum expected error within a Bayesian framework - Part II: Properties and performance analysis

  • Authors:
  • Lori A. Dalton
  • Edward R. Dougherty

  • Affiliations:
  • The Ohio State University, Department of Electrical and Computer Engineering, 205 Dreese Laboratory, 2015 Neil Avenue, Columbus, OH 43210, USA
  • Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX, USA, and Computational Biology Division, Translational Genomics Research Institute, Phoenix, AZ, USA

  • Venue:
  • Pattern Recognition
  • Year:
  • 2013


Abstract

In part I of this two-part study, we introduced a new optimal Bayesian classification methodology that utilizes the same modeling framework proposed in Bayesian minimum-mean-square error (MMSE) error estimation. Optimal Bayesian classification thus completes a Bayesian theory of classification, where both the classifier error and our estimate of the error may be simultaneously optimized and studied probabilistically within the assumed model. Having developed optimal Bayesian classifiers in discrete and Gaussian models in part I, here we explore properties of optimal Bayesian classifiers, in particular, invariance to invertible transformations, convergence to the Bayes classifier, and a connection to Bayesian robust classifiers. We also explicitly derive optimal Bayesian classifiers with non-informative priors, and explore relationships to linear and quadratic discriminant analysis (LDA and QDA), which may be viewed as plug-in rules under Gaussian modeling assumptions. Finally, we present several simulations addressing the robustness of optimal Bayesian classifiers to false modeling assumptions. Companion website: http://gsp.tamu.edu/Publications/supplementary/dalton12a.
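The abstract notes that LDA and QDA may be viewed as plug-in rules under Gaussian modeling assumptions: unknown class means and covariances are replaced by sample estimates, and the resulting Gaussian discriminants are applied as if those estimates were the true parameters. A minimal sketch of this plug-in idea (not the paper's optimal Bayesian classifier, and with illustrative function names) might look like:

```python
import numpy as np

def qda_plug_in(X0, X1, x, prior0=0.5):
    """Plug-in QDA: estimate (mean, covariance) per class from the data,
    then evaluate the quadratic Gaussian log-discriminant at point x."""
    def log_gaussian(X, x):
        mu = X.mean(axis=0)               # sample mean (plug-in for true mean)
        Sigma = np.cov(X, rowvar=False)   # sample covariance (plug-in estimate)
        diff = x - mu
        _, logdet = np.linalg.slogdet(Sigma)
        return -0.5 * (logdet + diff @ np.linalg.solve(Sigma, diff))
    d0 = log_gaussian(X0, x) + np.log(prior0)
    d1 = log_gaussian(X1, x) + np.log(1 - prior0)
    return 0 if d0 >= d1 else 1

def lda_plug_in(X0, X1, x, prior0=0.5):
    """Plug-in LDA: same idea with a pooled covariance estimate,
    which makes the discriminant linear in x."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    n0, n1 = len(X0), len(X1)
    # Pooled sample covariance shared by both classes
    Sigma = ((n0 - 1) * np.cov(X0, rowvar=False)
             + (n1 - 1) * np.cov(X1, rowvar=False)) / (n0 + n1 - 2)
    w = np.linalg.solve(Sigma, mu0 - mu1)  # linear discriminant direction
    c = 0.5 * w @ (mu0 + mu1) - np.log(prior0 / (1 - prior0))
    return 0 if w @ x >= c else 1
```

The plug-in rules ignore parameter uncertainty; the paper's optimal Bayesian classifier instead averages the Gaussian class-conditional densities over the posterior of the unknown parameters before classifying.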