On the Consistency of Bayesian Variable Selection for High Dimensional Binary Regression and Classification

  • Authors: Wenxin Jiang
  • Affiliations: Department of Statistics, Northwestern University, Evanston, IL 60208, U.S.A. (wjiang@northwestern.edu)
  • Venue: Neural Computation
  • Year: 2006

Abstract

Modern data mining and bioinformatics have presented an important playground for statistical learning techniques, where the number of input variables may be much larger than the sample size of the training data. In supervised learning, logistic regression or probit regression can be used to model a binary output and to form perceptron classification rules based on Bayesian inference. We use a prior to select a limited number of candidate variables to enter the model, applying a popular method with selection indicators. We show that this approach can induce posterior estimates of the regression functions that consistently estimate the truth, provided the true regression model is sparse in the sense that the aggregated size of the regression coefficients is bounded. The estimated regression functions can therefore also produce consistent classifiers that are asymptotically optimal for predicting future binary outputs. These results provide theoretical justification for some recent empirical successes in microarray data analysis.
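
The selection-indicator idea described in the abstract can be made concrete with a small simulation. The following is a minimal sketch, not the paper's construction: each coefficient beta_j is paired with a binary inclusion indicator gamma_j, the indicators get an independent Bernoulli prior that favors sparsity, the coefficients get a Gaussian slab prior, and a simple Metropolis-Hastings sampler explores the joint posterior. All names and hyperparameters (slab_sd, incl_prob, step) are illustrative assumptions.

```python
# Minimal sketch of indicator-based Bayesian variable selection for logistic
# regression. Hyperparameters and the sampler are illustrative choices only.
import numpy as np

rng = np.random.default_rng(0)

def log_posterior(beta, gamma, X, y, slab_sd=2.0, incl_prob=0.05):
    """Log joint density: Bernoulli-logit likelihood in beta*gamma, Gaussian
    slab prior on all coefficients, independent Bernoulli prior on indicators."""
    eta = X @ (beta * gamma)                 # excluded variables contribute nothing
    loglik = np.sum(y * eta - np.logaddexp(0.0, eta))
    logprior_beta = -0.5 * np.sum((beta / slab_sd) ** 2)
    k = gamma.sum()
    logprior_gamma = k * np.log(incl_prob) + (gamma.size - k) * np.log1p(-incl_prob)
    return loglik + logprior_beta + logprior_gamma

def mh_sampler(X, y, n_iter=4000, step=0.1):
    """Metropolis-Hastings over (beta, gamma) with two symmetric moves."""
    n, p = X.shape
    beta = rng.normal(0.0, 0.1, size=p)
    gamma = np.zeros(p, dtype=int)
    lp = log_posterior(beta, gamma, X, y)
    draws = []
    for _ in range(n_iter):
        j = rng.integers(p)                  # move 1: flip one inclusion indicator
        gamma_prop = gamma.copy()
        gamma_prop[j] ^= 1
        lp_prop = log_posterior(beta, gamma_prop, X, y)
        if np.log(rng.random()) < lp_prop - lp:
            gamma, lp = gamma_prop, lp_prop
        beta_prop = beta + step * rng.normal(size=p)   # move 2: random walk on beta
        lp_prop = log_posterior(beta_prop, gamma, X, y)
        if np.log(rng.random()) < lp_prop - lp:
            beta, lp = beta_prop, lp_prop
        draws.append(beta * gamma)
    return np.array(draws)

# Toy "large p, small n" example with a sparse truth.
n, p = 100, 200
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ beta_true))).astype(float)

draws = mh_sampler(X, y)
post_mean = draws[2000:].mean(axis=0)        # discard burn-in
print("largest posterior-mean coefficients:", np.argsort(-np.abs(post_mean))[:5])

# Plug-in classifier: predict 1 when the posterior-averaged probability > 1/2.
probs = np.mean(1.0 / (1.0 + np.exp(-(X @ draws[2000:].T))), axis=1)
print("training accuracy:", np.mean((probs > 0.5) == (y == 1)))
```

Because the likelihood depends on beta only through beta * gamma, flipping an indicator while holding the coefficients fixed is a symmetric proposal, so plain Metropolis-Hastings suffices here without reversible-jump corrections. Note that the paper's consistency guarantees concern the posterior distribution itself, not this or any other particular sampler.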