Multi-class sparse Bayesian regression for neuroimaging data analysis

  • Authors:
  • Vincent Michel; Evelyn Eger; Christine Keribin; Bertrand Thirion

  • Affiliations:
  • Parietal Team, INRIA Saclay-Île-de-France, Saclay, France and Université Paris-Sud 11, Orsay, France and CEA, DSV, I2BM, Neurospin, Gif/Yvette, France; INSERM, Gif/Yvette, France and CEA, DSV, I2BM, Neurospin, Gif/Yvette, France; Université Paris-Sud 11, Orsay, France and Select Team, INRIA Saclay-Île-de-France, France; Parietal Team, INRIA Saclay-Île-de-France, Saclay, France and CEA, DSV, I2BM, Neurospin, Gif/Yvette, France

  • Venue:
  • MLMI'10: Proceedings of the First International Conference on Machine Learning in Medical Imaging
  • Year:
  • 2010


Abstract

The use of machine learning tools is gaining popularity in neuroimaging, as it provides a sensitive assessment of the information conveyed by brain images. In particular, finding regions of the brain whose functional signal reliably predicts some behavioral information makes it possible to better understand how this information is encoded or processed in the brain. However, such prediction is performed through regression or classification algorithms that suffer from the curse of dimensionality: a huge number of features (i.e., voxels) is available to fit the target, while very few samples (i.e., scans) are available to learn the informative regions. A commonly used solution is to regularize the weights of the parametric prediction function, but model specification requires careful design to balance adaptiveness and sparsity. In this paper, we introduce a novel method, Multi-Class Sparse Bayesian Regression (MCBR), that generalizes classical approaches such as Ridge regression and Automatic Relevance Determination. Our approach is based on a grouping of the features into several classes, where each class is regularized with specific parameters. We apply our algorithm to the prediction of a behavioral variable from brain activation images. The method presented here achieves prediction accuracies similar to those of reference methods, and yields more interpretable feature loadings.
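To make the core idea of class-wise regularization concrete, the following is a minimal, hypothetical sketch in Python, not the authors' MCBR implementation. Features are assigned to classes, and each class receives its own ridge penalty; the `grouped_ridge` helper, the class assignments, and the penalty values are all illustrative assumptions. In MCBR itself the grouping and per-class parameters are estimated within a Bayesian framework, with standard Ridge regression corresponding to a single class shared by all features and ARD to one class per feature.

```python
# Illustrative sketch (NOT the authors' MCBR algorithm): ridge regression in
# which each feature class gets its own regularization strength, fixed a priori.
import numpy as np

def grouped_ridge(X, y, groups, lambdas):
    """Solve w = argmin ||y - Xw||^2 + sum_j lambda_{groups[j]} * w_j^2.

    X       : (n_samples, n_features) design matrix (e.g. voxel activations)
    y       : (n_samples,) behavioral target
    groups  : (n_features,) integer class label for each feature
    lambdas : dict mapping class label -> regularization strength
    """
    penalties = np.array([lambdas[g] for g in groups])
    A = X.T @ X + np.diag(penalties)      # per-feature penalty on the diagonal
    return np.linalg.solve(A, X.T @ y)

# Toy usage: 20 "scans", 50 "voxels", two hypothetical feature classes with
# a weak penalty on the first class and a strong one on the second.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 50))
y = rng.standard_normal(20)
groups = np.array([0] * 25 + [1] * 25)
w = grouped_ridge(X, y, groups, {0: 1.0, 1: 100.0})
print(w.shape)  # (50,)
```

With a single class, this reduces to ordinary Ridge regression; with one class per feature, it has the per-feature penalties of ARD, which is the sense in which the paper describes MCBR as a generalization of both.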