Model sparsity and brain pattern interpretation of classification models in neuroimaging

  • Authors and affiliations:
  • Peter M. Rasmussen: DTU Informatics, Technical University of Denmark, Kgs. Lyngby, Denmark, and The Danish National Research Foundation's Center for Functionally Integrative Neuroscience, Aarhus University Hospital, Denmark
  • Lars K. Hansen: DTU Informatics, Technical University of Denmark, Kgs. Lyngby, Denmark
  • Kristoffer H. Madsen: Danish Research Centre for Magnetic Resonance, Copenhagen University Hospital Hvidovre, Denmark
  • Nathan W. Churchill: Rotman Research Institute of Baycrest Centre, Toronto, Canada, and Department of Medical Biophysics, University of Toronto, Canada
  • Stephen C. Strother: Rotman Research Institute of Baycrest Centre, Toronto, Canada, and Department of Medical Biophysics, University of Toronto, Canada

  • Venue: Pattern Recognition
  • Year: 2012

Abstract

Interest is increasing in applying discriminative multivariate analysis techniques to the analysis of functional neuroimaging data. Model interpretation is of great importance in the neuroimaging context, and is conventionally based on a 'brain map' derived from the classification model. In this study we focus on the relative influence of model regularization parameter choices on model generalization, on the reliability of the spatial patterns extracted from the classification model, and on the ability of the resulting model to identify relevant brain networks defining the underlying neural encoding of the experiment. For a support vector machine, logistic regression, and Fisher's discriminant analysis we demonstrate that the selection of model regularization parameters has a strong but consistent impact on the generalizability, reproducibility, and interpretable sparsity of the models, for both ℓ2 and ℓ1 regularization. Importantly, we illustrate a trade-off between model spatial reproducibility and prediction accuracy. We show that known parts of brain networks can be overlooked when classification accuracy alone is maximized, under either ℓ2 or ℓ1 regularization. This supports the view that the quality of spatial patterns extracted from models cannot be assessed purely by focusing on prediction accuracy. Our results instead suggest that model regularization parameters must be carefully selected, so that the model and its visualization enhance our ability to interpret the brain.
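
The accuracy-versus-reproducibility trade-off described in the abstract can be probed with a simple split-half resampling experiment. Below is a minimal sketch, not the authors' code: it assumes scikit-learn, uses synthetic data, and uses an ℓ1-penalized logistic regression as a stand-in for the paper's classifiers. All names, parameter values, and data sizes are illustrative assumptions. For each regularization setting it reports cross-half prediction accuracy alongside the correlation of the two half-models' weight maps, a simple reproducibility measure in the spirit of split-half resampling frameworks such as NPAIRS.

```python
# Sketch only: synthetic data and arbitrary settings, not from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_voxels = 200, 500           # toy "scans x voxels" design matrix
y = rng.integers(0, 2, n_samples)        # two experimental conditions
X = rng.normal(size=(n_samples, n_voxels))
X[:, :20] += y[:, None] * 1.0            # 20 "active" voxels carry the signal

for C in [0.1, 1.0, 10.0]:               # inverse regularization strength
    # Split-half resampling: fit the same model on two disjoint halves.
    Xa, Xb, ya, yb = train_test_split(
        X, y, test_size=0.5, stratify=y, random_state=1
    )
    wa = LogisticRegression(penalty="l1", solver="liblinear", C=C).fit(Xa, ya)
    wb = LogisticRegression(penalty="l1", solver="liblinear", C=C).fit(Xb, yb)
    # Prediction accuracy: each half-model is tested on the other half.
    acc = 0.5 * (wa.score(Xb, yb) + wb.score(Xa, ya))
    # Pattern reproducibility: correlation between the two weight maps.
    r = np.corrcoef(wa.coef_.ravel(), wb.coef_.ravel())[0, 1]
    print(f"C={C:5.1f}  accuracy={acc:.2f}  map reproducibility r={r:.2f}")
```

In a sweep of this kind, the setting that maximizes accuracy need not yield the most stable weight map, which is the trade-off the abstract argues should inform the choice of regularization parameters.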