Multi-modal multi-task learning for joint prediction of clinical scores in Alzheimer's disease

  • Authors:
  • Daoqiang Zhang; Dinggang Shen

  • Affiliations:
  • Dept. of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, and Dept. of Computer Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China; Dept. of Radiology and BRIC, University of North Carolina at Chapel Hill, NC

  • Venue:
  • MBIA'11: Proceedings of the First International Conference on Multimodal Brain Image Analysis
  • Year:
  • 2011

Abstract

A recent interest in computer-aided diagnosis of neurological diseases is to predict clinical scores from brain images. Most existing methods estimate multiple clinical variables separately, without exploiting the useful correlation among them. Moreover, nearly all methods use only one modality of data (mostly structural MRI) for regression, and thus ignore the complementary information across modalities. To address these issues, this paper presents a general methodology, namely Multi-Modal Multi-Task (M3T) learning, for jointly predicting multiple variables from multi-modal data. The method consists of three sequential steps: (1) multi-task feature selection, which selects from each modality the common subset of features relevant to all of the related clinical variables; (2) kernel-based multimodal data fusion, which combines the selected features from all modalities into a mixed kernel; and (3) support vector regression, which predicts the multiple clinical variables from the learned mixed kernel. Experimental results on the ADNI dataset, with two imaging modalities (MRI and PET) and one biological modality (CSF), validate the efficacy of the proposed M3T learning method.
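Since only the abstract is available here, the following is a minimal sketch of how the three M3T steps could be wired together, not the authors' implementation: scikit-learn's MultiTaskLasso stands in for the group-sparse multi-task feature selection, linear kernels with equal (assumed) weights stand in for the kernel-based fusion, and SVR on the precomputed mixed kernel performs the regression; all data, feature counts, and hyperparameters below are synthetic placeholders.

    import numpy as np
    from sklearn.linear_model import MultiTaskLasso
    from sklearn.svm import SVR

    # Synthetic stand-in for ADNI-style data: three modalities and two
    # clinical scores (e.g., MMSE and ADAS-Cog); all sizes are illustrative.
    rng = np.random.default_rng(0)
    n_train, n_test = 80, 20
    n = n_train + n_test
    X_mri = rng.normal(size=(n, 93))  # regional MRI features (assumed count)
    X_pet = rng.normal(size=(n, 93))  # regional PET features (assumed count)
    X_csf = rng.normal(size=(n, 3))   # CSF biomarker measures
    Y = rng.normal(size=(n, 2))       # two clinical scores, predicted jointly

    # Step 1: multi-task feature selection per modality. The group-sparse
    # penalty of MultiTaskLasso zeroes out whole feature rows, keeping a
    # common subset of features relevant to all clinical variables at once.
    selected = []
    for X in (X_mri, X_pet, X_csf):
        mtl = MultiTaskLasso(alpha=0.1).fit(X[:n_train], Y[:n_train])
        keep = np.any(mtl.coef_ != 0, axis=0)  # feature used by any task
        selected.append(X[:, keep] if keep.any() else X)

    # Step 2: kernel-based multimodal fusion. One linear kernel per modality,
    # combined into a mixed kernel; equal weights are an assumption here
    # (in practice the weights would be tuned, e.g., by cross-validation).
    betas = [1.0 / len(selected)] * len(selected)
    K = sum(b * (X @ X.T) for b, X in zip(betas, selected))
    K_train, K_test = K[:n_train, :n_train], K[n_train:, :n_train]

    # Step 3: support vector regression on the precomputed mixed kernel,
    # with one SVR fitted per clinical variable.
    for t in range(Y.shape[1]):
        svr = SVR(kernel="precomputed").fit(K_train, Y[:n_train, t])
        print(f"score {t}: first predictions {svr.predict(K_test)[:3].round(2)}")

The three steps above mirror the pipeline named in the abstract; the particular estimators and equal kernel weights are only one plausible instantiation under the stated assumptions.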