Feature-based fusion of medical imaging data

  • Authors:
  • Vince D. Calhoun; Tülay Adali

  • Affiliations:
  • Mind Research Network and Department of Electrical and Computer Engineering, University of New Mexico, Albuquerque, NM; Department of Computer Science and Electrical Engineering, University of Maryland Baltimore County, Baltimore, MD

  • Venue:
  • IEEE Transactions on Information Technology in Biomedicine - Special section on computational intelligence in medical systems
  • Year:
  • 2009


Abstract

The acquisition of multiple types of brain imaging data within a single study is now common practice. A number of approaches have been proposed for combining, or fusing, multitask or multimodal information. These can be roughly divided into approaches that study the convergence of multimodal imaging, for example, how function and structure are related in the same brain region, and approaches that study the complementary nature of modalities, for example, combining the temporal information of electroencephalography (EEG) with the spatial information of functional magnetic resonance imaging (fMRI). Within each of these categories, one can pursue data integration (the use of one imaging modality to improve the results of another) or true data fusion (in which multiple modalities are utilized to inform one another). We review both approaches and present a recent computational approach that first preprocesses the data to compute features of interest. The features are then analyzed in a multivariate manner using independent component analysis (ICA). We describe the approach in detail and provide examples of how it has been used for different fusion tasks. We also propose a method for selecting the combination of modalities that provides the greatest value in discriminating groups. Finally, we summarize the work and describe future research topics.
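
The abstract outlines a two-stage pipeline: reduce each modality to a feature vector per subject, then decompose the stacked features with ICA so that shared subject loadings link the modalities. The sketch below is a minimal illustration of that feature-based joint ICA idea; the array sizes, the z-scoring step, and the use of scikit-learn's FastICA are assumptions made here for illustration, not the authors' implementation.

```python
# Minimal sketch of feature-based joint ICA (jICA) fusion, assuming two
# modalities have already been reduced to one feature vector per subject
# (e.g., an fMRI contrast map and a gray-matter segmentation map).
# All sizes and names below are hypothetical.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_subjects, v1, v2 = 40, 500, 300                       # hypothetical sizes
fmri_features = rng.standard_normal((n_subjects, v1))   # modality 1 features
smri_features = rng.standard_normal((n_subjects, v2))   # modality 2 features

# Normalize each modality so neither dominates the joint decomposition,
# then concatenate along the feature dimension: one joint vector per subject.
def zscore(x):
    return (x - x.mean()) / x.std()

X = np.hstack([zscore(fmri_features), zscore(smri_features)])  # (subjects, v1+v2)

# jICA treats features (voxels) as samples and subjects as observed mixtures,
# so each estimated source is a joint spatial map spanning both modalities
# and `mixing_` holds one loading per subject per component.
# (whiten="unit-variance" assumes a recent scikit-learn release.)
ica = FastICA(n_components=5, whiten="unit-variance", random_state=0)
joint_maps = ica.fit_transform(X.T).T      # (components, v1+v2)
subject_loadings = ica.mixing_             # (subjects, components)

# Split each joint component back into its per-modality parts; group
# differences can then be tested on the shared subject loadings.
fmri_part, smri_part = joint_maps[:, :v1], joint_maps[:, v1:]
print(subject_loadings.shape, fmri_part.shape, smri_part.shape)
```

Because the two modalities share a single mixing matrix over subjects, a component whose loadings differ between groups points to linked functional and structural variation, which is the discriminative use of fusion the abstract describes.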