Support Vector Machines and Neural Networks for the Alzheimer's Disease Diagnosis Using PCA

  • Authors:
  • M. López; J. Ramírez; J. M. Górriz; I. Álvarez; D. Salas-Gonzalez; F. Segovia; M. Gómez-Río

  • Affiliations:
  • Dept. of Signal Theory, Networking and Communications, University of Granada, Spain (M. López, J. Ramírez, J. M. Górriz, I. Álvarez, D. Salas-Gonzalez, F. Segovia); Department of Nuclear Medicine, Hospital Universitario Virgen de las Nieves, Granada, Spain (M. Gómez-Río)

  • Venue:
  • IWINAC '09 Proceedings of the 3rd International Work-Conference on The Interplay Between Natural and Artificial Computation: Part II: Bioinspired Applications in Artificial and Natural Computation
  • Year:
  • 2009


Abstract

In the Alzheimer's Disease (AD) diagnosis process, functional brain images such as Single-Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET) have been widely used to guide clinicians. However, the current evaluation of these images involves a succession of manual reorientation and visual interpretation steps, which introduce a degree of subjectivity into the diagnosis. In this work, two pattern recognition methods are applied to SPECT and PET images in order to obtain an objective classifier that determines whether or not a patient suffers from AD. A common feature selection stage is first described, in which Principal Component Analysis (PCA) is applied to the data to drastically reduce the dimension of the feature space; neural network and support vector machine (SVM) classifiers are then studied. The achieved accuracy reaches 98.33% for PET and 93.41% for SPECT, a significant improvement over the results obtained with the classical Voxels-As-Features (VAF) reference approach.
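The pipeline outlined in the abstract, PCA-based dimensionality reduction followed by an SVM (or neural network) classifier, can be illustrated with a minimal sketch. This is not the authors' code: the synthetic feature matrix `X` (one row of voxel intensities per subject), the labels `y`, the number of principal components, the linear kernel, and the use of scikit-learn are all assumptions made for illustration.

```python
# Minimal sketch (not the authors' implementation): PCA feature reduction
# followed by an SVM classifier, mirroring the pipeline described above.
# X and y are synthetic placeholders for voxel features and AD/normal labels.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 10000))   # 80 subjects x 10000 voxel features (synthetic)
y = rng.integers(0, 2, size=80)    # synthetic labels: 0 = normal, 1 = AD

# PCA compresses the voxel space into a few principal components,
# which then feed a linear-kernel SVM.
clf = make_pipeline(PCA(n_components=20), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.3f}")
```

A neural network classifier could be swapped in for the SVM at the last pipeline step; the PCA stage is shared by both, which is the "common feature selection stage" the abstract refers to.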