2009 Special Issue: Single-trial classification of vowel speech imagery using common spatial patterns

  • Authors:
  • Charles S. DaSalla; Hiroyuki Kambara; Makoto Sato; Yasuharu Koike

  • Affiliations:
  • Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, Yokohama, Japan and Core Research for Evolutional Science and Technology (CREST), Japan Science and Techn ...
  • Precision and Intelligence Laboratory, Tokyo Institute of Technology, Yokohama, Japan and Core Research for Evolutional Science and Technology (CREST), Japan Science and Technology Agency, Kawaguchi ...
  • Precision and Intelligence Laboratory, Tokyo Institute of Technology, Yokohama, Japan
  • Precision and Intelligence Laboratory, Tokyo Institute of Technology, Yokohama, Japan and Core Research for Evolutional Science and Technology (CREST), Japan Science and Technology Agency, Kawaguchi ...

  • Venue:
  • Neural Networks
  • Year:
  • 2009

Abstract

With the goal of providing a speech prosthesis for individuals with severe communication impairments, we propose a control scheme for brain-computer interfaces using vowel speech imagery. Electroencephalography was recorded from three healthy subjects performing three tasks: imagined speech of the English vowels /a/ and /u/, and a no-action state serving as a control. Trial averages revealed readiness potentials 200 ms after the stimulus and speech-related potentials peaking after 350 ms. Spatial filters optimized for task discrimination were designed using the common spatial patterns method, and the resulting feature vectors were classified with a nonlinear support vector machine. Overall classification accuracies ranged from 68% to 78%. The results indicate significant potential for vowel speech imagery as a speech prosthesis controller.
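The processing pipeline described in the abstract (common spatial patterns filtering of band-pass filtered EEG epochs, log-variance features, then a nonlinear SVM) can be sketched roughly as follows. This is not the authors' implementation: the channel count, epoch length, number of CSP filter pairs, SVM kernel settings, and the toy random data are all assumptions made for illustration.

```python
# Hypothetical sketch of a binary CSP + RBF-SVM pipeline for imagined-vowel EEG.
# Shapes and parameter values are assumptions, not values reported in the paper.
import numpy as np
from scipy.linalg import eigh
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score


def csp_filters(X, y, n_pairs=3):
    """Compute CSP spatial filters for two classes of EEG epochs.

    X : array, shape (n_trials, n_channels, n_samples)
    y : array of trial labels (two unique values)
    Returns W : array, shape (2 * n_pairs, n_channels)
    """
    classes = np.unique(y)
    covs = []
    for c in classes:
        trials = X[y == c]
        # Average trace-normalized spatial covariance per class
        cov = np.mean([np.cov(t) / np.trace(np.cov(t)) for t in trials], axis=0)
        covs.append(cov)
    # Generalized eigenvalue problem: covs[0] w = lambda (covs[0] + covs[1]) w
    eigvals, eigvecs = eigh(covs[0], covs[0] + covs[1])
    order = np.argsort(eigvals)
    # Keep filters from both ends of the eigenvalue spectrum
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return eigvecs[:, picks].T


def csp_features(X, W):
    """Log of normalized variance of spatially filtered signals (standard CSP features)."""
    Z = np.einsum('fc,tcs->tfs', W, X)          # apply each filter to each trial
    var = Z.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))


# Toy data standing in for band-pass filtered EEG epochs (/a/ vs. /u/ imagery)
rng = np.random.default_rng(0)
X = rng.standard_normal((60, 32, 256))          # 60 trials, 32 channels, 1 s at 256 Hz (assumed)
y = np.repeat([0, 1], 30)

# Note: for an unbiased accuracy estimate the CSP filters should be refit
# inside each cross-validation fold; they are fit once here for brevity.
W = csp_filters(X, y)
feats = csp_features(X, W)
clf = SVC(kernel='rbf', C=1.0, gamma='scale')   # nonlinear SVM, as in the abstract
print(cross_val_score(clf, feats, y, cv=5).mean())
```

The key design choice in CSP is keeping filters from both extremes of the generalized eigenvalue spectrum, so that the resulting components maximize variance for one class while minimizing it for the other; the log-variance of those components then forms a compact feature vector for the classifier.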