A time-frequency convolutional neural network for the offline classification of steady-state visual evoked potential responses

  • Authors:
  • Hubert Cecotti

  • Affiliations:
  • Institute of Automation (IAT), University of Bremen, Otto-Hahn-Allee, NW1, 28359 Bremen, Germany

  • Venue:
  • Pattern Recognition Letters
  • Year:
  • 2011

Abstract

A new convolutional neural network architecture is presented. It includes the fast Fourier transform between two hidden layers to switch the signal analysis from the time domain to the frequency domain inside the network. This technique allows signal classification without any special pre-processing and embeds knowledge of the problem in the network topology. The first step creates different spatial and time filters. The second step transforms the signal into the frequency domain. The last step is the classification. The system is tested offline on the classification of EEG signals that contain steady-state visual evoked potential (SSVEP) responses. The mean recognition rate for the classification of five different types of SSVEP responses is 95.61% on time segments of 1 s. The proposed strategy outperforms other classical neural network architectures.
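
The abstract describes a three-step architecture: spatial and temporal filtering in the time domain, a Fourier transform inside the network, and a classifier operating on the resulting spectrum. The sketch below is a minimal, hedged illustration of that idea in PyTorch; the channel count, number of spatial filters, kernel size, activation functions, and sampling rate are assumptions for illustration only and are not taken from the paper.

```python
import torch
import torch.nn as nn

class TimeFrequencyCNN(nn.Module):
    """Sketch of a CNN that switches from the time domain to the
    frequency domain between hidden layers (layer sizes are assumed)."""

    def __init__(self, n_channels=8, n_samples=256, n_spatial=4, n_classes=5):
        super().__init__()
        # Step 1a: spatial filtering, combining the EEG channels.
        self.spatial = nn.Conv1d(n_channels, n_spatial, kernel_size=1)
        # Step 1b: temporal filtering of each spatially filtered signal.
        self.temporal = nn.Conv1d(n_spatial, n_spatial, kernel_size=9,
                                  padding=4, groups=n_spatial)
        # Step 3: classification on the FFT magnitude spectrum.
        n_freq = n_samples // 2 + 1
        self.classifier = nn.Linear(n_spatial * n_freq, n_classes)

    def forward(self, x):
        # x: (batch, n_channels, n_samples), a time-domain EEG segment.
        h = torch.tanh(self.spatial(x))
        h = torch.tanh(self.temporal(h))
        # Step 2: switch to the frequency domain inside the network.
        spectrum = torch.abs(torch.fft.rfft(h, dim=-1))
        return self.classifier(spectrum.flatten(start_dim=1))

# Usage on a dummy 1 s segment, assuming a 256 Hz sampling rate.
model = TimeFrequencyCNN()
logits = model(torch.randn(2, 8, 256))  # shape (2, 5): scores for 5 SSVEP classes
```

Placing the FFT between hidden layers means the learned filters operate on the raw time course, while the decision is made on spectral features, which matches the frequency-tagged nature of SSVEP responses.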