Decoding of EEG activity from object views: active detection vs. passive visual tasks

  • Authors:
  • Sudhir Sasane, Lars Schwabe

  • Affiliations:
  • Dept. of Computer Science and Electrical Engineering, Adaptive and Regenerative Software Systems, Universität Rostock, Rostock, Germany (both authors)

  • Venue:
  • BI'12 Proceedings of the 2012 international conference on Brain Informatics
  • Year:
  • 2012

Abstract

Brain-computer interfaces (BCIs), which sense brain activity via electroencephalography (EEG), have principled limitations as they measure only the collective activity of many neurons. As a consequence, EEG-based BCIs need to employ carefully designed paradigms to circumvent these limitations. We were motivated by recent findings from the decoding of visual perception from functional magnetic resonance imaging (fMRI) to test whether visual stimuli could also be decoded from EEG activity. We designed an experimental study in which subjects visually inspected computer-generated views of objects in two tasks: an active detection task and a passive viewing task. The first task triggers a robust P300 EEG response, which we use for single-trial decoding as well as a "yardstick" for the decoding of visually evoked responses. We find that decoding in the detection task works reliably (approx. 72%), even though it is performed on single trials. We also find, however, that visually evoked responses in the passive task can be decoded clearly above chance level (approx. 60%). Our results suggest new directions for improving EEG-based BCIs that rely on visual stimulation, such as P300- or SSVEP-based BCIs: carefully designing the visual stimuli and exploiting the contribution of decodable responses in the visual system, rather than relying only on, for example, P300 responses.
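The abstract does not specify the decoding pipeline, but the single-trial setup it describes can be illustrated with a minimal sketch: epoch the EEG into trials, flatten each epoch into a feature vector, and classify with a simple template-matching (nearest-class-mean) rule. Everything below is a hypothetical illustration on synthetic data, not the authors' method; the channel count, epoch length, and the Gaussian "P300-like" deflection are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 200, 8, 64

# Synthetic epochs: baseline noise everywhere; "target" trials (class 1)
# additionally carry a P300-like Gaussian deflection on all channels.
X = rng.normal(0.0, 1.0, (n_trials, n_channels, n_samples))
y = rng.integers(0, 2, n_trials)
bump = np.exp(-0.5 * ((np.arange(n_samples) - 40) / 6.0) ** 2)
X[y == 1] += 0.8 * bump

# Flatten each trial into a feature vector; split into train/test.
X_flat = X.reshape(n_trials, -1)
train, test = slice(0, 150), slice(150, None)

# Nearest-class-mean classifier: compare each test trial to the
# average "target" and "non-target" templates from the training set.
mu0 = X_flat[train][y[train] == 0].mean(axis=0)
mu1 = X_flat[train][y[train] == 1].mean(axis=0)
d0 = np.linalg.norm(X_flat[test] - mu0, axis=1)
d1 = np.linalg.norm(X_flat[test] - mu1, axis=1)
pred = (d1 < d0).astype(int)

acc = (pred == y[test]).mean()
print(f"single-trial accuracy: {acc:.2f}")
```

With a strong evoked component the synthetic accuracy lands well above chance; in real EEG, overlapping noise and trial-to-trial variability pull single-trial accuracies down toward the 60-72% range reported in the abstract.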