A hierarchical architecture for adaptive brain-computer interfacing

  • Authors:
  • Mike Chung; Willy Cheung; Reinhold Scherer; Rajesh P. N. Rao

  • Affiliations:
  • Computer Science & Engineering, University of Washington, Seattle (Chung, Cheung, Rao); Institute for Knowledge Discovery, Graz University of Technology, Graz, Austria (Scherer)

  • Venue:
  • IJCAI'11: Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence - Volume Two
  • Year:
  • 2011

Abstract

Brain-computer interfaces (BCIs) allow a user to directly control devices such as cursors and robots using brain signals. Non-invasive BCIs, e.g., those based on electroencephalographic (EEG) signals recorded from the scalp, suffer from a low signal-to-noise ratio, which limits the bandwidth of control. Invasive BCIs allow fine-grained control but can leave users exhausted, since control is typically exerted on a moment-by-moment basis. In this paper, we address these problems by proposing a new adaptive hierarchical architecture for brain-computer interfacing. The approach allows a user to teach the BCI new skills on the fly; these learned skills are later invoked directly as high-level commands, relieving the user of tedious low-level control. We report results from four subjects who used a hierarchical EEG-based BCI to successfully train and control a humanoid robot in a virtual home environment. Gaussian processes were used for learning high-level commands, allowing the BCI to switch between autonomous and user-guided modes based on the current estimate of uncertainty. We also report the first instance of multi-tasking in a BCI, involving simultaneous control of two different devices by a single user. Our results suggest that hierarchical BCIs can provide a flexible and robust way of controlling complex robotic devices in real-world environments.
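
The uncertainty-gated switching described in the abstract can be illustrated with Gaussian process regression. The following is a minimal sketch, not the authors' implementation: it assumes scikit-learn's GaussianProcessRegressor, a 1-D robot state, and hypothetical names (SIGMA_THRESHOLD, decode_user_command) for the confidence cutoff and the low-level EEG decoding step.

```python
# Sketch: a GP learns a high-level skill from user demonstrations;
# at run time the BCI acts autonomously where the GP is confident
# and defers to moment-by-moment user control where it is not.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Demonstration data: robot state (1-D here, an assumption) -> action.
X_demo = np.array([[0.0], [0.5], [1.0], [1.5], [2.0]])
y_demo = np.array([0.1, 0.4, 0.9, 0.7, 0.2])

gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=0.5) + WhiteKernel(noise_level=1e-3),
    normalize_y=True,
)
gp.fit(X_demo, y_demo)

SIGMA_THRESHOLD = 0.15  # assumed confidence cutoff, tuned per task


def next_action(state, decode_user_command):
    """Return an action and the mode that produced it.

    Autonomous mode when the GP's predictive std is low;
    user-guided mode (low-level BCI control) otherwise.
    """
    mean, std = gp.predict(np.array([[state]]), return_std=True)
    if std[0] < SIGMA_THRESHOLD:
        return mean[0], "autonomous"
    return decode_user_command(), "user-guided"
```

The same gating criterion could also flag high-uncertainty regions as candidates for additional demonstrations, which fits the paper's theme of teaching the BCI new skills on the fly.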