Providing multiple modalities to users is known to improve the overall performance of an interface: the weakness of one modality can be compensated by the strength of another, and users can choose the modality that best suits their abilities. In this paper we explored whether this holds for direct control of a computer game that can be played using a brain-computer interface (BCI) and an automatic speech recogniser (ASR). Participants played the game in unimodal mode (i.e. ASR-only and BCI-only) and in multimodal mode, where they could switch between the two modalities. The majority of the participants switched modality during the multimodal game, but for most of the time they stayed in ASR control. Consequently, multimodality did not provide a significant performance improvement over unimodal control in our particular setup. We also investigated the factors that influence modality switching and found that performance and performance-related factors were the most influential.
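The multimodal mode described above — one active input channel at a time, with the player free to switch between ASR and BCI — can be sketched as a small controller. This is a minimal illustration, not the authors' implementation; the class and method names (`MultimodalController`, `switch`, `command`) and the bookkeeping of switch counts and per-modality usage are all hypothetical.

```python
from enum import Enum
from typing import Optional


class Modality(Enum):
    ASR = "asr"  # automatic speech recogniser
    BCI = "bci"  # brain-computer interface


class MultimodalController:
    """Hypothetical sketch: the player controls the game through one
    active modality at a time and may switch between ASR and BCI."""

    def __init__(self, start: Modality = Modality.ASR):
        self.active = start
        self.switches = 0  # number of actual modality changes
        self.usage = {Modality.ASR: 0, Modality.BCI: 0}  # accepted commands

    def switch(self, target: Modality) -> None:
        # Only count a switch when the active modality actually changes.
        if target is not self.active:
            self.active = target
            self.switches += 1

    def command(self, modality: Modality, action: str) -> Optional[str]:
        # Commands arriving on the inactive channel are ignored, so
        # control within the multimodal game is unimodal at any instant.
        if modality is not self.active:
            return None
        self.usage[modality] += 1
        return action
```

Logging `switches` and `usage` per session would support the kind of analysis reported above, e.g. showing that players switched at least once but issued most commands via ASR.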