Examining the redundancy of multimodal input

  • Authors:
  • Natalie Ruiz; Ronnie Taib; Fang Chen

  • Affiliations:
  • All authors: Australian Technology Park, Sydney, Australia and University of New South Wales, Sydney, Australia

  • Venue:
  • OZCHI '06 Proceedings of the 18th Australia conference on Computer-Human Interaction: Design: Activities, Artefacts and Environments
  • Year:
  • 2006


Abstract

Speech and gesture modalities can allow users to interact with complex applications in novel ways. Users often adapt their multimodal behaviour to cope with increasing levels of domain complexity, and these strategies can change how they plan and execute multimodal constructions. Within the framework of Baddeley's Theory of Working Memory, we present results from an empirical study of users of a multimodal interface under varying levels of cognitive load. In particular, we examine how multimodal behavioural features are sensitive to cognitive load variations. We report a significant decrease in multimodal redundancy (33.6%) and a trend towards increased multimodal complementarity as cognitive load increases.