Exploring true multi-user multimodal interaction over a digital table

  • Authors and affiliations:
  • Edward Tse, University of Calgary, Calgary, Canada
  • Saul Greenberg, University of Calgary, Calgary, Canada
  • Chia Shen, Mitsubishi Electric Research Laboratories, Cambridge, MA
  • Clifton Forlines, Mitsubishi Electric Research Laboratories, Cambridge, MA
  • Ryo Kodama, Mitsubishi Electric Research Laboratories, Cambridge, MA

  • Venue:
  • Proceedings of the 7th ACM conference on Designing interactive systems
  • Year:
  • 2008


Abstract

True multi-user, multimodal interaction over a digital table lets co-located people simultaneously gesture and speak commands to control an application. We explore this design space through a case study in which we implemented an application supporting the KJ creativity method as used by industrial designers. Four key design issues emerged that have a significant impact on how people would use such a multi-user, multimodal system. First, parallel work is affected by the design of multimodal commands. Second, individual mode switches can be confusing to collaborators, especially if speech commands are used. Third, establishing personal and group territories can hinder particular tasks that require artefact neutrality. Finally, timing needs to be considered when designing joint multimodal commands. We also describe our model-view-controller architecture for true multi-user, multimodal interaction.
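The architecture mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; all class names, the per-user event streams, and the one-second fusion window are assumptions chosen for illustration. It shows the core idea of a controller that fuses each user's speech and gesture events into a joint command within a timing window, then updates a shared model that all views observe:

```python
from dataclasses import dataclass

@dataclass
class Event:
    """A single input event from one co-located user (hypothetical schema)."""
    user: str
    kind: str       # "speech" or "gesture"
    payload: str    # recognized command text or touch coordinates
    time: float     # arrival time in seconds

class Model:
    """Shared application state; in a full system it would notify views."""
    def __init__(self):
        self.log = []  # executed (user, command) pairs, visible to all views

    def apply(self, user, command):
        self.log.append((user, command))

class Controller:
    """Fuses per-user speech + gesture events into joint multimodal commands."""
    FUSION_WINDOW = 1.0  # seconds; assumed value, not from the paper

    def __init__(self, model):
        self.model = model
        self.pending = {}  # user -> most recent unmatched Event

    def handle(self, event):
        prior = self.pending.get(event.user)
        if (prior is not None and prior.kind != event.kind
                and event.time - prior.time <= self.FUSION_WINDOW):
            # Complementary modes from the same user, close in time:
            # fuse them into one joint command and update the model.
            speech = prior if prior.kind == "speech" else event
            gesture = event if prior.kind == "speech" else prior
            self.model.apply(event.user, f"{speech.payload} @ {gesture.payload}")
            del self.pending[event.user]
        else:
            # No match yet: hold the event until its partner arrives.
            self.pending[event.user] = event

# Two users working in parallel; each user's events fuse independently.
model = Model()
ctrl = Controller(model)
ctrl.handle(Event("alice", "gesture", "point(120,45)", 0.0))
ctrl.handle(Event("alice", "speech", "move this here", 0.4))
ctrl.handle(Event("bob", "speech", "delete that", 0.5))
ctrl.handle(Event("bob", "gesture", "point(300,80)", 0.9))
print(model.log)
```

Keeping one pending slot per user is what makes parallel work possible: Bob's half-finished command never blocks or fuses with Alice's, which reflects the paper's emphasis on simultaneous multi-user input.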