Multimodal input for meeting browsing and retrieval interfaces: preliminary findings

  • Authors:
  • Agnes Lisowska; Susan Armstrong

  • Affiliations:
  • ISSCO/TIM/ETI, University of Geneva, Geneva, Switzerland; ISSCO/TIM/ETI, University of Geneva, Geneva, Switzerland

  • Venue:
  • MLMI'06 Proceedings of the Third International Conference on Machine Learning for Multimodal Interaction
  • Year:
  • 2006

Abstract

In this paper we discuss the results of user-based experiments to determine whether multimodal input to an interface for browsing and retrieving multimedia meetings gives users added value in their interactions. We focus on interaction with the Archivus interface using mouse, keyboard, voice and touchscreen input. We find that voice input in particular appears to give added value, especially when used in combination with more familiar modalities such as the mouse and keyboard. We conclude with a discussion of some of the contributing factors to these findings and directions for future work.