A general framework for incremental processing of multimodal inputs

  • Authors:
  • Afshin Ameri Ekhtiarabadi; Batu Akan; Baran Çürüklu; Lars Asplund

  • Affiliations:
  • Mälardalen University, Västerås, Sweden (all authors)

  • Venue:
  • ICMI '11: Proceedings of the 13th International Conference on Multimodal Interfaces
  • Year:
  • 2011

Abstract

Humans employ different information channels (modalities) such as speech, pictures, and gestures in their communication. It is believed that some of these modalities are more error-prone for certain types of data, and that multimodality can therefore help reduce ambiguities in the interaction. There have been numerous efforts to implement multimodal interfaces for computers and robots, yet there is no general standard framework for developing them. In this paper we propose a general framework for implementing multimodal interfaces. It performs natural language understanding, multimodal integration, and semantic analysis in an incremental pipeline, and includes a multimodal grammar language used for multimodal presentation and semantic meaning generation.
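
The abstract gives no implementation details, so the following is only a minimal Python sketch of the general idea of incremental multimodal integration: timestamped events from different modalities are fused as they arrive, and a partial semantic frame is updated after every input rather than after the full utterance. All names here (`InputEvent`, `IncrementalFusion`, the `window` parameter) are hypothetical illustrations, not the paper's API.

```python
from dataclasses import dataclass
from typing import Dict, List
import time


@dataclass
class InputEvent:
    """A timestamped unit from one modality (e.g. a word or a pointing gesture)."""
    modality: str    # e.g. "speech" or "gesture"
    payload: str     # recognized token or gesture label
    timestamp: float


class IncrementalFusion:
    """Fuses events from several modalities as they arrive,
    emitting a partial semantic frame after every update."""

    def __init__(self, window: float = 2.0):
        self.window = window                  # seconds within which events are fused
        self.buffer: List[InputEvent] = []

    def feed(self, event: InputEvent) -> Dict[str, str]:
        # Keep only events that fall inside the fusion window.
        self.buffer = [e for e in self.buffer
                       if event.timestamp - e.timestamp <= self.window]
        self.buffer.append(event)
        return self._interpret()

    def _interpret(self) -> Dict[str, str]:
        # Toy "semantic analysis": the latest payload per modality forms the frame.
        frame: Dict[str, str] = {}
        for e in self.buffer:
            frame[e.modality] = e.payload
        return frame


if __name__ == "__main__":
    fusion = IncrementalFusion()
    now = time.time()
    # "Put that there" accompanied by two pointing gestures.
    print(fusion.feed(InputEvent("speech", "put", now)))
    print(fusion.feed(InputEvent("speech", "that", now + 0.4)))
    print(fusion.feed(InputEvent("gesture", "point:block_3", now + 0.5)))
    print(fusion.feed(InputEvent("speech", "there", now + 1.0)))
    print(fusion.feed(InputEvent("gesture", "point:table_corner", now + 1.1)))
```

Each call to `feed` returns immediately with the current best interpretation, which is the essence of incremental processing; the paper's framework additionally uses a multimodal grammar language to drive this integration and to generate semantic meaning, which this sketch does not attempt to model.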