MultiML: a general purpose representation language for multimodal human utterances

  • Authors:
  • Manuel Giuliani; Alois Knoll

  • Affiliations:
  • Technische Universität München, München, Germany (both authors)

  • Venue:
  • ICMI '08: Proceedings of the 10th International Conference on Multimodal Interfaces
  • Year:
  • 2008

Abstract

We present MultiML, a markup language for the annotation of multimodal human utterances. MultiML can represent input from several modalities as well as the relationships between these modalities. Because MultiML separates the general parts of a representation from its context-specific aspects, it is easily adapted to a wide range of contexts. This paper demonstrates how speech and gestures are described in MultiML, showing the principles, including hierarchy and underspecification, that ensure MultiML's quality and extensibility. As a proof of concept, we show how MultiML is used to annotate a sample human-robot interaction in a multimodal joint-action scenario.
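
The abstract names the key ingredients of such an annotation, hierarchy across modalities, cross-modal relations, and underspecification, but not the concrete schema. The sketch below is therefore only a hypothetical illustration, in Python, of what a hierarchical speech-plus-gesture annotation with an underspecified referent could look like; every element and attribute name used here (utterance, speech, gesture, relation, underspecified) is an assumption for illustration, not MultiML's actual markup.

```python
# Hypothetical sketch of a hierarchical, underspecified multimodal
# annotation in the spirit of MultiML. All element and attribute names
# are invented for illustration; they are NOT MultiML's schema.
import xml.etree.ElementTree as ET

def build_annotation():
    """Assemble a speech + gesture annotation for one utterance as XML."""
    # Top-level node groups all modalities of one utterance hierarchically.
    utt = ET.Element("utterance", id="u1")

    # Speech modality: "take this cube", with word-level timing.
    speech = ET.SubElement(utt, "speech", start="0.00", end="1.20")
    words = [("take", "0.00", "0.35"), ("this", "0.35", "0.70"),
             ("cube", "0.70", "1.20")]
    for i, (word, t0, t1) in enumerate(words):
        ET.SubElement(speech, "word", id=f"w{i}", start=t0, end=t1).text = word

    # Gesture modality: a pointing gesture overlapping the deictic "this".
    ET.SubElement(utt, "gesture", id="g1", type="pointing",
                  start="0.30", end="0.80")

    # Cross-modal relation: the word "this" (w1) is resolved by the
    # pointing gesture (g1); the referent itself stays underspecified
    # until a context-specific layer binds it to a concrete object.
    relation = ET.SubElement(utt, "relation", type="deixis",
                             speech_ref="w1", gesture_ref="g1")
    ET.SubElement(relation, "referent", underspecified="true")

    return utt

if __name__ == "__main__":
    tree = ET.ElementTree(build_annotation())
    ET.indent(tree)  # pretty-print; available in Python 3.9+
    ET.dump(tree)    # write the XML to stdout
```

Keeping the referent node empty but flagged mirrors the separation the abstract describes: the general layer records only that a deictic word and a pointing gesture jointly pick out some object, while a later, context-specific layer can replace the underspecified marker with a binding to a concrete object in the joint-action scene.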