Multimodal annotation of conversational data

  • Authors:
  • P. Blache; R. Bertrand; B. Bigi; E. Bruno; E. Cela; R. Espesser; G. Ferré; M. Guardiola; D. Hirst; E.-P. Magro; J.-C. Martin; C. Meunier; M.-A. Morel; E. Murisasco; I. Nesterenko; P. Nocera; B. Pallaud; L. Prévot; B. Priego-Valverde; J. Seinturier; N. Tan; M. Tellier; S. Rauzy

  • Affiliations:
  • LPL-CNRS-Université de Provence; LSIS-CNRS-Université de Toulon; RFC-Université, Paris; LLING-Université de Nantes; LIMSI-CNRS-Université Paris Sud; LIA-Université d'Avignon

  • Venue:
  • LAW IV '10 Proceedings of the Fourth Linguistic Annotation Workshop
  • Year:
  • 2010

Abstract

We propose in this paper a broad-coverage approach to the multimodal annotation of conversational data. Large annotation projects addressing this question must bring together many different kinds of information, drawn from different domains and at different levels of granularity. We present the first results of the OTIM project, which aims at developing conventions and tools for multimodal annotation.