Semantic back-pointers from gesture

  • Authors: Jacob Eisenstein
  • Affiliation: MIT Computer Science and Artificial Intelligence Laboratory, MA
  • Venue: NAACL-DocConsortium '06: Proceedings of the 2006 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, Companion Volume: Doctoral Consortium
  • Year: 2006

Abstract

Although the natural-language processing community has dedicated much of its focus to text, face-to-face spoken language is ubiquitous and offers the potential for breakthrough applications in domains such as meetings, lectures, and presentations. Because spontaneous spoken language is typically more disfluent and less structured than written text, it may be critical to identify features from additional modalities that can aid in language understanding. However, due to the long-standing emphasis on text datasets, there has been relatively little work on non-textual features in unconstrained natural language, prosody being the most studied non-textual modality (e.g., Shriberg et al., 2000).