A multimodal vocabulary for augmentative and alternative communication from sound/image label datasets

  • Authors:
  • Xiaojuan Ma; Christiane Fellbaum; Perry R. Cook

  • Affiliations:
  • Princeton University, Princeton, NJ (all authors)

  • Venue:
  • SLPAT '10: Proceedings of the NAACL HLT 2010 Workshop on Speech and Language Processing for Assistive Technologies
  • Year:
  • 2010


Abstract

Existing Augmentative and Alternative Communication (AAC) vocabularies assign multimodal stimuli to words with multiple meanings. This ambiguity hampers the vocabulary's effectiveness when used by people with language disabilities. For example, the noun "letter" in the phrase "a missing letter" may refer to a character of the alphabet or to a written message, and each sense corresponds to a different picture. A vocabulary in which images and sounds are unambiguously linked to words can better prevent misunderstanding and assist communication for people with language disorders. We explore a new approach to creating such a vocabulary by automatically assigning semantically unambiguous groups of synonyms to sound and image labels. We propose an unsupervised word sense disambiguation (WSD) voting algorithm that combines different semantic relatedness measures. Our voting algorithm achieves over 80% accuracy on a sound label dataset, significantly outperforming WSD with any individual measure. We also explore the use of human judgments of evocation between members of concept pairs in the label disambiguation task. Results show that evocation achieves performance comparable to most of the existing relatedness measures.
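
The abstract does not specify the voting scheme beyond combining relatedness measures, so the following Python sketch is one plausible reading, not the paper's actual method: each WordNet-based relatedness measure nominates the sense of a label word most related to its co-occurring label words, and a majority vote decides. It assumes NLTK's WordNet interface; the function names best_sense and vote_wsd and the choice of measures are illustrative.

    from collections import Counter

    from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

    # Three WordNet-based relatedness measures, restricted below to noun
    # senses so that lch_similarity (which requires matching parts of
    # speech) is always applicable.
    MEASURES = {
        "path": lambda a, b: a.path_similarity(b),
        "wup":  lambda a, b: a.wup_similarity(b),
        "lch":  lambda a, b: a.lch_similarity(b),
    }

    def best_sense(target, context, measure):
        """Return the sense of `target` that one measure rates as most
        related to the context words, scoring each sense by its summed
        best match against every context word's senses."""
        best, best_score = None, float("-inf")
        for sense in wn.synsets(target, pos=wn.NOUN):
            score = sum(
                max((measure(sense, ctx) or 0.0
                     for ctx in wn.synsets(word, pos=wn.NOUN)),
                    default=0.0)
                for word in context
            )
            if score > best_score:
                best, best_score = sense, score
        return best

    def vote_wsd(target, context):
        """Majority vote: each relatedness measure nominates one sense.
        Ties are broken arbitrarily in this sketch."""
        votes = Counter()
        for measure in MEASURES.values():
            sense = best_sense(target, context, measure)
            if sense is not None:
                votes[sense] += 1
        return votes.most_common(1)[0][0] if votes else None

    # E.g., disambiguating the label word "letter" among co-occurring labels:
    print(vote_wsd("letter", ["mail", "envelope"]))   # written-message sense
    print(vote_wsd("letter", ["alphabet", "vowel"]))  # character sense

The paper's actual combination rule, set of measures, and label datasets may differ; the sketch only illustrates how per-measure sense rankings can be aggregated by voting.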