Multi-modal Sign Icon Retrieval for Augmentative Communication

  • Authors:
  • Chung-Hsien Wu; Yu-Hsien Chiu; Kung-Wei Cheng


  • Venue:
  • PCM '01 Proceedings of the Second IEEE Pacific Rim Conference on Multimedia: Advances in Multimedia Information Processing
  • Year:
  • 2001


Abstract

This paper presents a multi-modal sign icon retrieval and prediction technology for generating sentences from ill-formed Taiwanese Sign Language (TSL) input for people with speech or hearing impairments. The design and development of this PC-based TSL augmentative and alternative communication (AAC) system aims to improve the input rate and accuracy of communication aids. This study focuses on 1) developing an effective TSL icon retrieval method, 2) investigating TSL prediction strategies for input rate enhancement, and 3) using a predictive sentence template (PST) tree for sentence generation. The proposed system assists people with language disabilities in sentence formation. To evaluate the performance of our approach, a pilot study for clinical evaluation and education training was undertaken. The evaluation results show that the retrieval rate and the subjective satisfaction level for sentence generation were significantly improved.
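
The abstract does not give implementation details for the predictive sentence template (PST) tree. The sketch below is a minimal, hypothetical illustration of how template-based icon prediction of this kind can work: sentence templates are stored as icon-label sequences in a prefix tree, and candidate next icons for a partially entered sentence are ranked by how many templates share that prefix. The node structure, example templates, and frequency-based ranking are assumptions, not the authors' method.

```python
# Hypothetical sketch of prefix-tree-based sentence prediction, in the spirit
# of the predictive sentence template (PST) tree named in the abstract.
# Templates, node layout, and ranking are illustrative assumptions.

class PSTNode:
    def __init__(self):
        self.children = {}           # icon label -> PSTNode
        self.count = 0               # number of templates passing through this node
        self.is_template_end = False


class PSTTree:
    def __init__(self):
        self.root = PSTNode()

    def add_template(self, icons):
        """Insert one sentence template, given as a sequence of icon labels."""
        node = self.root
        for icon in icons:
            node = node.children.setdefault(icon, PSTNode())
            node.count += 1
        node.is_template_end = True

    def predict_next(self, prefix, top_k=3):
        """Return the most frequent candidate icons that follow the given prefix."""
        node = self.root
        for icon in prefix:
            if icon not in node.children:
                return []            # prefix not covered by any stored template
            node = node.children[icon]
        ranked = sorted(node.children.items(),
                        key=lambda kv: kv[1].count, reverse=True)
        return [icon for icon, _ in ranked[:top_k]]


if __name__ == "__main__":
    tree = PSTTree()
    # Hypothetical templates as sequences of TSL icon glosses.
    tree.add_template(["I", "WANT", "DRINK", "WATER"])
    tree.add_template(["I", "WANT", "EAT", "RICE"])
    tree.add_template(["I", "GO", "SCHOOL"])
    print(tree.predict_next(["I"]))          # ['WANT', 'GO']
    print(tree.predict_next(["I", "WANT"]))  # ['DRINK', 'EAT']
```

In an AAC setting, ranking candidates this way reduces the number of selections a user must make per sentence, which is the input-rate improvement the abstract targets; the actual prediction strategy used in the paper may differ.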