Talking points: the differential impact of real-time computer generated audio/visual feedback on speech-like & non-speech-like vocalizations in low functioning children with ASD

  • Authors:
  • Joshua Hailpern, Karrie Karahalios, Laura DeThorne, and James Halle

  • Affiliations:
  • University of Illinois, Urbana, IL, USA (Hailpern, Karahalios); University of Illinois, Champaign, IL, USA (DeThorne, Halle)

  • Venue:
  • Proceedings of the 11th international ACM SIGACCESS conference on Computers and accessibility
  • Year:
  • 2009


Abstract

Real-time computer feedback systems (CFS) have been shown to impact the communication of neurologically typical individuals, and promising new research suggests the same for the vocalizations of low-functioning children with Autism Spectrum Disorder (ASD). The distinction between speech-like and non-speech-like vocalizations has rarely, if ever, been addressed in the HCI community. This distinction is critical as we strive to facilitate speech development in children with ASD as effectively and efficiently as possible, while simultaneously helping to decrease vocalizations that do not support positive social interaction. This paper extends the work of Hailpern et al. (2009) by examining the influence of a computerized feedback system on both the speech-like and non-speech-like vocalizations of five nonverbal children with ASD. Results were largely positive: some form of computerized feedback differentially facilitated speech-like vocalizations relative to non-speech-like vocalizations in four of the five children. The main contribution of this work is in highlighting the importance of distinguishing between speech-like and non-speech-like vocalizations in the design of feedback systems aimed at facilitating speech in similar populations.