Annotation-based video enrichment for blind people: a pilot study on the use of earcons and speech synthesis

  • Authors:
  • Benoît Encelle;Magali Ollagnier-Beldame;Stéphanie Pouchot;Yannick Prié

  • Affiliations:
  • Université de Lyon, CNRS, Université Lyon 1, LIRIS, UMR 5205, F-69622 Lyon, France (all authors)

  • Venue:
  • Proceedings of the 13th International ACM SIGACCESS Conference on Computers and Accessibility
  • Year:
  • 2011

Abstract

Our approach to online video accessibility for people with sensory disabilities is based on video annotations that are rendered as enrichments during video playback. We present an exploratory work focusing on video accessibility for blind people through audio enrichments composed of speech synthesis and earcons (i.e., non-verbal audio messages). Our main results are that earcons can be used together with speech synthesis to enhance the understanding of videos, that earcons should be accompanied by explanations, and that a potential side effect of earcons concerns the perception of video rhythm.