Audemes at work: Investigating features of non-speech sounds to maximize content recognition

  • Authors:
  • Mexhid Ferati, Mark S. Pfaff, Steve Mannheimer, Davide Bolchini

  • Affiliations:
  • Indiana University School of Informatics, 535 West Michigan St., Indianapolis, IN 46202, United States (all authors)

  • Venue:
  • International Journal of Human-Computer Studies
  • Year:
  • 2012


Abstract

To access interactive systems, blind users can leverage their auditory sense through non-speech sounds. The structure of existing non-speech sounds, however, is geared toward conveying atomic operations at the user interface (e.g., opening a file) rather than evoking broader, theme-based content typical of educational material (e.g., a historical event). To address this problem, we investigate audemes, a new category of non-speech sounds whose semiotic structure and flexibility open new horizons for aural interaction with content-rich applications. Three experiments with blind participants examined which attributes of an audeme most facilitate accurate recognition of its meaning. A sequential concatenation of different sound types (music, sound effect) yielded the highest meaning recognition, whereas an overlapping arrangement of sounds of the same type (music, music) yielded the lowest. We discuss seven guidelines for designing well-formed audemes.
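To make the two arrangements contrasted above concrete, the following is a minimal sketch in Python using the pydub library. The file names and clips are hypothetical placeholders; this illustrates only the audio operations (concatenation vs. overlay), not the authors' experimental apparatus.

```python
# Sketch of the two audeme arrangements the abstract contrasts.
# File names below are hypothetical placeholders.
from pydub import AudioSegment

# Two component sounds of different types (assumed local files).
music = AudioSegment.from_file("melody.wav")       # music clip
effect = AudioSegment.from_file("door_creak.wav")  # sound effect

# Sequential concatenation of different sound types (music, sound effect):
# one sound plays after the other. Per the abstract, this arrangement
# yielded the highest meaning recognition.
sequential = music + effect
sequential.export("audeme_sequential.wav", format="wav")

# Overlapping arrangement of sounds of the same type (music, music):
# the second clip is mixed over the first and they play simultaneously.
# This arrangement yielded the lowest meaning recognition.
music2 = AudioSegment.from_file("melody2.wav")
overlapping = music.overlay(music2)
overlapping.export("audeme_overlapping.wav", format="wav")
```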