Visual attention in auditory display

  • Authors:
  • Thorsten Mahler, Department of Media Informatics, University of Ulm, Ulm, Germany
  • Pierre Bayerl, Department of Neuro Informatics, University of Ulm, Ulm, Germany
  • Heiko Neumann, Department of Neuro Informatics, University of Ulm, Ulm, Germany
  • Michael Weber, Department of Media Informatics, University of Ulm, Ulm, Germany

  • Venue:
  • PIT'06: Proceedings of the 2006 International Tutorial and Research Conference on Perception and Interactive Technologies
  • Year:
  • 2006


Abstract

The interdisciplinary field of image sonification aims at transforming images into auditory signals. It brings together researchers from different areas of computer science, such as sound synthesis, data mining, and human-computer interaction. Its goal is to use sound and all its attributes to display the data sets themselves, thus making the highly developed human auditory system usable for data analysis. Unlike previous approaches, we aim to sonify images of any kind. We propose that models of visual attention and visual grouping can be used to dynamically select the relevant visual information to be sonified. For the auditory synthesis we employ an approach that takes advantage of the sparseness of the selected input data. The presented approach combines data sonification techniques, such as auditory scene generation, with models of human visual perception. It extends previous pixel-based transformation algorithms by incorporating mid-level vision coding and high-level control. The mapping uses elaborate sound parameters that allow non-trivial orientation and positioning in 3D space.
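The pipeline described in the abstract (attention-driven selection of sparse image locations, followed by a mapping onto spatialized sound parameters) can be illustrated with a small sketch. The Python code below is a minimal illustration, not the authors' implementation: a simple center-surround contrast map stands in for the visual attention and grouping models, and the mapping from pixel position to azimuth, elevation, frequency, and gain is an assumed, illustrative choice.

import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_map(image):
    """Center-surround contrast (fine minus coarse Gaussian scale).
    A crude stand-in for the paper's visual attention model."""
    fine = gaussian_filter(image.astype(float), sigma=1.0)
    coarse = gaussian_filter(image.astype(float), sigma=8.0)
    s = np.abs(fine - coarse)
    return s / (s.max() + 1e-9)  # normalize to [0, 1]

def sonify(image, k=16):
    """Select the k most salient pixels and map each to the parameters of
    one spatialized sound event. The position-to-parameter mapping here
    (column -> azimuth, row -> elevation and pitch, saliency -> gain) is
    an assumption for illustration, not the paper's exact mapping."""
    s = saliency_map(image)
    h, w = s.shape
    idx = np.argsort(s, axis=None)[-k:]        # indices of the k most salient pixels
    rows, cols = np.unravel_index(idx, s.shape)
    events = []
    for r, c in zip(rows, cols):
        events.append({
            "azimuth_deg": (c / (w - 1)) * 180.0 - 90.0,          # left/right placement
            "elevation_deg": 45.0 - (r / (h - 1)) * 90.0,         # up/down placement
            "freq_hz": 220.0 * 2 ** (3.0 * (1.0 - r / (h - 1))),  # higher rows -> higher pitch
            "gain": float(s[r, c]),                               # saliency drives loudness
        })
    return events

if __name__ == "__main__":
    img = np.zeros((64, 64))
    img[20:28, 40:48] = 1.0  # a single bright patch dominates the saliency map
    for event in sonify(img, k=4):
        print(event)

Run on this synthetic image, the sketch emits a handful of sound events clustered around the bright patch, which mirrors the intended behavior of attention-driven sonification: only the most relevant image locations generate sound, keeping the auditory scene sparse.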