Aurally and visually enhanced audio search with SoundTorch

  • Authors:
  • Sebastian Heise;Michael Hlatky;Jörn Loviscach

  • Affiliations:
  • Hochschule Bremen (University of Applied Sciences), Bremen, Germany (all authors)

  • Venue:
  • CHI '09 Extended Abstracts on Human Factors in Computing Systems
  • Year:
  • 2009


Abstract

Finding a specific or an artistically appropriate sound in a vast collection comprising thousands of audio files containing recordings of, say, footsteps, gunshots, and thunderclaps easily becomes a chore. To improve on this, we have developed an enhanced auditory and graphical zoomable user interface that leverages the human brain's capability to single out sounds from a spatial mixture: The user shines a virtual flashlight onto an automatically created 2D arrangement of icons that represent sounds. All sounds within the light cone are played back in parallel through a surround sound system. A GPU-accelerated visualization facilitates matching the icons on the screen with the acoustic items in the dense cloud of sound. Tests show that the user can pick the "right" sounds more quickly and/or with more fun than with standard file-by-file auditioning.
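The core selection step described above can be illustrated in a minimal sketch. This is not the authors' implementation; the function name, the circular beam model, and the linear gain/pan mapping are all illustrative assumptions about how icons inside the flashlight cone might be selected and assigned spatial playback parameters:

```python
import math

def sounds_in_cone(icons, center, radius):
    """Return (name, gain, pan) for each sound icon inside the beam.

    icons  : dict mapping sound name -> (x, y) position in the 2D layout
    center : (x, y) beam center, e.g. the cursor position
    radius : beam radius in layout units

    Hypothetical model: gain falls off linearly toward the beam edge,
    and pan reflects the icon's horizontal offset from the beam center,
    so concurrent sounds remain spatially separable.
    """
    cx, cy = center
    hits = []
    for name, (x, y) in icons.items():
        d = math.hypot(x - cx, y - cy)
        if d <= radius:
            gain = 1.0 - d / radius  # louder near the beam center
            # clamp horizontal offset to [-1, 1] for a stereo-style pan
            pan = max(-1.0, min(1.0, (x - cx) / radius))
            hits.append((name, gain, pan))
    return hits

# Example layout: three sound icons, beam at the origin with radius 5.
icons = {"gunshot": (0, 0), "thunder": (3, 4), "footstep": (10, 10)}
selected = sounds_in_cone(icons, center=(0, 0), radius=5)
```

In the example, "gunshot" and "thunder" fall inside the beam and would be triggered in parallel with distinct gains and pans, while "footstep" lies outside and stays silent; a real surround setup would map the 2D offset to a multichannel panning law rather than a single stereo pan value.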