A comparative evaluation of auditory-visual mappings for sound visualisation

  • Authors: Kostas Giannakis
  • Affiliation: P.O. Box 60572, Athens 153 05, Greece. E-mail: kgiannakis@mixedupsenses.com
  • Venue: Organised Sound
  • Year: 2006

Abstract

The significant role of visual communication in modern computer applications is indisputable. In the case of music, various attempts have been made over the years to translate non-visual ideas into visual codes (see Walters 1997 for a collection of graphic scores by the late computer music pioneer Iannis Xenakis, John Cage, Karlheinz Stockhausen and others). In computer music research, most current sound design tools allow the direct manipulation of visual representations of sound, such as time-domain and frequency-domain representations; notable examples include the UPIC system (Xenakis 1992), Phonogramme (Lesbros 1996), Lemur (Fitz and Haken 1997) and MetaSynth (Wenger 1998). Associations between auditory and visual dimensions have also been studied extensively in other scientific domains, such as visual perception and cognitive psychology, and have inspired new forms of artistic expression (see, for example, Wells 1980; Goldberg and Schrack 1986; Whitney 1991).