Pitch and timbre manipulations using cortical representation of sound

  • Authors:
  • D. N. Zotkin; S. A. Shamma; P. Ru; R. Duraiswami; L. S. Davis

  • Affiliations:
  • Perceptual Interfaces & Reality Lab., University of Maryland, College Park, MD, USA (all authors)

  • Venue:
  • Proceedings of the 2003 International Conference on Multimedia and Expo (ICME '03), Volume 3
  • Year:
  • 2003

Abstract

The sound received at the ears is processed by the human auditory system in a way that separates the signal along intensity, pitch, and timbre dimensions. Conventional Fourier-based signal processing, while endowed with fast algorithms, cannot easily represent the signal along these attributes. In this paper we use a cortical representation to represent and manipulate sound. We briefly overview algorithms for obtaining, manipulating, and inverting the cortical representation of sound, and describe algorithms for manipulating signal pitch and timbre separately. The algorithms are first used to create the sound of an instrument intermediate between a guitar and a trumpet. Applications to creating maximally separable sounds in auditory user interfaces are discussed.
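
The paper's cortical model is not reproduced here; purely as a rough illustration of the underlying idea of treating timbre and pitch-related structure as separate, independently manipulable components, the sketch below splits one audio frame into a smooth spectral envelope (timbre-like) and a fine harmonic residual (pitch-like) using cepstral liftering. All function names and parameters are assumptions for this sketch, not the authors' method.

```python
# Minimal sketch (assumption: NOT the authors' cortical representation) of
# separating a frame's spectrum into a smooth spectral envelope ("timbre")
# and fine harmonic structure ("pitch"), so each can be modified on its own.
import numpy as np

def split_envelope_and_fine_structure(frame, n_lifter=30):
    """Split one windowed audio frame into a log-spectral envelope and residual.

    frame    : 1-D array of audio samples (assumed already windowed)
    n_lifter : number of low cepstral coefficients kept for the envelope
    """
    spectrum = np.fft.rfft(frame)
    log_mag = np.log(np.abs(spectrum) + 1e-12)

    # Real cepstrum of the log-magnitude spectrum.
    cepstrum = np.fft.irfft(log_mag)

    # Low-quefrency part -> smooth envelope (timbre); the remainder carries
    # the harmonic fine structure associated with pitch.
    lifter = np.zeros_like(cepstrum)
    lifter[:n_lifter] = 1.0
    lifter[-n_lifter + 1:] = 1.0
    envelope_log = np.fft.rfft(cepstrum * lifter).real
    fine_log = log_mag - envelope_log
    return envelope_log, fine_log, np.angle(spectrum)

def recombine(envelope_log, fine_log, phase):
    """Rebuild a frame from a (possibly modified) envelope and fine structure."""
    mag = np.exp(envelope_log + fine_log)
    return np.fft.irfft(mag * np.exp(1j * phase))

if __name__ == "__main__":
    sr = 16000
    t = np.arange(1024) / sr
    # Toy harmonic tone: 200 Hz fundamental plus a few harmonics.
    frame = sum(np.sin(2 * np.pi * 200 * k * t) / k for k in (1, 2, 3, 4))
    frame *= np.hanning(frame.size)

    env, fine, phase = split_envelope_and_fine_structure(frame)
    # Example manipulation: tilt the envelope upward (brighter timbre) while
    # leaving the harmonic fine structure, and hence the pitch, untouched.
    tilt = np.linspace(0.0, 1.0, env.size)
    out = recombine(env + 0.5 * tilt, fine, phase)
```

In this toy decomposition, modifying only the envelope changes the instrument-like coloration, while resampling or warping the fine structure would shift the perceived pitch; the paper pursues the same separation of attributes, but through its multiscale cortical representation rather than cepstral liftering.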