Sparse spectrotemporal coding of sounds

  • Authors:
  • David J. Klein; Peter König; Konrad P. Körding

  • Affiliations:
  • Institute of Neuroinformatics, University of Zurich and ETH Zurich, Winterthurerstrasse, Zurich, Switzerland; Institute of Neuroinformatics, University of Zurich and ETH Zurich, Winterthurerstrasse, Zurich, Switzerland; Institute of Neurology, University College London, Queen Square, London, UK

  • Venue:
  • EURASIP Journal on Applied Signal Processing
  • Year:
  • 2003

Abstract

Recent studies of biological auditory processing have revealed that sophisticated spectrotemporal analyses are performed by the central auditory systems of various animals. The analysis is typically well matched to the statistics of relevant natural sounds, suggesting that it produces an optimal representation of the animal's acoustic biotope. We address this topic using simulated neurons that learn an optimal representation of a speech corpus. As input, the neurons receive a spectrographic representation of sound produced by a peripheral auditory model. The output representation is deemed optimal when the responses of the neurons are maximally sparse. Following optimization, the simulated neurons are similar to real neurons in many respects. Most notably, a given neuron only analyzes the input over a localized region of time and frequency. In addition, multiple subregions either excite or inhibit the neuron, together producing selectivity to spectral and temporal modulation patterns. This suggests that the brain's solution is particularly well suited for coding natural sounds; therefore, it may prove useful in the design of new computational methods for processing speech.
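
The optimization described in the abstract — tuning linear filters over spectrogram-like input so that their responses are maximally sparse — can be sketched in miniature. The example below is an illustrative assumption, not the paper's method: it uses synthetic 8×8 time-frequency patches (rather than a speech corpus passed through a peripheral auditory model) and a kurtosis-based sparseness contrast optimized with a FastICA-style fixed-point update.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for spectrographic input: each 8x8 "patch" is a
# sparse (Laplacian-coefficient) mix of two localized time-frequency
# features plus noise.  Illustrative data only -- the paper uses the
# output of a peripheral auditory model applied to a speech corpus.
n_freq, n_time = 8, 8
dim = n_freq * n_time
ridge = np.zeros((n_freq, n_time)); ridge[2, :] = 1.0  # spectral ridge
edge = np.zeros((n_freq, n_time)); edge[:, 4] = 1.0    # temporal edge
F = np.stack([ridge.ravel(), edge.ravel()])            # 2 x 64 features

n = 4000
coeffs = rng.laplace(size=(n, 2))                      # sparse activations
X = coeffs @ F + 0.05 * rng.normal(size=(n, dim))

# Whiten: zero mean, identity covariance (standard ICA preprocessing).
X = X - X.mean(axis=0)
evals, evecs = np.linalg.eigh(X.T @ X / n)
Z = X @ (evecs / np.sqrt(evals)) @ evecs.T

def excess_kurtosis(r):
    """Sparseness measure of a response vector (0 for a Gaussian)."""
    r = (r - r.mean()) / r.std()
    return float(np.mean(r**4) - 3.0)

# One simulated neuron: a unit-norm linear filter w, tuned so that its
# responses Z @ w are maximally sparse (FastICA fixed point for the
# kurtosis contrast E[(w . z)^4]).
w = rng.normal(size=dim)
w /= np.linalg.norm(w)
for _ in range(200):
    r = Z @ w
    w_new = Z.T @ r**3 / n - 3.0 * w
    w_new /= np.linalg.norm(w_new)
    if abs(w_new @ w) > 1.0 - 1e-10:   # converged (direction unchanged)
        w = w_new
        break
    w = w_new

print(f"excess kurtosis of responses: {excess_kurtosis(Z @ w):.2f}")
```

After optimization, the filter's responses concentrate on one of the two planted time-frequency features: most patches evoke near-zero output and a few evoke large output, which is the hallmark of a sparse code and a toy analogue of the localized, modulation-selective neurons reported in the paper.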