Auditory time-frequency masking: psychoacoustical data and application to audio representations

  • Authors:
  • Thibaud Necciari, Peter Balazs, Richard Kronland-Martinet, Sølvi Ystad, Bernhard Laback, Sophie Savel, Sabine Meunier

  • Affiliations:
  • Acoustics Research Institute, Austrian Academy of Sciences, Vienna, Austria
  • Laboratoire de Mécanique et d'Acoustique, CNRS-UPR 7051, Aix-Marseille Univ., Centrale Marseille, Marseille Cedex 20, France

  • Venue:
  • CMMR'11 Proceedings of the 8th international conference on Speech, Sound and Music Processing: embracing research in India
  • Year:
  • 2011


Abstract

This paper presents the results of psychoacoustical experiments on auditory time-frequency (TF) masking using stimuli (masker and target) with maximal concentration in the TF plane. The target was shifted relative to the masker along the time axis, the frequency axis, or both. The results show that a simple superposition of spectral and temporal masking functions does not accurately represent the measured TF masking function, which confirms the inaccuracy of the simple TF masking models currently implemented in some perceptual audio codecs. In the context of audio signal processing, these results constitute a crucial basis for predicting auditory masking in TF representations of sounds. An algorithm is proposed that removes the inaudible components in the wavelet transform of a sound without producing any audible difference from the original sound after re-synthesis. Preliminary results are promising, although further development is required.
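The removal step described in the abstract can be illustrated with a minimal sketch. The snippet below zeroes, frame by frame, all spectrogram coefficients more than a fixed level below the frame's peak, then resynthesizes by windowed overlap-add. This is not the authors' algorithm: it uses a short-time Fourier frame in place of the paper's wavelet transform, and the flat 60 dB in-frame floor is an arbitrary placeholder for a measured TF masking function. The function name `prune_inaudible` and all parameter values are hypothetical.

```python
import numpy as np

def prune_inaudible(x, frame=512, hop=256, floor_db=60.0):
    """Zero spectrogram coefficients more than `floor_db` below each
    frame's peak magnitude (a crude stand-in for a masking threshold),
    then resynthesize by Hann-windowed overlap-add.

    Returns the resynthesized signal and the fraction of coefficients kept.
    """
    win = np.hanning(frame)
    n_frames = 1 + (len(x) - frame) // hop
    y = np.zeros(len(x))
    norm = np.zeros(len(x))          # sum of squared windows, for exact WOLA
    kept, total = 0, 0
    for i in range(n_frames):
        s = i * hop
        spec = np.fft.rfft(win * x[s:s + frame])
        mag = np.abs(spec)
        thr = mag.max() * 10.0 ** (-floor_db / 20.0)
        mask = mag >= thr            # keep only coefficients above the floor
        kept += int(mask.sum())
        total += mask.size
        y[s:s + frame] += win * np.fft.irfft(spec * mask, frame)
        norm[s:s + frame] += win ** 2
    nz = norm > 1e-12                # avoid dividing by ~0 at the edges
    y[nz] /= norm[nz]
    return y, kept / total
```

With the mask disabled (all coefficients kept), the overlap-add normalization reconstructs the interior of the signal exactly; with the floor active, the reconstruction error equals the energy of the pruned (presumed inaudible) coefficients.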