An integrated approach of 3D sound rendering techniques for sound externalization

  • Authors:
  • Yong Guk Kim;Chan Jun Chun;Hong Kook Kim;Yong Ju Lee;Dae Young Jang;Kyeongok Kang

  • Affiliations:
  • School of Information and Communications, Gwangju Institute of Science and Technology, Gwangju, Korea (Y. G. Kim, C. J. Chun, H. K. Kim); Electronics and Telecommunications Research Institute, Daejeon, Korea (Y. J. Lee, D. Y. Jang, K. Kang)

  • Venue:
  • PCM'10: Proceedings of the 11th Pacific Rim Conference on Multimedia (Advances in Multimedia Information Processing), Part II
  • Year:
  • 2010

Abstract

In this paper, a sound externalization method is proposed for out-of-the-head localization in headphone listening environments. Several externalization methods have been proposed that use either a head-related transfer function (HRTF) or early reflections. However, such conventional methods have drawbacks, e.g., timbre distortion caused by the measured HRTF or by the added reverberation. The proposed externalization method instead integrates a model-based HRTF with reverberation. In addition, to improve frontal externalization performance, decorrelation and spectral notch filtering techniques are included. To evaluate the performance of the proposed method, subjective listening tests are conducted using different kinds of sound sources, such as white noise, sound effects, speech, and music samples. The test results show that the proposed method localizes sound sources farther from the head than conventional methods.
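Two of the building blocks named in the abstract, decorrelation and spectral notch filtering, can be illustrated with standard DSP components. The sketch below is not the authors' implementation; it assumes a Schroeder all-pass filter for decorrelation (different delays per ear) and an RBJ-cookbook biquad notch, with the delay lengths, gain, and the 8 kHz notch frequency chosen purely for illustration.

```python
import numpy as np

def allpass(x, delay, gain):
    """Schroeder all-pass: y[n] = -g*x[n] + x[n-d] + g*y[n-d].
    Flat magnitude response, so it decorrelates without changing timbre."""
    y = np.zeros_like(x, dtype=float)
    for n in range(len(x)):
        xd = x[n - delay] if n >= delay else 0.0
        yd = y[n - delay] if n >= delay else 0.0
        y[n] = -gain * x[n] + xd + gain * yd
    return y

def notch_biquad(x, fs, f0, q=10.0):
    """Second-order IIR notch at f0 Hz (RBJ audio-EQ-cookbook coefficients)."""
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1.0, -2.0 * np.cos(w0), 1.0])
    a = np.array([1.0 + alpha, -2.0 * np.cos(w0), 1.0 - alpha])
    b, a = b / a[0], a / a[0]
    y = np.zeros_like(x, dtype=float)
    for n in range(len(x)):
        y[n] = (b[0] * x[n]
                + (b[1] * x[n - 1] if n >= 1 else 0.0)
                + (b[2] * x[n - 2] if n >= 2 else 0.0)
                - (a[1] * y[n - 1] if n >= 1 else 0.0)
                - (a[2] * y[n - 2] if n >= 2 else 0.0))
    return y

def externalize(mono, fs):
    """Toy frontal-externalization chain: decorrelate left/right with
    different all-pass delays, then notch both channels (illustrative
    parameters; the paper's model-based HRTF and reverb are omitted)."""
    left = notch_biquad(allpass(mono, 113, 0.5), fs, 8000.0)
    right = notch_biquad(allpass(mono, 173, 0.5), fs, 8000.0)
    return np.stack([left, right])
```

Because the all-pass stage has unit magnitude response, the two ear signals differ only in phase, which lowers interaural correlation; the notch roughly mimics the pinna-related spectral cues associated with frontal sources.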