A spatial feature extraction and regularization model for virtual auditory display

  • Authors:
  • Jiashu Chen; Barry D. Van Veen; Kurt E. Hecox

  • Affiliations:
  • University of Wisconsin-Madison, Madison, WI (all authors)

  • Venue:
  • ICASSP '93: Proceedings of the 1993 IEEE International Conference on Acoustics, Speech, and Signal Processing - Volume I
  • Year:
  • 1993


Abstract

In this paper a spatial feature extraction and regularization model is developed to represent free-field-to-eardrum transfer functions (FETFs). A Karhunen-Loève expansion is used to derive a low-dimensional eigen-transfer function (EF) subspace for the measured FETFs. The coordinates of each FETF in the subspace are determined by projecting all measured FETFs onto the EFs; these coordinates are samples of the FETFs' spatial features. Functional representations of the spatial features, termed Spatial Characteristic Functions (SCFs), are obtained by applying a thin-plate generalized spline smoothing model to regularize the samples. A functional representation of the FETF is then obtained by linearly combining the EFs weighted by the SCFs. Typical errors between the measured and modeled FETFs for a KEMAR manikin are on the order of a hundredth of one percent (about 0.01%).
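
The four steps described in the abstract (KL expansion, projection onto the EFs, thin-plate spline regularization of the coordinates, and linear recombination) can be illustrated with a short numerical sketch. The Python snippet below is an assumption-laden illustration, not the authors' implementation: it uses an SVD as the Karhunen-Loève expansion and SciPy's RBFInterpolator with a thin-plate-spline kernel as a stand-in for the generalized spline smoothing model; the data, variable names, subspace dimension, and smoothing parameter are all hypothetical.

```python
# Minimal sketch of the FETF modeling pipeline, under the assumptions stated above.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Hypothetical measured FETFs: log-magnitude responses at n_dir source
# directions (azimuth, elevation in degrees) and n_freq frequency bins.
n_dir, n_freq, K = 200, 128, 8
directions = np.column_stack([rng.uniform(-180, 180, n_dir),   # azimuth
                              rng.uniform(-40, 90, n_dir)])    # elevation
fetf = rng.standard_normal((n_dir, n_freq))                    # placeholder data

# 1) Karhunen-Loeve expansion: SVD of the mean-removed FETF matrix yields an
#    orthonormal eigen-transfer-function (EF) basis; keep the first K.
mean_fetf = fetf.mean(axis=0)
_, _, Vt = np.linalg.svd(fetf - mean_fetf, full_matrices=False)
efs = Vt[:K]                                    # (K, n_freq) eigen-transfer functions

# 2) Project every measured FETF onto the EFs; the coordinates are samples of
#    the spatial features at the measured directions.
coords = (fetf - mean_fetf) @ efs.T             # (n_dir, K)

# 3) Regularize each coordinate over direction with thin-plate-spline
#    smoothing, giving one Spatial Characteristic Function (SCF) per EF.
scfs = [RBFInterpolator(directions, coords[:, k],
                        kernel='thin_plate_spline', smoothing=1e-3)
        for k in range(K)]

# 4) Reconstruct the FETF at an arbitrary direction as the mean plus a linear
#    combination of EFs weighted by the smoothed SCF values.
query = np.array([[30.0, 15.0]])                # azimuth, elevation (deg)
weights = np.array([scf(query)[0] for scf in scfs])
fetf_model = mean_fetf + weights @ efs
```

Because the SCFs are continuous functions of direction, the reconstruction in step 4 can be evaluated at directions that were never measured, which is the practical payoff of the regularized functional representation for virtual auditory display.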