This paper introduces a continuous functional model for head-related transfer functions (HRTFs) in the horizontal auditory scene. The approach uses a separable representation: a Fourier-Bessel series expansion for the spectral components and a conventional Fourier series expansion for the spatial components. Because these two sets of basis functions are independent of the data, they remain unchanged across subjects and measurement setups; the model can therefore transform an individualized HRTF into a subject-specific set of coefficients. A continuous functional model is also developed in the time domain. We demonstrate the model's efficiency in approximating experimental measurements using HRTF measurements from a KEMAR manikin and synthetic data from the spherical-head model. Statistical results are determined from a 50-subject HRTF data set, and we also corroborate the predictive capability of the proposed model. The model achieves near-optimal performance, as ascertained by comparison with the standard principal component analysis (PCA) and discrete Karhunen-Loève expansion (KLE) methods at the measurement points and for a given number of parameters.
IEEE Transactions on Audio, Speech, and Language Processing (Volume 17, Issue 4)
Date of Publication: May 2009
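To make the separable representation concrete, the following is a minimal sketch (not the paper's implementation) of fitting a surface over frequency and azimuth with a product basis: zeroth-order Bessel functions scaled by their positive zeros for the spectral axis (a Fourier-Bessel series) and complex exponentials for the spatial axis (an ordinary Fourier series). The basis orders, the synthetic target surface, and all names are illustrative assumptions; the paper's exact basis choices and coefficient-fitting procedure may differ.

```python
import numpy as np
from scipy.special import jv, jn_zeros

# Illustrative truncation orders (assumptions, not the paper's values).
N_BESSEL = 8    # number of Fourier-Bessel terms along frequency
N_FOURIER = 5   # Fourier orders -N..N along azimuth

def design_matrix(freq, azim, f_max):
    """Each column is a product of a Fourier-Bessel term in frequency
    and a complex exponential in azimuth (separable basis)."""
    zeros = jn_zeros(0, N_BESSEL)            # positive zeros of J0
    cols = []
    for z in zeros:
        radial = jv(0, z * freq / f_max)     # Fourier-Bessel term
        for m in range(-N_FOURIER, N_FOURIER + 1):
            cols.append(radial * np.exp(1j * m * azim))
    return np.stack(cols, axis=-1)

# Synthetic "measurements" standing in for an HRTF magnitude surface:
# a smooth function of normalized frequency and azimuth.
freqs = np.linspace(0.0, 1.0, 40, endpoint=False)
azims = np.linspace(0.0, 2 * np.pi, 36, endpoint=False)
F, A = np.meshgrid(freqs, azims)
H = np.cos(3 * A) * np.exp(-2 * F) + 0.5 * np.sin(A) * F

# Fit the coefficient vector by least squares at the measurement points.
X = design_matrix(F.ravel(), A.ravel(), f_max=1.0)
coef, *_ = np.linalg.lstsq(X, H.ravel().astype(complex), rcond=None)

# The fitted model is continuous: it can be evaluated at any
# (frequency, azimuth) pair, not only at the measured grid points.
H_hat = (design_matrix(F.ravel(), A.ravel(), 1.0) @ coef).real
rel_err = np.linalg.norm(H_hat - H.ravel()) / np.linalg.norm(H)
print(f"relative approximation error: {rel_err:.3f}")
```

Because the basis functions are fixed in advance (data-independent), only the coefficient vector `coef` changes from subject to subject, which is the property the abstract highlights for individualization.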