
Feature Extraction for the Prediction of Multichannel Spatial Audio Fidelity


Authors: S. George; S. Zielinski; F. Rumsey — Institute of Sound Recording, University of Surrey, Guildford

This paper presents an algorithm for predicting the frontal spatial fidelity and surround spatial fidelity of multichannel audio, two attributes of the subjective parameter known as basic audio quality. A number of features chosen to represent spectral and spatial changes were extracted from a set of recordings and used as independent variables in a regression model for the prediction of spatial fidelities. The model was calibrated by ridge regression using a database of scores obtained from a series of formal listening tests. The statistically significant features, based on interaural cross-correlation and spectral measures, identified in an initial model were used to build a simplified model, and these selected features were validated. The results of the validation experiment were highly correlated with the listening test scores and had a low standard error, comparable to that encountered in typical listening tests. The applicability of the developed algorithm is limited to predicting the basic audio quality of low-pass-filtered and down-mixed recordings (as obtained in listening tests based on a multistimulus test paradigm with a reference and two anchors: a 3.5-kHz low-pass-filtered signal and a mono signal).
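The calibration step described above — fitting extracted features to listening-test scores via ridge regression — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature matrix, score values, and regularization parameter `lam` below are hypothetical placeholders, and the closed-form solution w = (XᵀX + λI)⁻¹Xᵀy is standard ridge regression rather than the specific calibration procedure of the paper.

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X^T X + lam*I)^-1 X^T y.
    X: (n_recordings, n_features) matrix of extracted features
    (e.g. IACC-based and spectral features); y: listening-test scores."""
    n_features = X.shape[1]
    A = X.T @ X + lam * np.eye(n_features)
    return np.linalg.solve(A, X.T @ y)

def ridge_predict(X, w):
    """Predict fidelity scores from features and fitted weights."""
    return X @ w

# Hypothetical data: 5 recordings, 2 extracted features each.
X = np.array([[0.9, 0.1],
              [0.7, 0.3],
              [0.5, 0.5],
              [0.3, 0.7],
              [0.1, 0.9]])
y = np.array([90.0, 75.0, 60.0, 45.0, 30.0])  # made-up fidelity scores

w = ridge_fit(X, y, lam=0.1)
pred = ridge_predict(X, w)
```

The regularization term `lam` shrinks the weights and stabilizes the fit when features are correlated — the usual motivation for preferring ridge over ordinary least squares when, as here, several spectral and spatial features may be collinear.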

Published in:

IEEE Transactions on Audio, Speech, and Language Processing (Volume: 14, Issue: 6)