Robust Minimum Sidelobe Beamforming for Spherical Microphone Arrays

3 Author(s)
Haohai Sun; Shefeng Yan; U. P. Svensson (Department of Electronics and Telecommunications, Norwegian University of Science and Technology, Trondheim, Norway)

A robust minimum sidelobe beamforming approach based on the spherical harmonics framework for spherical microphone arrays is proposed. It minimizes the peaks of the sidelobes while maintaining a distortionless response in the look direction and preserving the mainlobe width. A white noise gain constraint is also derived and employed to improve robustness against array errors. The resulting beamformer provides an optimal tradeoff between sidelobe level, beamwidth, and robustness, making it more practical than the existing spherical-array Dolph-Chebyshev modal beamformer in the presence of array errors. The optimal modal beamforming problem is formulated as a tractable convex second-order cone program (SOCP), which is more efficient than conventional element-space approaches, since the dimension of the array weight vector can be significantly reduced by exploiting the properties of spherical harmonics and Legendre polynomials. For performance comparison, we also formulate existing robust modal beamformers as equivalent optimization problems based on the proposed array model. Numerical results show the high flexibility and efficiency of the proposed beamforming approach.
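The dimension reduction mentioned in the abstract can be illustrated with a short sketch. The code below is not the authors' SOCP formulation (which requires a convex solver); it only demonstrates the axisymmetric modal beampattern B(θ) = Σₙ cₙ (2n+1)/(4π) Pₙ(cos θ), where an order-N design needs just N+1 modal weights cₙ rather than one weight per microphone. The names `legendre`, `modal_beampattern`, and the all-ones weight choice are illustrative assumptions, not from the paper.

```python
import math

def legendre(n, x):
    """Legendre polynomial P_n(x) via the Bonnet three-term recurrence."""
    if n == 0:
        return 1.0
    p_prev, p = 1.0, x
    for k in range(1, n):
        # P_{k+1}(x) = ((2k+1) x P_k(x) - k P_{k-1}(x)) / (k+1)
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

def modal_beampattern(c, theta):
    """Axisymmetric beampattern B(theta) = sum_n c_n (2n+1)/(4*pi) P_n(cos theta).

    c holds one weight per spherical-harmonic order, so an order-N design
    is described by only N + 1 numbers (the dimension reduction the
    abstract refers to); a minimum-sidelobe design would obtain c by
    solving an SOCP over these weights instead.
    """
    x = math.cos(theta)
    return sum(cn * (2 * n + 1) / (4 * math.pi) * legendre(n, x)
               for n, cn in enumerate(c))

# Example: order-4 plane-wave-decomposition weights (all ones), rescaled so
# the look direction theta = 0 is distortionless, i.e. B(0) = 1.
N = 4
c = [1.0] * (N + 1)
gain = modal_beampattern(c, 0.0)
c = [cn / gain for cn in c]
```

With these normalized weights, the response at θ = 0 equals 1 and falls off away from the look direction; an actual minimum-sidelobe beamformer would instead optimize the cₙ subject to this distortionless constraint plus sidelobe-peak and white-noise-gain constraints.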

Published in:

IEEE Transactions on Audio, Speech, and Language Processing (Volume 19, Issue 4)