Adaptive beamforming of sensor arrays immersed in reverberant fields can easily result in cancellation of the useful signal because of the temporal correlation between the direct-path and reflected-path signals. Wideband beamforming can somewhat mitigate this phenomenon, but adaptive solutions based on the minimum variance (MV) criterion remain nonrobust in many practical applications, such as multimedia systems, underwater acoustics, and seismic prospecting. In this paper, a steered wideband adaptive beamformer, optimized by a novel concentrated maximum likelihood (ML) criterion in the frequency domain, is presented and discussed in light of a very general reverberation model. It is shown that ML beamforming can alleviate the typical cancellation problems encountered by adaptive MV beamforming and preserve the intelligibility of a wideband, colored source signal under interference, reverberation, and propagation mismatches. The difficult optimization of the ML cost function, which incorporates a robustness constraint to prevent signal cancellation, is recast as an iterative least squares problem through the concept of descent in the neuron space, originally developed for the training of multilayer neural networks. Finally, experiments with computer-generated and real-world data demonstrate the superior performance of the proposed beamformer with respect to its MV counterpart.
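For readers unfamiliar with the MV baseline discussed above, the following is a minimal, hypothetical sketch (not the paper's ML algorithm) of narrowband MVDR weight computation for a uniform linear array at a single frequency bin, with diagonal loading as the usual robustness remedy against the mismatch-induced cancellation the abstract describes. All function names and parameter values here are illustrative assumptions.

```python
import numpy as np

def steering_vector(n_sensors, spacing_wavelengths, theta_rad):
    """Plane-wave steering vector for a uniform linear array (assumed geometry)."""
    n = np.arange(n_sensors)
    return np.exp(2j * np.pi * spacing_wavelengths * n * np.sin(theta_rad))

def mvdr_weights(R, d, loading=1e-3):
    """MVDR weights w = R^{-1} d / (d^H R^{-1} d).

    Diagonal loading (scaled by the average sensor power) is a common,
    generic robustness fix against steering mismatch; it is NOT the
    paper's concentrated-ML robustness constraint.
    """
    M = R.shape[0]
    Rl = R + loading * (np.trace(R).real / M) * np.eye(M)
    Rinv_d = np.linalg.solve(Rl, d)
    return Rinv_d / (d.conj() @ Rinv_d)

# Toy scenario: 8 sensors at half-wavelength spacing, source at broadside.
M, snaps = 8, 200
d = steering_vector(M, 0.5, 0.0)
rng = np.random.default_rng(0)
s = rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps)
noise = 0.1 * (rng.standard_normal((M, snaps))
               + 1j * rng.standard_normal((M, snaps)))
x = np.outer(d, s) + noise                # array snapshots, one bin
R = x @ x.conj().T / snaps                # sample covariance
w = mvdr_weights(R, d)
print(np.abs(w.conj() @ d))               # distortionless constraint: ≈ 1
```

In a wideband implementation such as the one the paper targets, weights of this form would be computed per frequency bin and the bin outputs recombined; under coherent multipath, however, the unconstrained MV solution can still steer nulls that cancel the correlated desired signal, which is the failure mode the proposed ML criterion addresses.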