Advanced computing devices equipped with various wired and wireless network capabilities, built-in microphones, and audio capture hardware are becoming increasingly popular. At the same time, sophisticated signal processing algorithms for hands-free acoustic human-machine interfaces are being developed. These algorithms are currently restricted to dedicated audio hardware, in part because they require perfectly synchronized audio data. Naive attempts to use the available audio devices for microphone array processing in a distributed wireless setting fail because the algorithms are sensitive to deviations in the sampling rates of the distributed devices. We propose a synchronization scheme that combines the microphones of spatially distributed computing devices into an acoustic ad-hoc network. The proposed scheme significantly compensates for the sampling rate deviations of the different audio capture devices, and we show, as an example, that blind source separation performs well on the synchronized data of distributed acoustic sensors.
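The core problem the abstract describes can be illustrated with a small simulation: two devices capture the same signal, but one device's clock deviates slightly from its nominal rate, so the two streams drift apart over time. The sketch below is not the paper's method; it is a minimal, hypothetical demonstration in which the true rate offset is assumed known and compensated by resampling the deviating stream onto the reference clock (plain linear interpolation stands in for a proper resampler; all rates and the 500 ppm offset are invented for illustration).

```python
import numpy as np

fs_ref = 16000.0   # nominal sampling rate of the reference device
fs_dev = 16008.0   # second device runs ~500 ppm fast (hypothetical value)
duration = 2.0     # seconds of capture
f0 = 440.0         # test tone frequency in Hz

# Each device samples the same continuous tone with its own clock.
t_ref = np.arange(int(duration * fs_ref)) / fs_ref
t_dev = np.arange(int(duration * fs_dev)) / fs_dev
x_ref = np.sin(2 * np.pi * f0 * t_ref)
x_dev = np.sin(2 * np.pi * f0 * t_dev)

# Without compensation, comparing samples index-by-index shows the
# streams drifting apart as the phase error accumulates.
n = len(x_ref)
err_before = np.max(np.abs(x_ref - x_dev[:n]))

# Compensation: interpolate the deviating stream at the reference
# device's sampling instants, i.e. resample it onto the reference clock.
x_sync = np.interp(t_ref, t_dev, x_dev)
err_after = np.max(np.abs(x_ref - x_sync))

print(f"max error before sync: {err_before:.3f}")
print(f"max error after sync:  {err_after:.5f}")
```

In a real distributed setup the rate offset is not known in advance and must be estimated blindly from the signals themselves, which is what makes the synchronization problem nontrivial; this sketch only shows why uncompensated drift breaks sample-aligned array processing.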