Future multimedia systems will use multiple audio and video input and output streams to enhance the user experience. These input streams may be captured by a network of distributed sensors and transmitted to a central location for processing. We address the problem of efficient joint compression of audio sources that are noisy, filtered versions of the same audio signal. In a bandwidth-constrained wireless network, communication between the sources, if any, is restricted to a bare minimum. By exploiting the correlations between the remote sources, we develop algorithms for distributed compression of these audio sources, attempting to achieve the gains predicted in theory. Our scheme shows a significant improvement in reconstructed signal quality for a given bandwidth as compared to an independent compression approach. The algorithms are based on the distributed source coding using syndromes (DISCUS) framework and incorporate the use of perceptual masks.
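The core DISCUS idea can be illustrated with a classic toy example (a hypothetical sketch, not the paper's actual audio codec): a source X and decoder side information Y are correlated 3-bit words differing in at most one bit. Rather than sending all 3 bits of X, the encoder transmits only a 2-bit syndrome identifying X's coset under a (3,1) repetition code; the decoder recovers X by picking the coset member closest to Y.

```python
import numpy as np

# Parity-check matrix of the (3,1) repetition code {000, 111}.
H = np.array([[1, 1, 0],
              [0, 1, 1]])

def all_words():
    """All 3-bit binary words."""
    return [np.array([b >> 2 & 1, b >> 1 & 1, b & 1]) for b in range(8)]

def encode(x):
    """Encoder: send only the 2-bit syndrome identifying x's coset."""
    return H @ x % 2

def decode(syndrome, y):
    """Decoder: pick the coset member closest (in Hamming distance) to
    the side information y, which is never seen by the encoder."""
    coset = [x for x in all_words() if np.array_equal(encode(x), syndrome)]
    return min(coset, key=lambda x: int(np.sum(x != y)))

x = np.array([1, 0, 1])  # source observed at the remote sensor
y = np.array([1, 1, 1])  # side information at the decoder (differs in 1 bit)
s = encode(x)            # only 2 bits transmitted instead of 3
assert np.array_equal(decode(s, y), x)
```

Each coset contains two words at Hamming distance 3 from each other, so knowing X's coset plus a Y within distance 1 pins down X exactly; this is the compression-with-side-information gain the paper pursues for correlated audio sources.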
Date of Conference: 19-22 Oct. 2003