Recent advances in wireless visual sensor technology call for innovative architectures that realize efficient video coding under stringent processing and energy constraints. Grounded in profound findings in network information theory, Wyner-Ziv video coding constitutes a suitable paradigm for video sensor networks. This work presents a novel hash-driven Wyner-Ziv video coding architecture for visual sensors, which coarsely encodes a low-resolution version of each Wyner-Ziv frame to facilitate accurate motion-compensated prediction at the decoder. The proposed method for side-information generation comprises hash-based multi-hypothesis pixel-based prediction. Once critical Wyner-Ziv information is decoded, the derived dense motion field is further refined. Experimental validation shows that the proposed hash-driven codec achieves significant compression gains over state-of-the-art Wyner-Ziv video coding, even under demanding conditions.
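To make the hash-driven side-information idea concrete, the following is a minimal illustrative sketch, not the paper's actual codec: the "hash" is modeled as a block-averaged low-resolution version of the Wyner-Ziv frame, the decoder matches each hash block against displaced candidates in reference frames, and predictions from multiple references are fused pixel-wise into multi-hypothesis side information. All function names, the block-averaging hash, and the mean-based matching cost are assumptions introduced here for illustration.

```python
import numpy as np

def make_hash(frame, factor=4):
    # Illustrative "hash": coarse low-resolution version of the
    # Wyner-Ziv frame obtained by block averaging (assumed, not the
    # paper's actual hash construction).
    h, w = frame.shape
    return frame.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def hash_motion_search(hash_wz, ref, factor=4, search=2):
    # For each hash block, search a small displacement window in the
    # reference frame and keep the candidate block whose mean best
    # matches the hash value (a stand-in for a real matching cost).
    bh, bw = hash_wz.shape
    side_info = np.zeros((bh * factor, bw * factor))
    for by in range(bh):
        for bx in range(bw):
            y0, x0 = by * factor, bx * factor
            best_cost = np.inf
            best_block = ref[y0:y0 + factor, x0:x0 + factor]
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = y0 + dy, x0 + dx
                    if 0 <= y and y + factor <= ref.shape[0] \
                            and 0 <= x and x + factor <= ref.shape[1]:
                        cand = ref[y:y + factor, x:x + factor]
                        cost = abs(cand.mean() - hash_wz[by, bx])
                        if cost < best_cost:
                            best_cost, best_block = cost, cand
            side_info[y0:y0 + factor, x0:x0 + factor] = best_block
    return side_info

def multi_hypothesis_si(hash_wz, refs, factor=4):
    # Pixel-wise fusion of the per-reference predictions into a single
    # multi-hypothesis side-information frame (here a plain average).
    preds = [hash_motion_search(hash_wz, r, factor) for r in refs]
    return np.mean(preds, axis=0)
```

As a usage example, for a static scene the decoder recovers the reference content exactly from the hash, since the zero-displacement candidate matches each hash block:

```python
rng = np.random.default_rng(0)
ref = rng.random((16, 16))
si = multi_hypothesis_si(make_hash(ref), [ref, ref])
```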