Adaptive Fusion of Multimodal Surveillance Image Sequences in Visual Sensor Networks



Authors: Dejan Drajic (Ericsson d.o.o., Belgrade) and Nedeljko Cvejic

In this paper we present a novel method for fusing sequences of images obtained from multimodal surveillance cameras that are subject to the distortions typical of visual sensor network environments. The proposed fusion method uses the structural similarity measure (SSIM) to estimate the level of noise in regions of a received image, in order to optimize the selection of regions for the fused image. A region-based image fusion algorithm using the dual-tree complex wavelet transform (DT-CWT) is then used to fuse the selected regions. The performance of the proposed method was extensively tested on a number of multimodal surveillance image sequences, and the proposed method outperformed state-of-the-art algorithms, significantly increasing the quality of the fused image both visually and in terms of the Petrovic image fusion metric.
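The abstract's core idea — scoring image regions with SSIM and keeping the least-distorted region from each modality — can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the simplified global SSIM (no Gaussian window), the use of SSIM against a box-blurred copy of a region as a crude noise proxy, and the functions `fuse_by_region` and `box_blur` are hypothetical stand-ins, not the paper's actual algorithm (which additionally applies DT-CWT fusion to the selected regions).

```python
import numpy as np

def ssim(x, y, data_range=255.0):
    """Simplified global SSIM between two equally sized grayscale regions
    (single window over the whole region, constants from the SSIM paper)."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def box_blur(r):
    """Cheap 4-neighbour low-pass filter (wraps at region borders)."""
    return (r + np.roll(r, 1, 0) + np.roll(r, -1, 0)
              + np.roll(r, 1, 1) + np.roll(r, -1, 1)) / 5.0

def fuse_by_region(modalities, region_size=32):
    """Hypothetical region selection: for each tile, keep the modality whose
    tile is most similar to its own blurred copy (i.e. least noisy)."""
    h, w = modalities[0].shape
    fused = np.zeros((h, w))
    for i in range(0, h, region_size):
        for j in range(0, w, region_size):
            best = max(
                (img[i:i + region_size, j:j + region_size].astype(float)
                 for img in modalities),
                key=lambda r: ssim(r, box_blur(r)),
            )
            fused[i:i + region_size, j:j + region_size] = best
    return fused
```

A clean region is nearly unchanged by the blur, so its self-SSIM stays near 1, while additive noise is smoothed away by the blur and pushes the score down; the selection therefore favours the less-distorted modality per tile.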

Published in:

IEEE Transactions on Consumer Electronics (Volume: 53, Issue: 4)