
MURF: Mutually Reinforcing Multi-Modal Image Registration and Fusion



Abstract:

Existing image fusion methods are typically limited to aligned source images and must "tolerate" parallaxes when the images are unaligned. At the same time, the large appearance variances between modalities pose a significant challenge for multi-modal image registration. This study proposes a novel method, MURF, in which, for the first time, image registration and fusion mutually reinforce each other rather than being treated as separate problems. MURF comprises three modules: a shared information extraction module (SIEM), a multi-scale coarse registration module (MCRM), and a fine registration and fusion module (F2M). Registration is carried out in a coarse-to-fine manner. During coarse registration, SIEM first transforms the multi-modal images into mono-modal shared information to eliminate modal variances; MCRM then progressively corrects global rigid parallaxes. Subsequently, fine registration, which repairs local non-rigid offsets, and image fusion are jointly performed in F2M. The fused image provides feedback that improves registration accuracy, and the improved registration in turn improves the fusion result. For image fusion, rather than solely preserving the original source information as existing methods do, we incorporate texture enhancement into the fusion process. We evaluate MURF on four types of multi-modal data (RGB-IR, RGB-NIR, PET-MRI, and CT-MRI). Extensive registration and fusion results validate the superiority and universality of MURF.
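The coarse-to-fine pipeline described above can be illustrated with a toy sketch. This is not the paper's implementation: SIEM, MCRM, and F2M are learned networks, whereas here a gradient-magnitude map stands in for the shared mono-modal representation, an exhaustive integer-translation search stands in for multi-scale rigid registration, and a per-pixel maximum stands in for texture-enhanced fusion.

```python
import numpy as np

def shared_information(img):
    # Stand-in for SIEM: gradient magnitude as a crude mono-modal
    # representation (the paper learns this mapping instead).
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def coarse_register(moving, fixed, search=2):
    # Stand-in for MCRM: brute-force search over integer translations;
    # the paper estimates rigid parameters progressively, multi-scale.
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(moving, (dy, dx), axis=(0, 1))
            err = np.mean((shifted - fixed) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return np.roll(moving, best, axis=(0, 1)), best

def fuse(a, b):
    # Stand-in for F2M fusion: per-pixel maximum; the paper additionally
    # repairs local non-rigid offsets and enhances texture.
    return np.maximum(a, b)

# Toy usage: recover a known global shift, then fuse.
rng = np.random.default_rng(0)
fixed = rng.random((16, 16))
moving = np.roll(fixed, (1, -2), axis=(0, 1))   # simulated parallax
aligned, shift = coarse_register(shared_information(moving),
                                 shared_information(fixed))
print(shift)  # the translation that undoes the simulated offset
```

In the actual method, the fusion output additionally feeds back into registration (and vice versa); this sketch only shows the one-way coarse-to-fine data flow.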
Page(s): 12148 - 12166
Date of Publication: 07 June 2023

PubMed ID: 37285256

I. Introduction

Due to the limitations of hardware devices, images from a single type of sensor can only characterize part of the scene information. For instance, the reflected-light information captured by visible sensors describes scene textures but is susceptible to illumination and shading. Complementarily, the thermal radiation information captured by infrared sensors is insensitive to illumination and reflects the essential attributes of scenes and objects. Multi-modal image fusion aims to synthesize a single image by integrating complementary source information from different types of sensors. As shown in Fig. 1, the fused image exhibits better scene representation and visual perception, which benefits various subsequent tasks such as semantic segmentation [1], object detection and tracking [2], and scene understanding [3]. Image fusion therefore has a wide variety of applications across security, industrial, and civilian fields [4], [5].

