Set-Valued Mapping Cloud Model and its Application for Fusion Algorithm Selection of Dual Mode Infrared Images

It is a long-standing goal to fully exploit difference features and their importance in infrared polarization and intensity images to drive the selection of fusion algorithms, rather than using a fixed algorithm, so as to improve the pertinence and effectiveness of image fusion. However, it is difficult to obtain a good and reasonable fused image because the difference features vary and the relationship between difference features and fusion algorithms is uncertain. This study investigates a two-tuple linguistic cloud model with possibility distributions to solve the fusion algorithm selection problem. Firstly, fusion validity distributions of difference feature amplitudes are constructed, taking both fuzziness and randomness into consideration. Secondly, to build a set-valued mapping between difference feature amplitudes and fusion algorithms, a novel method is proposed, based on possibility theory, for transforming fusion validity degrees into two-tuple linguistic cloud variables. Thirdly, difference feature weights are calculated by nonparametric estimation of the frequency of each corresponding feature in the images. Next, a cloud weighted arithmetic averaging operator is constructed to rank the algorithms. Finally, a case of fusion algorithm selection is provided to illustrate the implementation process and applicability of the proposed method. In addition, the proposed method can be applied to other comprehensive, multi-constrained optimization problems with a clear and effective management process.


I. INTRODUCTION
Infrared polarization imaging and infrared intensity imaging detect target attributes through the polarization and intensity information of infrared radiation, respectively [1], [2]. They are an important part of dual mode detection systems in high performance space-aeronautic monitoring, unmanned aircraft remote sensing and intelligent vehicles. Fusing the two image types aids the effective storage of detection images, as well as the synthesis and understanding of features, thus improving the imaging quality and detection precision of the system [3]-[5]. In the field of image fusion, difference features are a quantitative description of the complementary advantages of dual mode infrared imaging. However, the different imaging mechanisms and the diversity of detection environments and targets make the difference features between the two kinds of images complex and varied even in the same scene. The fusion performance (fusion validity) of each difference feature differs across algorithms, so it is difficult for a fixed algorithm to meet all needs. Only by selecting the fusion algorithm according to the difference features can the algorithm adapt as the difference features change. At present, the dynamic and optimal adjustment of fusion algorithms based on the attributes of difference features has become a key technology and research hotspot for the infrared imaging fusion of complex scenes [6]. (The associate editor coordinating the review of this manuscript and approving it for publication was Sudhakar Radhakrishnan.)
Difference feature attributes of images include type, amplitude, frequency and so on. In infrared polarization and intensity images, the difference feature types usually include gray mean, standard deviation, edge intensity and spatial frequency. The difference feature amplitudes reflect the relative difference magnitudes of the brightness, contrast, contour and detail features. The difference feature frequency is the number of occurrences of each amplitude, which represents how sparsely a difference feature amplitude is spread throughout the images. At present, fusion algorithms are chosen independently for each difference feature, so conflicts arise among the selected algorithms, causing failures through the loss of useful information in the fused image. However, existing research has mainly focused on proposing various fusion algorithms; this paper therefore fills a gap in research on algorithm selection. In fact, fusion algorithm selection is a complex, lengthy task of evaluating and choosing the optimal algorithm by comparing the alternatives against a series of difference features and their attributes.
The complexity and uncertainty of imaging scenes mean that the fusion validity degrees of a difference feature under each algorithm are not exact numerical values; that is, the relationship between difference features and fusion algorithms is uncertain. Currently, possibility distributions and two-tuple linguistic models are introduced to describe this uncertainty, using fuzzy numbers and two-tuple linguistic variables instead of exact values, but they consider only fuzziness, completely ignoring the other side of uncertainty, namely randomness. In addition, the difference feature weights are usually determined by experts' preferences. When the uncertainty of the weights cannot be fully described, it is difficult to reach an acceptable final decision.
Based on the above analysis, this paper proposes a set-valued mapping cloud model for fusion algorithm selection integrating many kinds of difference features. The main contributions are as follows. Firstly, fusion validity distributions of difference feature amplitudes are constructed, and a novel method for transforming fusion validity degrees into two-tuple linguistic cloud variables is proposed, taking both fuzziness and randomness into consideration, in order to build a set-valued mapping between difference feature amplitudes and fusion algorithms. Secondly, the importance weights of the difference features are assessed by nonparametric estimation of the frequency of each corresponding feature in the images. Thirdly, a cloud weighted arithmetic averaging operator is constructed to rank the algorithms. Fourthly, the proposed method streamlines the fusion algorithm selection process, which contributes to further improving the fusion effect.
The remainder of this paper is structured as follows. Section 2 presents a literature review on fusion algorithm selection and uncertainty description. Section 3 introduces the basic theory of the cloud model. Section 4 gives the set-valued mapping cloud model under a two-tuple linguistic environment. In Section 5, the applicability of the proposed model is demonstrated through fusion algorithm selection among six alternatives. The results and discussion are given in Section 6. Finally, conclusions and future research are provided in Section 7.

II. LITERATURE REVIEW
Currently, for dual mode infrared image fusion, fusion algorithms with better effects have been selected by qualitative analysis of the relationship between a known difference feature type and multiple fusion algorithms. For example, Lin et al. [7] established the relationship between the brightness difference feature and the Top-Hat Transform combined with the Support Value Transform (SVT), which improves the contrast of fused images. Gangapure et al. [8] analyzed the fusion performance of several multiresolution transform domain methods, such as DWT, SWT, CVT, CT, DTCWT and NSCT, and built a correspondence between these algorithms and difference features, achieving a better fusion effect. Xiang [9] considered the relationship between statistical difference features and an adaptive dual-channel unit-linking PCNN in the NSCT domain, producing fused images with rich details. Meng et al. [10] described the relationship between the brightness difference feature and a fusion algorithm using saliency maps and interest points, making the fused image retain significant bright targets. Liu et al. [11] studied the mapping between edge detail and fusion schemes based on compressive sensing for visible and infrared images. Li et al. [12] proposed an image fusion framework that integrates the non-subsampled contourlet transform (NSCT) into sparse representation (SR) to achieve better structural similarity and detail preservation in fused images. Huang et al. [13] presented an NSCT-based decomposition method for image fusion, giving the fused image good clarity, contrast and image information entropy. A novel image fusion scheme based on image cartoon-texture decomposition and sparse representation was proposed in [14], with great superiority in information preservation and visualization. These algorithms tend to achieve a good fusion effect in specific situations.
The above studies use crisp values to express the fusion effect of difference features. Fuzzy sets are recommended for situations where crisp values fail to capture the uncertain relationship between difference features and fusion algorithms. An improved fuzzy set was used as the low-frequency fusion rule for infrared and visible image fusion [15]. Fuzzy logic inference has also been employed to fuse multi-band images synchronously, according to basic human judgments and membership functions [16]. Tirupal et al. [17] proposed multimodal medical image fusion based on Yager's intuitionistic fuzzy sets: images are first converted into Yager's intuitionistic fuzzy complement images, and a new objective function called intuitionistic fuzzy entropy is employed to obtain the optimal parameter values of the membership and non-membership functions.
As seen from the above review, fuzzy set theory, combined with multiresolution transform domain methods, is commonly used in place of crisp values to solve image fusion problems. To some degree, fuzzy numbers help reduce the uncertainty in image fusion. However, fuzzy numbers have many limitations when describing linguistic fusion effects. To address this, some researchers have employed two-tuple linguistic variables and possibility distributions to model and manage uncertainty, enabling processes of computing with words in group decision making [18]-[20]. For image fusion, two-tuple linguistic variables and possibility distributions can be used to handle linguistic fusion effects, reducing information loss compared with fuzzy sets and crisp values. However, the two-tuple linguistic model does not take randomness into consideration either. The cloud model, which considers both fuzziness and randomness, can overcome that defect.
This paper builds a set-valued mapping cloud model between difference feature amplitudes and fusion algorithms. With multiple difference features, fusion algorithm selection can be regarded as a multi-criteria group decision making problem, and a cloud weighted arithmetic averaging operator is constructed to rank the algorithms by aggregating the set-valued mapping cloud model in a two-tuple linguistic setting.

III. RESEARCH METHOD OF CLOUD MODEL
Li et al. [21] first put forward cloud theory to account for both the randomness and the fuzziness of information; it realizes the bidirectional cognitive transformation between qualitative concepts and quantitative data. It has therefore received serious attention from researchers.
On the one hand, many researchers have carried out in-depth theoretical studies, and new concepts and definitions have emerged. Wang et al. [22] proposed a 2nd-order generic normal cloud model to establish a relationship between the normal cloud and the normal distribution, and presented the 2nd-order generic forward normal cloud transformation algorithm. Wang et al. [23] proposed an uncertain linguistic method based on a cloud model to solve multi-criteria group decision making problems. These results expanded the connotation and extension of the cloud model. On the other hand, cloud theory has been widely used to solve MCDM problems in various fields. A cloud model-based approach was proposed for water quality assessment, which was reasonably accurate and representative compared with other methods [24]. Wu et al. [25] proposed an extended VIKOR method with a cloud model for supplier selection in the nuclear power industry. Such multi-domain applications show the practical need for MCDM and the good adaptability of the cloud model. Wu et al. [26] proposed a cloud decision framework in a pure 2-tuple linguistic setting to solve the low-speed wind farm site selection problem. Lin et al. [27] integrated variable weight theory and cloud model theory to construct the VW&ICM calculation model for evaluating the construction risk of karst tunnels. In conclusion, the studies mentioned above proved the effectiveness and correctness of cloud theory in both theory and practice.
The cloud model describes the overall quantitative properties of a concept by three numerical characteristics: Expectation (Ex), Entropy (En) and Hyper entropy (He). An intermediate normal cloud can be expressed as Y_0(Ex_0, En_0, He_0), with adjacent normal clouds defined around it. Given two clouds A_1(Ex_1, En_1, He_1) and A_2(Ex_2, En_2, He_2), operations such as addition and scalar multiplication between them are defined as in Wu et al. [28].
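The operation formulas themselves are not reproduced in the text above, but the forward normal cloud generator and the cloud arithmetic they rely on can be sketched in a few lines. This is a minimal illustration assuming the standard definitions from the cloud model literature; the function names are ours, not from [28]:

```python
import math
import random

def forward_normal_cloud(Ex, En, He, n):
    """Forward normal cloud generator: produce n cloud drops (x, mu)
    for a concept with expectation Ex, entropy En, hyper-entropy He."""
    drops = []
    for _ in range(n):
        # Per-drop entropy En_i ~ N(En, He^2), then drop x ~ N(Ex, En_i^2)
        En_i = random.gauss(En, He)
        x = random.gauss(Ex, abs(En_i))
        # Certainty degree of the drop with respect to the concept
        mu = math.exp(-(x - Ex) ** 2 / (2 * En_i ** 2)) if En_i else 1.0
        drops.append((x, mu))
    return drops

def cloud_add(c1, c2):
    """A1 + A2 = (Ex1+Ex2, sqrt(En1^2+En2^2), sqrt(He1^2+He2^2))."""
    (Ex1, En1, He1), (Ex2, En2, He2) = c1, c2
    return (Ex1 + Ex2, math.hypot(En1, En2), math.hypot(He1, He2))

def cloud_scale(lam, c):
    """lam * A = (lam*Ex, sqrt(lam)*En, sqrt(lam)*He), for lam >= 0."""
    Ex, En, He = c
    return (lam * Ex, math.sqrt(lam) * En, math.sqrt(lam) * He)
```

Note that the entropy terms combine in root-sum-square fashion under addition, which is what lets En and He carry the fluctuation of the aggregated information through later steps.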

IV. SET-VALUED MAPPING CLOUD MODEL UNDER TWO-TUPLE LINGUISTIC ENVIRONMENT
The two-tuple linguistic representation proposed by Herrera and Martínez can effectively avoid information distortion and loss [29]. The 2-tuple linguistic model has therefore been extensively studied and applied in various fields [30], [31]. Considering the advantages of the cloud model and the two-tuple linguistic representation, this paper proposes a set-valued mapping cloud model based on possibility theory to solve fusion algorithm selection for dual mode infrared images. Fig. 1 shows the flowchart and detailed steps of the method.

A. CONSTRUCTING FUSION VALIDITY DISTRIBUTIONS OF DIFFERENCE FEATURE AMPLITUDES OF IMAGES
For actual detection images, fusion validity is used to describe the performance of the fused image; it is usually predicted and estimated from the fusion results of existing, limited and similar scene images, that is, the measurement of fusion validity is possibilistic and predictive. Cosine similarity pays more attention to differences in direction rather than just distance and length, and has good spatial rotation invariance [32]. A fusion validity distribution based on cosine similarity is therefore constructed to reflect the dynamic changes of fusion performance over difference feature amplitudes. Let {Q_1, Q_2, ..., Q_z} and {A_1, A_2, ..., A_N} be the difference feature set and the fusion algorithm set of the two image types. Q_i^P and Q_i^I are the amplitudes of difference feature Q_i in the infrared polarization and intensity images, respectively. Q_ij^F is the amplitude of difference feature Q_i in the image fused with algorithm A_j. Firstly, the source images and fused image are divided into blocks by sliding a 16 × 16 pixel window, and the difference feature amplitudes of the images are calculated. Secondly, for each block under the jth fusion algorithm, the fusion validity is obtained with Eq. (1), yielding the fusion validity scatterplot of the ith difference feature amplitude. Thirdly, the amplitude ranges of the ith difference feature are computed in the source and fused images, and the ranges are divided equally into amplitude intervals. Finally, the scatter points in each amplitude interval are counted and averaged to construct the fusion validity distributions of the difference feature amplitudes.
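As a concrete illustration of the block-wise procedure, the sketch below divides an image into 16 × 16 blocks, evaluates a difference-feature function on each block, and scores a block's fusion validity by cosine similarity. The exact form of Eq. (1) is not reproduced in the text, so the `cosine_validity` form here, comparing the source amplitude pair against the fused amplitude replicated to the same dimension, is an assumption for illustration only:

```python
import numpy as np

def block_amplitudes(img, feature, block=16):
    """Slide a non-overlapping block x block window over a grayscale
    image and evaluate a difference-feature function on each block."""
    h, w = img.shape
    vals = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            vals.append(feature(img[r:r + block, c:c + block]))
    return np.array(vals)

def cosine_validity(q_p, q_i, q_f):
    """Hypothetical per-block fusion validity: cosine similarity between
    the source amplitude pair (q_p, q_i) and the fused amplitude q_f
    paired with itself. An assumed form, not Eq. (1) verbatim."""
    u = np.array([q_p, q_i], dtype=float)
    v = np.array([q_f, q_f], dtype=float)
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0
```

Collecting `cosine_validity` over all blocks against the corresponding amplitudes gives the scatterplot that is later binned into amplitude intervals.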

B. TRANSFERRING THE FUSION VALIDITY DEGREE INTO TWO-TUPLE LINGUISTIC CLOUD VARIABLE
Let S = {s_1, ..., s_f} be a linguistic term set and V ∈ [0, f] a value representing the result of a symbolic aggregation operation; the two-tuple expressing information equivalent to V is obtained with the translation function in Eq. (2). Assume that U = [V_min, V_max] is the fusion validity universe. The corresponding two-tuple linguistic cloud variables are then constructed with the golden section method, which determines Ex, En and He for each linguistic term [33].

C. CALCULATING DIFFERENCE FEATURE WEIGHTS
Assume that there are M amplitudes of a difference feature in the images, making up the difference feature amplitude sample set {Q_M}. First, the sample set must be expanded to meet the requirements of nonparametric probability density estimation. Let the moving step length of the difference feature amplitude be x; a new sample set {Q_N} is obtained by interpolation extension, and the amplitude sample number is calculated with Eq. (6).
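For the translation function mentioned at the start of this subsection, a minimal sketch of the Herrera–Martínez 2-tuple translation is given below, assuming the linguistic terms are indexed from 0 (the paper's indexing may differ):

```python
def to_two_tuple(V, terms):
    """Translate a value V in [0, g], g = len(terms) - 1, into a
    2-tuple (s_i, alpha): i = round(V), alpha = V - i in [-0.5, 0.5)."""
    i = min(int(round(V)), len(terms) - 1)
    return terms[i], V - i
```

The symbolic part carries the closest linguistic term, while alpha retains the numerical remainder, which is what prevents the information loss discussed above.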
where Q_NL and Q_NR refer to the left and right margins of {Q_N}. Then, for a random sampling point Q_i^m from the sample set {Q_N}, its probability density estimate is obtained with Eq. (7),
where k_N = √N and Q_N^m < Q_NR + x/2. Thirdly, for each difference feature amplitude interval, the difference feature frequency is obtained by composite trapezoidal integration, as shown in Eq. (8),
where h = (Q_R^k − Q_L^k)/n and w = 1, ..., n − 1. Thus, the difference feature frequency distribution is obtained. Finally, the importance weights are determined with Eq. (9), where ω(Q_i^m) is the weight of difference feature Q_i^m. The weights are transformed into cloud variables using Eqs. (3)-(5), giving the cloud variable Y_ij^ω of the ith difference feature weight under the jth fusion algorithm.
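The frequency-then-weight steps of Eqs. (8) and (9) — composite trapezoidal integration of the estimated density over each amplitude interval, followed by normalization into weights — can be sketched as follows. This is a hypothetical illustration; `density` stands in for the nonparametric estimate of Eq. (7):

```python
def trapezoid(ys, xs):
    """Composite trapezoidal rule over sample points (xs, ys)."""
    return sum((xs[k + 1] - xs[k]) * (ys[k] + ys[k + 1]) / 2.0
               for k in range(len(xs) - 1))

def interval_weights(density, edges, n=100):
    """Integrate an estimated probability density over each amplitude
    interval [edges[k], edges[k+1]] with step h = (hi - lo) / n, then
    normalize the frequencies into importance weights."""
    freqs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        xs = [lo + (hi - lo) * k / n for k in range(n + 1)]
        freqs.append(trapezoid([density(x) for x in xs], xs))
    total = sum(freqs)
    return [f / total for f in freqs]
```

For a uniform density, an interval twice as wide receives twice the weight, matching the intuition that weights track how often amplitudes fall in each interval.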

D. AGGREGATING ALL DIFFERENCE FEATURES AND RANKING THE FUSION ALGORITHMS
To rank the fusion algorithms, the information of all difference features must be aggregated into a final result. The cloud weighted arithmetic averaging (CWAA) operator is used for this, where W_i is the weight of the ith difference feature (i = 1, 2, ..., n).
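Under the standard cloud arithmetic (scalar multiplication and addition of normal clouds), a CWAA-style aggregation can be sketched as below. This is our illustrative implementation of the operator's general shape, not necessarily the exact formula in the paper:

```python
import math

def cwaa(clouds, weights):
    """Cloud weighted arithmetic averaging: aggregate clouds (Ex, En, He)
    with weights w_i (normalized internally), using lam*A = (lam*Ex,
    sqrt(lam)*En, sqrt(lam)*He) and component-wise cloud addition."""
    total = sum(weights)
    w = [wi / total for wi in weights]
    Ex = sum(wi * c[0] for wi, c in zip(w, clouds))
    En = math.sqrt(sum(wi * c[1] ** 2 for wi, c in zip(w, clouds)))
    He = math.sqrt(sum(wi * c[2] ** 2 for wi, c in zip(w, clouds)))
    return Ex, En, He
```

The aggregated Ex drives the ranking, while the aggregated En and He summarize how unstable each alternative's evaluations are.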

V. AN EMPIRICAL STUDY

A. TYPICAL INFRARED IMAGING SCENES
Infrared scenes for imaging and detection generally include humans, vegetation, artificial objects, etc. The infrared intensity and polarization images of six typical scenes in Fig. 2 [34]-[40] are therefore selected as research scenes. As Fig. 2 shows, the infrared polarization images, based on polarization information, have significant edge, detail and other features but lack sufficient brightness features; the infrared intensity images, based on thermal radiation information, have significant brightness features but lack sufficient edge and detail features. The differences in brightness, edge and detail features between the infrared polarization and intensity images are therefore obvious. The gray mean of an image reflects its brightness information, the edge intensity reflects its edge information, and the spatial frequency reflects its detail information. Accordingly, the difference features selected in this paper are gray mean, edge intensity, gray standard deviation and spatial frequency, labeled Q_1, Q_2, Q_3 and Q_4. In addition, six typical and commonly used multi-scale transform fusion algorithms are selected as the alternatives, labeled A_1 through A_6. All experiments are conducted on a desktop with a 3.3 GHz Intel Core CPU and 8 GB of memory, using MATLAB code.

B. PROCESSING AND ANALYZING THE SOURCE INFORMATION
This paper takes the first group of images as an example for selecting the more reasonable algorithm. The four difference feature amplitude distributions are obtained as shown in Fig. 3.
Because page space is limited, only fusion algorithm A_1 is analyzed. The fusion validity scatterplots of the four difference features with algorithm A_1 are obtained using Eq. (1), as shown in Fig. 4. The amplitude range of each difference feature is divided equally into 20 amplitude intervals. The fusion validity degrees of the 20 amplitude intervals are then transformed into two-tuple linguistic cloud variables using Eqs. (2)-(5). The two-tuple linguistic cloud variables for the 20 amplitude intervals of difference feature Q_1 are shown in Table 1; those of Q_2, Q_3 and Q_4 are included in the Appendix.

C. WEIGHT COMPUTATION AND ANALYSIS OF DIFFERENCE FEATURES
Using Eqs. (7)-(9), the difference feature weight functions are obtained, as shown in Fig. 5. The weights of the difference features are also transformed into cloud variables using Eqs. (3)-(5). The weight cloud variables for the 20 amplitude intervals of difference feature Q_1 are shown in Table 2.

D. RANKING THE SIX FUSION ALGORITHMS AND MAKING DECISION
The CWAA operator is used to aggregate all 20 amplitude intervals of each difference feature; the aggregated results are shown in Table 3. Table 3 reveals that the best fusion algorithm is A_5, and the full order of the alternatives can be determined as A_5 > A_6 > A_4 > A_2 > A_3 > A_1. To prove the effectiveness and advantages of the proposed method, a comparison analysis is conducted with two existing extended 2-tuple linguistic aggregation operators: the extended 2-tuple weighted average operator (WA) and the extended 2-tuple weighted geometric operator (WG). The difference feature information and weights of the aforementioned case are used to rank the fusion algorithms with these two operators. The resulting order with the WA operator is A_5 > A_2 > A_4 > A_6 > A_3 > A_1; the order with the WG operator is determined in the same way. The ranking orders are slightly different from that obtained by the proposed method, but all methods agree that the best algorithm is A_5, which shows that the method presented in this paper is effective.
To further illustrate the correctness of the ranking order, the fusion results are evaluated quantitatively using Q^AB/F, standard deviation (STD), gray mean value (M), information entropy (IE) and spatial frequency (SF) [41]. Q^AB/F represents the amount of information transferred from the input images into the fused image; a larger value indicates a fused image more similar to the originals. STD measures the richness of an image's information; a bigger value indicates richer information. M describes the brightness of an image; a bigger value indicates stronger brightness. A larger IE means a larger amount of information in the fused image; the richer the information, the better the fusion quality. SF reflects the clarity of an image; a bigger value indicates better clarity. The evaluation results of the six fusion algorithms are shown in Table 4, with the best result of each metric highlighted in bold.
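The four image-side metrics (M, STD, IE, SF) are standard and can be computed as follows. This sketch assumes an 8-bit grayscale image and the common RMS-difference definition of spatial frequency; Q^AB/F needs the source images and an edge operator, so it is omitted here:

```python
import numpy as np

def image_metrics(img):
    """Gray mean M, standard deviation STD, information entropy IE
    (bits), and spatial frequency SF of an 8-bit grayscale image."""
    img = np.asarray(img, dtype=np.float64)
    M = img.mean()
    STD = img.std()
    # Entropy of the 256-bin gray-level histogram
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    IE = float(-(p * np.log2(p)).sum())
    # Spatial frequency: RMS of row-wise and column-wise first differences
    RF = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
    CF = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
    SF = float(np.sqrt(RF ** 2 + CF ** 2))
    return M, STD, IE, SF
```

A flat image scores zero on STD, IE and SF, while a high-contrast checkerboard pushes all three up, which matches the interpretation of the metrics given above.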
From Table 4 and Fig. 6, we can conclude that A_5 is optimal in the Q^AB/F, STD, IE and SF indices. Compared with the other fusion algorithms, the algorithm A_5 selected by our method therefore has a clear advantage in fusion quality. For the suboptimal algorithm A_6, the M value is higher than the others, the IE and STD values are second highest, and the Q^AB/F value is higher than those of A_2 and A_4, so it is reasonable that A_6 is suboptimal. The transformation proposed in this paper considers not only the average level Ex of the evaluation information, but also the fluctuation and stability depicted by En and He, which contributes to the authenticity of the decision results. In conclusion, the final result obtained by the proposed method, A_5 > A_6 > A_4 > A_2 > A_3 > A_1, is reliable, so the proposed set-valued mapping cloud model can be used to select the optimal fusion algorithm for dual mode infrared images.
Besides correctness and effectiveness, the proposed method should also be robust and stable. To address this, a sensitivity analysis is conducted by slightly changing the amplitude in a certain interval of any one of the difference features. The full results are shown in Fig. 7, where test i (i = 1, 2, 3, 4) means changing the amplitude in a certain interval of the ith difference feature. According to Fig. 7, the final ranking order remains the same whichever difference feature is changed, namely A_5 > A_6 > A_4 > A_2 > A_3 > A_1. That is to say, a single difference feature interval cannot decide the final result. Hence, the proposed approach is relatively stable and robust for selecting the optimal one from multiple algorithms.

VI. CONCLUSION
This paper investigates the two-tuple linguistic cloud model with possibility distributions to solve the fusion algorithm selection problem. Compared with existing research, this study has the following advantages: (i) fusion validity distributions of difference feature amplitudes are constructed to fully describe the uncertainty of information, taking both fuzziness and randomness into consideration; (ii) a novel method for transforming fusion validity degrees into two-tuple linguistic cloud variables builds a set-valued mapping between difference feature amplitudes and fusion algorithms; (iii) the best fusion algorithm can be selected according to many difference features of the images. Moreover, other comprehensive, multi-attribute optimization problems can be solved with this method.
Future studies are planned with the following issues in mind: a. In this paper, four difference features are used to select the best of six fusion algorithms. In further research, more recently published image fusion solutions should be taken into consideration so as to choose an algorithm with an even better fusion effect.
b. Although the proposed method has great advantages in handling MCDM problems, there is still room for improvement. It would be very interesting to extend our research to more sophisticated situations, such as introducing two-dimensional and even higher-dimensional cloud variables.
c. At present, our methodology cannot be applied to the Pythagorean fuzzy framework directly. In further research, images can be converted into Yager's intuitionistic fuzzy complement images, and the proposed methods can thereby be extended to Pythagorean fuzzy uncertain environments [42], [43]. We leave this to future study.
LINNA JI received the Ph.D. degree in signal and information processing from the North University of China, Taiyuan, China, in 2015. She has hosted one National Natural Science Foundation project and has been engaged in several national and provincial projects. Her current research interests include infrared multi-source image fusion, uncertain information processing, and the performance detection and evaluation of complex systems.

XIAOMING GUO was born in Shanxi, China, in 1996. She received the B.S. degree from the North University of China, Shanxi, in 2019, where she is currently pursuing the master's degree. Her research interests include image fusion and infrared information processing.

VOLUME 9, 2021