MSEF-ImgSeg: An Intelligent Algorithm for Multi-Scale Exposure Fusion Using Image Segmentation and GGIF

Multi-scale exposure fusion is a powerful approach for fusing several low dynamic range images into a single high-quality image, so that the fine, attractive details of the source images appear in the result. It typically yields better results than single-scale exposure fusion. However, multi-scale exposure fusion can introduce halo artifacts, and details in the darkest and brightest areas are usually not retained in the final fused image. In this paper, a novel algorithm for multi-scale exposure fusion is proposed, based on image segmentation and edge-preserving filtering. By exploiting superpixels (image segmentation) together with an edge-preserving technique, details in the darkest and brightest regions are well protected and the halo artifacts in the fused image are removed. Experimental results show that the proposed algorithm outperforms other state-of-the-art image fusion algorithms.


I. INTRODUCTION
Technology has evolved rapidly in all fields of life. Image fusion is a digital image processing technique that helps produce a sharper and more detailed image of a scene. The luminance of a natural scene generally spans a high dynamic range that cannot be captured with conventional imaging devices. This limitation can be handled by taking numerous low dynamic range (LDR) images of the same scene at different exposure levels [1]: underexposed, overexposed, and normally exposed. Using the camera response function, these LDR images can be converted into a high dynamic range (HDR) image. HDR is an image enhancement technique that creates an image closer to human perception [2].
Exposure fusion is an alternative approach that directly fuses LDR images into a high-quality image. Mertens et al. [3] introduced an image fusion algorithm that calculates three quality measures: contrast, saturation, and well-exposedness. Contrast is measured with a Laplacian filter and saturation with the standard deviation across color channels; a Laplacian pyramid is used to decompose the input images and a Gaussian pyramid to smooth the weight maps. Heo et al. [7] proposed a technique for image enhancement in which a patch-match technique finds correspondences between long and short exposures to pick suitable color features that join the non-local structure; an entropy-based filtering algorithm is used to exclude the structural elements of an image. Shen et al. [4] introduced a new exposure fusion algorithm that optimizes the balance among color consistency, local contrast, and exposure, producing fused images with fine details. In [5] the authors proposed a technique based on image patches, where each patch is decomposed into three components: mean intensity, signal strength, and signal structure. These components are processed independently and finally combined into the fused image.
In [12] the authors proposed a weighted aggregation strategy, called WAGIF, to sharpen image edges and reduce halo artifacts. The value of a pixel is estimated from patch weights using a novel weighted aggregation method based on mean square error (MSE). This technique enhances edge information and reduces halo artifacts to some extent. Ma et al. [31] introduced a robust multi-scale image fusion technique based on a structural patch-wise decomposition that is more robust to ghosting effects. The model decomposes image patches into three separate components, i.e., signal structure, signal strength, and mean intensity; every component is processed according to patch strength, exposedness, and structural consistency.
A multi-focus image fusion technique based on image super-resolution reconstruction is proposed in [34]. A DualCNN is utilized to reconstruct super-resolution color images; it has two branches, a deep level (to capture the contrast of source images) and a shallow level (to capture structural details). Furthermore, the authors used a bilateral filter to reduce noise and retain spatial consistency. In [35] a new max-min filter-based focus assessment operator is proposed: combining an average filter and a median filter (MMAM), a max-min filter is used to determine the degree of focus of the source images. The algorithm can better measure the sharpness of different regions of the image, and the selected clear areas are more useful for visual perception by humans or machines.
Existing multi-scale exposure fusion algorithms can preserve global contrast well, but the finer details in the brightest and darkest regions of an image are lost when many scales are used. It is therefore necessary to choose the number of scales carefully in order to preserve both global contrast and finer image details. The Gaussian pyramid in [6] fails to achieve this because of halo artifacts in the resultant image, so a new image fusion algorithm is needed. Existing techniques may produce images with reduced halo artifacts, but there is still room for research toward fused images with no halo artifacts at all [8]-[10].
To the best of our knowledge, MSEF-ImgSeg is the first efficient multi-scale exposure image fusion algorithm that combines the benefits of image segmentation with edge-preserving techniques. The proposed algorithm produces no halo artifacts, and details of the brightest and darkest regions are well preserved in the final fused image. The technique loads several images, all captured at different exposures. First, the input images are aligned; then image segmentation is performed on them using SLIC to obtain accurate, uniform, and compact superpixels [13]. From these superpixels, a weight map is computed for the input images using contrast, saturation, and well-exposedness as quality measures. The gradient domain guided image filter (GGIF) [14] is used to smooth the weight maps, because it preserves edges well, and finally fusion is performed. Results reveal that the proposed technique is sharper and more accurate than previous approaches. The advantage of combining image segmentation with an edge-preserving pyramid is that the resulting image is created without halo artifacts and information is well preserved in the final fused image. The major contributions of this paper are as follows:
• An image segmentation based edge-preserving smoothing technique.
• A novel algorithm for multi-scale exposure fusion.
• A better fused image with no halo artifacts.
The rest of this paper is organized as follows: Section 2 presents the literature review; Section 3 describes the proposed methodology; Section 4 presents the results and discussion; finally, Section 5 presents the conclusion and future work.

II. RELATED WORK
The use of multi-scale transformations in image fusion is not new. The first multi-scale fusion technique, based on the Laplacian pyramid, was proposed by Burt in 1984 [17]. A simple pixel-based fusion rule was used: first a Gaussian pyramid of every input image is constructed, then the Laplacian pyramid is obtained as the difference between consecutive Gaussian levels. The Laplacian pyramid code is used to encode and quantize the pyramid elements, and the reconstructed image is finally generated by collapsing the pyramid. Pu and Ni [18] designed a fusion method based on image contrast using the discrete wavelet transform (DWT). The method includes three steps: an activity measure, called the directive contrast, is introduced; a maximum-selection rule is applied to the wavelet coefficients; and the resultant image is reconstructed by the inverse wavelet transform.
Pohl and Genderen [19] introduced a multi-scale image fusion technique that utilizes the pixel values of all input images. The weighted component of every pixel value is enhanced, and the mean of the weighted pixel values at the same position is used to construct the final fused image; principal component analysis is used to determine the weighting factors. This method reduces the redundancy of the image data. Zhang and Cham [20] proposed a multi-scale exposure fusion method based on gradient-directed composition: the resultant image is constructed by combining the exposure images through consistency assessment and visibility measures. The researchers observed that the gradient magnitude is low in over- and underexposed regions and high in well-exposed regions; the algorithm reduces ghosting artifacts in the final fused image. Multi-scale exposure fusion based on illumination estimation is proposed in [21]. This approach utilizes illumination-estimation filtering to measure the well-exposedness of the pixels in the input image sequence. Membership functions are used to modulate the illumination estimation result and apply a weight to each pixel according to its degree of well-exposedness. Ke et al. [22] introduced a perceptual multi-scale exposure fusion method based on local contrast and a complete image quality index. Their fusion technique depends on three quality attributes, i.e., color correction, local saturation, and an overall image quality criterion; the local saturation and image quality index are adopted to achieve natural color reproduction and to boost the fusion quality.
Colorful multi-exposure image quality assessment based on saturation has been proposed by Deng et al. [23]. A new quality measurement metric for colorful multi-exposure images is proposed that depends on texture, saturation, and structure similarities: saturation, structure, and texture are calculated as measures of color, structural, and textural information, and these similarities are mapped to a quality score by an extreme learning machine.
Multi-exposure image fusion based on perceptual quality assessment has been proposed by Ma et al. [24], addressing the perceptual quality measurement of multi-scale exposure fusion images. The method first builds a multi-exposure fusion database and performs a subjective study to rate the quality of pictures produced by various multi-scale exposure fusion techniques. Second, an objective image quality measure for multi-scale exposure fusion images is constructed on the basis of the structural similarity framework: the model measures luminance consistency at coarser scales and preservation of the local structural framework at finer scales. Ocampo and Gousseau [25] present a non-local, patch-wise exposure fusion method: images are captured with different exposure settings, corresponding pixels are grouped into patches, and the patches are compared in the luminance domain.
Connneh et al. [26] present a method for the fusion of multidimensional images utilizing a contrast mapping pattern in the gradient domain. Natural and structure tensors of the LDR images are adopted to create a vector field. The drawback of this algorithm is that noise is present in the final image because the structure tensor is applied directly. To overcome this problem, a weighted structure tensor is presented in [27] and used to generate the vector field; to solve the optimization problem, fine details are obtained in the vector domain using a fast solver. The fine details are combined into a central image that is added to the multi-exposure algorithm to generate the fused image. Although the image quality is increased, halo artifacts remain in images that contain moving objects. In [28] the authors propose an exposure fusion technique for moving objects.
Exposure fusion is affected by ghosting artifacts caused by moving objects when a dynamic scene is captured. Two categories of consistency measures are introduced in this technique: a guidance image is used as a reference for motion detection to avoid ghosting artifacts, and inter-image consistency is used to detect the similarity of pixel intensities across the different exposure settings. The method avoids ghost artifacts but cannot preserve the details of the images. A guided image filter (GIF) based exposure fusion algorithm has been proposed in [29]; results show that halo artifacts still exist in the fused image when moving objects are captured. This suggests that designing an artifact-free algorithm with edge-preserving techniques alone is a difficult task. Therefore, a research gap exists for a new algorithm that replaces the existing Gaussian pyramid in multi-scale exposure fusion.
In this paper we propose a novel algorithm, termed MSEF-ImgSeg, that produces a better fused image in which no halo artifacts appear and the details in the bright and dark regions are well preserved. First, the weight maps are decomposed into a Gaussian pyramid and the segmented input images are decomposed into a Laplacian pyramid. The gradient domain guided image filter is applied to the Gaussian pyramid of the weight maps together with the guidance image. The resultant image is obtained by reconstructing the Laplacian pyramid. Experimental results reveal that the proposed algorithm outperforms other state-of-the-art techniques.

III. PROPOSED METHODOLOGY
An efficient new multi-scale exposure image fusion technique is proposed that produces a more informative and sharper resultant image with no halo artifacts. This research applies image segmentation [13] to the input images. SLIC is used for segmentation because it provides compact, well-delineated superpixels at low computational cost, and the chance of selecting a noisy pixel is very low. The proposed technique uses 200 segments and a compactness control variable j = 10 to obtain the desired superpixels. After segmentation, the weight map is computed [11] using contrast, saturation, and well-exposedness, and the edge-preserving filter (GGIF) is applied to the weight map because of its unique ability to maintain edge details. Finally, fusion is performed to obtain a resultant image with no halo artifacts. In the proposed algorithm the quality measures are computed region-wise; the three quality measures contrast, saturation, and well-exposedness are used, and their product is normalized. To avoid halo artifacts and preserve the details in the brightest and darkest regions, the weight maps are first decomposed into a Gaussian pyramid and the input images into a Laplacian pyramid. The gradient domain guided image filter [14] is applied to the Gaussian pyramid of the weight maps together with the guidance image, and the resultant image is obtained by reconstructing the Laplacian pyramid. Results reveal that the proposed technique is sharper and more accurate than previous research. The benefit of combining the edge-preserving pyramid with image segmentation is that details are better preserved in the brightest and darkest regions with no halo artifacts.

A. IMAGE SEGMENTATION USING SLIC
Superpixels are increasingly used in image processing applications [13]. Using SLIC, accurate, uniform, and compact superpixels can be generated by clustering pixels in a five-dimensional [labxy] space, where l, a, b are the CIELAB color values and x, y are the pixel coordinates.
These superpixels are perceptually uniform, i.e., color distances within them are short. In this approach, distance is measured according to the size of the superpixels: color similarity and pixel adjacency are combined in the 5D space so that conventional cluster sizes approximately equal their spatial extent. Fig 2 shows the segmentation of the input images into superpixels; the superpixels are uniform, compact in size, and adhere well to region boundaries.

1) DISTANCE MEASURE
A desired number Q of equally sized superpixels is taken as input for an image of K pixels, so the size of every superpixel is approximately K/Q pixels. A superpixel cluster center is placed at every grid interval S = √(K/Q).

This technique chooses Q superpixel cluster centers and measures the distance between a pixel and a center using:

D = d_lab + (j/S) · d_xy

where d_lab is the color distance in CIELAB space and d_xy is the distance in the xy image plane, normalized by the grid interval S. The variable j is introduced in D to control the density of the superpixels: the higher the value of j, the greater the emphasis on spatial proximity and the more compact the clusters. This value typically lies in the range [1, 20]; in this work j = 10 is used. The Q cluster centers are sampled on a regular grid and moved to the seed location corresponding to the lowest gradient point in a neighborhood, which minimizes the chance of selecting a noisy pixel. Each image pixel is then associated with the nearest cluster center whose search area overlaps that pixel. After all pixels are assigned, a new center is computed as the mean [labxy] vector of all pixels belonging to the cluster. The procedure of assigning pixels to the closest cluster center and recomputing the centers is repeated until convergence.
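The distance measure above can be sketched as follows; this is a minimal illustration (the function name and the [l, a, b, x, y] vector layout are chosen for the example, not taken from the paper):

```python
import numpy as np

def slic_distance(pixel, center, S, j=10.0):
    """SLIC distance between a pixel and a cluster center.

    pixel, center: 5-vectors [l, a, b, x, y] (CIELAB color + image coordinates).
    S: grid interval, approximately sqrt(K / Q) for K pixels and Q superpixels.
    j: compactness weight; a larger j emphasizes spatial proximity.
    """
    pixel, center = np.asarray(pixel, float), np.asarray(center, float)
    d_lab = np.linalg.norm(pixel[:3] - center[:3])  # color distance in CIELAB
    d_xy = np.linalg.norm(pixel[3:] - center[3:])   # distance in the xy plane
    return d_lab + (j / S) * d_xy                   # xy term normalized by S
```

With j = 10 and S = 10, a pixel five pixels away from the center with identical color gets the same distance as a pixel at the center location with a CIELAB color distance of five.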

B. FORMATION OF WEIGHT MAPS
Given a set of differently exposed images I_q, where q is the number of images, the set contains flat and colorless areas caused by over- and underexposure. All such over- and underexposed areas receive low weight, while well-exposed regions with bright colors receive high weight. Three quality measures, contrast, saturation, and well-exposedness, are used in the proposed technique to compute the weight maps.

1) CONTRAST
In the proposed technique, contrast is measured pixel-wise. First, the RGB images are converted to grayscale. Contrast is then obtained by applying the Laplacian filter k = [0 1 0; 1 −4 1; 0 1 0] to these grayscale images; this assigns high weight to significant elements such as edges and texture.
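A pixel-wise contrast weight following this description can be sketched as below (a hand-rolled convolution with replicated borders; the helper names are illustrative):

```python
import numpy as np

# Laplacian kernel from the text: k = [0 1 0; 1 -4 1; 0 1 0]
K_LAP = np.array([[0, 1, 0],
                  [1, -4, 1],
                  [0, 1, 0]], float)

def contrast_weight(rgb):
    """Contrast quality measure: |Laplacian| of the grayscale image."""
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    padded = np.pad(gray, 1, mode="edge")
    out = np.zeros_like(gray)
    for dy in range(3):                      # correlate with the 3x3 kernel
        for dx in range(3):
            out += K_LAP[dy, dx] * padded[dy:dy + gray.shape[0],
                                          dx:dx + gray.shape[1]]
    return np.abs(out)                       # edges and texture get high weight
```

A flat image yields zero contrast everywhere, while a step edge receives a high weight along its boundary.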

2) SATURATION
The saturation of a color depicts its intensity in the image, which helps select well-exposed pixels for the final fused image. When images are taken with a long exposure, the color details in the brightest areas become less saturated; desaturated images have dim colors and look washed out. The proposed approach avoids desaturated pixels by including a saturation measure in the weight map. In the proposed technique saturation is measured region-wise: for each superpixel, the mean of the R, G, and B channels is computed and the standard deviation across the channels is taken, so all pixels in one superpixel receive the same saturation value.
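One reading of this region-wise saturation (the standard deviation across the R, G, B means of each superpixel; the implementation details here are our interpretation, not the paper's code) is:

```python
import numpy as np

def saturation_weight(rgb, labels):
    """Region-wise saturation: std. dev. across R, G, B of each superpixel's
    mean color; every pixel in a superpixel receives the same value."""
    w = np.zeros(rgb.shape[:2])
    for lab in np.unique(labels):
        mask = labels == lab
        mean_rgb = rgb[mask].mean(axis=0)   # mean R, G, B of this superpixel
        w[mask] = mean_rgb.std()            # spread across the three channels
    return w
```

A gray superpixel (equal R, G, B) gets zero saturation, while a strongly colored one gets a high value.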

3) WELL-EXPOSEDNESS
Well-exposedness W_e is also measured region-wise. Looking at the raw intensities of a channel reveals how well a pixel is exposed: intensities should be kept away from zero (underexposed) and from one (overexposed). A Gaussian curve is therefore applied to calculate well-exposedness: every intensity i is weighted according to its closeness to 0.5 using the Gauss curve exp(−(i − 0.5)² / (2σ²)); if the intensity value is near 0.5, the pixel is considered well exposed. The parameter σ controls the selectivity of the measure; in this research the default value σ = 2 is used, which achieves good results in most cases. The Gauss curve is applied to each superpixel of the R, G, and B channels to compute their well-exposedness. The product of the three quality measures is denoted by W_np in equation 4, where W_c, W_s, and W_e represent contrast, saturation, and well-exposedness:

W_np = W_c × W_s × W_e   (4)

For the fusion process, the weight maps are normalized across the q images in equation 5:

W̄_np = W_np / Σ_{p'=1..q} W_np'   (5)

The results of the proposed weight map are better than those of previous methods: more informative detail in the over- and underexposed regions is maintained by the proposed algorithm.
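The Gaussian well-exposedness weight and the normalization of equation 5 can be sketched as follows (per-channel weights are multiplied, as in Mertens et al. [3]; the conventional σ of 0.2 is used as the default here, while the text sets σ = 2):

```python
import numpy as np

def well_exposedness(rgb, sigma=0.2):
    """Gaussian curve centered at 0.5, applied per channel; channel weights
    are multiplied so a pixel must be well exposed in R, G and B."""
    g = np.exp(-((rgb - 0.5) ** 2) / (2.0 * sigma ** 2))
    return g.prod(axis=-1)

def normalize_weights(weight_stack, eps=1e-12):
    """Eq. 5: divide each image's weight map by the per-pixel sum over
    all q images, so the weights sum to one at every pixel."""
    weight_stack = np.asarray(weight_stack, float)
    return weight_stack / (weight_stack.sum(axis=0) + eps)
```

A mid-gray pixel receives weight 1, a black pixel much less; two identical weight maps normalize to 0.5 everywhere.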

C. EDGE PRESERVING SMOOTHING FILTER
Edge-preserving smoothing filters smooth the texture of an image while maintaining sharp edges. There are many such filters, e.g., the guided, median, bilateral, and gradient domain guided image filters [14]-[16]. In the proposed algorithm the gradient domain guided image filter, an example of a local filter, is used. Edges are well preserved by GGIF, which is also used for image enhancement, saliency detection, matting, and dehazing. Existing algorithms use edge-preserving filters such as the guided image filter and the bilateral filter, but these filters cannot preserve the detail near edges and produce halo artifacts in the resultant image.
GGIF preserves detail near edges better. In this algorithm GGIF is used to smooth the weight maps. The guidance image is chosen as the luminance component of the differently exposed input images, computed as g = 0.299 * I(:, :, 1, :) + 0.587 * I(:, :, 2, :) + 0.114 * I(:, :, 3, :); g is used as the guided image. The weight map in the proposed algorithm consists of two parts, a base layer and a detail layer:

W(p) = W_B(p) + W_D(p)
where W_B(p) is the base layer, formed by the homogeneous regions, and W_D(p) is the detail layer, formed by the fine details. Let ω_ζ(p) be a square window of radius ζ centered at pixel p. W_B is assumed to be a linear transform of the luminance component g (the guidance image) within each window: W_n,B(p) = a_p * g_n + b_p for n ∈ ω_ζ(p), where a_p and b_p are constants chosen to minimize the cost function given in equation 8:

E(a_p, b_p) = Σ_{n ∈ ω_ζ(p)} [ (a_p * g_n + b_p − W_n)² + (λ / τ_g(p)) * (a_p − γ_p)² ]   (8)

where λ is a regularization parameter and τ_g(p) and γ_p are two edge-aware weights that depend on the local two-scale neighborhood variance. The advantage of GGIF is that edges are preserved and no halo artifacts are produced in the fused image. In addition, the gradient domain guided image filter is fast and has low computational complexity.
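As a concrete illustration of this smoothing step, the sketch below implements the plain guided filter with the luminance guidance image described above. The full GGIF additionally carries the edge-aware weights τ_g(p) and γ_p of equation 8; those are omitted here for brevity, so this is a simplified stand-in, not the paper's exact filter:

```python
import numpy as np

def luminance(rgb):
    """Guidance image g = 0.299 R + 0.587 G + 0.114 B, as in the text."""
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def box(img, r):
    """Mean over a (2r+1) x (2r+1) window with replicated borders."""
    h, w = img.shape
    p = np.pad(img, r, mode="edge")
    out = np.zeros((h, w), float)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + h, dx:dx + w]
    return out / (2 * r + 1) ** 2

def guided_smooth(g, wmap, r=2, lam=1e-3):
    """Smooth the weight map wmap using guidance g: within each window,
    W_B = a * g + b with a, b minimizing a least-squares cost (the
    gamma_p / tau_g terms of eq. 8 are dropped in this simplification)."""
    mean_g, mean_w = box(g, r), box(wmap, r)
    var_g = box(g * g, r) - mean_g ** 2
    cov_gw = box(g * wmap, r) - mean_g * mean_w
    a = cov_gw / (var_g + lam)        # linear coefficient a_p
    b = mean_w - a * mean_g           # offset b_p
    return box(a, r) * g + box(b, r)  # averaged coefficients applied to g
```

Because the output is locally a linear transform of the guidance image, edges in g survive the smoothing, which is the property the fusion relies on.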

D. FUSION
Once the edge-preserving pyramid E{W}_n^t(p) of the weight maps and the Laplacian pyramid L{I}_n^t(p) of the input images are formulated, the fused image R is obtained through the Laplacian pyramid given in equation 9:

L{R}^t(p) = Σ_n E{W}_n^t(p) · L{I}_n^t(p)   (9)

The resulting image is created through the reconstruction of the Laplacian pyramid L{R}. The Laplacian pyramid does not drop any information, so the original image can be reconstructed exactly; the reconstruction is obtained by repeatedly interpolating (expanding) the base layer and adding the detail layers. Algorithm 1 describes all the steps of the proposed technique.
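The blending of equation 9 and the pyramid reconstruction can be sketched as below; the REDUCE/EXPAND operators here are simplified stand-ins (2×2 block mean and nearest-neighbour upsampling) for the Gaussian filtering the paper uses:

```python
import numpy as np

def down(img):
    """REDUCE stand-in: 2x2 block mean."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(img, shape):
    """EXPAND stand-in: nearest-neighbour upsampling back to `shape`."""
    out = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return out[:shape[0], :shape[1]]

def lap_pyramid(img, levels):
    """Laplacian pyramid: detail levels plus the low-resolution base."""
    pyr, cur = [], img.astype(float)
    for _ in range(levels - 1):
        small = down(cur)
        pyr.append(cur - up(small, cur.shape))   # detail (Laplacian) level
        cur = small
    pyr.append(cur)                              # base level
    return pyr

def fuse_pyramids(images, weight_pyramids, levels):
    """Eq. 9: L{R}^t = sum_n W_n^t * L{I}_n^t at each level, then collapse."""
    lap_pyrs = [lap_pyramid(im, levels) for im in images]
    fused = [sum(w[t] * lp[t] for w, lp in zip(weight_pyramids, lap_pyrs))
             for t in range(levels)]
    out = fused[-1]
    for t in range(levels - 2, -1, -1):          # collapse: EXPAND and add details
        out = up(out, fused[t].shape) + fused[t]
    return out
```

As a sanity check, fusing two identical images with weights that sum to one at every level reconstructs the image exactly, since the Laplacian pyramid drops no information.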

IV. RESULTS AND DISCUSSION
A. EXPERIMENTATION
The experiments were run on a Core i3 processor with 8 GB RAM, using MATLAB 2017 for image processing. Experimentation is performed on a dataset of 10 differently exposed image sequences. The proposed algorithm is initialized with three or more multi-exposure images, all captured at different exposure settings. Image segmentation with SLIC is applied to the source images to obtain superpixels. After segmentation, the weight map is calculated from the three quality measures, i.e., contrast, saturation, and well-exposedness, of each superpixel; the weight map serves to preserve detail in the brighter and darker regions of the images. A Gaussian pyramid is used for smoothing the weight map, and the gradient guided image filter is then applied to produce a resultant image that is more informative and sharp, preserves detail in the darkest and brightest regions, and avoids halo artifacts.
Subjective and objective comparisons of the proposed technique with existing techniques were performed, and it was observed that the proposed algorithm generates a more informative and sharper image than the existing techniques. The experimental results show that the proposed algorithm produces better images than other state-of-the-art algorithms.

B. RESULTS
An improved multi-scale exposure fusion algorithm is proposed that takes multiple input images of the same scene, all captured at different exposure settings, and applies image segmentation and an edge-preserving technique to obtain the final fused image. The proposed algorithm is compared with three state-of-the-art algorithms: Li et al. [30], Ma et al. [31], and Kou et al. [32]. The GIF-based method in [30] is essentially a two-scale exposure fusion algorithm built on edge smoothing; it preserves saturation and global contrast well but produces halo artifacts in the final fused image. The algorithm in [31] is a single-scale exposure fusion algorithm that reduces halo artifacts but cannot preserve the global contrast of the images. The technique in [32] minimizes halo artifacts. The MSEF-ImgSeg technique differs from the above methods: it extracts compact superpixels from the source images and generates the weight map from these superpixels using contrast, saturation, and well-exposedness as quality measures. The Laplacian pyramid is applied to the input images and the Gaussian pyramid to the weight maps; finally, the Laplacian pyramid is reconstructed and the fused image is generated. The resulting MSEF-ImgSeg image is more concise and sharp and preserves edge information, with no halo artifacts. Figure 3 (a,b,c) shows the source images, captured under different exposures: figure 3 (a) is underexposed, figure 3 (b) has normal exposure, and figure 3 (c) is overexposed. Each image carries a different level of information about the scene, which helps the fusion produce a more informative final image.
Figure 3 (d) shows the result of Li et al. [30], figure 3 (e) the result of Kou et al. [32], figure 3 (f) the result of Ma et al. [31], and figure 3 (g) the result of the proposed algorithm. Subjective assessment indicates that the proposed algorithm outperforms the other state-of-the-art algorithms: the information in the darkest and brightest regions is well preserved. Objective analysis also shows that the result of the current research is of better quality than the previous algorithms. Figure 4 (a) is an overexposed image that contains information about the bottom of the lamp, figure 4 (b) is an underexposed image that gives information about the flame in the scene, and figure 4 (c) is a normally exposed image containing information such as the lamp, table, cup, and chair. Objective analysis again shows that the proposed algorithm outperforms the existing algorithms. Figure 5 (a) is an overexposed image containing information about the rocks and the inside of the cave, figure 5 (b) is a normally exposed image with information about the snow but not the inside of the cave, and figure 5 (c) is an underexposed image showing the outside of the scene. Figure 5 (d) shows the result of Li et al. [30], figure 5 (e) the result of Kou et al. [32], figure 5 (f) the result of Ma et al. [31], and figure 5 (g) the result of the proposed algorithm, which is sharper, more informative, and preserves detail in the brightest and darkest regions. Subjective analysis shows that the proposed algorithm produces outstanding results compared with the previous techniques. Figure 6 (a,b,c) are multi-exposure input images with different levels of information: figure 6 (a) has information about the clouds and the house, but very little about the grass, so this image is underexposed.
Figure 6 (b) has information about the grass and the house, and figure 6 (c) is an overexposed image containing information about the house but very little about the clouds. Figure 6 (d) is the result of Li et al. [30], figure 6 (e) the result of Kou et al. [32], and figure 6 (f) the result of Ma et al. [31]. Figure 6 (g) shows the result of the current algorithm: the image is more informative, preserves detail in the brightest and darkest regions, and exhibits no halo artifacts, while the other algorithms produce halo artifacts in their final images. Figure 7 (a,b,c) are source images containing different levels of information: figure 7 (a) has normal exposure and figure 7 (b) is an overexposed image containing information about the sky. Figure 7 (g) shows the result of the proposed algorithm. Subjective and quantitative assessments show that the proposed algorithm produces a better result than the existing techniques: the brightest and darkest details are better preserved in figure 7 (g), no halo artifacts are produced, and the image is sharper. Objective analysis also confirms that the result of the current algorithm is of better quality than the state-of-the-art algorithms. Figure 8 (a) is underexposed and gives information about the head of the lamp, with no information about the paper and books present in the scene; figure 8 (b) is a normally exposed image giving information about the paper, the lamp, and some objects on the table; figure 8 (c) is overexposed and contains information about the books. Figure 9 (d,e,f) show the results of Li et al. [30], Kou et al. [32], and Ma et al. [31], respectively, and figure 9 (g) shows the result of the proposed algorithm, which can be seen to outperform the existing techniques. Figure 10 (a,b,c) are multi-exposure input images that all contain different levels of information.
Figure 10 (a) contains information about the clouds and the house, but very little about the rocks; this image is underexposed. Figure 10 (b) has information about the rocks and the house, and figure 10 (c) is an overexposed image containing information about the house but very little about the clouds. Figure 10 (d) is the result of Li et al. [30], figure 10 (e) the result of Kou et al. [32], and figure 10 (f) the result of Ma et al. [31]. Figure 10 (g) shows the result of the current algorithm: the resultant image is more informative, preserves detail in the brightest and darkest regions, and produces no halo artifacts, while the other algorithms produce halo artifacts in their fused images. Figure 11 (a,b,c) are source images that all contain different levels of information, and figure 11 (d,e,f,g) are the fused images of Li et al. [30], Ma et al. [31], Kou et al. [32], and the proposed algorithm, respectively; subjective analysis shows that the result of the proposed algorithm is better than the existing algorithms. Figure 12 (a,b,c) are the source images. Figure 12 (a) is an example of an underexposed image: it is very dark and does not capture the information of the books and chairs. Figure 12 (b) is an example of a normally exposed image in which the information is captured very well. Figure 12 (c) is an overexposed image in which the bookshelf is captured very well, but the area visible outside is not clear. Figure 12 (d,e,f) show the results of Li et al. [30], Kou et al. [32], and Ma et al. [31], respectively; these techniques do not preserve the details well in their resultant images. Figure 12 (g) shows the result of the proposed technique; subjective analysis shows that the proposed algorithm retains more detail in the fused image.

C. QUANTITATIVE ANALYSIS
Subjective analysis is important, but it is a tedious and time-consuming task. Various quantitative measures have therefore been proposed to assess image quality. A quantitative measure is an automatic process that saves much of the time spent on subjective analysis. In this research, the Structural Similarity Index Measure (SSIM) has been used to measure the quality of the fused images produced by multi-scale exposure fusion.
SSIM was proposed by Wang et al. [33] and is considered to correlate well with the quality perception of the Human Visual System (HVS). The structural similarity index compares pixel intensities that have been normalized for contrast and luminance. Let i be the input image and j be the test image; then SSIM can be defined as:

SSIM(i, j) = ((2 μ_i μ_j + C1)(2 σ_ij + C2)) / ((μ_i^2 + μ_j^2 + C1)(σ_i^2 + σ_j^2 + C2))

where μ_i and μ_j are the mean intensities of i and j, σ_i^2 and σ_j^2 are their variances, σ_ij is their covariance, and C1 and C2 are small constants that stabilize the division.

In summary, the proposed algorithm takes multi-exposure images as source images, extracts the more informative parts of each image, and combines them. The resultant images are more informative, and details are well preserved with no halo artifacts. Many algorithms have been proposed for these issues but fail to produce fused images free of halo artifacts. Both subjective and objective analyses have been performed on the image data set, and the results show that the proposed technique is better than the existing algorithms.
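As a minimal illustration, the SSIM definition above can be sketched in Python with NumPy. Note that standard SSIM is computed over local sliding windows and then averaged; for brevity this sketch uses global image statistics, and the function name `ssim_global` is our own, not from the cited work.

```python
import numpy as np

def ssim_global(i, j, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Simplified SSIM between input image i and test image j using
    global statistics. c1 and c2 are the usual stabilizing constants
    for 8-bit intensity ranges."""
    i = np.asarray(i, dtype=np.float64)
    j = np.asarray(j, dtype=np.float64)
    mu_i, mu_j = i.mean(), j.mean()          # luminance terms
    var_i, var_j = i.var(), j.var()          # contrast terms
    cov_ij = ((i - mu_i) * (j - mu_j)).mean()  # structure term
    num = (2 * mu_i * mu_j + c1) * (2 * cov_ij + c2)
    den = (mu_i ** 2 + mu_j ** 2 + c1) * (var_i + var_j + c2)
    return num / den
```

For identical images the numerator and denominator coincide, so the score is exactly 1; dissimilar images score lower.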

V. CONCLUSION AND FUTURE WORK
The main aim of this research was to propose a technique that, given multi-exposure images as input, produces a merged image that is sharper and more informative, preserves detail in the brightest and darkest regions, and contains no halo artifacts. Existing algorithms produced resultant images that could not preserve detail in the brightest and darkest regions and that contained halo artifacts; many existing algorithms produce good fusion results but are time consuming, complex, or costly, and some require more powerful devices. A new multi-scale exposure fusion algorithm is proposed that combines the benefits of image segmentation with edge-preserving techniques. It takes multiple images of the same scene captured at different exposures: some under normal exposure, some underexposed, and some overexposed, so that each image contains a different level of information. The proposed technique partitions the image into multiple segments called superpixels, computes weight maps using three quality measures (contrast, saturation, and well-exposedness), decomposes the weight maps with a Gaussian pyramid and the input images with a Laplacian pyramid, and applies a gradient guided image filter to obtain the resulting image, which preserves detail in the brightest and darkest regions, is more informative and sharper, and contains no halo artifacts. Subjective and quantitative analyses were performed, and the results show that the proposed algorithm performs better than the techniques presently used for image fusion. The structural similarity index (SSIM) is used for the quantitative analysis to assess image quality.
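The weight-map step of the pipeline summarized above can be sketched as follows. This is a minimal single-scale Mertens-style illustration using NumPy only: the function names are ours, and the superpixel segmentation, pyramid decomposition, and gradient guided image filter of the full algorithm are omitted.

```python
import numpy as np

def quality_weights(img, sigma=0.2):
    """Per-pixel weight map from the three quality measures
    (contrast, saturation, well-exposedness) for one exposure.
    img: H x W x 3 float array with values in [0, 1]."""
    gray = img.mean(axis=2)
    # Contrast: magnitude of a discrete Laplacian response.
    lap = (np.roll(gray, 1, 0) + np.roll(gray, -1, 0)
           + np.roll(gray, 1, 1) + np.roll(gray, -1, 1) - 4 * gray)
    contrast = np.abs(lap)
    # Saturation: standard deviation across the RGB channels.
    saturation = img.std(axis=2)
    # Well-exposedness: closeness of each channel to mid-gray 0.5.
    wellexp = np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2)).prod(axis=2)
    return contrast * saturation * wellexp + 1e-12  # avoid zero weights

def fuse_naive(images):
    """Single-scale weighted average of the exposure stack
    (the full method blends via Gaussian/Laplacian pyramids instead)."""
    weights = np.stack([quality_weights(im) for im in images])
    weights /= weights.sum(axis=0, keepdims=True)  # normalize per pixel
    stack = np.stack(images)
    return (weights[..., None] * stack).sum(axis=0)
```

Each pixel of the fused output is dominated by whichever exposure scores highest on the three measures at that location, which is the intuition behind preserving detail in both the brightest and darkest regions.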
The results of the quantitative analysis show that the proposed technique is superior to previous techniques and can generate a good-quality fused image.
Although the proposed technique produces state-of-the-art results, there is still room for improvement, and the following directions are recommended for further research. The proposed technique could be applied to infrared and thermal images, which are more challenging for the fusion process. The proposed algorithm computes the weight map using three quality measures (contrast, saturation, and well-exposedness) and preserves detail well in over- and underexposed regions; researchers could explore other ways of forming the weight map, for example by changing the quality measures, and examine the results.
HIRA KANWAL received the master's degree in computer software engineering from the National University of Science and Technology, Pakistan. She is currently a Lecturer with the Khwaja Fareed University of Engineering and Information Technology (KFUEIT). Her current research interests include machine learning, recommender systems, data science, and data mining.
MARYAM AKHTAR received the master's degree in computer software engineering from the National University of Science and Technology, Pakistan. Her current research interests include machine learning, mobile application development, and data mining.
MUHAMMAD ASSAM received the B.Sc. degree in computer software engineering from the University of Engineering and Technology Peshawar, Pakistan, in 2011, and the M.Sc. degree in software engineering from the University of Engineering and Technology Taxila, Pakistan, in 2018. He is currently pursuing the Ph.D. degree in computer science and technology with Zhejiang University, China. He has been a Lecturer (on study leave) with the Department of Software Engineering, University of Science and Technology Bannu, Pakistan, since November 2011. His research interests include brain-machine interface, medical image processing, machine/deep learning, the Internet of Things (IoT), and computer vision.
KHIZRA KHALID received the bachelor's degree in computer science from the Khwaja Fareed University of Engineering and Information Technology (KFUEIT). Her recent research interests include machine learning, mobile application development, and data mining.