Fast and Efficient Visibility Restoration Technique for Single Image Dehazing and Defogging

Poor weather conditions are detrimental to transportation systems and increase the likelihood of road accidents. Haze and fog are the atmospheric phenomena most responsible for reduced visibility and hence degraded traffic performance. The intelligent driving assistance systems developed for automatic vehicles rely on clear vision for various smart applications, such as keeping within the correct lane and recognizing traffic signs. Bad weather decreases visibility significantly, depending on the intensity of the fog and haze, so there is a need to restore clear visibility. This paper introduces a novel visibility restoration approach based on thresholding and the gamma transformation method. The proposed method proceeds as follows. First, the proper selection of the atmospheric light value is responsible for the fidelity of the color and contrast of the recovered images; a thresholding process makes it possible to estimate the atmospheric light. Then, accurately estimating the transmission depth from object to object in a short time is the most challenging aspect, due to its unequal distribution; to solve this problem, the gamma transformation method is used to estimate the depth correctly. Finally, the scene radiance is restored from the hazy and foggy images. The experimental results show that the proposed method ensures good uniformity in both qualitative and quantitative evaluation using eight performance metrics: contrast gain, percentage of saturated pixels, blind contrast assessment, structural similarity index measure (SSIM), image visibility measurement (IVM), mean square error (MSE), visual contrast measure (VCM), and peak signal-to-noise ratio (PSNR). The proposed algorithm outperforms previous works in processing time while achieving visibility restoration results comparable to sophisticated state-of-the-art techniques.


I. INTRODUCTION
In transportation research, vision-based driver assistance systems are currently designed to perform under clear weather conditions. Unfortunately, limited work has been done on poor visibility in the presence of bad atmospheric conditions such as fog and haze. Bad weather hinders vehicle drivers from perceiving road conditions, which leads to an increase in road traffic accidents. To overcome this problem, transportation researchers have been working on the deweathering of images. Nearly all vision-based driver assistance systems require images of the utmost quality for processing, and the degradation of image features under foggy weather conditions can limit the performance of the intended applications. Fog is caused by the presence of water droplets in the atmosphere, formed from the condensation of water vapor. Such a common and natural weather condition can obscure visibility to a great extent, thus limiting the performance of outdoor surveillance systems. Consequently, minimizing the effect of fog on images is essential to increase the efficiency of computer vision algorithms.

The associate editor coordinating the review of this manuscript and approving it for publication was Varuna De Silva.
A mathematical model, as described in [1], [2], expresses the degrading effect of fog on images as an exponential function of the distance from an object to the camera. Hence, additional depth information or estimation is required to perform fog removal. Schechner et al. [3], [4] presented a method that uses several images taken through a polarizer at different orientations to compute scene depth. However, such methods, which use multiple images for processing, require extra cost to perform effectively. In recent days, fog removal algorithms have been presented based on strong assumptions or prior knowledge. Tan [5] proposed an appreciable haze removal approach that maximizes contrast to improve visibility. Although the method produces appreciable results, the restored image is affected by artifacts near depth discontinuities. In [6], a model was introduced based on the assumption that the transmission and the surface shading are locally uncorrelated. This approach can produce impressive results; however, it fails to deal with gray images. Based on statistics from a substantial number of haze-free images, He et al. [7] presented the dark channel prior method, in which haze-free images are obtained by refining the transmission map using soft matting. However, this approach is not valid when the brightness of the scene is similar to the atmospheric light. Other kinds of solutions, recommended in [8]-[17] and [18], [19], are based on the multiscale retinex technique, pixel-based dark channel prior, filtering-based approaches, prior knowledge, and RGB-to-HSI color space conversion, respectively. In [20], [21], the unknown transmission map was estimated by employing a boundary constraint and a learning procedure with the help of Random Forest [22] for an efficient regularization. Wang et al.

VOLUME 9, 2021. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
[23] presented a method using the physical model and the dark channel prior. In [24], a technique was developed based on the second generation of wavelet transforms and the mean vector L2-norm to acquire better visual quality. Ancuti et al. [25] proposed a fusion-based method for dehazing from a single input image. Huang et al. [26] developed a visibility restoration technique for removing haze from a single image.
Berman et al. [27] utilized the color consistency method [28] and observed that the pixels in specific clusters exhibit a non-local nature.
Recently, deep learning approaches have been receiving more attention for image deweathering [28], [29]. Effective deweathering can be achieved after extensive training with various samples. For example, convolutional neural networks (CNNs) were used to estimate the transmission with the aid of some image priors. Subsequently, a multi-scale CNN (MSCNN) was proposed to determine an effective transmission map. In [30], an All-in-One Dehazing Network (AOD-Net) was presented to obtain haze-free images. These approaches give impressive results, but they are ineffectual in the case of dense haze and need numerous training samples; hence, learning-based dehazing techniques require more computation to train, leading to additional processing time. To overcome this problem, Ju et al. [31] proposed a gamma correction prior method for image dehazing, based on extracting the depth ratio of atmospheric scattering. Compared with the aforementioned methods, this method achieves remarkable results. However, its dehazing results depend on manual initialization of the unknown parameters used to estimate depth. So, proper depth estimation is essential for accurate visibility restoration (Fig. 1).

Contributions: The contributions of this work are threefold. First, a threshold-based methodology is introduced for the correct selection of the atmospheric light; the proper selection of the threshold level eliminates non-atmospheric light intensities. Second, the transmission map estimation is demonstrated with reduced analysis and computational complexity. The transmission map is first estimated using the dark channel prior method, which provides appreciable outcomes but contains halo effects; it is therefore imperative to refine the transmission map. The gamma transformation approach refines the transmission map and progressively removes haze.
Finally, the scene radiance is restored by substituting the refined transmission map and the atmospheric light. The unification of these steps intensifies the quality of the images and lessens the computational complexity.
The rest of this paper is organized as follows. Related work and a literature review for the visibility restoration model are outlined in Section II. Section III provides a complete description of the proposed fog and haze removal method. Sections IV and V deal with experimental results, comparisons, and limitations. Finally, the conclusion is presented in Section VI.

II. RELATED WORK
A. VISIBILITY RESTORATION MODEL AND DARK CHANNEL PRIOR
The image deweathering in this work is based on the visibility restoration model derived by Narasimhan and Nayar [1], [2]. The widely used visibility restoration model is as follows:

I(x, y) = J(x, y) t(x, y) + A(x, y)(1 - t(x, y))    (1)

Here, I(x, y) is the input image, J(x, y) is the scene radiance, A(x, y) denotes the global atmospheric light, and t(x, y) is the transmission map. The transmission depends on the distance between the scene and the camera and should vary smoothly with scenic objects at different distances. If the atmosphere is assumed to be homogeneous, t(x, y) can be expressed as:

t(x, y) = e^(-α · dis(x, y))    (2)

where dis(x, y) is the depth of the scene and α represents the scattering coefficient of the atmosphere. In (1), J(x, y)t(x, y) and A(1 - t(x, y)) represent the direct attenuation and the atmospheric veil, respectively. Recovering the scene radiance J(x, y) requires properly estimating the atmospheric light A(x, y) and the transmission t(x, y) from the hazy image I(x, y).

The dark channel prior [7] comes from the observation that most patches in haze-free outdoor images have at least one color channel with some pixels of low intensity. The dark channel of an image J is given by:

J^dark(x, y) = min_{c ∈ {r,g,b}} ( min_{(x', y') ∈ Ω(x, y)} J^c(x', y') )    (3)

where Ω(x, y) is a local patch centered at (x, y). Thus, equation (1) is normalized by the atmospheric light A as:

I^c(x, y) / A^c = t(x, y) J^c(x, y) / A^c + 1 - t(x, y)    (4)

The normalization operation is applied independently to the three color channels. Then the dark channel operation is applied to both sides of (4):

min_c min_Ω (I^c / A^c) = t̃(x, y) min_c min_Ω (J^c / A^c) + 1 - t̃(x, y)    (5)

According to the dark channel prior, the minimal intensity of a haze-free patch is close to zero, mostly due to shadows and dark objects:

J^dark(x, y) = min_c min_Ω J^c ≈ 0    (6)

Since A^c is consistently positive, this leads to:

min_c min_Ω (J^c / A^c) ≈ 0    (7)

Substituting equation (7) into equation (5) allows the transmission map to be estimated directly:

t̃(x, y) = 1 - min_c min_Ω (I^c(x, y) / A^c)    (8)

where t̃(x, y) is the initial transmission map. The transmission map estimated directly from the DCP has severe artifacts; it is therefore essential to refine it. To handle this problem, filtering-based approaches [10]-[14] and [32]-[38] have been employed. These are efficient methods; still, there are conspicuous artifacts at the edges, as shown in Fig. 2.
The original image and the initial transmission map are illustrated in Figs. 2(a) and 2(b). The initial transmission map contains halo artifacts, so refinement of the transmission map is essential in order to restore haze- and fog-free images. Figs. 2(c) and 2(d) show the refined transmission maps and recovered images using the bilateral, guided, and trilateral filters. The guided and trilateral filters significantly reduce the artifacts at the leaf corner compared to the bilateral filter. Detailed descriptions and mathematical analysis of these methods also reveal variations within the structural edges.
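The dark channel and the initial transmission estimate of equation (8) can be sketched in NumPy as follows. This is a minimal illustration, not the paper's exact implementation; the patch size and the weighting factor (0.95, which retains a trace of haze) follow common DCP practice and are assumptions here.

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over the color channels, then a minimum
    filter over a local patch (equation (3))."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    h, w = mins.shape
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def initial_transmission(img, A, omega=0.95, patch=15):
    """t~ = 1 - omega * dark_channel(I / A), i.e. equation (8) with a
    weighting factor omega to keep the result natural-looking."""
    norm = img / A
    return 1.0 - omega * dark_channel(norm, patch)
```

As the surrounding text notes, this direct estimate contains halo artifacts near depth discontinuities, which is what the refinement step later addresses.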

B. FILTERING BASED APPROACHES
Bilateral filtering is a combination of domain and range filtering, which preserves edges while averaging homogeneous intensity regions. It is calculated by:

BF[f](p) = (1 / W_p) Σ_{q ∈ S} g_{σd}(‖p - q‖) g_{σr}(f(q) - f(p)) f(q)    (9)

where

W_p = Σ_{q ∈ S} g_{σd}(‖p - q‖) g_{σr}(f(q) - f(p))    (10)

Here, p and q are pixel locations: p is the center pixel of the neighborhood S, and q is a pixel around p within the neighborhood. σ_d and σ_r denote the standard deviations of the domain and range kernels, and W_p is the normalization factor. The value of g_{σd} decreases with increasing pixel distance, and g_{σr} decreases with the intensity difference (f(q) - f(p)) between pixels p and q. The refined transmission map using the bilateral filter is:

t_r(x, y) = BF[t̃](x, y)    (11)

As shown by the zoomed-in regions in Figs. 2(c) and 2(d), the refined transmission map and the deweathered images recovered by the bilateral filtering method exhibit edge-blurring effects, such as gradient reversal artifacts across the edge of the leaf. To address this problem, an extension of the bilateral filter, called the trilateral filter, was introduced to reduce gradient reversal artifacts; it requires the specification of only one parameter, σ_r. Its first stage applies a bilateral filter to the derivatives of f (i.e., the gradients):

g_∇(x) = (1 / W_∇) Σ_{q ∈ S} ∇f(q) w_1(‖q - x‖) w_2(‖∇f(q) - ∇f(x)‖)    (12)

For the subsequent second bilateral filter, the smoothed gradient g_∇(x), instead of ∇f(x), is used for the second weighting, as suggested in [25], to estimate an approximating plane:

P(x, q) = f(x) + g_∇(x) · (q - x)    (13)

Finally,

f_t(x) = f(x) + (1 / W_t) Σ_{q ∈ S} (f(q) - P(x, q)) w_1(‖q - x‖) w_2(f(q) - P(x, q)) c(x, q)    (14)

where c specifies the adaptive region. Here w_1 and w_2 are assumed to be Gaussian functions with standard deviations σ_1 and σ_2, respectively. The parameter for w_2 is defined as:

σ_2 = β (max_x ‖g_∇(x)‖ - min_x ‖g_∇(x)‖)    (15)

with β = 0.15. The refined transmission map using the trilateral filter is:

t_r(x, y) = TF[t̃](x, y)    (16)

The hazy image, its corresponding refined transmission map, and the restored image are shown in Figs. 3(a), (b), and (c), respectively. The trilateral filter reduces irregular gradient artifacts while preserving edges. Unfortunately, it incurs a high computational cost, and this issue worsens as the image size increases, since the processing time grows super-quadratically with image size. Table 1 compares the processing times of the trilateral and bilateral filters. As is notable from Table 1, the trilateral filter takes a great deal of processing time and hence loses its performance credibility; therefore, it has not been considered further, for this obvious reason.
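A brute-force bilateral filter of the form in equations (9)-(10) can be sketched as follows. The kernel parameters are illustrative defaults, not the paper's settings; the quadratic per-pixel cost this loop makes explicit is exactly why the text later prefers the guided filter.

```python
import numpy as np

def bilateral_filter(f, sigma_d=3.0, sigma_r=0.1, radius=5):
    """Brute-force bilateral filter on a 2-D array: each output pixel is a
    weighted mean, with spatial (domain) and intensity (range) Gaussian
    weights as in equations (9)-(10)."""
    h, w = f.shape
    out = np.zeros_like(f)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    g_d = np.exp(-(xs**2 + ys**2) / (2 * sigma_d**2))  # domain kernel, fixed
    padded = np.pad(f, radius, mode='edge')
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # range kernel, recomputed per pixel from intensity differences
            g_r = np.exp(-(patch - f[i, j])**2 / (2 * sigma_r**2))
            wgt = g_d * g_r
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out
```

With a small σ_r, pixels across a strong edge get near-zero range weight, so the edge survives while flat regions are averaged.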
The guided filter can be used to speed up the transmission map refinement [11]. It is derived from a local linear model that generates the filtering output by considering a guidance image; here, the hazy image I itself is used for guidance. The refined transmission map using the guided filter is:

t_r(x, y) = ā(x, y) G(x, y) + b̄(x, y)    (17)

where, for each local window ω_k,

a_k = ( (1/|ω|) Σ_{(x,y) ∈ ω_k} G(x, y) t̃(x, y) - μ_k t̄_k ) / (σ_k² + ε),   b_k = t̄_k - a_k μ_k

Here, G is the guidance image, μ_k and σ_k² are the mean and variance of G in ω_k, t̄_k is the mean of t̃ in ω_k, |ω| is the number of pixels in ω_k, and ε is a regularization parameter. This filter has an edge-smoothing property but still suffers from the artifacts illustrated in Figs. 2(c) and 2(d). So, to balance processing time against the handling of hazy and foggy images, this work presents a simple but effective method for estimating the atmospheric light and the transmission map. Fig. 4 shows the block diagram of the proposed deweathering algorithm.
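The guided filter's local linear model can be sketched as follows (following He et al.'s formulation; the window radius and ε value are illustrative assumptions). The box filter here is a naive loop for clarity; practical implementations use cumulative sums for O(1) per-pixel cost.

```python
import numpy as np

def box(x, r):
    """Mean filter over a (2r+1) x (2r+1) window (naive implementation)."""
    h, w = x.shape
    p = np.pad(x, r, mode='edge')
    out = np.empty_like(x, dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + 2 * r + 1, j:j + 2 * r + 1].mean()
    return out

def guided_filter(G, t, r=7, eps=1e-3):
    """Refine t using guidance G via the local linear model t_r = a*G + b,
    with per-window coefficients a_k, b_k as in the text; eps is the
    regularization parameter."""
    mu = box(G, r)            # window means of the guidance image
    mu_t = box(t, r)          # window means of the transmission
    cov = box(G * t, r) - mu * mu_t   # covariance of G and t per window
    var = box(G * G, r) - mu * mu     # variance of G per window
    a = cov / (var + eps)
    b = mu_t - a * mu
    # average the coefficients over overlapping windows, then apply
    return box(a, r) * G + box(b, r)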

III. METHODOLOGY
The primary goal of the method is to select the correct atmospheric light and compensate for the inequitable transmission map. Figure 4 illustrates the block diagram of the proposed defogging algorithm. First, the atmospheric light is estimated based on a threshold method. Next, the initial transmission map is estimated using the dark channel prior method, which provides valuable results but contains halo effects; it is thus necessary to refine the transmission map. The transmission map is refined using the gamma transformation, and median filtering is used for smoothing. Finally, the scene radiance is obtained from the atmospheric light and the refined transmission map. A detailed description of the methodology is provided in the subsequent subsections.

A. ATMOSPHERIC LIGHT ESTIMATION
Atmospheric light is a key quantity for estimating the transmission map. In various single-image deweathering methods, A(x, y) is taken as the brightest pixel in the dark channel [7]. The chosen brightest pixels in the dark channel correspond to the densest haze regions. This method depends on the distance differences in real scenes, as in the patches shown in Fig. 5(a). The area in the top blue box is the densest area for estimating A(x, y), while the area in the middle blue box is less hazy. The impact of the atmospheric light on a restored image can be observed in Figs. 5(b) and 5(c). The top restored image obtained with He et al. [7] in Fig. 5(d) still has some haze and artifacts. So, A(x, y) is a critical parameter for recovering the scene radiance, and small variations in it will lead to detrimental results. With the proposed method, an acceptable restored image, marked by the bottom blue box in Fig. 5(d), can be obtained. The approach in [7] estimates the atmospheric light under the assumption that a single A(x, y) dominates the entire image. However, this assumption is not appropriate if white surroundings are present in the captured images. Recent research on atmospheric light motivated the proper selection of A(x, y) from the hazy surroundings. In this work, the estimation of A(x, y) is derived from a new perspective, elucidated as follows. With the visibility restoration model, Eq. (1) can be rewritten as follows:

I(x, y) = J(x, y) e^(-α · dis(x, y)) + A(x, y)(1 - e^(-α · dis(x, y)))    (18)

Assuming dis(x, y) → ∞, equation (18) gives

I(x, y) ≈ A(x, y)    (19)

so that the atmospheric light A(x, y) can be defined as:

A(x, y) = max_{(x, y): dis(x, y) → ∞} I(x, y)    (20)

Various approaches have been developed to estimate A(x, y) more accurately under the assumption dis(x, y) → ∞, but this assumption is not satisfied in every condition. A mathematically modeled solution is therefore presented to improve the estimation of A(x, y). The first step is to find the minimum elements over the color channel index in the scene.
The proper selection of the threshold values eliminates non-atmospheric light intensities. Here, Ã(x, y) denotes the pixel intensity range of the scene. The second step is to find the brightest pixel intensity values based on the threshold range. Lastly, the most widespread intensity in the high-intensity area is considered, in order to accurately measure the atmospheric light A(x, y).
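Since the exact threshold equations are not reproduced here, the following is a minimal sketch of one plausible reading of the three steps above; the `threshold` fraction, the use of the per-pixel channel minimum as the candidate map, and the final averaging are all assumptions for illustration.

```python
import numpy as np

def estimate_atmospheric_light(img, threshold=0.9):
    """Threshold-based atmospheric light estimate (illustrative sketch).

    Step 1: per-pixel minimum over the color channels (candidate map A~).
    Step 2: keep only pixels whose candidate value lies in the high
            threshold range, discarding non-atmospheric intensities.
    Step 3: take the dominant intensity of the surviving high-intensity
            region as A (here approximated by the mean).
    """
    a_tilde = img.min(axis=2)                       # step 1
    mask = a_tilde >= threshold * a_tilde.max()     # step 2
    candidates = img[mask]                          # bright-region pixels
    return candidates.mean(axis=0)                  # step 3
```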

B. TRANSMISSION MAP REFINEMENT
The basic idea of the proposed method is to offset the inequitable transmission map, from which the visibility of foggy images can be retrieved with great efficiency. The refined transmission map is predominant for restoring the scene radiance. To estimate the refined transmission map, some assumptions and constraints are utilized for an easier solution. From equation (3), a simple solution for estimating the transmission map is obtained by applying the dark channel operation independently to the three color channels on both sides of equation (22). Since J is a haze-free image, the dark channel of J is close to zero, as stated in equation (6). Based on this assumption, t̃(x, y) in equation (8) can be considered an initial estimate of the transmission. However, it contains small variations, and detail near the edges remains uncertain, as shown in Fig. 6. Fig. 6(a) shows the original hazy image. Fig. 6(b) depicts the initial transmission map without soft matting. Figs. 6(c) and 6(d) show the transmission map refined with soft matting and its restored image. It can be noticed that the initial transmission map has severe halo artifacts near the edges. The transmission map refined with the soft matting method removes these halo artifacts; however, the image restored from it still contains halo artifacts close to the edges. To address this issue, a solution is proposed to refine the transmission map based on t̃(x, y). To achieve optimal haze and fog removal results, a gamma transformation enhances the transmission map and median filtering smoothens it. The proposed model is described as:

T_E(x, y) = c [t̃(x, y)]^γ

where T_E(x, y) is the enhanced transmission map and c is a positive constant. In this work, the weighting factor w is set to 0.95. To avoid ambiguity and constructively suppress noise components at the edges, the nonlinear filtering characteristic of the median filter performs well.
The smoothed transmission map is:

T(x, y) = medfilt(T_E(x, y))

where T(x, y) is the final transmission map carrying the extracted edge information, from which the refined transmission map T_r(x, y) is obtained. The parameters c and γ are application-specific; a detailed analysis is given in the quantitative evaluation section.
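The refinement step above can be sketched as follows. This is a minimal reading under stated assumptions: the gamma transformation is taken in its standard form T_E = c·t̃^γ, the median window is 3×3, and the refined map is taken to be the median-filtered enhanced map, since the exact combination producing T_r(x, y) is not fully specified here.

```python
import numpy as np

def gamma_enhance(t, c=1.0, gamma=0.8):
    """Gamma transformation T_E = c * t^gamma; c and gamma are
    application-specific, per the text."""
    return c * np.power(t, gamma)

def median_filter(t, radius=1):
    """(2*radius+1)^2 median filter to suppress noise at the edges."""
    h, w = t.shape
    p = np.pad(t, radius, mode='edge')
    out = np.empty_like(t)
    k = 2 * radius + 1
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(p[i:i + k, j:j + k])
    return out

def refine_transmission(t, c=1.0, gamma=0.8):
    """Refined map: gamma-enhance, then median-smooth (assumed pipeline)."""
    return median_filter(gamma_enhance(t, c, gamma))
```

With γ < 1 the transform lifts dark (dense-haze) transmission values, which is consistent with the goal of progressively removing haze.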

C. ESTIMATING THE SCENE RADIANCE
Once the atmospheric light and the transmission map have been estimated appropriately, the scene radiance can be obtained by solving equation (22):

J(x, y) = (I(x, y) - A(x, y)) / max(T_r(x, y), t_o) + A(x, y)

Because the directly recovered scene radiance J(x, y) is prone to noise, the transmission is restricted to a lower bound t_o (typically 0.1) to preserve a small amount of fog in dense regions. In addition, a brightness factor of 1.3 is included to compensate for the dim appearance of the restored image.
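The recovery step can be sketched as follows. The formula is the standard inversion of the visibility restoration model; exactly where the 1.3 brightness factor is applied is an assumption (here, a simple multiplicative scaling of the result with clipping).

```python
import numpy as np

def recover_radiance(img, A, t, t0=0.1, brightness=1.3):
    """Invert I = J*t + A*(1 - t):  J = (I - A) / max(t, t0) + A.
    t0 lower-bounds the transmission to avoid noise amplification;
    the brightness factor (1.3 in the text) scales the result
    (placement of the factor is an assumption)."""
    t = np.maximum(t, t0)[..., np.newaxis]   # broadcast over color channels
    J = (img - A) / t + A
    return np.clip(brightness * J, 0.0, 1.0)
```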

IV. EXPERIMENTAL RESULTS AND COMPARISON
This section provides a comparison of the proposed approach with several leading-edge techniques, including the He et al. method [7], the bilateral filter method [13], the guided filter method [11], the Huang et al. method [26], Ju et al. [31], Cai et al. [28], Ren et al. [29], and Berman et al. [27]. Qualitative and quantitative analyses were performed by evaluating the deweathering results on real-world and synthetic images. In addition, the computational complexity is compared with that of the baseline methods. All algorithms were run on an Intel(R) Core(TM) i5-3337U CPU @ 1.80 GHz with 8.00 GB RAM in the MATLAB 2017 environment.

A. QUALITATIVE EVALUATION
Generally, poor weather conditions have an impact on image quality. Consequently, several images taken in adverse weather were used to assess each method against the proposed method. Real-world images (Tiananmen, y01, y16, Pumpkin, Stadium, Swan, Cone, House, and Mountain) and synthetic images were taken from the Realistic Single Image Dehazing (RESIDE) dataset [39]. The sets of images used for simulating the algorithms are shown in Figs. 7-18. In Fig. 7(k), although the recovered image is of good quality, halo artifacts were still generated. Fig. 7(e) shows the refined transmission map using the Huang et al. method; although this method preserves the edges of the image, it cannot efficiently avoid the artifacts generated in the recovered image, as shown in Fig. 7(l). Fig. 7(g) shows the refined transmission map using the Ju et al. method; the image recovered from dense haze contains halos near the edges, as shown in Fig. 7(m). Fig. 7(h) shows the refined transmission map using the proposed method, which preserves the edges and significantly improves the contrast of the image, as shown in Fig. 7(n). Figs. 14(a) and 15(a) show the original hazy images. All methods substantially preserve edges and increase image contrast. It is noteworthy that Huang et al. and the proposed method also improve image contrast while preserving complex edges. In comparison with these existing methods, the proposed method can substantially eliminate haze and fog and maintain contrast in most recovered images. Compared to recent state-of-the-art techniques [28], [29], [27], the proposed method effectively reduces the amount of fog while maintaining a pragmatic look. The yellow rectangular areas show that the proposed method removes fog and haze better. In the forest image (Fig. 16, top), the proposed method removes the fog while preserving edges well.
The result of [27] shows enhancement, but the recovered images still contain haze and fog. The proposed method reveals more background detail than the discussed methods.
The comparison was further conducted between the proposed method and the existing methods using both synthetic weather-degraded images and their deweathered results. Figs. 9(a) and 10(a) show the ground truth images and transmission maps, respectively. The restored results for the synthesized hazy images are shown in Figs. 9(b)-(f) and 10(b)-(f), respectively. The methods of [28] and [29] exaggerate the transmission range and fail to uncover the details properly in the presence of dense haze. The results of [27] and the proposed method maintain haze-free images approximately close to the ground truth. However, in [27], halo effects on the transmission map still exist.
B. QUANTITATIVE EVALUATION
An image without fog has higher contrast than a foggy image, because in a foggy image the effect of atmospheric light decreases the contrast. Contrast gain can therefore be used as a performance measure for the quantitative analysis of fog removal algorithms. Contrast gain is defined as the mean contrast difference between the defogged and foggy images:

Cgain = C̄_Idefog - C̄_Ifog

where C̄_Idefog and C̄_Ifog are the mean contrasts of the defogged and foggy images, respectively. For an image I(x, y) of size M × N, the mean contrast is given by:

C̄ = (1 / (MN)) Σ_{x=1}^{M} Σ_{y=1}^{N} C(x, y),   C(x, y) = S(x, y) / M̄(x, y)

where S(x, y) and M̄(x, y) are the local standard deviation and local mean at (x, y). A good fog removal algorithm requires a positive contrast gain. If the contrast gain is very large, however, the pixels in the output image become saturated; therefore, with a high contrast gain, one also needs to measure the number of saturated pixels. The percentage of saturated pixels, sigma, is given by:

sigma = (m / (M × N)) × 100

where m is the number of saturated pixels. A small number of saturated pixels indicates a higher performance of the fog and haze removal algorithm. A further quantitative measure, commonly known as blind contrast assessment (e and r), was added to evaluate the effectiveness of the restoration methods. The descriptor e is defined by:

e = (n_r - n_o) / n_o

where n_r and n_o represent the number of visible edges in the restored image and the original image, respectively. The other parameter, r, is the average gradient ratio before and after restoring the foggy image.
r = g_r / g_o

where g_r and g_o correspond to the average gradients in the restored and original images, respectively. Following this line of work, [44] introduced a method to measure the visibility of images (IVM). Another parameter, used for the measurement of visual contrast (VCM), was introduced by Jobson et al. [45]. VCM is defined as:

VCM = (R_v / R_t) × 100

where R_v is the number of local areas with a standard deviation greater than a given threshold and R_t is the total number of local areas. Generally, the higher the VCM, the clearer the recovered image.
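The no-reference metrics above can be sketched as follows. The window sizes, the [0, 1] intensity range for saturation, and the VCM block threshold are assumptions; the local std/mean definition of contrast is one common choice.

```python
import numpy as np

def mean_contrast(img, radius=2):
    """Mean of C(x, y) = S(x, y) / M(x, y): local standard deviation over
    local mean, on a grayscale image in [0, 1]."""
    h, w = img.shape
    k = 2 * radius + 1
    p = np.pad(img, radius, mode='edge')
    c = np.empty_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            win = p[i:i + k, j:j + k]
            c[i, j] = win.std() / (win.mean() + 1e-8)
    return c.mean()

def contrast_gain(defog, fog):
    """Cgain: mean contrast of the defogged minus the foggy image."""
    return mean_contrast(defog) - mean_contrast(fog)

def saturated_pixel_ratio(img):
    """sigma: percentage of fully saturated pixels (0 or 1 intensity)."""
    m = np.count_nonzero((img <= 0.0) | (img >= 1.0))
    return 100.0 * m / img.size

def vcm(img, radius=2, threshold=0.02):
    """VCM: percentage of non-overlapping local blocks whose standard
    deviation exceeds the threshold."""
    h, w = img.shape
    k = 2 * radius + 1
    visible = total = 0
    for i in range(0, h - k + 1, k):
        for j in range(0, w - k + 1, k):
            total += 1
            if img[i:i + k, j:j + k].std() > threshold:
                visible += 1
    return 100.0 * visible / total
```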
The Cgain, sigma, VCM, e, and r performance metrics are summarized in Tables 2 and 3 for the images of Fig. 11. However, some sigma values are very high and are highlighted in red; these red-highlighted values are not acceptable, since more saturated pixels indicate poorer quality of the restored images. In conclusion, a visual inspection shows that the proposed technique is less susceptible to halo artifacts.
In Table 4, the bilateral filter and the proposed method have higher VCM values than the other methods; moreover, most of the proposed method's values are better than those of the bilateral filter method, which suggests that the results are improved. Next, the results in Tables 5 and 6, measured on the basis of e and r, are analyzed. Only a few values of e and r are, respectively, smaller and higher than those of the presented method. It is clear from the summarized tables that the proposed results were approximately the same for a few parameters in some figures, but the overall performance is better, demonstrating the advantage of the proposed method over the existing ones. The presented methodology was also assessed with real and synthetic images in Tables 7 and 8. The computed SSIM, PSNR, IVM, and VCM values for the three images in Fig. 16 are summarized in Table 7. Note that a larger PSNR indicates smaller image distortion; the table shows that the proposed method achieves a significant PSNR among all methods. Next, the synthetic dehazing results are evaluated using the structural similarity (SSIM) and MSE measurements. A larger SSIM indicates better structural similarity between the recovered output and the ground truth image, whereas a lower MSE indicates greater acceptance of the recovered images. From Table 8, the proposed method has the highest SSIM and lowest MSE, which indicates that its recovered results are the closest to the ground truth among all the methods.
In addition to the higher perceptual quality, the main advantage of the proposed method is its low computational complexity and hence reduced processing time. Equations (3), (26), and (31) used in this approach are simple operations for restoring deweathered images. In [7], the soft matting approach is computationally expensive. The bilateral filter [13] has an efficiency problem, since its brute-force implementation is computationally heavy. In comparison with the He et al. method and the bilateral filter method, [11] and [26] significantly reduce the processing time. The estimated cost of [31] is higher due to the computation of a gradient operation for an unknown constant. To demonstrate the efficacy of the proposed method, a comparison of the processing times of the different dehazing techniques at different resolutions is given in Fig. 19.

V. LIMITATION
The proposed work performs well in hazy and foggy conditions. However, like previous methods, a limitation of the proposed method is that it cannot handle dense fog, as shown in Fig. 20. Since dense fog severely interferes with the atmospheric light (which is not constant), the visibility restoration model is not suitable for this case. Fig. 20 provides an example where the proposed method fails to produce a clear image; Figs. 20(b)-(j) show the results of the Bilateral [10], Guided [11], Huang et al. [26], Tan [5], Meng et al. [20], Berman et al. [27], and proposed methods. Future work will address this fog-free baseline research issue based on an imaging model.

VI. CONCLUSION
In this paper, an efficient and fast image restoration method was presented. The proposed method makes use of thresholding and gamma transformation to estimate the atmospheric light and refine the transmission map. The processing of this method is less time-consuming than that of other methods. The advantage is explicitly notable in the contrast, which is relatively high, and in the percentage of saturated pixels, which is lower than that of other methods. The proposed algorithm is thus faster, produces outputs with enhanced contrast and a smaller percentage of saturated pixels, and is qualitatively and quantitatively better than the state-of-the-art methods. Therefore, this approach will be useful in real-time systems such as surveillance, intelligent vehicles, remote sensing, terrain classification, and so on.