Adaptive Dark Channel Prior Enhancement Algorithm for Different Source Night Vision Halation Images

The existing enhancement algorithms amplify the halation area and noise when enhancing night vision halation images. Therefore, this paper proposes an adaptive dark channel prior (ADCP) enhancement algorithm for different source night vision halation images. The algorithm constructs an adaptive transmittance function according to the relationship between the initial transmittance and the critical gray value of halation. The function automatically adjusts the transmittance according to the halation degree of the night vision image, which enables the ADCP algorithm to achieve adaptive enhancement of the images. The experimental results show that the proposed algorithm can effectively improve the clarity and contrast of visible and infrared night vision images and avoid over-enhancement of the halation region of visible images. When the proposed algorithm is applied to the anti-halation processing of different source night vision image fusion, the halation elimination of the fused image is more complete, the details of the dark area such as edges, brightness and color are moderately improved, and the overall visual effect is better than that of existing enhancement algorithms. The effectiveness and universality of the proposed algorithm are verified on images of different night vision halation scenes.


I. INTRODUCTION
Abusing the high beam when vehicles meet at night can easily dazzle the oncoming driver, leaving them unable to perceive and respond to the road conditions ahead in time, which can easily lead to traffic accidents. According to statistics, accidents related to the improper use of high beam lamps account for 30% ∼ 40% of total traffic accidents at night, and this proportion is rising [1].
The existing active anti-halation methods for preventing drivers from being blinded at night mainly include: adding a polarization film on the front windshield [2], [3], infrared night-vision scopes [4], [5], [6], array CCD image sensors with independent and controllable pixel integration time [7], fusion of two visible images with different optical
integral time [8], and infrared and visible different source image fusion [9]. Among them, the night vision anti-halation method of different source image fusion combines the advantages of the non-halation infrared image and the rich color details of the visible image. The resulting fused image has more complete halation elimination and a better visual effect. In the night-vision halation image, the high brightness of the halation area causes the illumination of the remaining dark areas to be further reduced, making the details harder to observe. In order to obtain richer color, texture and other details in the fused image, it is necessary to enhance the images before fusion.
However, in the pre-processing stage of image fusion, the existing image enhancement algorithms amplify the noise and halation area while improving the details in the dark area, which affects the quality of the fusion image. Reference [10] uses a Retinex-based enhancement algorithm to improve the clarity of the night vision anti-halation fusion image; however, the method amplifies noise and produces distortion. The local histogram equalization (LHE) enhancement algorithm [11] works well on images with an overly bright or dark foreground and background, but it magnifies the halation area when used to enhance low illumination images with halation. The homomorphic filter (HF) enhancement algorithm [12] is suitable for enhancing images with uniform ambient light, but not for nighttime images, especially low illumination images with halation. The dark channel prior (DCP) enhancement algorithm [13] can effectively enhance the details in low illumination images, but it enlarges the halation region when processing low illumination images with halation.
This paper proposes an ADCP enhancement algorithm to solve the problem that existing image enhancement algorithms are not suitable for night-vision halation images. The proposed method constructs an adaptive transmittance function according to the relationship between the initial transmittance and the critical gray value of halation. The function automatically adjusts the transmittance according to the halation degree of the night vision image, which enables the ADCP algorithm to achieve adaptive enhancement of the images. The proposed method avoids excessive enhancement of the halation area while improving the dark detail information of the visible image and the definition of the infrared image. Therefore, it efficiently enhances the quality of the anti-halation fusion image.
The remainder of the article is arranged as follows. Section II presents the principle of the proposed algorithm. Section III describes the step-by-step design of the proposed algorithm. Section IV gives the experimental results and analysis. Section V presents a discussion of the results for different enhancement methods. Lastly, Section VI concludes the paper.

II. PRINCIPLE OF ADAPTIVE DARK CHANNEL PRIOR ENHANCEMENT ALGORITHM FOR NIGHT VISION HALATION IMAGE
Night vision halation images are typical backlit images with low illumination and strong light sources. The brightness of the halation area is so high that the effective information is drowned by strong light, while the brightness of the non-halation area is too low to observe the information in the dark area. The infrared image is not affected by the strong light and can clearly show the outlines of the road, vehicles, and pedestrians. However, it is a gray-level image, and information such as vehicle color and traffic lights is missing, which is not conducive to driving safety at night.
Since the gray distribution of the inverted low illuminance image is similar to that of a fog image, the dark channel prior defogging principle was applied in [14] to low illuminance image enhancement with good results. However, when the method is used to enhance the night vision halation image, it enlarges the halation area while enhancing the details of the dark area. This makes the halation more serious, causing excessive enhancement, as shown in Fig. 1.
Our study found that there is a strong correlation between the transmittance of the DCP algorithm and the halation degree of the enhanced image, and the change of transmittance directly affects the image enhancement effect. For the same halation image, when the transmittance is continuously increased, the halation area of the enhanced image increases accordingly. When the halation area of the image is small, the enhanced image obtained with the initial transmittance has better brightness. When the halation of the image is serious, it is better to use a smaller transmittance to enhance the image.
Therefore, this paper reversely sets the appropriate transmittance according to the halation degree of the night-vision image. The proposed approach can improve the details of the dark area while avoiding excessive enhancement of the halation area. First, the adaptive coefficient is determined according to the gray difference between the halation and non-halation areas of the night vision image. Then, the halation critical gray value is iteratively calculated. By combining the critical gray value with the transmittance, a transmittance function that is automatically adjusted by the image halation intensity is constructed to enhance the image. The ADCP algorithm can determine the critical gray value according to the halation area of the image and further automatically adjust the transmittance, realizing the adaptive enhancement of the night vision halation image.

III. DESIGN OF ADAPTIVE DARK CHANNEL PRIOR ENHANCEMENT ALGORITHM
According to the above principle of the ADCP enhancement algorithm, the original image is divided into halation and non-halation regions by determining the critical gray value. A new transmittance function is constructed, which automatically adjusts the transmittance according to the region. In this way, adaptive enhancement of the different source night vision halation image is realized. The overall block diagram of the ADCP algorithm is presented in Fig. 2.

A. DETERMINATION OF CRITICAL GRAY VALUE G_c OF HALATION
After the night vision halation image is converted to gray space [15], the gray value of the halation area is significantly higher than that of the non-halation area. Therefore, the critical gray value G_c at the intersection of the halation and non-halation areas needs to be determined to divide the night vision halation image into the non-halation area R_1 and the halation area R_2. G_c is obtained by the designed adaptive iterative threshold method.
The (i+1)-th gray threshold T_{i+1} is calculated from the i-th threshold T_i as

T_{i+1} = m_i (µ_1 + µ_2) / 2,   (1)

where m_i is the adaptive coefficient of the i-th iteration, automatically adjusted with the degree of halation, and µ_1 and µ_2 are the gray means of the two regions segmented by threshold T_i:

µ_1 = (1/L_1) Σ_{In(j)≤T_i} In(j),   µ_2 = (1/L_2) Σ_{In(j)>T_i} In(j),   (2)

where In(j) is the gray value of the j-th pixel, 0 ≤ j ≤ L. L is the total number of pixels, and L_1 and L_2 are the numbers of pixels of the two regions, with L = L_1 + L_2. The initial gray threshold is taken as

T_0 = (In_max + In_min) / 2,   (3)

where In_max and In_min are the maximum and minimum gray values of the pixels, respectively. The iterative calculation is carried out through Equations (1) to (4) until the threshold no longer changes, i.e.,

T_{i+1} = T_i,   (4)

and the final threshold is taken as the halation critical gray value G_c.
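As an illustration, the iterative threshold search can be sketched in Python. The update rule below follows the reconstruction T_{i+1} = m (µ_1 + µ_2) / 2; the paper adapts m_i per iteration from the fitted m(s) model, so the fixed `m` parameter here is a simplifying assumption (m = 1 reduces to the classic mean-of-means threshold).

```python
def critical_gray_value(pixels, m=1.0, eps=0.5):
    """Iterative threshold search for G_c.

    pixels: flat iterable of gray values in [0, 255].
    m: stand-in for the paper's adaptive coefficient m_i (assumption).
    """
    t = (max(pixels) + min(pixels)) / 2.0        # initial threshold T_0
    while True:
        low = [p for p in pixels if p <= t]      # non-halation candidates
        high = [p for p in pixels if p > t]      # halation candidates
        if not low or not high:
            return t
        mu1 = sum(low) / len(low)                # gray mean of region 1
        mu2 = sum(high) / len(high)              # gray mean of region 2
        t_next = m * (mu1 + mu2) / 2.0           # T_{i+1} = m (mu1 + mu2) / 2
        if abs(t_next - t) < eps:                # threshold no longer changes
            return t_next
        t = t_next

# Example: a bimodal patch (dark road pixels + bright halation pixels)
patch = [10, 12, 14, 16, 200, 210, 220, 230]
gc = critical_gray_value(patch)
```

The returned `gc` separates the two gray-level clusters, so pixels above it fall in the halation area R_2 and pixels below it in the non-halation area R_1.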

B. DETERMINATION AND OPTIMIZATION OF ADAPTIVE COEFFICIENTS
To minimize the number of iterations in the G_c calculation, the adaptive coefficient m needs to be adjusted automatically according to the halation degree of the image. The degree of halation represents the richness of high-brightness information in the halation area. The amount of high-brightness halation information is positively correlated with the area of the halation region and negatively correlated with that of the non-halation region. Therefore, a mathematical model of the adaptive coefficient m as a function of the area ratio s between the halation and non-halation regions can be established.
Taking night halation images as the research object, more than 6200 visible and infrared images are collected from urban trunk roads, residential roads, suburban roads and rural roads, covering the majority of road types. The halation areas in the image sequences grow from small to large as the vehicles approach and shrink again after they pass.
To preliminarily determine the order and variation trend of the model of the adaptive coefficient m, more than 2100 images are selected at a certain interval from the collected image sequence. By determining the adaptive coefficient m and area ratio s of each image, a sample set (s_i, m_i) is obtained and a scatter plot is drawn. It is discovered that m and s are negatively correlated and close to a decreasing function

m = f(s; a, b, c),   (5)

where a, b and c are the constant parameters to be identified. The nonlinear least square method is used to estimate the three parameters [16]; the error sum of squares Q is

Q = Σ_{i=1}^{N} [m_i − f(s_i; θ)]²,   (6)

where f is the nonlinear model with parameter vector θ = (a, b, c) and N is the number of sample images. When Q reaches its minimum, the estimated values of a, b and c are −0.6701, 0.0741 and 1.1750, respectively. The fitting curve obtained with these parameters is called the baseline.

The more sampling points collected in the set (s_i, m_i), the more accurate the fitting curve. In the remaining image sequences, samples located outside the baseline are selected, expanding outwards until the segmentation effect of the halation region meets the criterion of human eye observation. The boundary points distributed on the upper and lower sides of the baseline are fitted again with Equations (5) and (6), which yields the upper and lower boundary curves. All segmentation results within the upper and lower boundary curves satisfy human visual requirements. To obtain the optimal segmentation effect, the corresponding points on the upper and lower boundaries of the baseline are averaged to obtain a new set (s_i, m_i), and the optimal curve is obtained by fitting it [17].

The fitting result of the adaptive coefficient m is shown in Fig. 3. The sum of squared errors (SSE), root mean square error (RMSE) and fitting degree R² are used to evaluate the goodness of fit of each curve. Table 1 shows the goodness-of-fit results of the baseline, the upper and lower boundary curves, and the optimal curve.
It can be seen from Table 1 that the SSE of the optimal curve is nearly one order of magnitude lower than that of the other three curves, indicating better curve fitting. The RMSE decreases by nearly 50%, indicating a small fitting error. The R² also increases from 0.95 to 0.99, close to 1, indicating a better fit of the regression curve.
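The goodness-of-fit indexes reported in Table 1 can be computed directly from the fitting residuals. A minimal sketch follows; the sample values are illustrative, not the paper's data.

```python
def fit_metrics(observed, predicted):
    """SSE, RMSE and R^2 between observed m_i and fitted-curve values."""
    n = len(observed)
    residuals = [o - p for o, p in zip(observed, predicted)]
    sse = sum(r * r for r in residuals)              # sum of squared errors
    rmse = (sse / n) ** 0.5                          # root mean square error
    mean_o = sum(observed) / n
    sst = sum((o - mean_o) ** 2 for o in observed)   # total sum of squares
    r2 = 1.0 - sse / sst                             # coefficient of determination
    return sse, rmse, r2

# Illustrative samples: adaptive coefficients vs. fitted-curve predictions
obs = [1.10, 1.05, 0.98, 0.90, 0.84]
pred = [1.08, 1.04, 0.99, 0.91, 0.85]
sse, rmse, r2 = fit_metrics(obs, pred)
```

A smaller SSE and RMSE and an R² closer to 1 indicate a tighter fit, which is the comparison Table 1 makes between the baseline, boundary and optimal curves.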

C. THE DARK CHANNEL PRIOR ENHANCEMENT OF NIGHT VISION HALATION IMAGE
1) BASIC THEORY OF DARK CHANNEL PRIOR ENHANCEMENT
The DCP algorithm is based on statistics. By observing the dark channel data of thousands of fog-free images, it is found that in most images there is always at least one color channel with very low intensity in some local region, indicating that the minimum intensity of such a region approaches 0 [18]. The dark primary color J_dark(x) in the neighborhood centered on pixel x can be expressed as

J_dark(x) = min_{y∈Ω(x)} ( min_{c∈{r,g,b}} J_c(y) ),   (9)

where Ω(x) is the square neighborhood centered on pixel x. The smaller the selected Ω(x), the more detailed the enhancement effect and the richer the retained information. J_c(y) represents the intensity of channel c at the y-th pixel, and J_dark(x) tends to 0. According to DCP theory, the image quality is related to ambient lighting conditions and physical reflected light. Hence, the imaging model is expressed as

I(x) = J(x) t(x) + A (1 − t(x)),

where I(x) is the input foggy image, J(x) is the output image, t(x) is the transmittance, representing the proportion of reflected light that passes through the atmosphere, and A is the global atmospheric light, usually taken as the maximum intensity of all pixels in the image or the mean of the largest 0.1% of intensities. The defogged image J(x) is then expressed as

J(x) = (I(x) − A) / t(x) + A.

To enhance the inverted image I_inv(x) = 255 − I(x) by ADCP, the atmospheric light constant and the transmittance need to be estimated. The pixel intensities in the halation and non-halation areas differ greatly. The pixel with the largest intensity is usually located in the halation area formed by vehicle headlights, while the darkest pixel may be located anywhere in the non-halation area. To smooth the enhanced image, the estimation of A_inv should consider both the brightest and the darkest pixels. After actual testing, the average intensity of the brightest 0.1% and darkest 0.1% of pixels (n pixels each) is taken as the estimated value of A_inv:

A_inv = (1/2n) Σ_{k=1}^{n} [I_highest(k) + I_lowest(k)],

where I_highest(k) and I_lowest(k) are the k-th highest and k-th lowest intensities, respectively.
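The dark channel of Equation (9) can be sketched with a brute-force neighborhood minimum. This is a didactic pure-Python version on a tiny RGB array; a practical implementation would use an erosion (minimum) filter instead of nested loops.

```python
def dark_channel(img, radius=1):
    """J_dark(x): min over the square neighborhood Omega(x) of the
    per-pixel channel minimum. img[row][col] is an (r, g, b) tuple."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            best = 255
            # square neighborhood Omega(x) of side 2*radius + 1 (clipped)
            for yy in range(max(0, y - radius), min(h, y + radius + 1)):
                for xx in range(max(0, x - radius), min(w, x + radius + 1)):
                    # min over channels first, then min over the neighborhood
                    best = min(best, min(img[yy][xx]))
            out[y][x] = best
    return out

img = [
    [(200, 210, 190), (20, 30, 25)],
    [(15, 18, 22),    (220, 215, 225)],
]
dc = dark_channel(img)
```

For fog-free well-lit content the resulting values tend toward 0, which is exactly the prior the DCP algorithm exploits.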
The initial transmittance t_inv(x) is calculated as

t_inv(x) = 1 − w · averf( min_c ( I_inv^c(y) / A_inv ) ),

where w is the scaling factor, w ∈ (0, 1), which retains a small amount of halation information so that the enhanced image has a certain depth of field. To ensure the universality of w, we adjusted w and observed the subjective effect of the enhanced images in different scenes. The enhancement effect is better for w ∈ [0.94, 0.98], so w is set to 0.96 in this paper. averf denotes mean filtering, which replaces the minimum filtering over Ω(x) in Equation (9) when calculating the dark channel [19]. It significantly reduces the algorithm complexity and avoids the boundary ambiguity caused by the minimum filter.
In addition, to retain more details and obtain higher resolution and contrast in the non-halation region R_1, the neighborhood Ω(x) should be set relatively small, whereas Ω(x) in the halation region R_2 should be set relatively large to avoid amplifying the halation area. The neighborhood sizes of the two regions are determined by adjusting Ω(x) and comparing the enhancement results. A new transmittance function T_inv(x), ranging in (0, 1), is constructed by adjusting the transmittance according to the gray value G_x at pixel x and the halation critical gray value G_c. When G_x ≥ G_c, pixel x is located in the halation area and the transmittance is adjusted adaptively. When G_x < G_c, pixel x is located in the non-halation area and the initial transmittance t_inv(x) is used.
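The piecewise selection described above can be sketched as follows. The exact adaptive adjustment applied in the halation branch comes from the paper's fitted model, so the `halation_scale` factor below is a hypothetical stand-in for that adjustment, not the paper's formula.

```python
def adaptive_transmittance(t_init, gray, g_c, halation_scale=0.6):
    """Piecewise T_inv(x).

    t_init: initial transmittance t_inv(x) at pixel x.
    gray:   gray value G_x at pixel x.
    g_c:    halation critical gray value G_c.
    halation_scale: hypothetical attenuation for the halation branch.
    """
    if gray >= g_c:                  # pixel lies in the halation region R_2
        t = t_init * halation_scale  # smaller transmittance limits enhancement
    else:                            # non-halation region R_1
        t = t_init                   # keep the initial transmittance t_inv(x)
    return min(max(t, 0.0), 1.0)     # clamp to a valid range

t_halo = adaptive_transmittance(0.8, gray=230, g_c=114)  # halation pixel
t_dark = adaptive_transmittance(0.8, gray=40, g_c=114)   # dark-area pixel
```

Lowering the transmittance only where G_x ≥ G_c is what prevents the halation area from being amplified while the dark area still receives the full enhancement.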
The image J_inv(x) processed by ADCP is

J_inv(x) = (I_inv(x) − A_inv) / T_inv(x) + A_inv.

Finally, the enhanced night-vision halation image is obtained by inverting back:

J^c(x) = 255 − J_inv(x).

A stepwise visualization of the overall framework is shown in Fig. 4.
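Per pixel, the invert-dehaze-invert pipeline amounts to the following sketch. The transmittance floor `t_floor` is a common safeguard in DCP implementations against division by a near-zero transmittance, assumed here rather than taken from the paper.

```python
def enhance_pixel(intensity, a_inv, t, t_floor=0.1):
    """One-pixel ADCP-style recovery on a gray value in [0, 255]."""
    i_inv = 255 - intensity                        # I_inv(x) = 255 - I(x)
    # J_inv(x) = (I_inv(x) - A_inv) / T_inv(x) + A_inv
    j_inv = (i_inv - a_inv) / max(t, t_floor) + a_inv
    j = 255 - j_inv                                # J(x) = 255 - J_inv(x)
    return min(max(j, 0), 255)                     # clip to the valid gray range

# Both pixels are brightened; the smaller transmittance used in dark
# regions would lift low-illumination detail more strongly.
dark = enhance_pixel(30, a_inv=240, t=0.5)
bright = enhance_pixel(200, a_inv=240, t=0.9)
```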

IV. EXPERIMENTAL RESULTS
The proposed algorithm is tested on 4 typical road types covering most nighttime traffic, i.e., urban trunk road, residential road, suburban road, and rural road. The enhancement effect for night vision halation images and its effectiveness in the anti-halation processing of different source image fusion are evaluated.

A. ENHANCEMENT RESULTS AND ANALYSIS
For 8 groups of halation scenes on the above 4 roads, a comparative performance analysis among different algorithms (LHE, HF, MSR, DCP, and the proposed ADCP) is carried out in terms of brightness, clarity and effective information. A combination of subjective and objective methods is adopted to evaluate the enhancement effect. The objective indexes include mean absolute error (MAE) [20], average gradient (AG), information entropy (IE), structural similarity (SSIM) [21] and peak signal-to-noise ratio (PSNR) [22]. The MAE describes the brightness change of the image, the AG describes its definition, and the IE reflects the richness of image information. The PSNR and SSIM reflect the distortion degree and the similarity between the enhanced and original images, respectively.
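Three of these indexes have simple closed forms. A sketch on flattened gray images (MAE, PSNR and IE; the AG and SSIM need spatial structure and are omitted for brevity):

```python
import math

def mae(a, b):
    """Mean absolute error: average brightness change between two images."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB: higher means less distortion."""
    mse = sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    return float("inf") if mse == 0 else 10 * math.log10(peak * peak / mse)

def entropy(pixels):
    """Information entropy in bits from the gray-level histogram."""
    hist = {}
    for p in pixels:
        hist[p] = hist.get(p, 0) + 1
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in hist.values())

# Tiny illustrative "original" and "enhanced" flattened gray images
orig = [10, 20, 30, 200]
enh = [30, 45, 60, 200]
```

On real images these are computed over the full pixel arrays; the ranking logic in Tables 2-5 follows directly from these definitions.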

1) URBAN TRUNK ROAD
There is light scattered from street lamps and surrounding buildings on the urban trunk road, and the oncoming vehicle drives with low beam lights. In the original visible image, the road can be seen around the halation area, but it is difficult to observe the pedestrians and road conditions in the dark area. In the infrared image, the contours of cars and pedestrians are visible but the color is missing. When there is only one oncoming vehicle, the halation area formed by its headlights is small. When there are multiple vehicles, the halation areas formed by the headlights merge into large halation areas. The original images and enhancement results of small and large halation areas on the urban trunk road are shown in Fig. 5 and Fig. 6 respectively, and the corresponding objective indexes of the enhanced images are shown in Table 2.
From Fig. 5, Fig. 6 and Table 2, after LHE enhancement the MAE is the largest: the halation area is obviously enlarged in the visible image, and the features of vehicles and pedestrians are severely lost in the infrared image. Meanwhile, the IE is also the largest: the brightness of the dark area is good, and the pedestrians, trees and curbs can be seen clearly in the visible image, while the outlines of background objects such as buildings and road traffic signs are clear in the infrared image. After HF enhancement, the dark area information is only slightly improved in the visible image, and the visual effect is poor. Because the enhanced image changes little overall and is more similar to the original image, the PSNR and SSIM are the highest.
After MSR enhancement, the pedestrians and vegetation along the road show obvious color improvement in the visible image, and the halation areas are not excessively enhanced. But the noise is obvious, which leads to a falsely high AG. The definition is further improved over LHE and HF in the infrared image, but the noise is serious and affects the visual effect.
The processing results of the DCP algorithm differ considerably between the 2 halation scenes. In the small halation scene, the dark area information is improved in the visible image. In the large halation scene, the halation area is enlarged, and the brightness of the dark area is barely improved.
The brightness is moderately improved in the visible image enhanced by the proposed ADCP. The halation area is not amplified while the detail information of dark area is significantly improved. Moreover, the infrared image has no noise. All indexes are balanced, and the overall visual effect is good.

2) RESIDENTIAL ROAD
There is weak light scattered from buildings on residential road at night, which is darker than the overall environment of urban trunk road. The oncoming vehicle drives with low beam lights. When the vehicle is far away, the halation area is small. The halation areas become larger and brighter as the vehicles approach. It is difficult to observe the pedestrians and road conditions in dark areas except the halation area in the visible image. The outlines of vehicles, pedestrians and surrounding buildings are visible in the infrared image, but the definition is low and other details are missing. The original images and enhancement results of small and large halation area of residential road are shown in Fig. 7 and Fig. 8 respectively, and the corresponding objective indexes of enhancement images are shown in Table 3.
From Fig. 7, Fig. 8 and Table 3, after LHE enhancement both visible and infrared images show excessive enhancement, resulting in a falsely high MAE.
The image processing results differ considerably between the 2 halation scenes after HF enhancement. In the small halation scene, the enhanced image is barely improved and its overall difference from the original image is small, so the SSIM is high. In the large halation scene, the halation area is obviously enlarged and the MAE is high.
The brightness is improved and the halation area remains unchanged in the visible image enhanced by MSR. The details of the enhanced infrared image, such as contour and color, are significantly improved. All indexes are moderate, but there is too much noise and the visual effect is poor. After DCP enhancement, in the small halation scene, the brightness of the dark area is slightly improved in the visible image, while the pedestrians are almost invisible. In the large halation scene, the halation area is slightly enlarged.
Moreover, the MAE of the image enhanced by ADCP is moderate, the other indexes are good, and the visual effect is optimal.

3) SUBURBAN ROAD
There is almost no light except the headlights, and the overall illumination is very low. The oncoming vehicle drives with high beams. The high-brightness halation of the visible image makes it difficult to observe the information in the other dark areas, which can easily cause traffic accidents. The outlines of vehicles and pedestrians are visible in the infrared image, but the definition is low. The original images and enhancement results of small and large halation areas on the suburban road are shown in Fig. 9 and Fig. 10 respectively, and the corresponding objective indexes of the enhanced images are shown in Table 4.
From Fig. 9, Fig. 10 and Table 4, the MAE is high after LHE enhancement. The visible image is excessively enhanced, so that the previously clear license plate becomes illegible. Local regions of the infrared image are overexposed and noise is introduced, which leads to a falsely high AG.
In the small halation scene, the effect on the dark area is poor in the visible image enhanced by HF, and its overall difference from the original image is small, so the PSNR and SSIM are high. In the large halation scene, the over-enhancement phenomenon is serious.
The MSR algorithm does not amplify the halation area, and the pedestrians and license plates are clearly visible, but the noise is obvious.
After DCP enhancement, in small halation scene, the details of dark area are slightly improved. In large halation scene, the halation area is amplified.
After ADCP enhancement, all indexes are balanced and visual effect is good. In the visible image, the overall brightness is improved moderately. The halation area remains the same. The pedestrian and license plate are clear. The definition is high in infrared image.

4) RURAL ROAD
Rural roads have less light and are generally darker than suburban roads. There are almost no details other than halation in the visible images. The contours of pedestrians and vehicles are blurred, and other details are difficult to obtain. The original images and enhancement results of small and large halation areas on the rural road are shown in Fig. 11 and Fig. 12 respectively, and the corresponding objective indexes of the enhanced images are shown in Table 5.
From Fig. 11, Fig. 12 and Table 5, the MAE is high after LHE enhancement. The visible image is over-enhanced, and local regions of the infrared image are overexposed.
After HF enhancement, consistent with the suburban road scene, the enhancement of the dark area is poor in the small halation scene, while the overall difference from the original image is small, so the PSNR and SSIM are high. In the large halation scene, there is excessive enhancement.
After MSR enhancement, the halation area remains the same, but too much noise leads to a falsely high AG.
The DCP algorithm has the same image enhancement effect on the rural road as on the suburban road scene. The details of the dark area are slightly improved in the small halation scene, while the halation area is amplified in the large halation scene. All indexes are balanced after ADCP enhancement, and the visual effect is optimal.

B. RESULTS OF ANTI-HALATION OF DIFFERENT SOURCE NIGHT VISION IMAGE FUSION
The fusion evaluation is carried out in the most common halation scenes (small halation on the urban trunk road, small halation on the residential road, large halation on the suburban road, and large halation on the rural road). The visible and infrared images enhanced by the above 5 algorithms are fused with the improved IHS-Curvelet transform algorithm [23]. Figs. 13-16 show the anti-halation fusion results of visible and infrared images processed by the 5 algorithms in the 4 halation scenes, respectively.
From Figs. 13 to 16, it is observed that the definition is significantly improved in the fusion images enhanced by LHE. However, due to excessive enhancement of the halation area, the halation still exists around the vehicle, resulting in serious distortion. The fused image enhanced by HF is dark overall and has low contrast. The halation is eliminated in the fusion images enhanced by MSR, the edges are clear, and the details are better improved, but the noise is large, the fusion images differ considerably across halation scenes, and the universality of MSR is insufficient. The halation elimination is not complete in the fusion image enhanced by DCP, and the visual effect is poor. The halation is eliminated in the fusion images enhanced by the proposed ADCP, details such as brightness and color are moderately improved in the dark area, and the overall visual effect is good. The results show that the proposed ADCP algorithm outperforms the other 4 enhancement algorithms in improving the anti-halation fusion image in different halation scenes.
To objectively evaluate the quality of the anti-halation fusion image and avoid the influence of high-brightness halation, a quality evaluation method based on adaptive partition is adopted [24]. The halation elimination index D is used to evaluate the elimination effect in the halation area. The mean µ, average gradient AG, spatial frequency SF, cross-entropies [25] CE_FU-VI and CE_FU-IR, mutual information [26] MI_FU-VI and MI_FU-IR, and edge retention Q_AB/F are used to evaluate the quality of the non-halation area in terms of characteristics, information retention and human visual effect. A radar chart is drawn to analyze the anti-halation fusion images directly. Since a smaller CE index indicates a better fusion image, it is converted into CE^-1 for data analysis. Fig. 17 demonstrates the radar chart of the indexes of the fusion images under the 4 halation scenes. Fig. 17 shows that the indexes of the anti-halation fusion images obtained by different enhancement algorithms differ significantly, indicating that different algorithms have different impacts on the quality of the fusion images. The fusion image evaluation indexes of the proposed enhancement algorithm are relatively good in the different halation scenes, and the area enclosed in the radar diagram is the largest for each halation scene. Thus, the quality improvement and human visual effect of the fusion image are relatively good after enhancement by the proposed ADCP algorithm.

V. DISCUSSION
The results and analysis of the above experiments show that the LHE algorithm enhances the details of the dark area in visible images, but the halation area is also enlarged. The HF algorithm is insufficient to enhance the dark information in the small halation scene, and there is over-enhancement in the large halation scene. The image enhanced by the MSR algorithm has no halation and good contrast, but the noise is obvious and reduces the visual effect. The DCP algorithm has a relatively good effect on the small halation scene, but over-enhances the large halation scene. In contrast, the proposed ADCP algorithm has a better overall visual effect, and all indexes are balanced. The algorithm achieves a good balance between limiting the amplification of the halation area and improving the details of the dark area, which makes it more suitable for the enhancement of different night vision halation images.
The anti-halation results of different source night vision image fusion further reveal the effectiveness and universality of the ADCP algorithm in various halation scenes.

VI. CONCLUSION
The existing enhancement algorithms amplify the halation area and noise when enhancing night vision halation images. To resolve this problem, this paper proposes an ADCP enhancement algorithm for different source night vision halation images. Considering the gray difference of images with different halation degrees, a function is established between the critical gray value of the halation image and the transmittance of the DCP algorithm, and the adaptive enhancement of the night vision halation image is realized. The experimental results of 8 different halation scenes on 4 typical roads show that the proposed ADCP algorithm not only effectively improves the clarity and contrast of night vision images, but also avoids over-enhancement of the halation area in visible images. The fusion experiment results of four common halation scenes show that when the ADCP algorithm is applied to the anti-halation processing of night vision image fusion, the halation elimination is more thorough, details such as edge, texture and color are moderately improved, and the overall visual effect is good. The results verify the effectiveness and universality of the ADCP algorithm in the anti-halation processing of different source image fusion.
QUANMIN GUO was born in Weinan, Shaanxi, China, in 1974. He received the B.S. degree in industrial automation and the M.S. degree in control theory and control engineering from the Xi'an University of Architecture and Technology, in 1997 and 2004, respectively, and the Ph.D. degree in mechanical engineering from the Xi'an University of Technology, in 2018. He is currently a Professor with the School of Electronics Information Engineering, Xi'an Technological University. He has published over 30 articles. His research interests include intelli-