A Novel Two-Step Strategy Based on White-Balancing and Fusion for Underwater Image Enhancement

Images captured in underwater environments often suffer from color distortion, detail loss, and contrast reduction caused by scattering and absorption in the medium. This paper introduces an enhancement approach that improves the visual quality of underwater images without requiring any dedicated devices or information beyond the single input image. The proposed strategy consists of two steps: an improved white-balancing approach and an artificial multiple underexposure image fusion strategy for underwater imaging. In the white-balancing step, the optimal color-compensation approach is selected by the sum of the Underwater Color Image Quality Evaluation (UCIQE) and the Underwater Image Quality Measure (UIQM) scores; an optimal white-balanced version of the input is then obtained by combining the well-known Gray World assumption with the selected channel-compensation approach. In the fusion step, the gamma-correction operation first generates multiple underexposure versions; 'contrast', 'saturation', and 'well-exposedness' are then used as three weights blended into the well-known multi-scale fusion scheme. A wide range of qualitative and quantitative evaluations validates that images enhanced by our strategy have better visual quality than those produced by some state-of-the-art underwater dehazing techniques.


I. INTRODUCTION
The utilization and exploitation of various marine creatures and resources have been a hot issue recently. Besides photography and video recording, underwater imaging has been applied to various work tasks and scientific discoveries, such as underwater artificial-facility monitoring [1], underwater object detection [2], marine-creature discovery [3], and underwater vehicle control [4]. However, images captured directly in underwater environments always suffer from severe degradation, such as undesired color cast, contrast reduction, and detail loss [5], [6], caused by light scattering and absorption, which seriously limits the acquisition of available information from the image [7]-[10]. Therefore, the acquisition of clear and accurate images is an important prerequisite for helping scientists understand the underwater environment.
(The associate editor coordinating the review of this manuscript and approving it for publication was Jiajia Jiang.)
Numerous approaches have been proposed to improve the visibility of underwater images. For example, some researchers proposed to use dedicated hardware devices [11], [12] or polarization-based methods [13], [14] to enhance degraded images. Even though these methods performed excellently, limitations still existed: they were restricted by extremely expensive hardware, or were not applicable to video acquisition and dynamic imaging scenarios. Besides, some researchers proposed to employ multi-image fusion techniques [15], [16] to improve the visual quality of the scene. However, acquiring multiple versions of one scene in underwater imaging is an extremely difficult operation and is impractical for common users.
VOLUME 8, 2020. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
As research developed, investigators paid more attention to single-image dehazing methods that need no additional hardware devices or complex operations. Underwater single-image dehazing techniques can be roughly categorized into three branches: underwater image enhancement methods, underwater image restoration methods, and data-driven methods [17]. Underwater image restoration methods always require additional prior knowledge to reconstruct the degraded image, while the recently proposed data-driven methods generally have high requirements on hardware and training datasets. These disadvantages often make such methods inapplicable in general underwater imaging scenarios.
In order to propose an applicable enhancement method to improve the visual quality of underwater images, we adopt a 'Two-Step' strategy, which includes an improved whitebalancing approach and an artificial multiple underexposure image fusion strategy.
The purpose of the first step in this strategy is to remove the color distortion from underwater images. In view of the underwater long-wavelength light attenuation phenomenon, we pay attention only to the following five channel-compensation approaches: compensation of the red channel from the green channel, red from blue, red from green and blue, red and blue from green, and red and green from blue. In addition, the Underwater Color Image Quality Evaluation (UCIQE) [18] and the Underwater Image Quality Measure (UIQM) [19] are two quantitative evaluation indicators specifically designed for evaluating the quality of underwater color images. According to the analysis of Hou et al. [20], they ranked 2nd and 3rd, respectively, among related no-reference quantitative indicators in the computation of the Pearson linear correlation coefficient (PLCC) and the Spearman rank-order correlation coefficient (SROCC). Therefore, we use both UCIQE and UIQM to evaluate the five channel-compensation approaches and choose the approach with the largest sum of the two evaluation indicators. The Gray World assumption [21] is then combined with the optimal channel-compensation approach to generate the improved white-balanced image.
The second step of our strategy aims at enhancing contrast and recovering details. Different exposures of the same scene reveal details in areas of different brightness: in a short-exposure image, details in the bright areas are well preserved while details in the dark areas are almost lost, and the opposite holds in a long-exposure image. Scholars therefore proposed fusion strategies over multi-exposure versions that express details well in both dark and bright areas [22]-[24]. Unfortunately, acquiring different exposure versions of one scene by adjusting the shutter speed is difficult in an underwater environment. As a consequence, Galdran et al. [25] proposed the Artificial Multi-exposure Fusion Strategy (AMEF): they utilized the gamma-correction operation to generate multi-exposure versions from the input image, and then blended the multi-exposure versions with two weights, 'contrast' and 'saturation', into the final result through the multi-scale Laplacian fusion scheme. Besides, Zhu et al. [26] recently proposed another artificial multi-exposure image fusion strategy. They also used the gamma-correction operation to obtain multi-exposure versions from the input.
Then they constructed the weight maps by computing both global and local exposedness to guide the fusion process. However, both strategies were specifically designed for defogging atmospheric images, and their results still suffered from severe color distortion. In addition, we also observed that their fusion results were not optimally exposed, which decreased the local contrast in dark areas.
Therefore, we first draw on their use of the gamma-correction operation to generate multiple underexposure versions from a single underwater image. Then, we replace the weight maps for the multi-scale fusion process in order to enhance underwater images with better visual quality. Fig. 1 shows the enhanced results obtained from AMEF [25], Zhu et al. [26], and our strategy.
The main contributions of this paper are summarized as follows:
1) A novel strategy for improving the visual quality of a single underwater image is proposed, which includes an improved white-balancing approach and an artificial multiple underexposure image fusion strategy. In terms of objective and subjective evaluations, the proposed strategy produces results that are superior to some state-of-the-art techniques.
2) No-reference quantitative assessments are applied to select the optimal white-balancing approach for underwater images for the first time. Through our validation, the improved white-balancing method performs better than some existing white-balancing methods in removing color distortion and enhancing contrast.
3) The replaced weights are blended into the popular multi-scale fusion scheme to enhance underwater images with better visual quality than some existing multi-exposure fusion approaches.
4) The proposed 'Two-Step' strategy is suitable not only for dehazing underwater images but also for dehazing some foggy, low-light, and natural images. In addition, the strategy is also applied to increase the number of matched pairs in local feature-point matching.
The rest of the paper is structured as follows. In Section II, we briefly survey light propagation in underwater environments and the mainstream approaches for dehazing a single underwater image. Section III introduces the proposed strategy. The experimental results are discussed in Section IV. Conclusions and future work are summarized in Section V.

II. RELATED WORKS
In this section, we briefly survey the underwater imaging model and the mainstream approaches for dehazing underwater single image.

A. UNDERWATER IMAGING MODEL
The main difference between underwater images and regular images is that underwater images always suffer from the effects of light scattering and absorption. Scattering causes detail loss, while the absorption process results in color distortion and contrast reduction [5]. The absorption process is also closely related to the wavelength of light: light with shorter wavelengths can reach greater depths than light with longer wavelengths, which causes underwater images to generally appear in a typical bluish or greenish tone. The selective attenuation characteristic of water is shown in Fig. 2, where red light degrades seriously after 5-6 m, followed by orange, yellow, green, and blue light. Besides, the model of light propagation in an underwater environment is influenced not only by the characteristics of the imaging target and the controlled light source, but also by many uncertainties such as the incidence angle of sunlight, surge conditions, diving location, submerged depth, and even the type and concentration of phytoplankton.
McGlamery [27] and Jaffe [7] initially proposed a famous underwater imaging model in which the underwater imaging process is represented as the linear superposition of three main components: the direct component, the forward-scattering component, and the back-scattering component. The direct component E_d denotes the light energy directly reflected from the target object into the camera. The forward-scattering component E_f denotes the light scattered by floating particles that still reaches the image plane. And the back-scattering component E_b denotes the light coming from the surroundings and reflected by floating particles before reaching the camera. When taking photos underwater, the camera is usually quite close to the target, which means the forward-scattering component E_f can be ignored in most of the computational process. Consequently, the underwater imaging model can be expressed in simplified form as
E_T(x) = E_d(x) + E_b(x),    (1)
where E_T(x) denotes the total light energy received by the camera at pixel location x.

B. UNDERWATER SINGLE IMAGE DEHAZING METHODS
The mainstream underwater single-image dehazing methods can be roughly divided into three branches: restoration methods based on prior knowledge, data-driven methods based on deep-learning techniques, and enhancement methods based on spatial/frequency domain transformations or fusion strategies.
The most representative technique of the underwater single-image restoration branch is the Dark Channel Prior (DCP) method proposed by He et al. [28], which was initially utilized to dehaze fogged images. DCP assumes that the radiance of an image has very low intensity in at least one color channel, and consequently defines regions of small transmission as those with a large minimal color value. It performs effectively in defogging atmospheric images but not well in dehazing underwater images. Several algorithms inspired by DCP were then proposed for underwater images. Chiang et al. [29] proposed to blend the traditional DCP with a color-compensation method for the purpose of improving the visibility of underwater images. Although the approach could compensate the wavelength attenuation and enhance contrast, the quality of restored underwater images decreases dramatically when the inputs are acquired in turbid conditions. Galdran et al. [30] proposed the Red Channel Prior (RCP) method to remove undesired color cast and enhance contrast, but it requires a lot of additional prior knowledge. The well-known Underwater Dark Channel Prior (UDCP) was proposed by [31], which adapts the traditional DCP method to underwater images and obtains a better transmission estimation. Unfortunately, it fails in some more severely color-distorted scenes. Li et al. [32] proposed an effective enhancement method based on a histogram distribution prior and minimum information loss, which could enhance the contrast and brightness of underwater images. Recently, Yang et al. [33] proposed a reflection-decomposition-based transmission map estimation method to reconstruct the underwater image. These methods indeed improved the visual quality of underwater images. However, this branch of methods generally requires a lot of additional prior knowledge, which is hard to acquire for the majority of common users.
In recent years, more and more impressive achievements in image segmentation [34], super resolution [35], and object detection [36] have been made with deep-learning techniques. In terms of dehazing underwater images, deep-learning based methods have also made many contributions [37]-[40]. Wang et al. [39] proposed a popular Convolutional Neural Network (CNN), which enhances the brightness and contrast of the input but easily over-compensates the red channel of underwater images. Li et al. [40] proposed an underwater image enhancement convolutional neural network model based on an underwater scene prior (UWCNN), which improves the visibility of underwater images but has quite high requirements for training data. In conclusion, deep-learning based methods usually have complicated network structures and require long training times, and their enhancement quality depends entirely on the quality of the training set, which is difficult to construct.
Underwater image enhancement methods based on spatial/frequency domain transformations or fusion strategies enhance underwater images with higher contrast, richer details, and better visual perception [5], [9], [41], [42]. Well-known methods such as Histogram Equalization (HE) [43], Contrast Limited Adaptive Histogram Equalization (CLAHE) [44], and Generalized Unsharp Masking (GUM) [45] are generally regarded as classical contrast-enhancement methods, but they often fail in dehazing underwater images. Later, more and more researchers paid attention to fusion-based methods. Fusion-based methods improve the visual quality of degraded images mainly through correcting color, recovering details, and enhancing contrast. They generally follow Laplacian-pyramid and Gaussian-pyramid algorithms and have contributed much to dehazing underwater images. The methods proposed by Ancuti et al. [46] and Ancuti et al. [47] are the most influential ones. In [46], the authors proposed to reconstruct the underwater image by blending a color-corrected version and a contrast-enhanced version, with four designed weights, in the multi-scale fusion scheme. But the enhanced results do not perform well when the inputs are influenced by artificial light. The method in [47] fuses a gamma-corrected version and a sharpened version, both derived from their white-balanced image, through a multi-scale fusion strategy. It performs better in improving image quality than [46], but their single-channel-compensated white-balancing method often fails to remove the color distortion. Even so, both strategies inspired numerous researchers to come up with new ideas, including our 'Two-Step' strategy.
In this paper, we propose to adopt an improved white-balancing method to remove the undesired color distortion as a pre-process, and then to utilize a multiscale fusion strategy which fuses five underexposure versions from our white-balanced image and three weights to reconstruct the enhanced image.
The flowchart of our strategy is shown in Fig. 3, and our strategy is introduced in detail in the subsequent section.

III. PROPOSED METHOD
Our underwater single-image enhancement approach adopts a two-step strategy consisting of an improved white-balancing method and an artificial multiple underexposure fusion strategy. In our white-balancing method, the optimal color-compensation approach is determined by the sum of two well-recognized objective evaluation indicators, UCIQE and UIQM. We obtain an optimal white-balanced version by combining the well-known Gray World assumption with the optimal color-compensation approach. In our artificial multiple underexposure image fusion strategy, the gamma-correction operation is first used to generate five underexposure versions from the white-balanced input. Then we propose to use 'contrast', 'saturation', and 'well-exposedness' as three weights to be blended into the well-known multi-scale fusion scheme.

A. UNDERWATER WHITE-BALANCING METHOD
Owing to the characteristics of light propagation in underwater environments, the received color is influenced by the depth of the water. Scholars have therefore proposed white-balancing methods aimed at removing the undesired color cast from degraded images [48].
Nevertheless, white-balanced results obtained from existing methods (Gray Edge [49], Shades of Gray [50], Max RGB [51], Gray World [21], Ancuti et al. [46], and Ancuti et al. [47]) each had limitations, as shown in Fig. 4. As can be observed, the white-balancing algorithm proposed by Ancuti et al. [47] removed color distortion better than the others. However, their result also suffered from a reduction of global contrast and attenuation of the blue channel, since only a single color channel is compensated from the input.
Due to the low contrast and undersaturation in underwater imaging, we cannot simply decide which channel, besides the red one, should be compensated. Therefore, we need a practical way to choose the optimal channel-compensation approach. We are inspired by the solution proposed by Kumar and Bhandari [52], who assumed that the color channels of a degraded image can be compensated in 12 ways. On account of the underwater long-wavelength light attenuation phenomenon, we concentrate only on the following five channel-compensation approaches: compensation of the red channel from the green channel, red from blue, red from green and blue, red and blue from green, and red and green from blue. The compensation equations at every pixel location x are expressed as follows, under three different conditions:
1) The single red channel is compensated from another single channel.
Compensation of the red channel from the green channel:
I_re(x) = I_r(x) + α(Ī_g − Ī_r)(1 − I_r(x))I_g(x).    (2)
Compensation of the red channel from the blue channel:
I_re(x) = I_r(x) + α(Ī_b − Ī_r)(1 − I_r(x))I_b(x).    (3)
2) The single red channel is compensated from the other two channels.
Compensation of the red channel from the green and blue channels:
I_re(x) = I_r(x) + α(Ī_g − Ī_r)(1 − I_r(x))I_g(x) + α(Ī_b − Ī_r)(1 − I_r(x))I_b(x).    (4)
3) Two channels are compensated from the remaining channel.
Compensation of the red and blue channels from the green channel:
I_re(x) = I_r(x) + α(Ī_g − Ī_r)(1 − I_r(x))I_g(x),
I_be(x) = I_b(x) + α(Ī_g − Ī_b)(1 − I_b(x))I_g(x).    (5)
Compensation of the red and green channels from the blue channel:
I_re(x) = I_r(x) + α(Ī_b − Ī_r)(1 − I_r(x))I_b(x),
I_ge(x) = I_g(x) + α(Ī_b − Ī_g)(1 − I_g(x))I_b(x).    (6)
In (2)-(6), I_re(x), I_ge(x), and I_be(x) represent the intensities of the compensated channels at pixel location x, while I_r(x), I_g(x), and I_b(x) signify the intensities of the original channels at pixel location x; each value lies in the interval [0, 1], having been normalized by the upper limit of the dynamic range. Ī_r, Ī_g, and Ī_b denote the mean values of the red, green, and blue channels, respectively. The constant parameter α varies in [0, 1] [53]. Besides, in order to prevent over-compensation, the channel-compensation approach should only affect regions where the value of that channel is small; in other words, regions with a significant value of the enhanced channel should not be compensated, which is ensured by the factor (1 − I(x)). We then utilize the Gray World assumption to generate five white-balanced versions based on the channel-compensated ones. Fig. 5 shows an example of the five white-balancing results on an underwater image, with the constant parameter α initially set to 1.
Then we utilize UCIQE and UIQM to evaluate the five white-balanced versions. Their formulas are as follows:
UCIQE = c_1 × σ_c + c_2 × con_l + c_3 × μ_s,    (7)
where σ_c, con_l, and μ_s denote the standard deviation of the image chromaticity, the contrast of the image brightness, and the mean value of the image saturation, respectively. The weighting coefficients c_1, c_2, and c_3 are set to 0.4680, 0.2745, and 0.2576, respectively. The higher the value of UCIQE, the better the quality of the underwater image [18].
UIQM = c_1 × UICM + c_2 × UISM + c_3 × UIConM,    (8)
where the weighting coefficients c_1, c_2, and c_3 are generally set to 0.0282, 0.2953, and 3.5753, respectively, and UICM, UISM, and UIConM denote the underwater image colorfulness measure, the underwater image sharpness measure, and the underwater image contrast measure. The higher the value of UIQM, the better the quality of the underwater image [19]. Therefore, the optimal white-balanced version should have the maximum value of UCIQE or UIQM. Moreover, if a white-balanced image attains the optimal value of one quantitative metric, it is very likely to attain the optimal value of the other at the same time, so there are now at most two optimal white-balanced versions. Considering that an equal-probability setting of α is unlikely to generate the best result, we vary the constant parameter α in increments of 0.1 within [0, 1], which yields 20 white-balanced images in total.
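For intuition, the weighted-sum structure of (7) can be sketched in a few lines. Note that this is not the published metric: σ_c and con_l in UCIQE are defined over CIELab chroma and luminance percentiles in [18], so the sketch substitutes simple HSV-style proxies and keeps only the weighted combination:

```python
import numpy as np

def uciqe_like(img, c=(0.4680, 0.2745, 0.2576)):
    # Weighted sum c1*sigma_c + c2*con_l + c3*mu_s with simplified
    # stand-in terms (proxies for the CIELab-based quantities of [18]).
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    lum = 0.299 * r + 0.587 * g + 0.114 * b
    mx, mn = img.max(axis=-1), img.min(axis=-1)
    sigma_c = (mx - mn).std()                                # chroma spread (proxy)
    con_l = np.quantile(lum, 0.99) - np.quantile(lum, 0.01)  # luminance contrast
    mu_s = ((mx - mn) / (mx + 1e-6)).mean()                  # mean saturation
    return c[0] * sigma_c + c[1] * con_l + c[2] * mu_s
```

As expected of any such score, a colorful high-contrast image rates higher than a flat gray one.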
Subsequently, we evaluate the 20 white-balanced images with UCIQE and UIQM, and choose the optimal white-balancing approach and the best value of α through the following algorithm:
m* = argmax_m IQA(m),
IQA(m) = UCIQE(m) / Σ_{m=1}^{20} UCIQE(m) + UIQM(m) / Σ_{m=1}^{20} UIQM(m),    (9)
where m signifies the white-balanced version, varying from 1 to 20. The first term of IQA(m) expresses the UCIQE value of version m as a percentage of the total, and the second term does the same for UIQM; the version m with the highest IQA value determines the optimal white-balancing method and the optimal value of α. Fig. 6 shows a comparison of different white-balanced results for underwater images. It can be observed that our white-balancing method effectively enhances the contrast of the input and generates a more vivid version. Fig. 7 shows a comparison of the transmission estimations based on DCP [28] for the related white-balancing methods. The input images and the white-balanced images obtained from [21], [46], [49]-[51] yield poor transmission estimations. Compared to Ancuti et al. [47], our white-balancing method estimates a more accurate transmission map, especially in details.
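The selection rule in (9) reduces to an argmax over metric scores normalized by their totals; a minimal sketch (the function name is ours):

```python
import numpy as np

def select_optimal(uciqe_scores, uiqm_scores):
    # IQA(m) = UCIQE(m)/sum(UCIQE) + UIQM(m)/sum(UIQM);
    # dividing by the totals puts the two metrics on a comparable
    # "percentage" scale before summing them.
    u = np.asarray(uciqe_scores, dtype=float)
    q = np.asarray(uiqm_scores, dtype=float)
    iqa = u / u.sum() + q / q.sum()
    return int(np.argmax(iqa)), iqa
```

The normalization matters: without it, whichever metric has the larger numeric range would dominate the sum.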
After removing the color distortion, we aim at further enhancing contrast and recovering details. The overview of our artificial multiple underexposure image fusion strategy is shown in Fig. 8.

B. ARTIFICIAL MULTIPLE UNDEREXPOSURE FUSION
In this work, we propose a spatially-varying enhancement method capable of enhancing contrast and recovering details, which does not require estimation of the transmission map or the atmospheric light.

1) ARTIFICIAL MULTIPLE UNDEREXPOSURE VERSIONS
The gamma-correction operation is generally used to increase or reduce the global contrast of an image. We utilize it to adjust the global intensity of an image through a power-function transform:
I'(x) = α I(x)^γ,    (10)
where both α and γ are positive constants. The contrast of a given region Ω of I(x) can be simply defined as
c(Ω) = I_max − I_min,    (11)
where Ω denotes a given region of the image I(x), I_max = max{I(x) | x ∈ Ω}, and I_min = min{I(x) | x ∈ Ω}. As shown in Fig. 9, when γ > 1 the intensities in bright areas are allocated to a wider range after the transformation, while intensities in dark areas are mapped to a compressed interval; when γ < 1 the global intensity shows the opposite behavior. Since I(x) has been normalized by the upper limit of its dynamic range, its values vary in [0, 1], so when γ > 1 the global contrast of the image decreases. Hence, underexposure versions can be obtained by setting different values of γ in (10). Although reducing the exposure decreases the brightness of the image, the fusion result of multiple underexposure versions can recover details well in underwater images, as shown in Fig. 10. We therefore concentrate only on computing the underexposure versions in this paper, i.e., the source versions E_k(x), k = 1, ..., K, generated with increasing values of γ. The optimal value of K then needs to be determined. As can be observed in Fig. 10 and Fig. 11, details in the darker areas of images almost disappear for γ ≥ 3, and details in the brighter areas are already expressed clearly in the version with γ = 5.
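Generating the underexposure stack of (10) and measuring the region contrast of (11) can be sketched as follows (a minimal sketch assuming α = 1 and integer gammas γ = 1, ..., K, consistent with the values discussed above):

```python
import numpy as np

def underexposed_versions(img, K=5):
    # Artificial underexposure by gamma correction I**gamma (alpha = 1).
    # img is assumed normalized to [0, 1]; gamma > 1 darkens the image.
    return [img ** gamma for gamma in range(1, K + 1)]

def region_contrast(region):
    # c(Omega) = I_max - I_min over a region, as in (11).
    return float(region.max() - region.min())
```

For γ > 1 the contrast of bright regions increases while that of dark regions shrinks, which is exactly why the underexposed stack reveals detail in bright areas.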
Moreover, the fusion results obtained from K = 5 and K = 6 appear almost identical in visual perception. Meanwhile, as shown in Fig. 8, our weight maps show the same behavior: when γ ≥ 3, the 'contrast' and 'saturation' weight maps make almost no difference, and when γ = 5 the 'well-exposedness' weight map can hardly be recognized. We therefore set K = 5 in this paper, based on our assumption that versions with γ > 5 provide little additional information.
In consideration of the contrast reduction in the fusion result obtained from underexposure versions, we also adopt the well-known CLAHE approach [25], [54] to recover details and enhance contrast.

2) WEIGHTS OF THE FUSION STRATEGY
According to [56] and [57], a source image can be defined as E_k(x) = (E_k^R, E_k^G, E_k^B) with three color-channel components, so the weights blended into the multi-scale fusion scheme can be calculated per channel. In addition, visual quality mainly depends on the contrast, saturation, and exposure of the target image [55]. We thus design our weights around these three characteristics, which helps generate a fusion result with better visual quality. Fig. 12 shows a comparison of samples obtained from Galdran et al. [25], Zhu et al. [26], and our artificial multiple underexposure image fusion without the white-balancing step. Table 1 shows the corresponding quantitative results obtained with UCIQE and UIQM. As can be observed, the fusion results obtained from our approach perform better in terms of contrast, saturation, and exposure.
The details are as follows.
Contrast weight W_c: higher contrast preserves more details, which makes contrast an essential indicator for evaluating the visual quality of images [58]. The contrast weight W_c^k(x) at each pixel x is measured as the absolute value of the response to a simple Laplacian filter, following [57]:
W_c^k(x) = |E_k(x) * L|,    (12)
where L denotes the Laplacian filter kernel and * denotes convolution.
Saturation weight W_s: saturation is an important factor in the vividness of an image, and high saturation contributes to generating a more vivid version [59]. We calculate the mean value of the R, G, and B channels at each pixel x of the image, and then compute their standard deviation to obtain the saturation weight W_s^k(x):
W_s^k(x) = sqrt( [ (R_k(x) − μ_k(x))^2 + (G_k(x) − μ_k(x))^2 + (B_k(x) − μ_k(x))^2 ] / 3 ),    (13)
where R_k(x), G_k(x), and B_k(x) denote the values of the red, green, and blue color channels at pixel x in the white-balanced image, and μ_k(x) denotes their mean.
Well-exposedness weight W_e: exposure is another determining factor, which decides the amount of information observed by the human visual system. An image optimally exposed at all pixels can preserve more detail and hue information than any under/over-exposed version of the same scene [59]. The well-exposedness weight at each pixel x of the input image is defined as
W_e^k(x) = Π_{c ∈ {R,G,B}} exp( −(E_k^c(x) − β)^2 / (2σ^2) ),    (14)
where the standard deviation σ and the illumination value β are set to 0.25 and 0.5 following [60], and c denotes the corresponding color channel.
The final weight for each underexposure input k is defined by simply multiplying the contrast weight W_c, the saturation weight W_s, and the well-exposedness weight W_e:
W_k(x) = W_c^k(x) × W_s^k(x) × W_e^k(x).    (15)
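The three weight maps of (12)-(15) can be sketched on a single underexposure version as follows, assuming [0, 1] RGB input; the 4-neighbour Laplacian kernel and edge padding are our choices, not prescribed by the paper:

```python
import numpy as np

def contrast_weight(gray):
    # |response to a simple 4-neighbour Laplacian filter|, as in (12).
    p = np.pad(gray, 1, mode='edge')
    lap = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * gray
    return np.abs(lap)

def saturation_weight(img):
    # Per-pixel standard deviation of the R, G, B values, as in (13).
    return img.std(axis=-1)

def well_exposedness_weight(img, beta=0.5, sigma=0.25):
    # Product over channels of a Gaussian centred on beta = 0.5, as in (14).
    return np.prod(np.exp(-((img - beta) ** 2) / (2 * sigma ** 2)), axis=-1)

def fusion_weight(img):
    # Final weight, as in (15): multiplicative combination of the three maps.
    gray = img.mean(axis=-1)
    return contrast_weight(gray) * saturation_weight(img) * well_exposedness_weight(img)
```

Pixels near mid-intensity with strong local variation and rich color thus receive the largest weights, as intended.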

3) MULTI-SCALE FUSION
Image fusion has been extensively investigated by numerous scholars [61]-[64]. At the beginning, researchers used a simple structure called 'naive fusion', defined as
J(x) = Σ_{κ=1}^{K} W̄_κ(x) E_κ(x),    (16)
where K is the number of input versions E_κ(x) and J(x) is the final result. W̄_κ denotes the defined weights, normalized so that Σ_{κ=1}^{K} W̄_κ(x) = 1 in order to keep the intensity of J(x) in range. The result J(x) is obtained directly by multiplying each E_κ(x) by W̄_κ(x) and summing.
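The naive fusion of (16) is just a per-pixel weighted average; a minimal sketch:

```python
import numpy as np

def naive_fusion(versions, weights, eps=1e-12):
    # J(x) = sum_k Wbar_k(x) * E_k(x), with weights normalized per pixel
    # so they sum to 1 and keep J(x) inside the dynamic range.
    W = np.stack(weights)                          # (K, H, W)
    W = W / (W.sum(axis=0, keepdims=True) + eps)   # per-pixel normalization
    E = np.stack(versions)                         # (K, H, W, 3)
    return (W[..., None] * E).sum(axis=0)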
Unfortunately, this simple structure always introduces undesirable halos into the fusion result. To solve this problem, the popular multi-scale fusion scheme proposed by Burt and Adelson [65] was presented; we also utilize it to obtain the final result. The process is briefly described as follows. First, the Laplacian pyramid decomposition and the Gaussian pyramid decomposition are applied to the input images E_κ(x) and the normalized weights W̄_κ(x) respectively, decomposing E_κ(x) and W̄_κ(x) into the same number of levels. Then, the Laplacian pyramid and the Gaussian pyramid are fused at each level l to generate the l-th level of the Laplacian pyramid of the result:
J_l(x) = Σ_{κ=1}^{K} G_l{W̄_κ(x)} L_l{E_κ(x)},    (17)
where G_l and L_l denote the l-th level of the Gaussian pyramid decomposition and the Laplacian pyramid decomposition, respectively, and J_l(x) denotes the l-th level of the fused result. Last, the final result J(x) is obtained by reconstructing the Laplacian pyramid from the bottom level to the top level. Our artificial multiple underexposure image fusion strategy can effectively enhance contrast and recover details, which assists in improving the visual quality of underwater images. The experimental results and analyses are presented in Section IV.

TABLE 4. Average quantitative result of 500 tested images from the UIEB database [20]. The best result is in bold.
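The multi-scale scheme of (17) can be sketched end-to-end for single-channel images (apply per channel for RGB). To stay dependency-free, the sketch replaces the Gaussian blur-and-decimate of Burt and Adelson with 2x2 average pooling and nearest-neighbour upsampling, so it illustrates the structure rather than reproducing the exact filters:

```python
import numpy as np

def _down(a):
    # Halve resolution by 2x2 average pooling (stand-in for blur + decimate).
    h, w = a.shape[0] // 2, a.shape[1] // 2
    return a[:2 * h, :2 * w].reshape(h, 2, w, 2).mean(axis=(1, 3))

def _up(a):
    # Double resolution by nearest-neighbour repetition.
    return a.repeat(2, axis=0).repeat(2, axis=1)

def gaussian_pyramid(a, levels):
    pyr = [a]
    for _ in range(levels - 1):
        a = _down(a)
        pyr.append(a)
    return pyr

def laplacian_pyramid(a, levels):
    # Each band is the detail lost between two Gaussian levels;
    # the last entry keeps the coarsest Gaussian level.
    gp = gaussian_pyramid(a, levels)
    return [gp[l] - _up(gp[l + 1]) for l in range(levels - 1)] + [gp[-1]]

def multiscale_fusion(versions, weights, levels=3, eps=1e-12):
    # J_l = sum_k G_l{Wbar_k} * L_l{E_k}, then collapse coarse-to-fine.
    W = np.stack(weights)
    W = W / (W.sum(axis=0, keepdims=True) + eps)   # normalize per pixel
    fused = None
    for E_k, W_k in zip(versions, W):
        terms = [g * l for g, l in zip(gaussian_pyramid(W_k, levels),
                                       laplacian_pyramid(E_k, levels))]
        fused = terms if fused is None else [f + t for f, t in zip(fused, terms)]
    out = fused[-1]
    for l in range(levels - 2, -1, -1):            # rebuild bottom-up
        out = _up(out) + fused[l]
    return out
```

A handy sanity check: with identical input versions the scheme reconstructs the input exactly, because the per-pixel weights sum to 1 at every pyramid level.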

IV. EXPERIMENTAL RESULTS AND ANALYSIS
In this section, we first demonstrate the improvement of our white-balancing method through a wide range of analyses. Then we compare the proposed strategy with some state-of-the-art dehazing techniques on underwater images in both subjective and objective evaluations. Last, we introduce some extended applications of our strategy.

A. EVALUATION OF IMPROVED UNDERWATER WHITE-BALANCING METHOD
White-balancing methods are usually applied to remove undesired color cast and enhance the contrast of degraded images, and the visual effect of the processed version is the most important subjective evaluation indicator. To verify the effectiveness of our white-balancing method, we first employed images with different degrees of degradation from the RUIE database [66] to compare our white-balancing method with several others (Gray Edge [49], Max RGB [51], Shades of Gray [50], Gray World [21], Ancuti et al. [46], and Ancuti et al. [47]). It can be observed that our white-balancing method performed better than the others, whether in effectively enhancing contrast or in accurately correcting color distortion. To further prove its effectiveness, we adopted images from the TURBID database [67] and the UIEB database [20] to make a quantitative comparison of the related white-balancing approaches. Since both databases provide a well-recognized reference version for each image, we utilized the structural similarity index (SSIM) [68] and the patch-based contrast quality index (PCQI) [69] to evaluate the results. 1) SSIM [68] ranked 1st among full-reference metrics in the computation of the PLCC and the SROCC [20]; the higher the value of SSIM, the better the white-balancing effect. 2) PCQI [69] is a general-purpose image contrast assessment; the higher the value, the better the result.
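For reference, the SSIM comparison can be illustrated with a single-window variant of the index (the published metric [68] averages a locally windowed version over the image; this global sketch only shows the formula's structure):

```python
import numpy as np

def global_ssim(x, y, L=1.0, k1=0.01, k2=0.03):
    # SSIM over one global window:
    # ((2*mx*my + c1)(2*cov + c2)) / ((mx^2 + my^2 + c1)(vx + vy + c2)),
    # where L is the dynamic range of the images.
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score exactly 1, and any distortion of one image relative to the other pushes the score below 1.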
The methods proposed by Ancuti et al. [46] and Ancuti et al. [47] corrected the color distortion more accurately than the classical ones [21], [49]-[51], but our white-balancing method achieved the best visual perception, as shown in Fig. 14. The quantitative results, shown in Table 2, also prove the improvement of our white-balancing method.

B. STRATEGY EVALUATION
Our 'Two-Step' strategy improves the visual quality of underwater images primarily by correcting color distortion, enhancing contrast, and recovering details. We adopted images from the UIEB database and our diving experiments to compare the results obtained from our strategy with other state-of-the-art underwater image dehazing techniques (UDCP [31], Ancuti et al. [46], L2UWE [70], Ancuti et al. [47], another Two-Step approach proposed by Fu et al. [71], and the deep-learning based method UWCNN [40]). Owing to space limitations, we only show the comparison of 10 samples from the UIEB database in Fig. 15. Table 3 provides the associated quantitative results obtained with three well-recognized performance metrics: UCIQE, UIQM, and PCQI.
As shown in Fig. 15, although UDCP [31] effectively improved the visibility of high-quality inputs, it did not work for images with severe color distortion. L2UWE [70] and Ancuti et al. [46] shared the same disadvantage. Besides, L2UWE [70] often introduced serious salt noise, which severely reduced the clarity of the enhanced images. Even though the Two-Step strategy proposed by Fu et al. [71] did remove color distortion, it also caused some undesired shadows in dark areas of the enhanced results. Since the corresponding dataset lacked sufficient training data, UWCNN [40] also performed poorly in our comparison. Ancuti et al. [47] performed better than the previously mentioned methods, whereas images enhanced by our strategy had the best visual quality in removing color distortion, enhancing contrast, and recovering details.
It can be seen in Table 3 that our strategy obtained the highest or second-highest PCQI, UIQM, and UCIQE values compared with the other methods. Besides, we made a quantitative comparison of the average values of the related methods on 500 images from the UIEB database. As shown in Table 4 (the best result is in bold), our strategy obtained the highest average PCQI, UIQM, and UCIQE values compared with the other related methods.
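For reference, the UCIQE score used throughout these tables is a weighted sum of the chroma standard deviation, the luminance contrast, and the mean saturation in CIELab space. A minimal sketch, assuming the Lab channels are already available as flat lists and using the weighting coefficients reported in the original UCIQE paper (the luminance-contrast term is simplified here to the full min-max spread rather than a percentile spread):

```python
def uciqe(L_chan, a_chan, b_chan, c1=0.4680, c2=0.2745, c3=0.2576):
    """UCIQE = c1*sigma_chroma + c2*luminance_contrast + c3*mean_saturation.
    L_chan, a_chan, b_chan: flat lists of CIELab values (L in [0, 100])."""
    n = len(L_chan)
    # Chroma per pixel: distance from the neutral axis in the a-b plane.
    chroma = [(a * a + b * b) ** 0.5 for a, b in zip(a_chan, b_chan)]
    mu_c = sum(chroma) / n
    sigma_c = (sum((c - mu_c) ** 2 for c in chroma) / n) ** 0.5
    # Luminance contrast, normalized to [0, 1] (simplified min-max spread).
    con_l = (max(L_chan) - min(L_chan)) / 100.0
    # Saturation per pixel: chroma relative to luminance.
    mu_s = sum(c / l for c, l in zip(chroma, L_chan) if l > 0) / n
    return c1 * sigma_c + c2 * con_l + c3 * mu_s
```

A completely flat gray image scores zero on all three terms, while a colorful, well-contrasted image scores higher, which matches the metric's intent of rewarding vivid, high-contrast results.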
Besides, we also employed images and video frames from our diving experiments to make qualitative and quantitative comparisons of the competitive underwater dehazing methods. Fig. 16 shows some samples of enhanced images obtained from our strategy and the related approaches.
The corresponding quantitative results are shown in Table 5. It can be seen that our strategy performed better in both the quantitative and qualitative evaluations.
Overall, we conclude that our strategy achieves better perceptual quality, with significant improvements in enhancing contrast, removing color distortion, and recovering details, compared with some state-of-the-art underwater dehazing approaches. Moreover, our approach is also more robust.
However, our experiments also revealed the main limitation of our strategy: the red channel may be over-compensated on images with seriously uneven illumination. We will optimize our white-balancing algorithm to solve this problem in future work.

C. EXTENDED APPLICATIONS
Although the strategy introduced in this paper is specifically designed to improve the visual quality of underwater images, it is also appropriate for enhancing degraded images taken in other situations, such as natural, low-light, and foggy scenes. Fig. 17 and Fig. 18 show some samples of enhanced results obtained from our strategy. Table 6 and Table 7 show the corresponding results obtained with the Average Gradient (AG) [72] and UCIQE. AG is an indicator that measures the clarity of the target image: the higher the value, the better the clarity.
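The AG values in Table 6 and Table 7 follow a common formulation of the Average Gradient: the mean magnitude of the local horizontal and vertical intensity differences. A minimal pure-Python sketch over a 2-D grayscale list (the exact normalization may differ slightly between implementations):

```python
def average_gradient(img):
    """Average Gradient of a 2-D grayscale image (list of rows).
    Larger values indicate sharper edges and better clarity."""
    h, w = len(img), len(img[0])
    total = 0.0
    for i in range(h - 1):
        for j in range(w - 1):
            dx = img[i][j + 1] - img[i][j]   # horizontal difference
            dy = img[i + 1][j] - img[i][j]   # vertical difference
            total += ((dx * dx + dy * dy) / 2.0) ** 0.5
    return total / ((h - 1) * (w - 1))

# A constant image has zero gradient; edges raise the score.
flat = [[100] * 4 for _ in range(4)]
assert average_gradient(flat) == 0.0
```

Because AG rewards strong local differences, a result that recovers fine detail scores higher than its hazy input, which is why it pairs naturally with UCIQE in this comparison.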
It can be observed that the degraded images obtain an obvious improvement through our strategy in correcting color distortion, enhancing contrast, and preserving details. The quantitative evaluation also shows the significant improvement achieved by our strategy.
Besides, we also found that our strategy is suitable for local feature point matching. We employed the SIFT [73] operator to compare the number of matched pairs of keypoints in related images. Fig. 19 shows the matching results for the original images, the images enhanced by Ancuti et al. [47], and the images enhanced by our strategy. Table 8 shows the corresponding numbers. As can be seen, our strategy increased the number of matched pairs for both high-quality and low-quality inputs.
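Counting matched pairs as in Fig. 19 typically relies on Lowe's ratio test: a descriptor in one image is accepted as a match only when its nearest neighbor in the other image is sufficiently closer than the second-nearest. The following is a minimal sketch on toy descriptor lists; real SIFT descriptors would come from a library such as OpenCV, and the 0.75 threshold is the conventional choice rather than the exact value used in our experiments:

```python
def count_matches(desc_a, desc_b, ratio=0.75):
    """Count matched descriptor pairs using Lowe's ratio test.
    desc_a, desc_b: lists of equal-length feature vectors."""
    def dist(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5
    matched = 0
    for d in desc_a:
        ds = sorted(dist(d, e) for e in desc_b)
        # Accept only unambiguous matches: the best candidate must be
        # clearly closer than the runner-up.
        if len(ds) >= 2 and ds[0] < ratio * ds[1]:
            matched += 1
    return matched
```

Sharper, better-contrasted images yield more distinctive descriptors, so more candidates pass the ratio test, which is the mechanism behind the larger pair counts in Table 8.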

V. CONCLUSION
This paper introduced an enhancement approach to clarify single underwater images. The proposed strategy consists of two steps: an improved white-balancing method and an artificial multiple underexposure image fusion strategy. As presented in Section IV, our strategy effectively improves the visual quality of underwater images with different degrees of degradation, and it does not require any dedicated devices or additional information beyond the native single image. Furthermore, we found that our strategy can also enhance images taken in natural, low-light, and foggy situations. Besides, it can increase the number of matched pairs in local feature point matching on underwater images.
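To recap the second step, the artificial underexposure versions are produced by gamma correction before fusion. A minimal sketch on a normalized grayscale image (the gamma values here are illustrative, not the exact ones used in our experiments):

```python
def underexposed_versions(img, gammas=(1.5, 2.0, 2.5)):
    """Generate artificially under-exposed copies of a normalized
    grayscale image (2-D list with values in [0, 1]) via gamma
    correction: out = in ** gamma, where gamma > 1 darkens the image
    while preserving pure black (0) and pure white (1)."""
    return [[[p ** g for p in row] for row in img] for g in gammas]
```

Each generated version preserves detail in a different exposure range, which is what the subsequent multi-scale fusion with 'contrast', 'saturation', and 'well-exposedness' weights exploits.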
Although our strategy achieves good performance, it still has a limitation: our white-balancing method may over-compensate the red channel on images with severely uneven illumination. We intend to continue our research to address this limitation in future work.