Underwater Image Enhancement Method Based on Dynamic Heterogeneous Feature Fusion Neural Network

In recent years, signal and image processing based on fractional calculus has attracted extensive attention. To address the serious gray-scale loss of existing pseudo-color methods when enhancing high-bit-depth gray-scale images, a pseudo-color enhancement algorithm based on a dynamic heterogeneous feature fusion neural network is proposed, improving the traditional jet, HSV and rainbow color codings. First, bit-depth quantization is performed on the high-bit-depth gray image. Second, color enhancement is realized with the constructed high gray-scale enhancement algorithm. Then, a convolutional neural network with compact feature learning extracts multi-scale image features, and skip connections are used to prevent gradient vanishing and to overcome the fog-like blur of underwater images. Finally, a style cost function learns the correlation among the channels of the color image, improving the color-correction ability of the model and overcoming the color distortion of underwater images. Experimental results show that, compared with traditional image enhancement methods, the proposed method achieves better overall performance in both subjective visual quality and objective indicators, and is advantageous for underwater image enhancement. While improving image brightness, it resolves the color distortion and brightness blocking of the enhanced image and effectively restores texture information. The brightness distribution of the enhanced image closely matches that of the real shooting environment, verifying the robustness of the algorithm.


I. INTRODUCTION
Compared with traditional 2D images, light field images record the direction and position of incident light in a 3D scene and have been widely used in challenging computer vision tasks such as novel view generation [1], reflection removal [2], object detection [3], and 3D reconstruction [4], [5]. However, images captured under low-illumination conditions are prone to low contrast, blurred details and noise, which degrade image quality. The study of low-illumination image enhancement algorithms has therefore long been a hot topic in computer vision.
In recent years, scholars have carried out extensive research on the color enhancement of gray-scale images in different fields. The commonly used methods mainly include density layering, gray-scale color transformation, pixel self-transformation, rainbow coding, metal coding, and pseudo-color enhancement algorithms based on the frequency domain. Among them, Yan et al. improved the gray-scale transformation mapping function to remedy the poor color enhancement of traditional algorithms on low-contrast images, which has good application value [6]. Wang et al. used a pseudo-color algorithm to improve the visibility of microcalcifications on mammograms [7]. Yang et al. provided a method for visually enhancing high-dynamic-range, low-contrast images by optimizing the extraction of the equal-contrast color space developed by mtuci [8]. Lu et al. improved the detection accuracy of COVID-19 pneumonia by constructing pseudo-color images [9]. Based on improved pseudo-color image enhancement, Zhang et al. studied rough point-cloud images and crack-space images; their numerical results show that the method is effective for studying and understanding the fracture characteristics of rocks [10]. Chiang et al. applied HSV, hot and jet pseudo-color processing to cochlear images and realized robust acoustic event recognition through feature extraction from sound signals [11]. Pseudo-color enhancement of gray images has also been applied to medical image fusion and remote sensing image processing. The original palmprint image usually suffers from unclear texture, an indefinite rotation angle, and noise; it is therefore necessary to enhance the original palmprint image acquired by the device and extract the region of interest (ROI) [12].
Existing palmprint preprocessing methods generally suffer from high time cost and strong coupling between processing steps. With the rapid development of neural networks in recent years, they have achieved great success in traffic vehicle detection, gait recognition, license plate recognition and other fields. A neural network is a mathematical model that simulates a biological neural network for information processing; its purpose is to imitate certain mechanisms of the brain to achieve specific functions [13]. It has a highly parallel structure and parallel implementation ability, can fully exploit the high-speed computing power of modern hardware, and can quickly find near-optimal solutions.
Transform-domain image enhancement methods map the image into frequency space and enhance the image by altering its components at different frequencies, mainly using low-pass, high-pass and homomorphic filters. Tian et al. [14] adjusted multi-scale wavelet coefficients using the contrast of visual statistical characteristics to correct the global and local contrast of the image; the color of the image enhanced by this algorithm is more consistent with human visual characteristics. Shahan et al. [15] used the brightness masking and contrast masking gradient characteristics of the HSV model, with a nonlinear contrast mapping coefficient, to enhance image contrast and adjust image brightness in the stationary wavelet transform domain and the dual-tree complex wavelet transform domain. Another typical frequency-domain enhancement algorithm is the Retinex-based algorithm proposed by Land et al. [16]. To make the brightness of the enhanced image more consistent with human vision, the algorithm simulates the perception model of the human visual system, separates the optical signal received by the eye into incident light and reflected light through transform-domain filtering, and improves image quality by attenuating the incident light while enhancing the reflected light that carries the real information of the object. Later, Wang et al. [17] evaluated the natural brightness of the image by computing the order error of image brightness, and balanced the decomposed incident-light and reflected-light images with a double logarithmic transformation; this algorithm effectively enhances the detail information of the image. Fu et al.
[18] proved, by analyzing its characteristics, that the logarithmic transformation is not suitable for direct use as a regularization term, and then used a weighted variational model to estimate the incident-light and reflected-light images in order to enhance image brightness. Algorithms of this kind can enhance image details well, but the enhancement process is complex; enhancement based on deep learning can solve this problem. Based on the above analysis, and aiming at the limitations of existing low-exposure image enhancement methods, this paper proposes a progressive dual-network low-exposure image enhancement model based on Retinex theory (as shown in Figure 1). The network takes the low-exposure image as input, uses convolution kernels of different scales for feature extraction, and learns the illumination map of the Retinex model. The illumination map is then substituted into the Retinex model to compute the brightness-enhanced image. Finally, to address the noise amplification that occurs during enhancement, the enhanced image is passed through an image denoising network to obtain the final result. The innovative work of this paper is as follows: (1) A progressive dual-network low-exposure image enhancement model is proposed. Aiming at the problems of low brightness and noise amplification in low-exposure image enhancement, the whole model applies the progressive idea to the design of image enhancement.
(2) In the two modules of the image enhancement model, the progressive idea is used to construct its network framework, and the image restoration process from coarse to fine is realized to achieve better enhancement results.
(3) Considering the reversibility of the image degradation theory, a bi-directional constraint loss function is proposed for network learning. The loss is calculated along both the forward and the inverse direction of the image degradation model, so that the learned information is more complete.

A. DYNAMIC HETEROGENEOUS FEATURE FUSION NEURAL NETWORK IMAGING MODEL
According to Jaffe's imaging model [19], the underwater image formed at the camera can be regarded as the superposition of three components: the direct attenuation component, the backscattering component and the forward scattering component. The direct attenuation component is the light reflected by the object that reaches the imaging device without being scattered during propagation. Because underwater visibility is low, the distance between the scene and the camera is generally small and the scattering of the reflected light along its path is weak, so the forward scattering component can be ignored. The underwater imaging model under natural light can therefore be expressed as

I_c = D_c + B_c, c ∈ {R, G, B}, (1)

where I_c is the image captured by the camera, D_c is the direct attenuation component, B_c is the backscattering component, and c indexes the three color channels of a color image. Juan et al. [15], [16] found experimentally that, in the underwater imaging model, D_c and B_c are governed by two distinct coefficients β_c^D and β_c^B, respectively. Expanding equation (1) gives the following form [20]:

I_c = J_c e^(−β_c^D z) + B_c^∞ (1 − e^(−β_c^B z)), (2)

where z is the distance between the camera and the object, B_c^∞ is the background light received by the camera in the corresponding scene without water, J_c is the scene radiance, ρ is the reflectance of the scene, E is the ambient light spectrum, and S_c is the spectral response of the camera. Here b and β denote the physical scattering coefficient and the light-wave attenuation coefficient of the water body, both of which are functions of the light wavelength.
In earlier research [21] it was generally assumed that β_c^D = β_c^B, but Yang et al. [22] proved that the two coefficients are unequal and non-constant, and further expressed their correlation as integrals over the visible spectrum, shown as equations (3) and (4) at the bottom of the page, where λ_1 = 400 nm and λ_2 = 700 nm are the integration limits, S_c(λ) and ρ(λ) respectively denote the camera spectral response and the scene reflectance at wavelength λ, and E(d, λ) is the ambient light spectrum at water depth d. The background light in equation (2) can then be expressed as

B_c^∞ = ∫ from λ_1 to λ_2 of S_c(λ) b(λ) E(d, λ) / β(λ) dλ, (5)

where b(λ) and β(λ) denote the physical scattering coefficient and the light-wave attenuation coefficient of the water body at wavelength λ.
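As a rough illustration, the two-component model of equations (1)-(2) can be simulated directly: the function below synthesizes a degraded underwater image from a clean scene. All array shapes, parameter names and values are illustrative assumptions, not the paper's code.

```python
import numpy as np

def underwater_image(J, B_inf, beta_D, beta_B, z):
    """Compose I_c = J_c * exp(-beta_D * z) + B_inf * (1 - exp(-beta_B * z)).
    J: clean scene radiance (H, W, 3); B_inf: veiling light per channel (3,);
    beta_D / beta_B: per-channel attenuation and backscatter coefficients (3,);
    z: camera-to-scene distance map (H, W)."""
    z = z[..., None]                                    # broadcast depth over channels
    direct = J * np.exp(-beta_D * z)                    # D_c: unscattered reflected light
    backscatter = B_inf * (1.0 - np.exp(-beta_B * z))   # B_c: scattered ambient light
    return direct + backscatter
```

At z = 0 the model returns the clean scene, and as z grows the image converges to the background light B_inf, matching the qualitative behavior described above.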

B. SUBGRAPH EVALUATION OF BRIGHTNESS ENHANCEMENT
The exposure fusion framework can enhance image brightness region by region [23]. Its principle is to use an exposure mapping formula to divide the image into two parts, foreground and background, and to enhance the brightness of the two parts to different degrees. The multi-threshold block enhancement algorithm improves on the exposure fusion framework. Since an overexposed area contains only a small amount of image information, enhancing its brightness does not noticeably improve image quality, while enhancing the brightness of a normally exposed area causes overexposure. Therefore, in the brightness enhancement phase the algorithm only enhances the brightness of underexposed regions, focusing on those rich in detail. In the exposure mapping formula proposed by the exposure fusion framework (equation (6)), W represents the brightness weight matrix of the image, Y represents the scene illumination image, and µ represents the exposure intensity. Because Y is fixed and the brightness of the image is proportional to W, the image brightness is also proportional to the exposure intensity µ; since µ can be modified artificially, a suitable µ operator can be designed to simplify the exposure fusion framework. The input image falls into one of two types: images containing a normally exposed area, and images containing only underexposed areas. If there is no normally exposed area in the input image, the algorithm by default adjusts the brightness of the underexposed subgraphs numbered 0 to N toward the average gray value of standard exposure.

Fine lines and fine physical features in low-illumination images are difficult to recognize, so the algorithm focuses on enhancing the brightness of subgraphs rich in detail information [24]: the brightness enhancement applied to a subgraph region is proportional to the richness of the detail it contains, which improves the identifiability of these small details. Therefore, before improving the brightness of the image, the complexity of each subgraph must be evaluated and used as the criterion for how much its brightness is increased. To evaluate the complexity of each subgraph from both local and global information, a convolutional neural network for complexity evaluation is designed with reference to the classical LeNet-5 network [25]. Two kinds of parameters need to be set in the network: one kind is the fixed structural hyperparameters; the other is the values that change during training, such as the elements of the convolution kernels, which can be initialized randomly before the first convolution operation.
The neural network continually corrects these parameters according to the error computed by the loss function during training. The complexity evaluation network consists of three convolution layers, two pooling layers and one fully connected layer, as shown in Figure 2. The framework of the network is as follows:
1) The input is regularized to 32 × 32 and convolved with 6 kernels of size 5 × 5 (stride 1, padding 0), with ReLU as the activation function, producing 6 feature maps of size 28 × 28.
2) A 2 × 2 max pooling unit with stride 2 and padding 0 downsamples the feature maps from the previous layer.
3) To extract both local and global information from each subgraph, two 5 × 5 convolution kernels are applied in turn to the 14 × 14 feature maps (stride 1, padding 0); their results are added together with a bias, and a sigmoid activation yields 16 feature maps of size 10 × 10.
4) A 2 × 2 max pooling unit with stride 2 and padding 0 downsamples the 16 feature maps of size 10 × 10, yielding 16 maps of size 5 × 5.
5) A fully connected layer converts the 5 × 5 × 16 vector obtained by convolution into a scalar C, which serves as the complexity of the subgraph.
The pooling operations reduce the dimension of the collected feature information, so that the network does not mistake image noise for image content when collecting information, which improves the accuracy of the complexity judgment for each sub-image.
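The five steps above can be sketched in PyTorch roughly as follows. The channel counts (6 and 16) are the classic LeNet-5 values, and the two parallel 5 × 5 convolutions of step 3 are collapsed into a single convolution here, so this is a simplified sketch rather than the authors' exact network.

```python
import torch
import torch.nn as nn

class ComplexityNet(nn.Module):
    """Subgraph complexity evaluator traced above:
    32x32 -> 6@28x28 -> pool -> 6@14x14 -> 16@10x10 (sigmoid) -> pool -> 16@5x5 -> scalar C."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),    # step 1: six 5x5 kernels, stride 1, padding 0
            nn.ReLU(),
            nn.MaxPool2d(2, stride=2),         # step 2: 2x2 max pooling -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),   # step 3 (simplified): 5x5 convolution -> 10x10
            nn.Sigmoid(),                      # sigmoid activation, as stated in the text
            nn.MaxPool2d(2, stride=2),         # step 4: 2x2 max pooling -> 16 maps of 5x5
        )
        self.fc = nn.Linear(16 * 5 * 5, 1)     # step 5: 5x5x16 vector -> complexity scalar C

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))
```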
To preserve the original features of the image as much as possible while enhancing its brightness, a subgraph brightness enhancement formula is designed (equation (7)), where L_{x,j}^⊕ represents the brightness (gray value) of pixel x after the brightness of subgraph j is increased, L_{x,j} represents its brightness before enhancement (variables marked with ⊕ denote values after enhancement), and C_j is the computed complexity of subgraph j. Equation (7) improves the overall brightness of the image by linearly stretching the brightness histogram of each subgraph. Because converting the input image into a histogram discards its spatial information, the complexity C, which summarizes the spatial information of each subgraph, is introduced when processing the image histogram. From equation (7), the brightness of each subgraph after enhancement is proportional to its complexity C.
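Since the exact form of equation (7) is not recoverable from the text, the following is only a hypothetical sketch of a complexity-weighted linear brightness stretch consistent with the description (enhancement proportional to C_j); the function name, the normalization by C_max and the target mean are all assumptions.

```python
import numpy as np

def enhance_subgraph(L, C, C_max, target=128.0):
    """Hypothetical reading of Eq. (7): linearly stretch the brightness of one
    subgraph toward a target mean, with the gain scaled by its spatial
    complexity C (normalized by the largest complexity C_max among subgraphs).
    L: brightness (gray values) of the subgraph; returns the enhanced brightness."""
    gain = (C / C_max) * (target / max(L.mean(), 1e-6))
    return np.clip(L * max(gain, 1.0), 0, 255)  # only brighten, never darken
```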

C. DYNAMIC HETEROGENEOUS FEATURE FUSION NEURAL NETWORK
To better restore the details of low-illumination images while suppressing noise and color distortion, this paper proposes ARD-GAN, a method based on attention, RDN and GAN structures. The method consists of a generator network and a discriminator network. The generator includes a global illumination estimation module (GIEM), a convolution residual module (CRM) and a residual dense module based on channel attention, which guide the subsequent modules to enhance illumination and to extract shallow and deep features, as shown in Figure 3(a). DnCNN is used as the discriminator to further improve noise suppression, and it also performs noise reduction. The structure of GIEM is shown in Figure 3(b). The global exposure attention map adaptively weights each position according to its exposure intensity. Since the noise in low-illumination images is mostly distributed in darker areas, the global exposure attention map not only guides the subsequent modules to enhance illumination but also suppresses noise. GIEM adopts a spatial-attention skip connection (SAS) based on the spatial attention mechanism to highlight the key information of the encoding stage, compensate for information loss and accurately locate underexposed areas, as shown in Figure 3(d). First, the low-illumination image x is fed into F_BEM(x) to obtain the illumination distribution map x_bem, where the network F_BEM adopts skip connections based on the spatial attention mechanism. Second, the illumination distribution map x_bem and the low-illumination input x are fed into the mask branch to compute the global exposure attention map x_mask.
The channel attention mechanism models the dependency between the channels of x_mask, improving the representation of important channels and suppressing irrelevant features. As shown in equation (8), max(·) returns the maximum of the three color channels. The CRM extracts the shallow features of the input image; it consists of three convolution residual blocks (CRBs) and one shallow feature fusion block, and each CRB contains a 1 × 1 convolution. First, since the ReLU activation maps all non-positive inputs to 0 irrecoverably, it is placed at the front of the CRB to compress the input features. Next, a 1 × 1 convolution encodes information between channels. Then an average pooling layer reduces the estimate-variance problem caused by the limited neighborhood size. A channel expansion coefficient ρ [26] is introduced into the convolution layers to expand the number of channels; as shown in equation (9), conv(·) represents the convolution operation and ρ denotes the channel expansion coefficient.
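A minimal sketch of one CRB as described above (ReLU first, 1 × 1 channel-encoding convolution with expansion coefficient ρ, then average pooling); the pooling size and stride are assumptions not specified in the text.

```python
import torch
import torch.nn as nn

class CRB(nn.Module):
    """Convolution residual block sketch: ReLU placed first to compress
    non-positive inputs, a 1x1 convolution that encodes cross-channel
    information and expands the channel count by rho (Eq. (9)), and an
    average pooling layer to reduce neighborhood-limited variance."""
    def __init__(self, channels, rho=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.ReLU(),                                          # activation first, per the text
            nn.Conv2d(channels, rho * channels, kernel_size=1), # 1x1 encoding + rho expansion
            nn.AvgPool2d(kernel_size=2, stride=2),              # average pooling (assumed 2x2)
        )

    def forward(self, x):
        return self.body(x)
```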

III. NETWORK IMPROVEMENT
A. PROGRESSIVE IMAGE ENHANCEMENT METHOD
In order to solve the problems of detail loss and noise amplification after image enhancement in existing methods, this paper proposes a progressive dual network low exposure image enhancement model, which includes two sub networks: image brightness enhancement module and image denoising module. The overall framework is shown in Figure 1. We combine network design with the physical model of image enhancement. Firstly, we use the progressive idea to enhance the brightness of the low exposure image, and then add the image denoising process to the brightness enhancement results to solve the problem of noise enhancement in the enhanced image. Considering that the brightness enhancement in the real scene is a process from dark to light, this paper proposes a progressive image enhancement module, which improves the brightness and color of the low exposure image twice from coarse to fine to complete the process of image brightness enhancement. The implementation process of the network module is described as follows.
As shown in the image enhancement module (dotted box at the top of the figure), the input of the module is a low-exposure image, the output of the two sub-networks in the module is the illumination map L, and both input and output contain red, green and blue channels. The module comprises two steps: initial brightness reconstruction and brightness refinement. The framework of the first step consists of six convolution layers. The first two layers use strided convolution to downsample the feature maps, which achieves the effect of downsampling while avoiding the information loss that ordinary downsampling causes. By reducing the size of the feature maps, downsampling extends the receptive field of the convolution kernels, so that a 3 × 3 kernel can, after two downsamplings, learn information within a 7 × 7 range. In addition, to ensure that the final illumination map has the same size as the input image, a two-step deconvolution is used in the network to restore the size, which also facilitates gradient-descent computation. The network framework for brightness refinement in the second step is similar to the first; its purpose is to refine the enhanced image. This part of the network has only four convolution layers, reducing the number of training parameters while improving performance; experiments show that this four-layer refinement works better than directly using six-layer convolution. To make better use of the information in the original image, the module first concatenates the preliminary enhancement result with the original image, then applies two convolution and deconvolution operations to complete the enhancement. Each of these convolution layers contains two types of parameters: weights and biases.
The calculation is F = ω * x + b, where F is the feature map obtained after convolution, ω and b are the weight and bias respectively, x is the input, and the symbol * represents the convolution operation. In the whole framework, each convolution layer is followed by a ReLU activation function, defined as R(x) = max(0, F(x)), where F(x) is the result of the convolution and R(x) is the output of the ReLU. The purpose of the ReLU activation is to retain effective information and discard invalid information to speed up training. In addition to insufficient illumination, low-exposure images are affected by various noises in the imaging process. At low brightness the noise is hard to notice because of the low contrast, but once the low-exposure image is enhanced, the noise is amplified as well. Therefore, to remove noise, this paper adds a noise removal module after the image enhancement module. Its design follows the same idea as the enhancement module: progressive learning is used to process the noisy image twice, from coarse to fine. The module learns the noise components of the image, and since the noise is usually modeled as additive, a subtraction at the back end of the two sub-blocks removes the learned noise components from the enhanced image to obtain the denoised result.
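The per-layer computation above (F = ω * x + b followed by R = max(0, F)) can be sketched as a single-channel example; as in CNN frameworks, cross-correlation is used rather than kernel-flipped convolution. This is purely illustrative, not the paper's implementation.

```python
import numpy as np

def conv2d_valid(x, w, b):
    """F = w * x + b: single-channel 'valid' cross-correlation of input x
    with kernel w plus scalar bias b, as in Eq. (10) of the text."""
    kh, kw = w.shape
    H, W = x.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w) + b
    return out

def relu(F):
    """R(x) = max(0, F(x)): keep positive responses, zero the rest."""
    return np.maximum(F, 0.0)
```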
Traditional image enhancement methods apply soft thresholding and quantization to the components obtained by wavelet decomposition, which is equivalent to manually designing convolution kernels for a specific task. For the complex and changeable underwater environment, however, such manual methods are too cumbersome; for example, the proportion of the scattering component in the low-frequency information of underwater images can only be estimated statistically or empirically. MWCNN [27] combines the wavelet packet transform with a CNN to decompose the input image: after each level of decomposition, all subband images are fed into a CNN module [28], and the learned compact feature representation serves as the input of the next level of wavelet decomposition. The skip connections introduced into the U-shaped network make full use of the structural information in the encoder, achieving better de-aliasing and reconstruction results. This study proposes an improved underwater image enhancement model based on MWCNN [29], as shown in Figure 4; compared with MWCNN, its structure is simpler. In the encoder, a 256 × 256 pixel RGB image is taken as input, the DWT decomposes the original image level by level, and 3 × 3 convolutions learn compact feature representations of the subband images at each level; the changes in the number and size of feature channels are marked in the figure. Each successive convolution is followed by a ReLU activation layer. After each DWT operation, the number of feature channels increases by a factor of 4 [30], and the CNN does not expand the feature channels except in the first layer. In the decoder, the IWT restores the latent representation to the original input size, from high dimension to low dimension. The model also adopts skip connections, but differently from MWCNN: after each IWT operation, the output tensor is concatenated with the same-scale tensor on the encoder side instead of being added, and the concatenated tensor goes through the same successive convolution and ReLU operations as in the encoder [31].
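The 4× channel growth per DWT level mentioned above can be illustrated with a one-level 2-D Haar transform. This sketch operates on an (H, W, C) array and is not the paper's implementation; MWCNN uses this orthogonal transform precisely because the step is exactly invertible (IWT) on the decoder side.

```python
import numpy as np

def haar_dwt2(x):
    """One level of 2-D Haar DWT on an (H, W, C) tensor: each channel splits
    into LL, LH, HL, HH subbands at half resolution, so the channel count
    grows by a factor of 4 while the spatial size halves."""
    a = x[0::2, 0::2]  # even rows, even cols
    b = x[0::2, 1::2]  # even rows, odd cols
    c = x[1::2, 0::2]  # odd rows, even cols
    d = x[1::2, 1::2]  # odd rows, odd cols
    LL = (a + b + c + d) / 2.0   # low-frequency approximation
    LH = (-a - b + c + d) / 2.0  # vertical detail
    HL = (-a + b - c + d) / 2.0  # horizontal detail
    HH = (a - b - c + d) / 2.0   # diagonal detail
    return np.concatenate([LL, LH, HL, HH], axis=-1)
```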

C. INFLUENCE OF FRACTIONAL CALCULUS OPERATOR ORDER ON SIGNAL
For a square-integrable energy function f(x) ∈ L²(R), its Fourier transform is F(ω) = ∫ f(x) e^(−jωx) dx. For the fractional differential of order v (v ∈ R⁺) of f(x), the Fourier transform properties give D^v f(x) ↔ (jω)^v F(ω), whose amplitude-frequency response is α_v(ω) = |ω|^v. To further analyze the influence of the order on signal analysis in fractional calculus, Figures 5 and 6 plot the amplitude-frequency characteristic curves α_v(ω) = |ω|^v of the fractional differential operator for v ∈ (0, 3) and of the fractional integral operator for v ∈ (−3, 0), where the horizontal axis represents the frequency ω and the vertical axis represents the amplitude α_v(ω). As Figure 5 shows, for the high-frequency part of the signal (|ω| > 1), the fractional differential operator enhances the signal, and the amplitude increases with the order: the high-frequency amplitude of differential operators with order in the intervals (1, 2) and (2, 3) is significantly higher than that of operators with order in (0, 1) [32]. At the same time, for the low-frequency part of the signal (|ω| < 1), the fractional differential operator nonlinearly attenuates the signal, and the attenuation becomes stronger as the order increases. In general, fractional differentiation can more effectively distinguish high-frequency information while protecting low-frequency information during image processing [33], laying a foundation for improving the quality of tasks such as image enhancement, segmentation and recognition. Corresponding conclusions can be drawn from the amplitude-frequency characteristic curve of the fractional integral operator in Figure 6: this operator is much more sensitive to the low-frequency information of the signal than the fractional differential operator, and at the same time it can process high-frequency components nonlinearly and finely, providing an effective tool for applications such as image denoising [34].
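The behavior read off Figures 5 and 6 follows directly from the amplitude response α_v(ω) = |ω|^v, and can be checked numerically:

```python
import numpy as np

def alpha(omega, v):
    """Amplitude-frequency response |omega|**v of the fractional operator:
    v > 0 differentiates (boosts high frequencies), v < 0 integrates
    (boosts low frequencies)."""
    return np.abs(omega) ** v

# High frequencies (|omega| > 1): larger differential order -> larger gain.
assert alpha(4.0, 2.5) > alpha(4.0, 1.5) > alpha(4.0, 0.5) > 1.0
# Low frequencies (|omega| < 1): the differential operator attenuates.
assert alpha(0.25, 0.5) < 1.0
# Integral orders (v < 0) amplify low frequencies instead.
assert alpha(0.25, -1.0) > 1.0
```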

D. DETAIL ENHANCEMENT
The algorithm improves the brightness of the image in the brightness enhancement stage [35], but the result still suffers from high noise and unclear detail. Therefore, in the detail enhancement stage, the guided filtering algorithm is used to filter out image noise and enhance contour information, and the unsharp masking algorithm [36] is introduced to enhance the texture information of the image. Guided filtering constructs the output image from a local linear relationship with the guide image and minimizes the least-squares error with the original image [37], so that the output approaches the original image as closely as possible while enhancing its contour information. When the guide image is identical to the original image, the algorithm performs smoothing of the original image. The guided filtering formula is

q_g = a_k I_g + b_k, for every pixel g ∈ ω_k, (15)

where ω_k is a filter window, g denotes a pixel in the image, q is the output image and I_g is the guide image. Equation (15) shows that q and I have a linear relationship within each window. Equation (16) gives the cost whose minimization determines, through a_k and b_k, the similarity between the output image and the original image:

E(a_k, b_k) = Σ over g ∈ ω_k of ((a_k I_g + b_k − y_g)² + ε a_k²), (16)
where y_g represents the value of pixel g of the input image within the window and I_g the corresponding value of the guide image. The regularization parameter ε is introduced to prevent a_k from becoming too large; when the original image is selected as the guide image, ε can be dropped. From equation (16), appropriate a_k and b_k allow the output image to restore the input image well [38]. Minimizing equation (16) yields

a_k = ((1/|ω|) Σ over g ∈ ω_k of I_g y_g − µ_k ȳ_k) / (σ_k² + ε), (17)
b_k = ȳ_k − a_k µ_k, (18)

where µ_k and σ_k² respectively denote the mean and variance of I within ω_k, |ω| is the number of pixels contained in ω_k, and ȳ_k = (1/|ω|) Σ over g ∈ ω_k of y_g is the mean of y in ω_k. To filter image noise and enhance contour information at the same time, guided filtering is performed twice on the original image. The first pass filters the noise: both the guide image and the image to be filtered are the image v [39]. The second pass enhances the contour information: the guide image is the result of the first pass, and the image to be filtered is the image v. After the two passes, an image C with rich contour information and little noise is obtained [40]. Finally, the unsharp masking algorithm is introduced, using a low-pass filter to obtain the low-frequency information of the original image.
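Equations (15)-(18) can be sketched as a single-channel guided filter built on box (mean) filters; the window radius and ε values below are illustrative defaults, not the paper's settings.

```python
import numpy as np

def box(x, r):
    """Mean over a (2r+1)x(2r+1) window via 2-D cumulative sums (edge-padded)."""
    xp = np.pad(x, r, mode='edge')
    c = np.cumsum(np.cumsum(xp, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))           # zero row/col so window sums are differences
    k = 2 * r + 1
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)

def guided_filter(I, y, r=2, eps=1e-3):
    """Single-channel guided filter, Eqs. (15)-(18):
    per-window a_k = (mean(I*y) - mu_k*ybar_k) / (sigma_k^2 + eps),
    b_k = ybar_k - a_k*mu_k, then q = mean(a)*I + mean(b)."""
    mu, ybar = box(I, r), box(y, r)
    a = (box(I * y, r) - mu * ybar) / (box(I * I, r) - mu * mu + eps)
    b = ybar - a * mu
    return box(a, r) * I + box(b, r)
```

With a constant image the window variance is zero, so a_k collapses to 0 and the filter returns the mean, i.e. the input itself, which is a quick sanity check.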

A. NETWORK TRAINING DETAILS AND QUANTITATIVE ANALYSIS
The model in this paper is implemented with the PyTorch framework and trained on two NVIDIA GeForce RTX 3090 GPUs.
For full-reference image quality assessment, the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) are used to compare the performance of different methods on the L3F dataset. For no-reference image quality assessment, the natural image quality evaluator (NIQE) and the blind/referenceless image spatial quality evaluator (BRISQUE) are used to compare the performance of different methods on the dataset proposed in this paper. The NIQE value evaluates the perceived quality of the image, and the BRISQUE value evaluates its quality in the spatial domain; lower NIQE and BRISQUE values indicate higher image quality. Quantitative analysis is carried out from the central viewpoint, and the results are shown in Table 1.
The experimental results show that, on both datasets, the proposed method (Ours) achieves the best results in both the full-reference and the no-reference image quality evaluations.

B. ILLUMINANCE ENHANCEMENT MODEL EVALUATION
To evaluate the proposed method, tests are carried out on both a synthetic low-illumination image test set and a real low-illumination image test set. The method in this paper is compared with common low-illumination image enhancement methods, including the learning-based GLADNet, MBLLEN, LLCNN and MSRNet, and the non-learning-based LIME, Dehaze and CLAHE.
As can be seen from Figure 7, LLCNN, MBLLEN and MSRNet all show poor noise reduction to varying degrees. The detail view in Figure 2 shows that MSRNet reduces noise poorly, with an overall green cast and obvious artifacts. LLCNN and MBLLEN recover details well, but the illumination is not improved enough. GLADNet reduces noise well, but its output also has an overall green cast. The ARD-GAN proposed in this paper is the closest to the real image in visual effect and achieves the best objective evaluation indices.
To evaluate the low-illumination enhancement effect of ARD-GAN in real scenes, multiple images are selected from the ExDark [16], SICE [17] and TID2013 [18] datasets for comparison of enhancement effects, as shown in Figure 8. LIME performs well in brightness and contrast, but the colors are over-enhanced and the details are not clear enough. Dehaze provides insufficient brightness enhancement, especially in backlit regions with uneven illumination. The enhancement produced by CLAHE is generally dark with poor color restoration, although it performs better under backlit conditions. As can be seen from Fig. 8(b), the traditional methods better preserve the brightness, color and other background information when enhancing backlit scenes. GLADNet performs well in color restoration, but Fig. 8(b) also shows that it produces artifacts when enhancing darker images and fails to preserve the background information of backlit images.
As can be seen from Figure 9, under extremely weak light MSR can significantly raise the brightness of the original image, but the result is over-enhanced overall. STAR produces an unnatural overall appearance with blurred edges and a degree of spurious light-dark contrast, visible on the bright walls and the dark corners near the air conditioner. In contrast, the result of the proposed algorithm (Ours) has moderate brightness and prominent edge details, and is closer to the real scene.

C. ALGORITHM EFFICIENCY
To compare the efficiency of the different algorithms, images of 200 × 200, 300 × 300 and 400 × 400 pixels are used for testing. Each algorithm is run 10 times on images of each size. The average running efficiency of the different algorithms is shown in Figure 10. As can be seen from Figure 10, LIME has the fastest average running time, followed by MSR and MSRCR. The average time consumption of the proposed method is comparable to STAR and WVM, slightly higher than FBE and SRIE, and lower than RRM, LR3M and LFV.
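The timing protocol above (several image sizes, 10 repetitions each, averaged) can be sketched as a small harness. This is an illustrative stand-in, not the paper's benchmark code; the function names and the placeholder "algorithm" are assumptions.

```python
import time
import numpy as np

def benchmark(fn, sizes=(200, 300, 400), repeats=10):
    """Average runtime of an enhancement callable at several square image sizes.
    fn: any callable taking an HxWx3 uint8 image and returning an image."""
    results = {}
    for n in sizes:
        img = np.random.randint(0, 256, (n, n, 3), dtype=np.uint8)
        start = time.perf_counter()
        for _ in range(repeats):
            fn(img)
        results[n] = (time.perf_counter() - start) / repeats  # seconds per call
    return results

# Placeholder "algorithm" (channel flip) just to exercise the harness:
times = benchmark(lambda im: im[:, :, ::-1])
```

Using `time.perf_counter` and averaging over repeats smooths out scheduler jitter, which matters when the per-image times of the fastest methods are only a few milliseconds.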

V. CONCLUSION
To address the problems of low-illumination images, this paper proposes a low-illumination enhancement method based on an attention mechanism, residual dense blocks, and a generative adversarial network. First, the method uses GIEM to generate a global exposure attention map that guides the subsequent modules in enhancing illumination. Second, features at different levels extracted by CRM and CARDM are fused to obtain richer detail information. A generative adversarial network is proposed to transform images into underwater-environment images as realistically as possible; the experiments show that the underwater images synthesized by this model can effectively train an underwater image enhancement model, providing a new way to expand underwater image datasets. Finally, a novel underwater image enhancement model based on MWCNN and a style cost function is proposed; the model fully accounts for the feature loss of deep convolutional networks and offers a new solution to the color distortion of underwater images.