A pulse-number-adjustable MSPCNN and its image enhancement application

Pulse-coupled neural networks (PCNNs) aim to control neuronal firing states automatically and complete related image processing tasks. This paper presents a pulse-number-adjustable MSPCNN model (PNA-MSPCNN) that automatically acquires the firing times and firing frequency of each neuron. In this model, the synaptic weight matrix Wijkl and the decay factor α generate an interaction value that determines the final result of the internal activity U. The dynamic threshold amplitude V, step function Q, and auxiliary parameter P precisely adjust the variation range of the dynamic threshold E. Additionally, we propose a low-light image enhancement method based on the PNA-MSPCNN and a modified low-light image enhancement (LIME) algorithm. The modified LIME focuses mainly on the parameter setting of the weight matrix Wmq, which further improves the contrast of test images. Experimental results demonstrate that, compared to prevalent image enhancement methods, the proposed method achieves better low-light image enhancement performance, including an SSIM of 0.8725, AMBE of 0.0550, MSE of 0.0092, and PSNR of 45.7764.


I. INTRODUCTION
Based on Eckhorn's cortical model [1][2], Johnson et al. proposed a pulse coupled neural network (PCNN) [3] that simulates the internal behaviors of cats' visual cortex neurons and considers the interactions between central and surrounding neurons. The proposed PCNN adopted a pulse generator to simulate the refractory period of cells and was soon recognized as a bio-inspired neural network to study the synchronous pulse activities of related neurons [4][5].
In the past few years, PCNN has been widely applied in image segmentation [6][7], image fusion [8][9], image denoising [10][11], image quantization [12][13], and image enhancement [14][15]. This is because PCNN has several main properties of biological neurons, such as automatic wave, variable thresholding, nonlinear modulation, synchronous pulse release, and capturing behavior. Further, prevalent PCNN algorithms are suitable for image processing [16]. However, existing modified PCNN models cannot reasonably control neuronal firing frequency, and related image processing methods still need to be explored in subsequent research.
The term "image enhancement" refers to the process of enhancing image contrast and visual quality. Observers can see more detailed information in enhanced images than in raw images. Image enhancement methods can be divided into three categories: histogram equalization (HE) methods [17][18][19][20], Retinex-based methods [21][22][23], and deep learning methods [24][25][26]. Although HE has low processing cost, it cannot selectively enhance local image details, which leads to unrealistic results. Based on Retinex theory, a number of methods have been developed, including simultaneous reflectance and illumination estimation (SRIE) [21], a probabilistic method for image enhancement (PIE) [22], and low-light image enhancement (LIME) [23]. Recent contributions based on Retinex theory include a hybrid L2-LP variational model [27] and several improved models based on the low-rank regularized Retinex model [28][29][30]. Other works focus purely on luminance correction; for example, reference [31] proposed a novel method that uses the dark channel prior (DCP) and luminance stretching to dehaze images.
The fast development of deep learning-based theories has exhibited their great potential in computer vision tasks. In recent years, popular image enhancement methods based on deep learning [25][32][33][34][35][36][37][38] have obtained state-of-the-art performance. Low-light image enhancement methods also rely on unsupervised learning [39]. Recently, the attention mechanism has brought a new trend and has been applied to image enhancement [40].
The final result of image enhancement depends on two key factors. The first factor is the input image of the proposed method, which often suffers from low contrast and low definition. The second factor is the computational complexity of the proposed method. Based on this analysis, we propose an image enhancement method combining our PNA-MSPCNN and a modified LIME.
In this work, the proposed PNA-MSPCNN exhibits several important neuron features, including firing synchronization and capture behavior. The PNA-MSPCNN can reduce the pixel intensity differences between neighboring neurons and produce an input image for the modified LIME that has higher contrast and satisfies the requirements of the human visual system. Moreover, the modified LIME provides a modified weight matrix Wmq with lower computational complexity. These properties ensure that our image enhancement method performs better than other competitive methods.
For the proposed image enhancement method, we design a PNA-MSPCNN model derived from the MSPCNN to control the firing times and firing frequency of the neurons. This model eliminates the linking strength β, highlights the synaptic weight matrix Wijkl, and can effectively control the pulse burst number of any fired neuron. We also adopt a step function Q and an auxiliary parameter P to control the variations of the dynamic threshold at each iteration. The PNA-MSPCNN can adjust the pulse numbers accumulated under neuron stimulation and control the neuronal firing results more effectively. This provides a better input image with smaller pixel differences, in line with the human visual system, for subsequent image enhancement; unimportant image feature information is also filtered out in this step. The iteration results of the PNA-MSPCNN are used as the initial illumination map of the proposed image enhancement method. In addition, we propose a modified LIME method that optimizes the enhancement steps using a modified weight matrix Wmq.
The contributions of this paper are threefold: (1) We propose a PNA-MSPCNN model with an automatic parameter setting method, achieving precise control of the firing times and firing frequency of the corresponding neurons. (2) Based on the traditional LIME, we design a modified weight matrix Wmq to further improve image contrast and reduce computational complexity. (3) We combine the PNA-MSPCNN and the modified LIME into a novel image enhancement method, which achieves better image enhancement performance than state-of-the-art methods.
The rest of this paper is organized as follows: Section 2 summarizes related work on image enhancement. Section 3 reviews related PCNN concepts and introduces the proposed PNA-MSPCNN. Section 4 elaborates on the parameter setting method of the PNA-MSPCNN. Section 5 describes the related properties of the PNA-MSPCNN. Section 6 deduces an image enhancement method based on the PNA-MSPCNN and the modified LIME. Section 7 conducts experiments and related analysis, and Section 8 concludes the paper.

II. RELATED WORK
Image enhancement can effectively improve raw image brightness and allow observers to see more low-light details that may not be immediately observed in raw images. Most of the existing image enhancement methods can be divided into three categories: traditional methods such as histogram equalization (HE) and Gamma correction, Retinex theory, and deep learning.
HE and its modified models try to improve image contrast. They have been widely used in the image processing field because of their low computational complexity. Subcategories of histogram equalization include global histogram equalization (GHE) [17], local histogram equalization (LHE) [18], and adaptive histogram equalization (AHE) [19]. Based on LHE and GHE, Chien et al. presented a hybrid histogram equalization method [41], which can enhance regional details effectively. However, most modified HE algorithms easily distort visual information in processed images and tend to increase the contrast of background noise along with a certain number of useless signals. Hum et al. proposed an HE framework named multipurpose beta optimized bi-HE (MBOBHE) [42]. Their work further improves the global HE algorithm in three respects: brightness preservation, detail preservation, and contrast enhancement. However, when more related properties are acquired, the computational cost of MBOBHE becomes rather high.
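As a minimal illustration of the GHE baseline discussed here (not any of the cited variants), the classic cumulative-histogram mapping for an 8-bit grayscale image can be sketched as follows; the function name is ours, for illustration only:

```python
import numpy as np

def global_hist_eq(img):
    """Global histogram equalization (GHE) for an 8-bit grayscale image.

    Maps each gray level through the normalized cumulative histogram,
    which spreads the intensity distribution over the full [0, 255] range.
    """
    img = np.asarray(img, dtype=np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)  # per-level pixel counts
    cdf = hist.cumsum() / img.size                  # normalized CDF in [0, 1]
    lut = np.round(255.0 * cdf).astype(np.uint8)    # gray-level lookup table
    return lut[img]
```

Because the mapping is global, a low-contrast image is stretched toward the full dynamic range, but, as noted above, local details and background noise are treated identically.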
Gamma correction has been widely used in image enhancement and dehazing. Zheng et al. [43] proposed a new image dehazing solution based on gamma correction. The method uses gamma correction and spatial linear adjustment to yield a set of underexposed image sequences. Similarly, Zhu et al. [44] proposed an image enhancement method that analyzes the local and global exposedness of gamma-corrected images, which guides the subsequent image fusion step.
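For reference, the base transform that these multi-exposure methods build on is plain gamma correction of a normalized image; the sketch below shows only that base operation, not the fusion pipelines of [43] or [44]:

```python
import numpy as np

def gamma_correct(img, gamma):
    """Classic gamma correction on an image normalized to [0, 1].

    gamma < 1 brightens dark regions; gamma > 1 darkens them.
    """
    img = np.clip(np.asarray(img, dtype=np.float64), 0.0, 1.0)
    return img ** gamma
```

Applying several gamma values to one input yields the kind of synthetic exposure sequence the cited fusion-based methods operate on.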
Retinex theory holds that the observed values of an image consist of the illumination values determined by the illumination sources and the reflectance values formed by image objects. A desirable image enhancement method can therefore estimate the illumination and reflectance and manipulate the illumination map. Fu et al. adopted an alternating minimization scheme based on a weighted variational model to estimate reflectance and illumination values [21]. Compared to traditional variational models, this scheme preserves more detailed information, but it yields low image contrast, which directly limits the enhancement effect. Guo et al. used the maximum pixel intensities over the red, green, and blue channels as the initial illumination map [23]. A structure-aware smoothing strategy is then employed to improve illumination consistency and acquire the final illumination map. However, their algorithm easily produces over-processed results due to empirically chosen parameter values, including the global atmospheric light and the gamma transformation.
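The initial illumination map of LIME described here is simple to state in code. Below is a minimal sketch of this channel-wise maximum, together with the Retinex-style reflectance recovery it enables; the structure-aware smoothing step of [23] is deliberately omitted, and the function names are ours:

```python
import numpy as np

def initial_illumination(rgb):
    """LIME's initial illumination map: the maximum of the R, G, and B
    channel intensities at each pixel (rgb has shape (H, W, 3) in [0, 1])."""
    rgb = np.asarray(rgb, dtype=np.float64)
    return rgb.max(axis=2)

def recover_reflectance(rgb, t, eps=1e-3):
    """Retinex decomposition L = R * T, so R = L / T; eps guards against
    division by zero in fully dark regions."""
    rgb = np.asarray(rgb, dtype=np.float64)
    return rgb / np.maximum(t, eps)[..., None]
```

In LIME proper, the map returned by `initial_illumination` would be refined by structure-aware smoothing before the division step.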
Deep learning methods have also been applied to enhance low-light images, and many related methods have been proposed in recent years. Tao et al. presented a low-light image enhancement method using a convolutional neural network (CNN) to replace traditional image enhancement strategies [45]. In this work, multi-scale feature maps are used to avoid vanishing gradients and preserve image textures. Nevertheless, ground-truth images for image enhancement are usually lacking in most cases.
Wei et al. proposed an image enhancement method based on deep learning with a Decom-Net for decomposition and an Enhance-Net for adjusting illumination [24]. The network is learned only with key constraints based on consistent reflectance. However, the Enhance-Net requires a training dataset of low-light/normal-light image pairs, which is difficult to transfer to other image enhancement tasks.
With regard to unsupervised methods, Jiang et al. proposed the EnlightenGAN network [33]. It is the first work that applies unpaired training to low-light image enhancement tasks. EnlightenGAN adopts a U-Net with an attention mechanism as the generator and two discriminators. Further, to constrain the feature distance between an input low-light image and its enhanced version, a self-regularized perceptual loss is adopted, and this loss function is then used to train EnlightenGAN. The network is also proven effective as a pre-processing step for subsequent computer vision tasks. To better regularize the unsupervised learning, the attention map is also used at each deep feature level. Guo et al. proposed Zero-Reference Deep Curve Estimation (Zero-DCE) [34]. The model trains a lightweight deep network, DCE-Net, to map a set of light-enhancement curves for the input image at the gray level. The estimated curves provide the subsequent estimation function and give the final output. Zero-DCE does not require any paired data for training and can obtain curve parameter maps at low time cost.
Retinex theory has a deep influence on image enhancement. Wang et al. proposed an RDGAN model [35] to improve the distortion learning ability in the R component. The model contains two networks: RDNet and FENet. The RDNet learns to separate input images into illumination and reflection components. The FENet creates the final enhanced result by combining the rough enhanced CRM result, the decomposed reflection component, and the original input image. Further, Liu et al. [36] proposed a novel model that adopts both Neural Architecture Search (NAS) and Retinex-inspired models, namely Retinex-inspired Unrolling with Architecture Search (RUAS). NAS is used for discovering illumination estimation models and removing noise. The model is efficient compared with other NAS methods and performs better than other state-of-the-art CNN-based methods.
PCNN is an important biological neural network and has developed rapidly in recent years. Lian et al. presented a modified SPCNN model (MSPCNN) [46], which optimizes several key adaptive parameters and improves image segmentation accuracy for medical images. Nevertheless, this model cannot precisely control neuronal firing times and firing frequency. Yang et al. adopted a novel heterogeneous SPCNN (HSPCNN) inspired by different cerebral cortex structures [47]. The proposed HSPCNN is constructed with three SPCNN cells to simulate the receptive field in the visual area. Guo et al. proposed a saliency motivated improved SPCNN (SM-ISPCNN) based on the saliency detection mechanism; their work also improved image segmentation accuracy for mammograms [48]. However, key parameters are always set to empirical values. Deng et al. studied the fire-extinguishing activities of PCNN and obtained good image processing results [49]. However, the given PCNN model still needs verification through experiments on professional datasets. As PCNN has many parameters that need to be adjusted empirically, another line of improvement focuses on automating parameter setting. Panigrahy et al. proposed a novel parameter adaptive DCPCNN (PA-DCPCNN) [50], in which the parameters are automatically adjusted according to each input pixel. The same authors used the fractal dimension to estimate PCNN parameters and proposed a weighted parameter adaptive dual channel PCNN (WPADCPCNN) [51]; the model is applied to fuse medical images in the NSST domain.

III. PCNN MODELS

A. BASIC PCNN MODEL
PCNN originated from Eckhorn's cortical model and was further developed by studying the dynamic and static properties of neurons in the cat visual cortex. Notably, the basic PCNN model, designed as a single-layer network, was proposed by Lindblad and Kinser [52] and has several key input and output terms throughout the computation, including the external stimulus, internal activity, dynamic threshold, and pulse output.
Since PCNN can obtain sufficient temporal and spatial information from external stimuli and adjacent neurons, it has great potential in the image processing field. PCNN provides a one-to-one correspondence between network neurons and image pixels. The mathematical expressions of the basic PCNN are given in Eqs. (1)-(5). There, neuron Nij at position (i, j) receives two inputs: the feeding input Fij[n] and the linking input Lij[n], which record previous states attenuated by the exponential decay factors e^(-αf) and e^(-αl), respectively. They also contain the interaction results of neighboring neurons through the synaptic weight matrices Mijkl and Wijkl, respectively. The feeding input Fij[n] additionally receives an external stimulus Sij in Eq. (1). These two inputs are modulated by the linking strength β to generate an internal activity Uij[n], which is compared with the dynamic threshold Eij[n] to form the pulse output Yij[n]. Subsequently, if neuron Nij fires, the dynamic threshold E increases suddenly by the amplitude V; if neuron Nij does not fire at the nth iteration, the dynamic threshold E decays by a factor e^(-αe) at the (n+1)th iteration.
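Eqs. (1)-(5) themselves do not survive this extraction. For completeness, the standard form of the basic PCNN of Lindblad and Kinser, consistent with the description above, is reproduced below; V_F and V_L denote the standard feeding and linking amplitudes, and V_E the threshold amplitude written as V in the text:

```latex
\begin{aligned}
F_{ij}[n] &= e^{-\alpha_f}\,F_{ij}[n-1] + V_F \textstyle\sum_{kl} M_{ijkl}\,Y_{kl}[n-1] + S_{ij} && (1)\\
L_{ij}[n] &= e^{-\alpha_l}\,L_{ij}[n-1] + V_L \textstyle\sum_{kl} W_{ijkl}\,Y_{kl}[n-1] && (2)\\
U_{ij}[n] &= F_{ij}[n]\,\bigl(1 + \beta\,L_{ij}[n]\bigr) && (3)\\
Y_{ij}[n] &= \begin{cases} 1, & U_{ij}[n] > E_{ij}[n] \\ 0, & \text{otherwise} \end{cases} && (4)\\
E_{ij}[n] &= e^{-\alpha_e}\,E_{ij}[n-1] + V_E\,Y_{ij}[n] && (5)
\end{aligned}
```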

B. PNA-MSPCNN MODEL
Nowadays, there are many popular modified PCNN models, such as the simplified PCNN model (SPCNN) [53] and the parameter-adaptive PCNN model (PA-PCNN) [54]. Based on the SPCNN and the SCM [55], Lian et al. proposed a modified SPCNN model (MSPCNN) [46], which combines and optimizes several setting parameters to decrease computational complexity and improve image segmentation accuracy for the most common types of medical images. The corresponding formulae of the MSPCNN are given in Eqs. (6)-(12), where UMij[n] and EMij[n] denote the internal activity and the dynamic threshold of an assigned neuron at position (i, j), respectively. YMij[n] is the comparison result between UMij[n] and EMij[n-1] at the nth iteration. WMijkl is a synaptic weight matrix, which gives the interaction results between a central neuron and its corresponding neighbors. αM is a decay factor controlling the neuron's previous state across iterations. VM is the amplitude parameter of the dynamic threshold EMij[n] and can adjust the refiring time of neurons.
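Eqs. (6)-(12) likewise do not survive this extraction. A sketch of the core MSPCNN update consistent with the surrounding description is given below; this assumes the SCM-style single-decay form, and the exact published formulae in [46] (which also include adaptive parameter settings) may differ in detail:

```latex
\begin{aligned}
U^{M}_{ij}[n] &= e^{-\alpha_M}\,U^{M}_{ij}[n-1] + S_{ij}\Bigl(1 + \beta \textstyle\sum_{kl} W^{M}_{ijkl}\,Y^{M}_{kl}[n-1]\Bigr)\\[2pt]
Y^{M}_{ij}[n] &= \begin{cases} 1, & U^{M}_{ij}[n] > E^{M}_{ij}[n-1] \\ 0, & \text{otherwise} \end{cases}\\[2pt]
E^{M}_{ij}[n] &= e^{-\alpha_M}\,E^{M}_{ij}[n-1] + V_M\,Y^{M}_{ij}[n]
\end{aligned}
```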
The previous modified PCNN models achieve good image processing performance due to their pulse response mode, which resembles the biological vision mechanism. However, these models focus mainly on the variations of the setting parameters between the internal activity and the dynamic threshold. Pulse response models based on firing times and firing frequency should be studied to further explore the inherent properties and behaviors of the PCNN. Therefore, we propose a pulse-number-adjustable PCNN model (PNA-MSPCNN) derived from the above MSPCNN. The PNA-MSPCNN eliminates the linking strength β to highlight the synaptic weight matrix Wijkl, an obvious difference from the MSPCNN, and its discrete model is given in Eqs. (13)-(17). According to these equations, neuron Nij at position (i, j) receives two inputs: the feeding input Fij[n], defined as the external input stimulus, and the linking input Lij[n], which denotes the neighboring outputs weighted by the synaptic weight matrix Wijkl. These two terms are modulated to generate an internal activity Uij[n], which is compared with the dynamic threshold Eij[n-1] to form the pulse output Yij[n]. Subsequently, if neuron Nij fires, the dynamic threshold E increases suddenly according to the dynamic threshold amplitude V, the step function Q, and the auxiliary parameter P; if neuron Nij does not fire at the nth iteration, the dynamic threshold E decays gradually by the auxiliary parameter P at the (n+1)th iteration. The PNA-MSPCNN thus contains one step function Q, one synaptic weight matrix Wijkl, and three automatically set parameters α, V, and P.
The PNA-MSPCNN can control the pulse burst number of any fired neuron for subsequent image tasks, because it provides a more reasonable model structure and parameter setting method than previous models. The network structure of the PNA-MSPCNN model is shown in Fig. 1, and a graphical comparison between the proposed PNA-MSPCNN and the MSPCNN is given in Fig. 2.
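Since Eqs. (13)-(17) are not reproduced in this extraction, the following NumPy sketch implements one plausible reading of the verbal description above: F is the external stimulus, L sums the W-weighted neighbor pulses, U decays by α, and E jumps through V and Q on firing while decaying multiplicatively by P otherwise. The exact update coefficients, in particular the V(1+Q) jump and the multiplicative P decay, are our assumptions, not the published formulae:

```python
import numpy as np

def conv3x3(y, w):
    """Sum of 3x3-neighborhood pulse outputs weighted by w (zero padding)."""
    p = np.pad(y, 1)
    out = np.zeros_like(y, dtype=float)
    for di in range(3):
        for dj in range(3):
            out += w[di, dj] * p[di:di + y.shape[0], dj:dj + y.shape[1]]
    return out

def pna_mspcnn(S, W, alpha, V, P, N=5):
    """One possible PNA-MSPCNN iteration, written from the verbal description.

    Assumed forms (not the published Eqs. (13)-(17)):
      U[n] = e^-alpha * U[n-1] + S * (1 + L[n])        internal activity
      E[n] = V * (1 + Q(n))  if the neuron fired,       threshold jump
             P * E[n-1]      otherwise                  threshold decay
    with Q(n) = 1 for n > 2 and 0 otherwise (our reading of Eq. (34)).
    """
    S = np.asarray(S, dtype=float)
    U = np.zeros_like(S)
    E = np.zeros_like(S)  # E[0] = 0, so every neuron fires at n = 1
    pulses = []
    for n in range(1, N + 1):
        Y_prev = pulses[-1] if pulses else np.zeros_like(S)
        L = conv3x3(Y_prev, W)                      # linking input
        U = np.exp(-alpha) * U + S * (1.0 + L)      # internal activity
        Y = (U > E).astype(float)                   # pulse output
        Q = 1.0 if n > 2 else 0.0                   # assumed step function
        E = np.where(Y == 1, V * (1.0 + Q), P * E)  # dynamic threshold
        pulses.append(Y)
    return np.stack(pulses)
```

Stacking the binary pulse maps over the N iterations yields the raw material for the firing times matrix discussed in Section V.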

IV. PARAMETER SETTING METHOD BASED ON THE PNA-MSPCNN
In order to explain the proposed PNA-MSPCNN reasonably, we give the derivation details of the key parameters in the following subsections. We first design the synaptic weight matrix Wijkl based on the previous MSPCNN, then deduce the dynamic threshold amplitude V by analyzing the variations of the internal activity U and the dynamic threshold E for the maximum pixel intensity of each image. In addition, we obtain the decay factor α from the iteration results of the minimum-intensity pixels of a processed image.

A. SYNAPTIC WEIGHT MATRIX Wijkl
The synaptic weight matrix Wijkl always expresses the linking outputs of neighboring neurons within an effective pulse cycle, and the linking strength β reflects the influence of neighboring neurons in most PCNN models. Together, they determine the interaction results between a central neuron and its neighboring neurons. In the PNA-MSPCNN, we remove the linking strength β and retain the synaptic weight matrix Wijkl to reduce computational complexity and precisely control the pulse burst number of fired neurons.
The new parameter combines the advantages of both the linking strength and the synaptic weight matrix and is obtained through several key steps. Firstly, we adopt a rotationally symmetric Gaussian lowpass filter with standard deviation σ = 2 as the original synaptic weight matrix, which guarantees that the neighboring outputs of each neuron are influenced by a central neuron and its eight neighbors. This lowpass filter is based on the basic Gaussian model in Eq. (18), where σ is the standard deviation of the filter. Compared to usual PCNN models, we adopt a (2k+1) × (2k+1) square matrix with k = 1 as the size of the above lowpass filter. Moreover, the numerical ranges of i and j are 1 ≤ i ≤ 3 and 1 ≤ j ≤ 3, respectively. The final synaptic weight matrix is given in Eq. (19), where Wijkl(o) represents the rotationally symmetric Gaussian lowpass filter with standard deviation σ = 2. The nine synaptic strength values of Wijkl(o) have only small differences, because similar neighboring outputs generate a more reasonable calculation result and provide a new linking mode similar to real neuron behaviors. In particular, the neighboring outputs of a central neuron are partly affected by the neuron itself, because a real firing neuron also receives its own pulse output from the previous iteration.
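As a concrete check of this construction, the 3 × 3 rotationally symmetric Gaussian lowpass filter with σ = 2 can be generated as follows (the function name is ours; the result matches MATLAB's fspecial('gaussian', 3, 2)); the near-equality of its nine weights illustrates the "small difference" noted above:

```python
import numpy as np

def gaussian_w(k=1, sigma=2.0):
    """Rotationally symmetric Gaussian lowpass filter of size
    (2k+1) x (2k+1), normalized to unit sum: the original synaptic
    weight matrix Wijkl(o)."""
    ax = np.arange(-k, k + 1)
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return g / g.sum()
```

With σ = 2 on a 3 × 3 support, the normalized weights range only from about 0.102 (corners) to about 0.131 (center), so every neighbor contributes almost equally.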
To obtain the combined parameter Wijkl from the original synaptic weight matrix, we still need to consider its linking coefficient. According to [46], the linking strength β of the MSPCNN is expressed in Eq. (20), where S' denotes the normalized Otsu threshold of the processed image. As the decay factor is set to α = ln(1/S') in the MSPCNN, β can be rewritten as Eq. (21). According to a large number of related experiments, the parameter β often takes an unsuitable value, which easily leads to poor enhancement effects and over-enhancement results. To avoid this situation, and again guided by extensive experiments, we adopt the constant e^(-2) in the PNA-MSPCNN to replace the β of the MSPCNN; e^(-2) proves to be a suitable value for this setting parameter.

B. DYNAMIC THRESHOLD AMPLITUDE V
The corresponding neurons in the PNA-MSPCNN output neuron pulses according to the comparison between the internal activity U and the dynamic threshold E at each iteration. The dynamic threshold amplitude V can effectively adjust the pulse burst number of each neuron. In this study, the parameter V is deduced to acquire an adaptive mathematical expression.
Here, we focus on the variations of the internal activity and the dynamic threshold for the corresponding neurons of maximum-intensity pixels, especially their pulse burst number within a pulse cycle.
To guarantee that the corresponding neurons of maximum-intensity pixels can continually fire and output neuron pulses, the parameter V in Eq. (17) must be properly constrained. According to Eqs. (15), (23), and (24), the internal activity Uij[2] in the second iteration can be derived, and the general formula of the internal activity is given in Eq. (27). For the dynamic threshold, the corresponding neurons of the maximum-intensity pixels of a processed image exhibit apparent variations, mainly due to the neuron pulses at each iteration. According to Eq. (17), the dynamic threshold in the first iteration can be set to Eij[1] = V. With the continued pulse outputs, the dynamic thresholds in the second and third iterations follow in Eqs. (28) and (29), and according to Eqs. (17), (28), and (29), the general formula of the dynamic threshold E is written in Eq. (30). To ensure that the corresponding neurons of the maximum-intensity pixels output neuron pulses at each iteration, referring to Eq. (16), the comparison between the internal activity U and the dynamic threshold E should satisfy Eq. (31). According to Eqs. (27) and (30), this inequality can be rewritten as Eq. (32). In Eq. (33), the parameter ε1 represents a minimum positive value close to 0, generated randomly by our running program. For the PNA-MSPCNN, all neurons fire in the first iteration owing to the comparison between Uij[1] in Eq. (15) and Eij[0] in Eq. (17); this is regarded as an ineffective firing result at n = 1, so an effective firing period begins after the first iteration. Moreover, the maximum value of the parameter V should be less than the minimum value of the left-hand term of Eq. (32), which is attained at n = 2.
It is noted that the amplitude V readjusts the value of the dynamic threshold E for fired neurons, redetermining the neuronal firing condition and guaranteeing a sufficient pulse burst number for fired neurons in a processed image.

C. STEP FUNCTION Q
The variations of the dynamic threshold Eij are seen as the determining factor of the dynamical behaviors of neurons. However, previous modified PCNN models offer only a few setting parameters for the dynamic threshold. Therefore, we add a step function Q in this work to further adjust the calculation result of the dynamic threshold Eij at each iteration.
This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/. This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/ACCESS.2021.3132078, IEEE Access.

As the effective values of the dynamic threshold Eij for the proposed model are acquired after the first iteration, the mathematical expression of the step function Q is given as

Q = { 1, n > 2
      0, n ≤ 2        (34)

This formula is designed to reduce computational complexity and control the variation range of the dynamic threshold E. It is noted that the parameter Q generates a large dynamical influence at high firing frequencies.

D. AUXILIARY PARAMETER P AND DECAY FACTOR α
The auxiliary parameter P and the decay factor α are regarded as two key parameters because they determine the numerical values of the dynamic threshold within an effective firing period. To derive these parameters, we calculate the general formulae of the internal activity U and the dynamic threshold E for the minimum-intensity pixels.
The internal activity U for the corresponding neurons of the minimum-intensity pixels mainly follows the general formula in Eq. (27), because its feeding and linking inputs show only subtle variations compared with the corresponding neurons of the maximum-intensity pixels; its mathematical expression is given in Eq. (35). The dynamic threshold E for the minimum pixel intensity is prone to dynamical adjustments compared with Eq. (30). The derivation results for the first three iterations are given in Eqs. (36)-(38), from which the general formula of the dynamic threshold E for the minimum pixel intensity is derived as

Eij[n+1] = V·P^(n-2)        (39)

Referring to Eq. (33), this equation can be further expressed as Eq. (40). To guarantee that the corresponding neurons of the minimum-intensity pixels output one neuron pulse, referring to Eqs. (17), (35), and (40), the neuronal firing condition should be further written as Eq. (41), in which the small positive parameter ε1 can be neglected. To reduce computational complexity, the auxiliary parameter P is defined in Eq. (42). According to Eq. (42), the inequality in Eq. (41) can be expressed as Eq. (43), which contains the two calculation terms described in Eqs. (44)-(47). In Eq. (48), ε2 and ε3 are small positive values with ε3 > ε2. It is noted that the decay factor α is defined as a logarithmic function to reasonably adjust the decaying speed of the internal activity and the dynamic threshold at each iteration. Obviously, the three key parameters α, V, and P can be obtained automatically from image attribute values. Specifically, the parameter α is determined by the maximum pixel intensity Smax and the minimum pixel intensity Smin of a processed image; these two values are directly obtained and regarded as image attribute values. The parameters V and P are then calculated from α alone.

V. THE RELATED PROPERTIES OF THE PNA-MSPCNN

A. FIRING TIMES MATRIX
Most previous PCNN models find it very hard to control neuronal firing times due to unreasonable network architectures. In this paper, we adopt a new firing times matrix to calculate the firing times and firing frequency of each neuron within an effective pulse period. This can be considered the most important image feature information of the PNA-MSPCNN. Its numerical range is 1 ≤ FTij ≤ N-1, where N is the total number of iterations of the running program for the PNA-MSPCNN. The proposed model guarantees that the corresponding neurons of the minimum-intensity pixels fire only once, while the most excited neurons tend to fire repeatedly at each iteration. Significantly, the firing times matrix brings direct pulse burst information derived from the visual perception mechanism and is expressed in Eq. (49), where FTij is the sum of the output results of the corresponding neuron at position (i, j), and kij denotes the firing times of a neuron at position (i, j). In particular, we always calculate the effective firing frequency of neurons from the second iteration to the Nth iteration, because all neurons fire synchronously at the first iteration and generate ineffective firing results. According to the above theoretical analysis, we can also obtain the firing frequency FFij of a neuron at position (i, j) in Eq. (50). Compared with Reference [49], the firing frequency in Eq. (50) has a simplified expression, because our proposed model controls the firing times of each neuron more precisely than previous models.
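A small sketch of this firing-times bookkeeping: FT sums the pulse outputs from the second iteration onward, per the description of Eq. (49). Since Eq. (50) is not reproduced in this extraction, FF is computed as 1/FT; this is an assumed form, chosen only to match the endpoints the text states (FF = 1 for a once-firing neuron and 1/(N-1) for a neuron firing at every effective iteration):

```python
import numpy as np

def firing_stats(pulses):
    """Firing-times matrix FT and firing frequency FF from stacked binary
    pulse outputs of shape (N, H, W). The first iteration is excluded,
    since all neurons fire synchronously at n = 1 (ineffective firing).
    FF = 1 / FT is an assumption matching the stated endpoints, not the
    published Eq. (50)."""
    pulses = np.asarray(pulses, dtype=float)
    FT = pulses[1:].sum(axis=0)  # Eq. (49): sum of Y from n = 2 to N
    FF = np.where(FT > 0, 1.0 / np.maximum(FT, 1), 0.0)
    return FT, FF
```

Given the (N, H, W) pulse stack produced by a PNA-MSPCNN run, FT and FF can then serve directly as the image feature maps described above.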
Obviously, the firing frequency of the minimum-intensity pixels within an effective firing period is 1, and that of the maximum-intensity pixels is 1/(N-1). According to Eqs. (49) and (50), the firing frequency FF of the corresponding neurons therefore lies in the range [1/(N-1), 1].

B. ABLATION STUDIES
An ablation study is considered an important verification method for better understanding the dynamical activities of a novel neural network [13]. In such a study, we retain some features and remove others to test their actual roles in the proposed model and to directly provide data verifying the effectiveness of the designed parameters. In this study, we design two experimental groups with iteration number N = 5 to separately analyze the actual roles of the internal activity U and the dynamic threshold E.
In the first experimental group, the mathematical expressions of the parameters Wijkl and α in the internal activity U are adjusted to other parameter values in turn, while the remaining parameters retain their previous states. The test image is randomly taken from Fig. 3, with an image size of 481 × 321 and red, green, and blue channels. The experimental results are given in Table 1, where the synaptic weight matrix Wijkl generates larger influences than the decay factor α, as shown by the rapid variations of neuronal firing numbers. In the second experimental group, the mathematical expressions of the parameters V and P in the dynamic threshold E are set to different parameter values. The final experimental results are shown in Table 2, where the parameter V brings more obvious variations than the auxiliary parameter P. The two experimental groups indicate that the parameter V is the most important setting parameter of the proposed PNA-MSPCNN.

C. THE CORRESPONDING SUB-INTENSITY RANGES OF FIRING TIMES FOR EACH NEURON
For the PCNN, sub-intensity ranges divide the pixel intensities of a processed image into different parts, and each part corresponds to a certain number of firing neurons at each iteration. In this research, we extend the sub-intensity ranges to three main numerical ranges: the sub-dynamic threshold ranges, the sub-internal activity ranges, and the sub-intensity ranges. These ranges determine the firing times and the firing frequency of the neurons of the PNA-MSPCNN.
We can obtain the firing times matrix FTij by Eq. (49) and thereby divide the pixel intensities of the processed image into new classes, one per firing-times value. The experimental results based on firing times and firing frequency are obtained on images from the Berkeley Segmentation Dataset (BSD) [56] and contain the sub-dynamic threshold ranges, the sub-internal activity ranges, and the sub-intensity ranges at each iteration, as shown in Fig. 3 and Table 3.
In Fig. 3 and Table 3, the size of the testing image is 481×321×3. This means that the number of corresponding neurons for the processed image is 463,203. The sub-dynamic threshold ranges and the sub-internal activity ranges in each channel gradually extend as the iteration times increase. Significantly, the consistent variation trends between the sub-dynamic threshold ranges and the sub-internal activity ranges determine the sub-intensity ranges of the three channels.
Compared to existing PCNNs, the presented PCNN has three main differences. The first concerns the firing times matrix FTij, which simply and accurately calculates the firing frequency of each neuron within an effective firing period. The second is that it can directly give the firing frequency range of the corresponding neurons according to the maximum and minimum pixel intensities. The third is that the mathematical formulae of the setting parameters are deduced to acquire adaptive results rather than relying on previous experience.

A. BASIC LIME
In the computer vision field, the Retinex model [24] is widely used to describe the formation of a low-light image as follows:

L = R ∘ T, (52)

where L and R denote the original image and the desired enhanced image, respectively, and ∘ represents element-wise multiplication. T is the illumination map, which is regarded as the intrinsic image property. For a processed color image, the three channels share the same initial illumination map. The above formula directly decomposes the input image into the product of the desired light-enhanced scene and the illumination map, so the enhanced result is recovered as

R = L ⊘ T, (53)

where ⊘ denotes element-wise division. Guo et al. proposed a low-light image enhancement method (LIME) [23] that focuses mainly on estimating the final illumination map. To achieve the enhancement goal, the basic Retinex model in Eq. (52) can be rewritten for the inverted low-light image 1 − L, which resembles a hazy image:

1 − L = (1 − R) ∘ T + a(1 − T),

where a is the global atmospheric light and its numerical range is 0 ≤ a ≤ 1. Since hazy and low-light images share a high feature similarity, the above model has great potential for low-light image enhancement. Here, a reasonable illumination map T is regarded as the most important setting parameter, and the global optimum solution is given as

min_T ||T̂ − T||_F² + λ ||W ∘ ∇T||_1, (54)

where T̂ is the initial illumination map, λ is the balancing coefficient, ∇ collects the first-order derivative operators in the horizontal (h) and vertical (v) directions, and W is the weight matrix

W_h(x, y) = Σ_(u,v)∈Ω(x,y) G(u, v) / ( |Σ_(u,v)∈Ω(x,y) G(u, v) ∇_h T̂(u, v)| + ε ), (55)

W_v(x, y) = Σ_(u,v)∈Ω(x,y) G(u, v) / ( |Σ_(u,v)∈Ω(x,y) G(u, v) ∇_v T̂(u, v)| + ε ). (56)

In Eqs. (55) and (56), G(x, y) is the Gaussian kernel with the setting value 2 for the standard deviation σ of a processed image, |·| denotes the absolute value operator, Ω(x, y) is a local region centered at (x, y), and ε is a small constant that avoids division by zero. According to Eqs. (55) and (56), the optimum solution in Eq. (54) can be further expressed as

min_T ||T̂ − T||_F² + λ Σ_(x,y) Σ_d∈{h,v} W_d(x, y) (∇_d T(x, y))² / ( |∇_d T̂(x, y)| + ε ). (57)

In Eq. (57), the quadratic term can be computed in closed form, and the whole calculation requires no training. The experimental results demonstrate that LIME brings better low-light image enhancement performance than other prevalent algorithms.
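The Retinex decomposition around Eqs. (52) and (53) can be sketched in NumPy as follows; the max-RGB initial illumination map and the small constant `eps` are standard LIME choices assumed here, not details stated in this section.

```python
import numpy as np

def initial_illumination(L):
    """Initial illumination map: the per-pixel maximum over R, G, B.

    L: float image in [0, 1] with shape (H, W, 3).
    """
    return L.max(axis=2)

def retinex_enhance(L, T, eps=1e-3):
    """Invert L = R o T element-wise: R = L / T, cf. Eq. (53)."""
    T3 = np.clip(T, eps, 1.0)[..., None]   # avoid division by zero
    return np.clip(L / T3, 0.0, 1.0)

L = np.random.default_rng(0).uniform(0.0, 0.3, (4, 4, 3))  # dark image
T = initial_illumination(L)
R = retinex_enhance(L, T)
# R is never darker than L: dividing by T <= 1 lifts every pixel.
```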

B. A MODIFIED LIME
Based on the above basic LIME, we propose a modified LIME method for obtaining the final low-light enhancement image, denoted as the parameter R in Eq. (53). Firstly, we give an initial illumination map Ti by the calculation result of the proposed PNA-MSPCNN. Secondly, a Gaussian kernel GM of size value [15 1] with the standard deviation σ = 3 is used to reduce the differences of the pixel intensities of the given illumination map Ti:

GM(i, j) = exp( −dist(i, j)² / (2σ²) ). (58)

In Eq. (58), the function dist(i, j) denotes the spatial Euclidean distance to the pixel in position (i, j). Thirdly, a modified weight matrix Wmq is adopted based on Eqs. (55) and (56):

Wmq(i, j) = Σ_(k,l)∈GM(i,j) GM(k, l) / ( |Σ_(k,l)∈GM(i,j) GM(k, l) ∇Ti(k, l)| + ε ). (59)

In Eq. (59), GM(i, j) denotes a limited region centered at the pixel in position (i, j). The Wmq further adjusts the pixel intensities of the whole image and increases the similarity between adjacent pixels. Fourthly, a gradient illumination map Wd is deduced from the calculation results of Wmq in the horizontal and vertical directions, and its corresponding mathematical expression is given as

Wd = |∇h Wmq| + |∇v Wmq|, (60)

where ∇h Wmq and ∇v Wmq are the horizontal and vertical gradient matrices, respectively. Due to the above calculation rule, the illumination map results take large numerical values near image contours and a great number of small numerical values in regular texture regions. Fifthly, we calculate the sum over the four directions of Wd to generate an overall parameter Aov:

Aov = I + Σ_(d=1..4) Dd^T Diag(wd) Dd, (61)

where I is the identity matrix of proper size, Diag(wd) generates a diagonal matrix based on the vectorized gradient illumination map wd, and Dd are the Toeplitz matrices based on the forward differences of the discrete gradient vectors in the horizontal and vertical directions. Aov is a symmetric positive definite Laplacian matrix, as in [23]. According to the inverse of Aov and the initial illumination map Ti, we can acquire the final illumination map Tf:

tf = Aov⁻¹ ti, (62)

where ti and tf are the vectorized forms of Ti and Tf, respectively, and the parameter a is designed through a certain number of experiments. Tf is acquired via the PNA-MSPCNN and the modified LIME.
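The final refinement step, solving tf = Aov⁻¹ ti, can be sketched with SciPy sparse matrices. This is a 1-D toy with a single forward-difference direction and uniform weights, and the regularization strength `lam` is an arbitrary assumption; the real method assembles Aov from the Wmq-derived weights over all directions.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def forward_diff(n):
    """Forward-difference matrix D: -1 on the diagonal, +1 above it."""
    return sp.diags([-np.ones(n), np.ones(n - 1)], [0, 1])

def refine_illumination(t_i, w_d, lam=0.15):
    """Solve (I + lam * D^T Diag(w_d) D) t_f = t_i, cf. Eqs. (61)-(62)."""
    n = t_i.size
    D = forward_diff(n)
    A_ov = (sp.identity(n) + lam * (D.T @ sp.diags(w_d) @ D)).tocsc()
    return spla.spsolve(A_ov, t_i)

t_i = np.array([0.10, 0.12, 0.11, 0.80, 0.82, 0.79])  # step-shaped map
w_d = np.ones(t_i.size)                               # uniform weights
t_f = refine_illumination(t_i, w_d)
# t_f is a smoothed version of t_i that stays close to the step shape.
```

Because A_ov is symmetric positive definite, a direct sparse solve (or conjugate gradients for large images) recovers the refined illumination map without any training.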
Thus, reasonable image enhancement results are easily obtained via the above-derived formulas.

Algorithm 1 The proposed enhancement method based on the PNA-MSPCNN and the modified LIME
Input: A low-light image, its length m, its width n, and the total iteration times N for the PNA-MSPCNN.

Output: The final enhanced result.
Step 1: Compute the initial brightness spectrum by selecting the maximum pixel intensities of the red, green, and blue channels.

C. THE ACHIEVEMENT STEPS OF THE PROPOSED METHOD
In our research, the proposed PNA-MSPCNN model with the automatic parameter setting method can effectively control neuronal firing times, which redivides the corresponding image pixels into new classifications. The firing times of a neuron directly reflect the illumination of the corresponding pixel. The flowchart and pseudocode of our proposed method are provided in Fig. 4 and Algorithm 1, respectively. The related achievement steps are as follows: (1) Setting the initial illumination map Ti via the Retinex theory.
(2) Deducing the gradient illumination map Tg according to the calculation results of the PNA-MSPCNN.
(3) Modifying the weight matrix Wmq based on the traditional LIME method.
(4) Deducing the final illumination map Tf via the overall parameter Aov.
(5) Acquiring the final image enhancement result R by the previous Retinex model.

VII. EXPERIMENT AND ANALYSIS
In this section, we describe the related experiments and provide a detailed analysis. Our experiments contain subjective cognition and objective evaluation. Specifically, the objective evaluation experiments are divided into two groups. The first group contains 500 images taken from the BSD [56], and the second contains 25 widely used low-light images. All the experimental images come from publicly available datasets.

A. SUBJECTIVE COGNITION
In the subjective cognition experiment, we observe the detailed information of the image enhancement results, as shown in Fig. 6.
In Fig. 6, the PNA-MSPCNN shows better image enhancement performance from the perspective of subjective cognition. We particularly observe that, in low-light regions, the PNA-MSPCNN produces better experimental results than the other methods. In the first image, our method better reveals the shape of the tree in the red rectangle and the subtle details of the buildings in the yellow rectangle. In the second image, the PNA-MSPCNN improves the image brightness of the chairs and the riverside house in the red and yellow rectangles, respectively. Compared to RDGAN, EnlightenGAN, and Zero-DCE, our method obtains better image enhancement effects in low-light regions.

B. OBJECTIVE EVALUATION
1) COMPARISON ALGORITHMS AND EVALUATION METRICS
The comparison algorithms used in the related experiments are popular low-light image enhancement methods, covering both traditional algorithms: simultaneous reflectance and illumination estimation (SRIE) [21], a probabilistic method for image enhancement (PIE) [22], and low-light image enhancement (LIME) [23]; and deep learning algorithms: RetinexNet (RN) [24], MBLLEN [25], EnlightenGAN [33], RDGAN [35], and Zero-DCE [37]. Both SRIE and LIME, based on Retinex theory, separate the processed images into reflectance and illumination components. PIE adapts an underwater image enhancement method to acquire more valuable information and unveil more details. RN, RDGAN, MBLLEN, and Zero-DCE are four popular deep learning algorithms that require large low-light datasets and a significant amount of training. Additionally, EnlightenGAN (En-GAN) and Zero-DCE do not need end-to-end training.
Four widely used evaluation metrics, including absolute mean brightness error (AMBE) [57], mean squared error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity of image (SSIM) [58], are used to evaluate the image enhancement performance of the proposed method.
Here AMBE is defined as the absolute difference of the mean pixel intensities between an input image and its corresponding output image:

AMBE = |MB(X) − MB(Y)|, (64)

where MB(X) and MB(Y) denote the mean intensities of the input image X and the output image Y, respectively. Lower AMBE values represent better brightness preservation for a processed enhancement image. MSE is the mean squared error of an input image X and its enhanced image Y, and its mathematical expression is written as

MSE = (1/(mn)) Σ(i=1..m) Σ(j=1..n) (X(i, j) − Y(i, j))², (65)

where m and n denote the image pixel lengths in the horizontal and vertical directions, respectively. MSE reflects the similarity of pixel intensities between input images and enhanced images. PSNR is a signal-to-noise ratio calculated from the above MSE:

PSNR = 10 lg( Imax² / MSE ). (66)

In Eq. (66), Imax is the maximum pixel intensity of a processed image. A higher PSNR value means that the processed image has better image enhancement quality. SSIM is a key image enhancement metric that evaluates the structural similarity of the two images:

SSIM(x, y) = ( (2 μx μy + C1)(2 σxy + C2) ) / ( (μx² + μy² + C1)(σx² + σy² + C2) ), (67)

where μx and μy are the average pixel intensities of the images x and y, respectively; σx² and σy² are the variances of x and y, respectively; and σxy is their covariance. C1 and C2 are stability constants whose mathematical expressions are given as

C1 = (K1 L)², C2 = (K2 L)², (68)

where L denotes the dynamic range of the pixel intensities, and K1 and K2 are constants. The numerical range of SSIM is between 0 and 1.
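The four metrics can be implemented directly as below; this is a minimal sketch, and the single-window global SSIM is a simplification of the windowed reference implementation [58]. The toy images x and y are illustrative assumptions.

```python
import numpy as np

def ambe(x, y):
    """Absolute mean brightness error, cf. Eq. (64)."""
    return abs(x.mean() - y.mean())

def mse(x, y):
    """Mean squared error, cf. Eq. (65)."""
    return np.mean((x - y) ** 2)

def psnr(x, y, i_max=1.0):
    """Peak signal-to-noise ratio in dB, cf. Eq. (66)."""
    return 10.0 * np.log10(i_max ** 2 / mse(x, y))

def ssim_global(x, y, L=1.0, K1=0.01, K2=0.03):
    """Single-window SSIM, cf. Eqs. (67)-(68)."""
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = np.mean((x - mx) * (y - my))            # covariance sigma_xy
    return ((2 * mx * my + C1) * (2 * cxy + C2)) / \
           ((mx**2 + my**2 + C1) * (vx + vy + C2))

x = np.linspace(0.0, 1.0, 64).reshape(8, 8)       # toy reference image
y = np.clip(x + 0.05, 0.0, 1.0)                   # slightly brightened
```

Evaluating `ambe(x, y)`, `mse(x, y)`, `psnr(x, y)`, and `ssim_global(x, y)` on such a pair reproduces the qualitative behavior used in Tables 4 and 5: small brightness shifts yield low AMBE/MSE, high PSNR, and SSIM close to 1.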

2) EXPERIMENTAL RESULTS AND RELATED ANALYSIS
In the experiments, the above comparison image enhancement methods are evaluated with the four key metrics.
In Figs. 7-8 and Table 4, compared to the other competitive algorithms, our proposed method brings better visual brightness and acquires the best evaluation results for three key metrics: AMBE of 0.0550, MSE of 0.0092, and PSNR of 45.7764. For the SSIM metric, our evaluation value is higher than those of LIME, MBLLEN, and Zero-DCE. This result indicates that our enhancement method obtains good structural similarity for a processed image.
In Figs. 9-10 and Table 5, our image enhancement method again achieves good results, including AMBE of 0.0558, MSE of 0.0122, and PSNR of 44.0667. For the SSIM metric, the evaluation value of our method is lower only than that of PIE. The experimental results demonstrate that the proposed method has better image enhancement performance than the other prevalent image enhancement algorithms.
In addition, to prove the efficiency of our method, Tables 6 and 7 compare the running times of the listed algorithms. In Tables 6.1 and 6.2, it is observable that our method is less efficient than LIME and PIE but runs faster than all the deep learning methods. In Tables 7.1 and 7.2, our method has a lower computational complexity than the other competitive methods except LIME and PIE. Similar to the results shown in Fig. 6, one significant advantage of our method is its ability to enhance the overall image brightness while keeping a large contrast ratio. Therefore, as shown in Figs. 11.1 and 11.2, the processed images obtain fine image enhancement details, such as clear gaps between vessels and clear organ structures.

VIII. CONCLUSION
In this paper, we propose a PNA-MSPCNN model with an automatic parameter setting method. The proposed model can precisely control the pulse number of each firing neuron within an effective pulse cycle. Moreover, we present a low-light image enhancement method based on the above PNA-MSPCNN model and a modified LIME algorithm. A weight matrix in the modified LIME is redesigned to reduce the differences of the pixel intensities in similar regions. Experimental results show that our proposed image enhancement method brings a good visual appearance and high image contrast. However, the proposed PNA-MSPCNN does not give related explanations in terms of back-propagation theory, nor does this research provide supervised-learning cases. In particular, we expect that our method would achieve better performance if supervised learning methods were applied. In the future, we plan to continually optimize our proposed model by adding back-propagation theory or other supervised learning methods and to extend its image processing application scenarios.