Gray Level Image Contrast Enhancement Using Barnacles Mating Optimizer

Image contrast enhancement is a very important phase in digital image processing. Its main goal is to improve the visual quality of images whose contrast was distorted or degraded during image acquisition. The most popular method for this task is Histogram Equalization (HE). However, an exhaustive search for the best enhancement is an algorithmically complex task. In this paper, we consider image contrast enhancement as an optimization problem, and a new meta-heuristic algorithm, called Barnacles Mating Optimizer (BMO), is used to find its optimal solution. A grey level mapping technique is used here to convert an image into a solution of the optimization problem. The algorithm has been evaluated on four publicly available datasets: Kodak, MIT-Adobe FiveK, H-DIBCO 2016, and H-DIBCO 2018. It is also applied to some standard images like Boy, Lena, Lifting body and Zebra. The obtained results clearly demonstrate the effectiveness of the proposed method. The results obtained on the Kodak images are compared with many state-of-the-art methods from the literature, and the comparison shows the superiority of the proposed method. To test the applicability of BMO in solving real-world problems, we have applied it as a pre-processing step in the binarization of the H-DIBCO 2016 and H-DIBCO 2018 datasets. The source code of this work is available at https://github.com/ahmed-shameem/Projects.


I. INTRODUCTION
Image processing is a very popular and active area of research in the field of computing, as it has various real-life applications, like medical image processing, optical character recognition, biometric applications, and industry and transportation, to name a few. Broadly, it consists of all the operations applied to digital images which aim at changing the structural characteristics of the images. Contrast enhancement is a process applied to images to increase their dynamic range. The main goal of image contrast enhancement is to increase the readability and interpretability of the information present therein. Whenever we talk about an enhancement algorithm, we simply mean that the algorithm in question produces a better-quality image for some particular application. This is generally achieved by suppressing noisy pixels or increasing the contrast [1], [2]. Now, a question arises: why is there even a need for image enhancement? The answer is quite obvious. The quality of a digital image suffers due to several factors, like contrast, illumination and noise, during the image acquisition process. And what is contrast? In simple terms, it is defined as the separation between the brightest spot and the darkest spot in an image [3]. This factor indicates high or low contrast depending on its value. In the spatial domain, Histogram Equalization (HE) is the most commonly used method, and a modification of it is proposed in [4]. It scales the magnitudes of the probability density function of the original input image before applying HE. The scaling factor is altered adaptively depending on the average intensity values of an image.
Image enhancement changes the photometric characteristics of an image to make it more suitable for automatic processing by computing devices. In general terms, two different families of methods are used to enhance image attributes: filtering techniques and contrast enhancement. Filtering techniques substitute every pixel of the input image by a value calculated from the original value of that pixel and the neighboring pixel values. In contrast enhancement, on the other hand, a mapping is applied between the grey level values of the input image and a new set of grey level values in order to get a more homogeneous distribution of the corresponding foreground and background pixels [5], [6]. In other words, we can say that contrast enhancement allows the histogram of grey levels to become flatter. One of the simplest ways to achieve contrast enhancement is the method called global intensity transformation. This method uses lookup tables of pixel values, through which the intensity levels of an image are mapped into a set of new grey levels; the intensity transformation is performed on all grey levels of the image. Unlike global techniques, local techniques either apply different functions to different pixels or areas of the input image, or use different parameters in the same function [7], [8].
One category of image quality measures includes the number of edge pixels, the intensities of these pixels and the entropy of the whole image [9]. These measures have been applied to image enhancement using Particle Swarm Optimization (PSO) [7], Cuckoo Search (CS) [8] and Differential Evolution (DE) [10], by treating image enhancement as an optimization problem.
In this paper, a global optimization technique called Image Enhancement using Barnacles Mating Optimizer (iEBMO) for image contrast enhancement is proposed. It utilizes the idea of mapping the grey levels of input images into a new set of grey level values. Barnacles Mating Optimizer (BMO) [11], a new meta-heuristic algorithm, is used to perform this task. To the best of our knowledge, this is the first time BMO is adapted to image contrast enhancement. In a nutshell, the main contributions of this work are as follows:
• BMO, a recently proposed meta-heuristic algorithm, is applied for the purpose of image contrast enhancement; the resulting method is called iEBMO.
• The proposed method is evaluated on images taken from benchmark datasets: Kodak [12] and MIT-Adobe FiveK [13].
• iEBMO is also applied on standard images like Boy, Lena, Lifting body and Zebra.
• It is compared with nine other methods including HE, improved versions of HE and different optimization algorithms like GA, CS, PSO and ABC.
• To check the robustness of the proposed method, we have applied iEBMO as a pre-processing step for binarizing the images of the H-DIBCO 2016 and H-DIBCO 2018 datasets.

The rest of the paper is organized as follows: Section II presents a brief idea of some of the methods proposed in the field of image enhancement. Section III provides an overview of the BMO algorithm. In Section IV, we discuss iEBMO in detail, along with the agent representation, the fitness function and the procedure of implementing BMO for image contrast enhancement. Then, in Section V, the input, output and ground truth images along with their corresponding histograms are given, and a detailed analysis of the proposed method is provided by testing it on the Kodak [12], H-DIBCO 2016 [14] and H-DIBCO 2018 [15] benchmark datasets. For comparison purposes, we have considered the Kodak [12] images and some standard images. Finally, Section VI provides concluding remarks and an idea of the future scope of this work.

II. LITERATURE SURVEY
In the field of image contrast enhancement, several methods have been proposed. Histogram Equalization (HE) [16] is one of the most popular. It performs contrast enhancement by effectively spreading out the most frequent intensity values, i.e., stretching out the intensity range of the image. This method usually increases the global contrast of images whose usable data is represented by close contrast values, allowing areas of lower local contrast to gain a higher contrast. However, HE has limitations: the local contrast of an image cannot be equally enhanced, and over-enhancement of noise and artefacts can easily be found in images enhanced by local histogram equalization. In this section, we discuss some HE-related approaches that try to overcome these limitations.
Adjacent-block-based modification for local histogram equalization (ABMHE) [17] proposes a technique which overcomes the limitation of HE by segmenting the image into three kinds of overlapped sub-blocks using gradients. To overcome the over-enhancement effect, the histograms of these sub-blocks are then modified by adjacent sub-blocks. The optimal adaptive thresholding based sub-histogram equalization for brightness preserving image contrast enhancement of [18] puts forward an adaptive thresholding based sub-histogram equalization (ATSHE) scheme for contrast enhancement and brightness preservation with retention of basic image features. The histogram of the input image is divided into several sub-histograms using adaptive thresholding intensity values. The number of threshold values or sub-histograms depends on the peak signal-to-noise ratio (PSNR) [19] of the thresholded image. Histogram clipping is also used in this work to control the undesired over-enhancement of the resultant image. The work in [20] investigates a contrast enhancement algorithm which applies a grey level S-curve transformation locally to medical images obtained from various modalities. This is an extended grey level transformation technique that forms a sigmoid-like curve through a pixel-to-pixel transformation. The main objective of this transformation is to locally increase the difference between the minimum and maximum grey values and the image gradient.
Recently, meta-heuristic optimization algorithms [21] have become popular amongst researchers. These algorithms have proved their superiority in different fields: medicine, data mining, pattern recognition, finance [22]-[24], etc. Many such optimization algorithms have been used for image enhancement and have established their superiority over traditional approaches like HE. In the following, some relevant meta-heuristic-based methods for image contrast enhancement are discussed.
The Genetic Algorithm (GA) [25] is applied to image contrast enhancement in [5]. This work uses GA to find an optimal mapping of the grey levels of the input image to new grey levels which produce better contrast; as a result, the dynamic range of the image increases, which is the reason behind the better image quality. The Artificial Bee Colony (ABC) [26] algorithm is applied to image enhancement in the work reported in [27]. There, the image contrast enhancement optimization problem is regarded as a foraging process of a bee colony: the position of a food source denotes a possible solution of the problem, and the fitness value of a food source represents the quality of the associated solution. The Cuckoo Search (CS) [28] algorithm is applied to image contrast enhancement in [8]. The CS algorithm is based on the parasitic breeding behavior of the cuckoo bird, which depends on other host birds' nests for laying its eggs; the host bird nurtures the egg assuming it is its own. The basic objective of the cuckoo search algorithm is to find the best nest, where the probability of an egg hatching is maximum. A generation is represented by a set of host nests; the best nest carrying an egg is the solution, and the best nests of each generation are carried over to the next. In [8], CS is used to find the best set of transformation parameters, i.e., to maximize the fitness function. Particle Swarm Optimization (PSO) [29] is applied to image enhancement in [7]. PSO is a multi-agent search strategy modeled on the social behavior of organisms such as bird flocking and fish schooling. The method uses a transformation function which incorporates both local and global information of the input images.
There are several such algorithms for image contrast enhancement. The No Free Lunch theorem [30] states that no single optimization algorithm can solve every type of optimization problem: some algorithms fit particular problems well but may not produce the expected results on others. This motivates researchers to propose new optimization algorithms that produce better results than previously proposed methods. As image contrast enhancement is a very important pre-processing step, its results carry significant weight in image processing. Each newly proposed method in this domain aims to produce better results than its predecessors, which inspired us to come up with a new method that produces even better results.
BMO is a recently proposed meta-heuristic optimization algorithm which has been previously applied to optimize 23 benchmark functions and optimal reactive power dispatch problems [11]. In this work, we have used BMO to enhance the contrast of degraded images.

III. BARNACLES MATING OPTIMIZER: AN OVERVIEW
BMO [11] is an evolutionary algorithm inspired by micro-organisms called barnacles, which have existed since the Jurassic period. Most barnacles are hermaphrodites, having both male and female reproductive organs.
In BMO, a solution is represented by a barnacle, which is analogous to a chromosome in GA or a particle in PSO. The initial population in BMO is generated randomly using a uniform distribution, with lower limit a and upper limit b as the range of possible solutions.
Therefore, a single barnacle B is represented by a D-dimensional real vector, B ∈ [a, b]^D. During mating, two barnacles are chosen randomly (with uniform probability) as parents from the population B, say father brn_F and mother brn_M. If the distance between brn_F and brn_M is less than or equal to the penis length pl of brn_F, mating occurs between brn_F and brn_M, and an offspring is produced by

B_new = α · brn_F + (1 − α) · brn_M,    (1)

where α is a normally (Gaussian) distributed pseudo-random number with µ = 0.5, σ² = 0.1, truncated to the interval [0, 1]. Basically, α and 1 − α represent the percentages of the characteristics of the father and the mother that are embedded in the new offspring of the next generation.
If the distance between brn_F and brn_M is larger than pl of brn_F, sperm casting occurs, which is expressed by

B_new = r · brn_M,    (2)

where r is a pseudo-random number drawn from a uniform distribution over (0, 1]. Note that the new offspring is generated from the mother barnacle alone in the sperm casting process, because she receives sperm released into the water by many other barnacles elsewhere, without detailed information about the father. The value of pl plays an important role in this algorithm: larger pl-values imply more exploration and smaller pl-values imply more exploitation. If exploration is not considered properly, the algorithm may get stuck in local optima, whereas if exploitation is not properly taken care of, the algorithm may show slow convergence or, in the worst case, may not converge at all. In [11], the authors performed exhaustive experiments to determine the optimal value of pl; following their work, pl = 0.6 · N is set in this work, where N denotes the population size. A fitness function f is defined as a real-valued function f : [a, b]^D → R mapping a barnacle B to a fitness value f(B). In our application we use a special image representation scheme, together with a fitness function designed for the evaluation of grey scale images.
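The two offspring rules can be sketched in a few lines of Python. This is a sketch only: the function names and the rejection-sampling loop for the truncated Gaussian are our own choices, not prescribed by [11].

```python
import numpy as np

rng = np.random.default_rng(42)

def truncated_alpha(mu=0.5, var=0.1):
    """Draw alpha ~ N(mu, var), truncated to [0, 1] by rejection sampling."""
    while True:
        a = rng.normal(mu, np.sqrt(var))
        if 0.0 <= a <= 1.0:
            return a

def bmo_offspring(father, mother, dist, pl):
    """One offspring: mating (Equation 1) when the parents lie within the
    penis length pl, otherwise sperm casting (Equation 2)."""
    if dist <= pl:
        alpha = truncated_alpha()
        return alpha * father + (1.0 - alpha) * mother   # Equation 1
    r = rng.uniform(0.0, 1.0)                            # uniform pseudo-random number
    return r * mother                                    # Equation 2
```

In the mating case every component of the offspring lies between the corresponding components of the two parents, which is what makes this rule exploitative; the sperm-cast rule can move far from both parents, providing exploration.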
It is to be noted that every meta-heuristic algorithm tries to ensure a proper trade-off between its exploration and exploitation phases in order to find the optimal solution. BMO does this by considering the following assumptions:
1) The selection process is done randomly, but it is restricted by the penis length pl of a barnacle.
2) Each barnacle may both contribute and receive sperm, and each barnacle can be fertilized by only one barnacle at a time. This is depicted in Equation 1.
3) If the distance between the selected barnacles at a certain iteration is greater than pl, the 'sperm cast' process occurs, given by Equation 2.
The first two assumptions enforce exploitation, and the last assumption ensures exploration, as the new offspring is produced from the mother only.
In Algorithm 1, the pseudocode of BMO is described.

Algorithm 1 Barnacles Mating Optimizer (BMO)
1: Initialize the population B = (B_1, ..., B_N) randomly in [a, b]^D
2: Evaluate the fitness of every barnacle in B
3: Sort B in decreasing order of fitness
4: Set pl = 0.6 · N and t = 0
5: while t < maxIter do
6:     Select brn_F and brn_M randomly from B
7:     if distance(brn_F, brn_M) ≤ pl then
8:         Draw α ∈ (0, 1)
9:         Compute offspring B_new using Equation 1
10:        Include B_new into sorted B
11:    else
12:        Draw r ∈ (0, 1]
13:        Compute offspring B_new using Equation 2
14:        Include B_new into sorted B
15:    end if
16:    Remove B_{N+1} from B
17:    t = t + 1
18: end while
19: return B_1 (the best barnacle from B)
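A minimal, runnable sketch of this loop on a toy maximization problem might look as follows. Two details are our own assumptions, as the text does not fix them: the 'distance' between parents is taken as their index distance in the sorted population, and the truncated Gaussian is approximated by clipping.

```python
import numpy as np

rng = np.random.default_rng(0)

def bmo(fitness, dim, n=10, max_iter=20, a=0.0, b=255.0):
    """Steady-state BMO loop following Algorithm 1 (fitness is maximized)."""
    pop = rng.uniform(a, b, size=(n, dim))              # random init in [a, b]^D
    fit = np.array([fitness(x) for x in pop])
    order = np.argsort(-fit)                            # sort by decreasing fitness
    pop, fit = pop[order], fit[order]
    pl = 0.6 * n
    for _ in range(max_iter):
        i, j = rng.choice(n, size=2, replace=False)     # select parents
        father, mother = pop[i], pop[j]
        if abs(int(i) - int(j)) <= pl:                  # rank distance (assumption)
            alpha = np.clip(rng.normal(0.5, np.sqrt(0.1)), 0.0, 1.0)
            child = alpha * father + (1 - alpha) * mother   # Equation 1
        else:
            child = rng.uniform(0.0, 1.0) * mother          # Equation 2
        child = np.clip(child, a, b)
        f_child = fitness(child)
        k = np.searchsorted(-fit, -f_child)             # keep B sorted by fitness
        pop = np.insert(pop, k, child, axis=0)[:n]      # insert child, drop B_{N+1}
        fit = np.insert(fit, k, f_child)[:n]
    return pop[0], fit[0]                               # best barnacle B_1
```

Because the sorted population always keeps its best member, the best fitness found never decreases across iterations.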

IV. PROPOSED IMAGE CONTRAST ENHANCEMENT APPROACH
As already mentioned, the present work addresses image contrast enhancement using BMO, where we map the grey levels of an input image into a new set of grey levels to produce a better-quality image. This technique ultimately yields more homogeneity in the image histogram. We use this meta-heuristic algorithm for the task because an exhaustive search for a better mapping is very time-consuming.
As we consider image enhancement as an optimization problem, two important features of the algorithm must be defined: the representation of candidate solutions and the objective or fitness function. In the following, we define and explain these two aspects; then the use of BMO to obtain optimal results in image enhancement is presented.

A. AGENT REPRESENTATION
To apply BMO to the image contrast enhancement problem, we need to convert the image into a vector [5], which is considered here as an agent, or rather a barnacle in the present work. This image-to-agent transformation must be done in such a way that the barnacle representing an input image preserves all characteristics of the image; furthermore, a re-transformation must be defined at the same time for reconstructing an optimized (contrast enhanced) image from a given agent or barnacle.
Here, a candidate solution represents the image by an ordered vector of D integers in the interval [0, 255] of grey values (i.e. a = 0 and b = 255), where D denotes the number of unique grey levels appearing in the input image I; at the same time, D is the dimension of the BMO search space.
Given the set of images of size n × m with 8-bit grey values g ∈ {0, ..., 255}, the set of all possible images is given by

I := {I | I : [1, n] × [1, m] → {0, ..., 255}}.    (3)

For an image I ∈ I, we define the set of grey values appearing in I by

G(I) := {g ∈ {0, ..., 255} | ∃(x, y) : I(x, y) = g}.    (4)

Here we may assume that G(I) is increasingly sorted and represented as a vector G(I) := (g_1, ..., g_D) of dimension D ≤ n · m. Furthermore, for a fixed grey value g ∈ [0, 255], we define the sub-region R_g(I) ⊂ [1, n] × [1, m] of the image I at grey level g:

R_g(I) := {(x, y) | I(x, y) = g}.    (5)

Obviously, the image I ∈ I can be represented by its grey value vector G(I) = (g_1, ..., g_D) together with the regions R(I) = (R_{g_1}(I), ..., R_{g_D}(I)) of the grey value levels g_1, ..., g_D.
It is clear from the construction that any given image I ∈ I can be transformed to grey values G(I ) and corresponding grey value regions R(I ), and vice versa, given the pair G(I ) and R(I ) the image I ∈ I can be exactly re-constructed.
Besides this transformation, an even more general image operation is possible: a strictly increasing sequence of D grey values, defined through a vector G̃ = (g̃_1, ..., g̃_D) of length D (e.g. the best barnacle vector generated by the BMO algorithm), can be used together with the D sub-regions represented by R(I) to generate a new image Ĩ ∈ I. This image has the same number of grey values and a structure similar to that of the original image I ∈ I, satisfying the following conditions for all pairs of pixel locations, say (x, y) and (x′, y′): (1) I(x, y) = I(x′, y′) if and only if Ĩ(x, y) = Ĩ(x′, y′), and (2) I(x, y) < I(x′, y′) if and only if Ĩ(x, y) < Ĩ(x′, y′).
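This transformation pair can be sketched using NumPy's unique/inverse indexing as the bookkeeping for G(I) and R(I); the function names and the use of an index map to encode the regions are our implementation choices, not prescribed by the paper.

```python
import numpy as np

def decompose(img):
    """Split an 8-bit grey image into its sorted unique grey values G(I)
    and an index map that implicitly encodes the regions R_g(I)."""
    grey_values, inverse = np.unique(img, return_inverse=True)
    return grey_values, inverse.reshape(img.shape)

def recompose(new_grey_values, index_map):
    """Rebuild an image from a (possibly remapped) grey-value vector,
    keeping every region R_g(I) at its original pixel locations."""
    return np.asarray(new_grey_values)[index_map].astype(np.uint8)
```

Recomposing with the original vector reproduces the input exactly, while recomposing with any strictly increasing replacement vector preserves the equality and ordering conditions (1) and (2) above.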

B. FITNESS FUNCTION
Now, to evaluate the quality of the solutions (agents) in the BMO algorithm, we use a formula which utilizes the number of edge pixels, the intensity of edge pixels and the entropy of the whole image. This formula, given in Equation 6, is also used in [7], [8], [10], [31], [32]. To evaluate the fitness of a barnacle Z in the BMO algorithm, the grey value regions R(I) of the original image I are used to construct an image I(Z), from which the fitness F(Z) is calculated:

F(Z) = log(log(E(I(Z)))) × (n(I(Z)) / (H × V)) × H(I(Z)).    (6)

Now, let's discuss the components of this fitness function.
• F(Z ) represents the quality of the output image obtained by using the mapping represented by the solution vector Z on input image I .
• E(I(Z)) represents the sum of edge intensities of the image. The enhanced image is desired to have a larger value of E than the original low-contrast image. It can be obtained by first applying an edge detector, here the Canny edge detector [33], and then summing the intensities of the edge pixels. The Canny edge detector is used because it is adaptable to various environments; moreover, its parameters permit it to be customized to recognize edges with different characteristics.
• n(I (Z )) represents the number of edge pixels of the enhanced image. The enhanced image should be sharper, which means it has more edge pixels than the original low-contrast image. It can be calculated by counting the number of pixels whose intensity value is above a particular threshold value in the Canny edge image.
• H(I(Z)) represents the entropy of the enhanced image. It is calculated by

H(I(Z)) = − Σ_{i=0}^{255} h_i log₂(h_i),

where h_i is the probability of occurrence of the i-th intensity value of the image (terms with h_i = 0 are omitted).
• H × V represents the size of the image.
Our target is to find the mapping for which the fitness value is maximum, as the expected output image should have more edges with higher intensities, and a higher contrast.
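A dependency-free sketch of the fitness in Equation 6 follows. Note that the paper uses the Canny edge detector, while this sketch substitutes a simple gradient-magnitude threshold; the threshold value 20 and the "+ e" guard inside the double logarithm are our own assumptions.

```python
import numpy as np

def fitness(img):
    """Sketch of Equation 6: F = log(log(E)) * (n_edges / (H * V)) * entropy."""
    img = img.astype(np.float64)
    gx = np.abs(np.diff(img, axis=1))[:-1, :]       # horizontal neighbour differences
    gy = np.abs(np.diff(img, axis=0))[:, :-1]       # vertical neighbour differences
    mag = np.hypot(gx, gy)                          # gradient magnitude (stand-in for Canny)
    edges = mag > 20.0                              # assumed edge threshold
    E = mag[edges].sum()                            # sum of edge intensities
    n_edges = int(edges.sum())                      # number of edge pixels
    hist = np.bincount(img.astype(np.uint8).ravel(), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -(p * np.log2(p)).sum()               # H(I(Z))
    H, V = img.shape
    return np.log(np.log(E + np.e)) * (n_edges / (H * V)) * entropy  # "+ e" guards log(log(0))
```

As expected, a flat image scores zero on every factor, while a high-contrast image with many strong edges scores strictly higher.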
To show the effectiveness of each component and of combinations of components of the fitness function, experiments were performed; Table 1 shows the obtained results in terms of PSNR [19], SSIM [34] and VIF [35] values.
The flowchart of the proposed model is presented in Figure 2.

C. COMPUTATIONAL COMPLEXITY
The computational complexity of any algorithm depends on the dimension of the problem, the maximum number of iterations and the other operations performed; it is usually expressed in big-O notation. The computational complexity of iEBMO is O(maxIter × popSize × D_s × t_fit), where maxIter represents the maximum number of iterations, popSize the number of candidate solutions, D_s the dimension of the search space, and t_fit the time required to calculate the fitness of a particular solution.

V. RESULTS AND DISCUSSION
This section presents the experimental results and their analysis on different degraded images. Quantitative comparisons have been made with nine well-known state-of-the-art algorithms to establish the applicability of iEBMO to image contrast enhancement.

A. EXPERIMENTAL SETUP
To show its effectiveness in image contrast enhancement, iEBMO has been tested on the Kodak dataset [12], released by the Eastman Kodak Company for unrestricted usage. The dataset contains 24 lossless, true color (24 bits per pixel) images at 768 × 512 resolution, available in PNG format. This dataset is widely used for testing image processing and image compression techniques. The image samples were converted to their corresponding gray scale images, which serve as the ground truth (GT) in our experiments. The contrast of the GT images is then reduced to 10%, and these reduced-contrast images serve as the inputs of our method.
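The degradation step can be mimicked as follows; note that "contrast reduced to 10%" is interpreted here as linearly shrinking intensities to 10% of their spread around the mean, which is one plausible reading rather than the paper's exact procedure.

```python
import numpy as np

def reduce_contrast(gt, fraction=0.10):
    """Create a degraded input by shrinking the intensity range of a
    ground-truth image to `fraction` of its span around the mean.
    One plausible reading of "contrast reduced to 10%" (assumption)."""
    gt = gt.astype(np.float64)
    mean = gt.mean()
    low = mean + (gt - mean) * fraction
    return np.clip(np.round(low), 0, 255).astype(np.uint8)
```

The degraded image keeps the ordering of grey levels but occupies a much narrower intensity band, which is exactly the situation the grey level remapping is meant to reverse.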
The proposed approach is compared with nine state-of-the-art contrast enhancement methods. These are (i) conventional histogram-based image contrast enhancement approaches [4], [36], [37]; (ii) a GA-based image contrast enhancement approach [5]; (iii) CS-based image contrast enhancement [8]; (iv) PSO-based image contrast enhancement [38]; (v) ABC-based image contrast enhancement approaches [9], [27], [39]. The performance evaluation is conducted using three criteria:
1) Peak Signal-to-Noise Ratio (PSNR) [19]: It is the ratio between the maximum possible power of a signal and the power of the corrupting noise that affects the fidelity of its representation. PSNR is usually expressed on the decibel (dB) scale. We define PSNR via the mean squared error (MSE). Given a noise-free H × V monochrome image I and its noisy approximation K, MSE is formulated as

MSE = (1 / (H · V)) Σ_{i=1}^{H} Σ_{j=1}^{V} [I(i, j) − K(i, j)]².

The PSNR is then formulated as

PSNR = 10 · log₁₀(R² / MSE),

where R is the maximum possible pixel value of the image; for an 8-bit image it is 255.
2) Structural Similarity Index Measure (SSIM) [34]: SSIM is a perception-based model that considers image degradation as a perceived change in structural information, while also incorporating important perceptual phenomena, including both luminance masking and contrast masking terms. Structural information is the idea that pixels have strong inter-dependencies, especially when they are spatially close. The SSIM index is calculated on various windows of an image. The measure between two windows x and y of common size N × N is

SSIM(x, y) = ((2 µ_x µ_y + c_1)(2 σ_xy + c_2)) / ((µ_x² + µ_y² + c_1)(σ_x² + σ_y² + c_2)),

where µ_x and µ_y are the mean intensities of x and y; σ_x² and σ_y² are the variances of x and y; σ_xy is the covariance of x and y; and c_1 = (K_1 L)² and c_2 = (K_2 L)² are constants that stabilize the division, with L the dynamic range of the pixel values.
3) Visual Information Fidelity (VIF) [35]: VIF measures the fraction of the reference image information that can be extracted from the enhanced image, computed over sub-bands as

VIF = (Σ_j I_E,j) / (Σ_j I_R,j),

where I_R,j is the reference image information and I_E,j is the enhanced image information in the j-th sub-band, respectively. In our experiments, the GT image serves as the reference image, so the VIF value ranges between 0 and 1.
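The PSNR criterion, for instance, can be computed directly from the MSE definition above; this is a minimal sketch, with the infinite value for identical images as the conventional edge case.

```python
import numpy as np

def psnr(gt, test, R=255.0):
    """PSNR (in dB) between a ground-truth image and a test image via the MSE."""
    mse = np.mean((gt.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")   # identical images: no corrupting noise at all
    return 10.0 * np.log10(R ** 2 / mse)
```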

B. PARAMETER TUNING
For any meta-heuristic, parameters play a very important role in determining the optimal solution, i.e., appropriate parameter values make the search process more efficient and effective. However, in most cases finding proper parameter values is not an easy task; rather, it needs exhaustive experiments, thereby making the optimization process time-consuming. So, we have taken 9 Kodak images to perform experiments. We have calculated the PSNR, SSIM and VIF values while varying the population size over 5, 10, 20, 30 and 50. The variations in the results are depicted in Figure 3, Figure 4 and Figure 5. From all these figures, we can conclude that a population size of 10 is a reasonable choice. We have also shown how the fitness value varies with the number of iterations in Figure 6. These graphs show the gradual convergence towards the optimal solution in successive iterations. From these graphs, we can observe that around the 20th iteration we obtain the best fitness value in almost all cases. So, we have used a population size of 10, a maximum of 20 iterations, and pl = 0.6 · population size for further experiments. The parameters are set to these values considering a trade-off between computational cost and the obtained results.

C. PERFORMANCE EVALUATION 1) PERFORMANCE ON KODAK DATASET
This section presents the performance of our proposed method on the Kodak dataset [12]. The obtained PSNR, SSIM and VIF values for all 24 enhanced Kodak images are tabulated in Table 2. It can be noticed that the obtained PSNR, SSIM and VIF values are quite significant, and thus their mean values outperform the state-of-the-art methods stated in [27]. The quantitative comparison with state-of-the-art image contrast enhancement methods is displayed in Table 3. The table compares iEBMO with the nine other state-of-the-art methods [4], [5], [8], [9], [27], [36]-[38] and [39] based on PSNR, SSIM and VIF values on the Kodak dataset [12]. For this purpose, the comparison table presents the average values of the three mentioned criteria over the 24 Kodak images; a greater value indicates better image quality. It is very clear from Table 3 that iEBMO is superior to the other nine methods. In terms of PSNR values, iEBMO beats the other methods by a significant margin. The work mentioned in [27] produced the second best result for this criterion; the PSNR difference between it and the proposed method is 8.435, which is quite impressive. In terms of both SSIM and VIF, iEBMO also produced the best results, which proves its superiority over the other methods.
For visual comparison, we have plotted the histograms for input degraded images, corresponding resultant images as well as ground truth images. Figures 7-9 show the obtained results. In these figures, the first column represents the input image, second column represents its corresponding histogram, third column represents the output image and fourth column shows its corresponding histogram; lastly, the fifth and sixth columns represent the ground truth image and its corresponding histogram respectively.
The image histogram acts as a graphical representation of the tonal distribution of the pixels in an image.

2) PERFORMANCE ON STANDARD IMAGES
iEBMO is applied to four standard images, namely Boy, Lena, Lifting body and Zebra. For evaluation purposes, we have used the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) [40] and the Naturalness Image Quality Evaluator (NIQE) [41], as these are no-reference image assessment metrics. The obtained results are compared with state-of-the-art methods in Table 4 and Table 5 respectively. Here, lower values indicate better results. All the state-of-the-art methods produce almost the same results for all four images.
We can observe from Table 4 that, in the case of Boy, iEBMO outperforms the other methods. In the case of Lena, it stands at the sixth position, while for Lifting body and Zebra it produces the third and fourth best results respectively when evaluated using BRISQUE.
Observing Table 5, we can see that in the case of Boy, iEBMO stands at the third position; in the case of Lena, at the second position; and for Lifting body, it produces the fifth best result. It beats the other methods in the case of Zebra when evaluated using NIQE. Considering the overall performance, we can see that on average iEBMO produces better results than the other state-of-the-art methods. The average rank is calculated by assigning every method a rank based on its performance on each of the four images, then adding these ranks and dividing by 4. Based on the average rank, a final rank is assigned to each method. Based on these ranks, we can see that iEBMO outranks the other methods when evaluated using no-reference image quality assessment metrics.
A visual comparison of the state-of-the-art methods with iEBMO is given in Figure 10. The first column contains the input image; the second to eleventh columns show the results of Refs. [4], [5], [8], [9], [27], [36]-[39] and iEBMO respectively. Table 6 provides the salient features of the state-of-the-art methods. An important observation from the obtained results is that the state-of-the-art methods are somewhat more complex than iEBMO, as they have several parameters to tune, and yet iEBMO can produce a better solution in most cases.

3) ADDITIONAL TESTING
In addition, we have evaluated our method on two different types of datasets to prove its robustness and effectiveness: (i) a high-definition scene image dataset, MIT-Adobe FiveK [13], and (ii) document image datasets, H-DIBCO 2016 [14] and H-DIBCO 2018 [15]. The MIT-Adobe FiveK dataset contains 5000 photographs taken with SLR cameras, covering a broad range of scenes, subjects and lighting conditions; the photos are in DNG format. The original images of the MIT-Adobe FiveK dataset are converted to grey level images, which serve as the ground truth in our experiments. The contrast of the GT images is then reduced to 10%, and these images serve as the inputs of our method. Figure 11 displays two sample images from the MIT-Adobe FiveK dataset and their enhanced forms. The first column represents the input image, the second column the output image, and the last column the corresponding ground truth image. From Figure 11, it is quite evident that the proposed iEBMO is able to enhance the contrast of low-contrast images quite well.
Besides this, the robustness of our method has also been demonstrated through the evaluation of document images, because binarization of document images tests the strength of the enhancement module more minutely, as it needs pixel-level clarity. Therefore, most binarization methods expect an effective pre-processing step that reduces uneven illumination and background variations to enhance the picture quality. We use two standard datasets, H-DIBCO 2016 [14] and H-DIBCO 2018 [15], provided by the ICDAR group. Together, these datasets provide 20 ancient handwritten images (10 in each), exhibiting various types of degradation such as uneven background illumination, black patches, bleed-through and faded characters. The ICDAR competition organizers also provide GT binary images for these handwritten pages.
To assess the quantitative improvement, we first binarize both the noisy images and the enhanced images using the winning method of DIBCO 2019 [42]. These two types of binarized images are then evaluated with four metrics: (i) F-Measure (FM) [43], (ii) pseudo-F-Measure (Fps) [44], (iii) PSNR, and (iv) Distance Reciprocal Distortion (DRD) [45]. The quantitative comparison before and after enhancement is tabulated in Table 7 and Table 8 for the H-DIBCO 2016 and H-DIBCO 2018 datasets, respectively. For each binarized image, the comparison is made with its corresponding GT image. From these tables, it is clear that iEBMO improves the image quality significantly. Four sample images and their enhanced versions, along with their binarized versions (both before and after enhancement), are shown in Figure 12. The figure reveals a significant reduction of background illumination, which can be visually perceived in the binarized images. Therefore, we may safely claim that the proposed method is able to enhance the image quality for both scene and document images.
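As a minimal illustration of two of these metrics, the sketch below computes FM and PSNR for a binarized image against its GT. Pseudo-F-Measure and DRD additionally require the weighting and distortion maps defined in [44] and [45], so they are omitted here:

```python
import numpy as np

def fmeasure_psnr(binarized, gt):
    """F-Measure and PSNR between a binarized image and its GT.

    Both inputs are boolean arrays where True marks text (foreground).
    This is a sketch of the standard definitions only.
    """
    tp = np.logical_and(binarized, gt).sum()
    fp = np.logical_and(binarized, ~gt).sum()
    fn = np.logical_and(~binarized, gt).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    fm = 2 * precision * recall / (precision + recall) if precision + recall else 0.0

    # PSNR on 0/1 images: the MSE is simply the fraction of mismatched pixels
    mse = np.mean(binarized != gt)
    psnr = 10 * np.log10(1.0 / mse) if mse else float("inf")
    return fm, psnr
```

Higher FM and PSNR indicate a better binarization, so an enhancement step that raises both (as in Tables 7 and 8) has made the text pixels easier to separate from the background.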

VI. CONCLUSION
Image contrast enhancement is considered a crucial pre-processing task of any image analysis system. In this paper, we have proposed an efficient and effective contrast enhancement technique based on the nature-inspired BMO algorithm, named iEBMO. The proposed method is tested on the Kodak dataset and evaluated using PSNR, SSIM and VIF values. It is also compared with nine state-of-the-art methods to establish its superiority. iEBMO is further applied to MIT-Adobe FiveK images, where it shows considerable improvement on these high-definition images. Moreover, to check the robustness of the method, it is evaluated on the H-DIBCO 2016 and H-DIBCO 2018 datasets. The experiments on all the mentioned datasets have shown promising results. In future, we can utilize this algorithm in other image processing tasks such as image restoration, compression, segmentation and feature matching. It would also be interesting to observe the results obtained by applying this method in the domains of color image processing and multi-resolution processing.
SHAMEEM AHMED is currently pursuing the bachelor's degree in computer science and engineering with Jadavpur University, Kolkata, India. His research interests include machine learning, optimization, and image processing.
KUSHAL KANTI GHOSH received the B.E. degree in computer science and engineering from Jadavpur University, Kolkata, India. His research interests include machine learning, optimization, game theory, and image processing.

FRIEDHELM SCHWENKER (Member, IEEE) received the Diploma and Ph.D. degrees in mathematics and computer science from the University of Osnabrück. He is currently a Privatdozent with the Institute of Neural Information Processing, Ulm University. He (co-)edited 20 special issues and workshop proceedings published in international journals and by publishing companies, and published more than 200 papers at international conferences and in journals. His research interests include, but are not limited to, artificial neural networks, machine learning, statistical learning theory, data mining, pattern recognition, information fusion, and affective computing. He served as the (Co-)Chair of the IAPR TC3 on Neural Networks and Computational Intelligence, and since 2016, he has been the Chair of the new IAPR TC9 on Pattern Recognition in Human-Computer Interaction.