Competitive Improvement of the Time Complexity to Encode Fractal Image: By Applying Symmetric Central Pixel of the Block

By combining the fundamentals of self-similarity, scaling correlation, and statistical structure, Benoit Mandelbrot formulated the idea of the natural fractal entity, an entity described by those fundamentals. Building on these principles, fractal image coding is already used in many substantial applications, such as image compression, image signatures, image watermarking, image attribute extraction, and even image texture segmentation. Thus, while fractal image coding is relatively new in the field of image encoding, it has gained broad acceptance at a rapid pace. Its beneficial qualities, such as quick decoding, a high compression ratio, and resolution independence at any scale, make these applications conceivable. However, fractal image coding is extremely time-complex and therefore remarkably expensive, which hinders its prevalence. The difficulty is caused by the wide domain of blocks hunted for each range block. In this paper, we propose several improvements to the Jacquin design. First, we use max-pooling as an alternative to the average pooling of spatial contraction, to preserve the value of the block's edge textures. Second, we construct odd-size pixel blocks as an alternative to even-size pixel blocks, to obtain a symmetric central pixel (CP). Finally, before the search starts, we shorten the block space by using the central pixel of the block to convert each eight-bit pixel into a two-bit pixel. As a consequence, the symmetric CP of odd-pixel blocks, the reduction of block space, and edge-pixel selection accomplish faster coding and competitive image quality compared with known exhaustive-search algorithms.


I. INTRODUCTION
Fractal geometry, according to [65], can emulate natural objects from an artificial perspective rather than a real one. Before the discovery of fractal geometry, it was not possible to decode the image of a natural object. Mandelbrot combined the Julia set into a single pattern by continuous iteration of a complex function until finally reaching an attractor, and conceptualized three substantial ideas in a single frame: first, the self-similar constitution of a natural object; second, the scaling dependence that opened the new door of dimensionality as a fractional dimension; and finally, the statistical features of natural objects [1]. Building directly on Mandelbrot's research, [45] presented a significant mathematical contribution concerning the existence and uniqueness of an attractor, as well as iterative contraction operators. Following this development, [5] claimed the reproduction of a natural image as a fractal attractor by introducing Hutchinson's iterative operators. Barnsley named this process the Iterated Function System (IFS). An IFS-coded image involves two steps: first, fractal image encoding, which generates the IFS; and second, fractal image decoding, which uses the collage theorem to reproduce an almost even attractor based on the best-fitting set of IFS. Nevertheless, Barnsley's claim has not been successfully applied to the reproduction of an original image in an automated, self-regulating fashion, since natural objects are not strictly self-similar but only partially or statistically self-similar. Following this, [48] proposed the Partitioned Iterated Function System (PIFS), becoming the first to claim that the similarity of natural objects could be parts-to-parts and not always whole-to-parts. This research elicited a quick response in the image processing community. (The associate editor coordinating the review of this manuscript and approving it for publication was Senthil Kumar.)
However, in Jacquin's method, the encoding time complexity was massive due to the large number of domain blocks searched for each range block. Thus, researchers began to focus on how to reduce encoding time. A panel of researchers attempted to reduce the encoding time using elementary methods such as clustering domain blocks, classifying blocks by geometrical or statistical features, and applying pixel approximation, block partitioning, hybrid techniques, adaptive search, entropy search, block-size modification, heuristic techniques, tree structures, parallel implementation, feature-based techniques, and no-search methods. Following Jacquin's technique, [6] changed the codebook search method to generate address data that displayed a Gaussian-like distribution and coded more effectively than a uniform (random) distribution by using an entropy coding method. Reference [6] took a 12 × 12 pixel domain block while the range block size was 4 × 4 pixels, so every domain block consists of nine range blocks. Reference [72] applied the inner product to confirm the error metric between library and target blocks. Reference [49] showed classifications of blocks based on block geometry. Reference [35] classified the domain blocks into 72 categories using combinations of mean and variance and searched class-wise instead of exhaustively. Reference [71] used triangular blocks identified by a triplet of coordinate points, where matching blocks would share typical angle points.
In quadtree partitioning, [64] implemented variable range blocks, reporting a reduced bit allocation of 0.29. Reference [53] presented a coder that translated a block of pixels into a position vector; for example, a 4 × 4 range block becomes a 16-dimensional position vector. The coder then takes the mean and variance of those position vectors, which serve as the identity of the blocks, uses a B-tree for indexing, and finds the nearest domain blocks. Reference [73] first proposed converting the gray-valued pixels in blocks into a local binary pattern for texture classification. References [85] and [102] omitted certain domains based on a few basic criteria. Reference [90] presents an enhanced formulation of approximate nearest-neighbor search grounded on orthogonal projection and pre-quantization of the fractal transform parameters; in addition, to further boost the efficiency of the new algorithm, an optimized adaptive scheme for the estimated search parameter is derived. Reference [74] transformed eight neighboring pixels into a binary arrangement with clockwise rotation, as shown in Figure 2; the patterns, together with the CP, formed a particular block's identity (for more detail, see [76], [77], [86]).
On the other hand, we suggest transforming all pixels in the domain and range blocks into binary patterns at the final stage of this analysis. After that, to get the optimal distance between them, we apply the Hausdorff metric to the binary values of the blocks. The overall work in this paper modifies a series of stages: the size of the block, the spatial contraction factor and its mechanism, and ultimately the use of the Central Pixel (CP), which converts each 8-bit pixel in a block into a 2-bit pixel. Integrating all modifications in a single frame, we tested a set of 36 standard, commonly used images shown in Figure 4, display the encoding times for several image and block sizes in Tables 3 to 8, and show competitive image quality in Table 5.
The paper is arranged as follows: in Section II, we focus on the literature review; in Section III, we present theoretical considerations; in Section IV, we include the proposed modifications and their methodology for fractal coding; in Section V, we present the proposed algorithm of the method, MPOBPBCPM; in Section VI, we detail the experimental results; and in Section VII, we present conclusions and further directions for improvement.

II. LITERATURE REVIEW AND ANALYSIS
After years of research, several Fractal Image Compression (FIC) methods, other General Image Processing Methods (GIPM), and Image Quality Evaluation Techniques (IQET) have been developed, as we demonstrate in Table 1. It is important to discuss and review this literature.

A. FRACTAL IMAGE COMPRESSION (FIC)
We present a few FIC approaches that have been developed to the current stage, based on their input parameters and expected output.

1) MODIFICATION OF BLOCK DIMENSION
Reference [15] proposed B × B and (3B − 1) × (3B − 1) range and domain block sizes, claiming that the encoding time decreased significantly when the finite-state vector quantization (VQ) approach was combined with partition-based fractal image coding with some adjustment, finally encoding the image at 0.19 bpp. For each n × n range block, [21] suggested domain pools of 3n × 3n, 4n × 4n, 5n × 5n, or 6n × 6n, reducing the computational time needed for a brute-force search to optimize image quality. Conci applied the local fractal dimension (FD) to subdivide the blocks into four classes. Reference [52] researched variable sizes of range and domain blocks in fractal image coding to determine the effect of variability. Reference [20] explains a method of dividing the original image into homogeneous blocks, achieving a significant speedup compared to the exhaustive search method.

2) AVERAGE POOLING METHOD AND POOLING FACTOR
The average pooling layer in Convolutional Neural Networks records the average pixel value in the template [3]. It smooths the pixel intensity of the block by averaging all pixels in the block, as shown in Figure 9. In fractal image compression, [46] used it with a pooling factor of 1/2. For the same work, [6] takes a spatial contraction factor of 1/3 with an even-size pixel block. Reference [114] used the Lena, Peppers, F16, and Boats images of size 256 × 256 pixels with a contraction factor of 1/2. Reference [62] uses the Lena image of size 256 × 256 pixels, an 8 × 8 range, and a pooling factor of 1/2.
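As an illustration of this contraction step, a pooling factor of 1/2 averages every 2 × 2 tile of a domain block. This is a minimal NumPy sketch, not any cited author's exact code; the function name `average_pool` is ours:

```python
import numpy as np

def average_pool(block: np.ndarray, k: int = 2) -> np.ndarray:
    """Contract a (k*m, k*n) block to (m, n) by averaging each k x k tile."""
    m, n = block.shape[0] // k, block.shape[1] // k
    return block[:m * k, :n * k].reshape(m, k, n, k).mean(axis=(1, 3))

# A 4x4 domain block contracts to 2x2 with pooling factor 1/2.
d = np.arange(16, dtype=float).reshape(4, 4)
print(average_pool(d, 2))   # [[ 2.5  4.5] [10.5 12.5]]
```

With k = 2 this is the classical spatial contraction of the baseline fractal coder; the paper later replaces the mean with a maximum and an odd pooling size.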

3) BLOCK PARTITIONING
Reference [72] proposed the fixed square block partition (FSBP). After that, [67] and [87] also used FSBP. These studies show that FSBP is the simplest of all square block partitioning schemes. In addition, [101] states that this type of block partition is efficient for coding individual object blocks.

4) BLOCK CLASSIFICATION
References [34] and [103] use block variance and mean to classify image blocks, so the range and domain blocks get their respective classes based on the mean value, which reduces the number of domain blocks searched for each range block. Wu contended that the proposed algorithm achieved the same reconstructed image quality as the exhaustive search while considerably reducing the required run-time.
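The mean/variance classification idea can be sketched as follows; the bin counts and normalization constants here are illustrative assumptions of ours, not the exact scheme of [35] or [103]:

```python
import numpy as np

def block_class(block: np.ndarray, mean_bins: int = 8, var_bins: int = 9) -> int:
    """Map a block to one of mean_bins * var_bins classes (72 for 8 x 9),
    so a range block is only compared against domains in the same class."""
    m = block.mean() / 256.0                       # normalized mean in [0, 1)
    v = min(block.var() / 1024.0, 0.999)           # crude variance normalization (assumed)
    return int(m * mean_bins) * var_bins + int(v * var_bins)

dark = np.full((4, 4), 10.0)
bright = np.full((4, 4), 200.0)
print(block_class(dark), block_class(bright))      # dark and bright blocks land in different classes
```

During encoding, one would bucket all domain blocks by `block_class` once, then search each range block only inside its own bucket instead of the whole pool.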
Reference [92] utilized the neighboring concept of the improved vector quantization (VQ) method and named it spatial correlation. Truong denoted four types of range blocks (vertical, horizontal, and two diagonals) and matched domain to range blocks of the corresponding direction using MSE under a threshold. So it is a full search with classified domain and range blocks; Truong's and Jacquin's methods are almost identical except for the classification of range blocks. Reference [105] improves Fisher's layout and achieves 576 classes by using the mean pixel value and variance of blocks.
Reference [95] introduced a new two-step FIC acceleration algorithm. First, condensed statistical vector expressions speed up encoding to twice as fast as baseline fractal compression (BFC) without BFC's loss of image quality. Second, based on the fact that affine self-similarity is equivalent to the absolute value of Pearson's correlation coefficient, a new block classification approach using compact arrangement sets is introduced to speed up the coding process. Experimental results and theoretical study demonstrate that the suggested scheme achieves high performance in both maintaining image quality and encoding efficiency.
Reference [8] suggests a sub-image classification system that hierarchically partitions the domain pool and compares a range only to those domains that belong to the same hierarchical group as the range. It is an extension of Fisher's 24 model. Reference [107] uses a primal-dual algorithm along with a median-based Fisher classification scheme to accelerate the encoding speed, thus achieving robust fractal image coding. Reference [70] proposed a modified hierarchical classification strategy for fractal image coding using adaptive quadtree partitioning to improve the compression ratio of the lossless coding scheme.
Reference [78] presents a new way of mapping the domain and range blocks based on peer-adjacent and mean-difference methods, in which groups of four blocks share their union as a common peer-adjacent domain block. For a given domain block D in the top hierarchy, only the range blocks R in that domain (the second hierarchy) are searched. Each range in the second hierarchy then becomes a lower-hierarchy domain and continues to map ranges in the same way. This can reduce the number of computations, resulting in a much faster runtime. A smaller range size costs more computational time, because more matching processes need to be calculated, while saving only a small amount of storage space.
Reference [55] introduces an approach to estimating the affine parameters of fractal texture identification to reduce computational complexity by splitting the image into several blocks of various sizes. They used a variety of data chunks, such as the Fractal Pattern Chunk (FPC) and the Intermediate Raster Chunk (IRC). Using these chunks, an image is stored in compressed form.

5) REDUCTION OF THE NUMBER OF ISOMETRIC MAPPINGS
Reference [104] proposes a method that utilizes the best polynomial approximation to decide whether a domain block is sufficiently similar to a specified range block. They use just 2 or 4 isometries instead of 8 to speed up compression in the fractal code, claiming that the probability distribution of the 8 isometries is not uniform.

6) TREE STRUCTURE
Reference [4] obtains quantitative results about the distance measure employed in the search by deriving an incremental procedure for bounding the pixels in the domain row. Reference [4] organizes the domain blocks in a tree structure and uses that scheme to guide the search. Reference [38] introduces the number and locations of the local extreme points in a row inside an image block as classification features, composed into a three-layer tree classifier for the similarity-matching search distance, to speed up the encoding cycle while preserving the accuracy of the reconstructed image.

7) FEATURES BASED TECHNIQUE
References [56] and [57] suggested an effective method of zero-contrast prediction to determine whether or not the contrast factor for a domain block is zero, and to measure the corresponding difference between the range block and the transformed domain block efficiently and precisely. Reference [29] proposed a three-class classification scheme based on the edge properties of image blocks in the sense of self-similarity and claimed an improvement using a class-based threshold. Reference [89] differentiated the blocks using standard deviations. Reference [39] applies a fuzzy structure cataloger to sort image blocks, while for the same purpose [93] uses Particle Swarm Optimization. Reference [62] uses the fractal encoding approach for statistical loss analysis, constructs a box-plot to identify the loss-value distribution, splits it into several sections, and assigns each to the specified model. Experimental results show the method's efficacy.

8) PARALLEL IMPLEMENTATION
Reference [91] states that the dihedral operations for assigning domain blocks require eight separate MSE computations between the specified range block and the chosen domain block, and that these eight operations entail extensive computing in the frequency domain obtained through the discrete cosine transform (DCT). These DCT computations are trivial on a real-time processor, which makes the new algorithm very practical. Thus, parallel implementation is important for calculating the MSE using all the DCT coefficients of the eight orientations of a given image set. By converting coordinates, Truong defines these eight orientations for a domain block and forms explicit DCT formulas for each orientation of the original block. Using these formulas, the data under consideration can be grouped together, repeated calculations of the various operations are avoided, and hence the computation time is reduced. Given that the DCT has good energy compaction, only part of the DCT coefficients need to be used in realistic situations to find the scalar component. Thus, in the frequency domain, the coder further reduces the computational difficulty of minimizing the least-squares error. Reference [75] proposes reducing the fractal encoder's time complexity by taking into account the data dependency between different operating modules, which exploits parallelism effectively while preserving image quality.

9) NO SEARCH METHOD
The time-consuming components that [36] modifies are the parameter search, the error calculation, the number of range blocks, and the domain search of fractal coding, accelerated with a new no-search technique. Reference [36] splits the range block R into smaller sets and fixes the domain block position according to that of the range block. By defining a concept called same degree, [36] tests the similarity between two blocks to induce the range block location and the corresponding domain block.

B. GENERAL IMAGE PROCESSING METHODS (GIPMs)

1) THE CP AND BLOCK DIMENSION
Reference [40] suggested Gray-Tone Spatial-Dependence Matrices and demonstrated how resolution cells connect to their nearest neighbors in 3 × 3 and 4 × 4 pixel blocks (shown in Figure 1). Reference [42] proposed a region descriptor named the center-symmetric local binary pattern (CS-LBP) descriptor, which combines the strengths of the Scale Invariant Feature Transform (SIFT) descriptor [63] and the LBP (Local Binary Pattern) texture operator [74]. Heikkila claimed several advantages for the descriptor, such as tolerance to illumination changes, robustness in flat image areas, and computational efficiency. The Center-Symmetric Local Binary Co-occurrence Pattern was suggested by [94], whose feature is extracted based on the diagonally symmetric elements around the center. Motivated by the concepts of center symmetry and diagonal symmetry, we recommend odd sizes for both the pixel blocks and the spatial contraction.

2) SPATIAL CONTRACTION METHOD
Reference [11] claims that max-pooling as a spatial contraction has recently become popular because of its better performance and suitability for sparse images. Reference [51] finds that max-pooling works better than average pooling (not yet used in FIC) for computer vision tasks such as image classification. Reference [112] analyzes both even and odd pooling sizes based on location information, turning it into a displacement function, and finds better results for max-pooling.

3) BLOCK BINARIZATION USING CENTRAL PIXEL
Reference [73] first proposed converting the gray-valued pixels in blocks into a local binary pattern for texture classification. Reference [74] transformed the eight neighboring pixels into binary patterns while keeping the central pixel intact, and finally paired the central pixel with the combined binary 8-neighborhood; together these formed a particular block's identity for recognizing blocks in texture, as shown in Figure 2. References [77] and [76] use this idea in facial recognition. Reference [86] suggests an approach to image compression based on the Local Binary Pattern (LBP) as a dispersion of local contrast: each neighbor of a 3 × 3 matrix is binarized using the central pixel value as a threshold, resulting in an 8-bit binary code that, together with the initial value of the central pixel, forms a local block descriptor (BD) stored in a newly proposed Local Binary Compressed format. Szoke shows improved compression-ratio results.
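A minimal sketch of the basic LBP idea described above (our own illustrative code, not the implementation of [74] or [86]): the eight neighbors of a 3 × 3 block are thresholded by the central pixel and packed into an 8-bit code:

```python
import numpy as np

def lbp_code(block3x3: np.ndarray) -> int:
    """8-bit Local Binary Pattern of a 3x3 block: each neighbour becomes 1
    when it is >= the central pixel, read clockwise from the top-left."""
    c = block3x3[1, 1]
    # clockwise order of the 8 neighbours around the centre
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = [1 if block3x3[r, col] >= c else 0 for r, col in order]
    return int("".join(map(str, bits)), 2)

b = np.array([[9, 1, 7],
              [2, 5, 8],
              [3, 6, 4]])
print(lbp_code(b))   # an integer in [0, 255] identifying the local texture
```

Storing this 8-bit code plus the central pixel value, as [86] does, replaces nine 8-bit pixels with one short descriptor per block.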

C. IMAGE QUALITY EVALUATION TECHNIQUES (IQETs)
There are two classes of IQETs: subjective and objective. Besides, at the time of fractal image encoding, bit allocation is also a vital consideration. We review objective IQETs and bit allocation in this section.

1) OBJECTIVE IMAGE QUALITY EVALUATION TECHNIQUES
Reference [115] claims that the deterioration of decoded images in fractal coding derives mainly from the poor self-similarity of the input image induced by the collage errors in the encoding process, and thereby produces a changed image that exhibits good self-similarity and well approximates the original image. Zou then suggested a procedure for optimizing the number of range blocks utilizing the collage error. Reference [108] states that IQETs can be categorized as full-reference (FR), no-reference (NR), and reduced-reference (RR) methods. For example, the mean squared error (MSE), the peak signal-to-noise ratio (PSNR), and the Structural Similarity Index (SSIM) are FR methods. Reference [108] claims that No-Reference SSIM (NSSIM) scores an image's quality without prior training or learning and can automatically calculate image quality by measuring image luminance, contrast, structure, and blurriness. WPSNR is a generalization of the PSNR metric that calculates the perceptually weighted distortion parameter of a block using the Lagrange bit-allocation method in the encoder. WPSNR reflects the visual benefit demonstrated by the results of two standardized subjective tests [30].
During the decoding process, the encoded IFS data extracted from the image are decoded into a fractal image by the decoder. Therefore, for every corresponding original (reference) image, we need a group of numerical measurements of fractal image quality with the correct tools. Reference [32] grouped numeric measures of image quality according to their findings; however, we kept the Image Quality Evaluation Techniques (IQETs) in a single set comprising seventeen measures, as follows. The Mean Square Error (MSE) confirms the pixel intensity error. The image signal relative to its noise is tested by SNR and PSNR, and the Weighted Peak Signal-to-Noise Ratio (WPSNR) [30], [61] is used to validate PSNR. We applied the Structural Similarity Index Metric (SSIM) [100] to determine structural similarity. Besides, the Edge Strength Similarity Index Metric (ESSIM) [19], [111], the Multi-Scale Structural Similarity Index (MSSSIM) [100], the Spectral Similarity Index (SRSIM) [109], and the No-Reference Structural Similarity Index Metric (NSSIM) [108] were used to certify the SSIM values. To measure the similarity of feature information, we used the Feature Similarity Index Metric (FSIM) [110]. We apply Contrast Per Pixel Variance (CPPD) [17] and Entropy Difference (ED) [97] for contrast and luminance tests. The GMSD tool tested the Gradient Magnitude Similarity Deviation. Finally, the tools of Sparse Feature Fidelity (SFF) [16] and the Image Fidelity Measure (IFM) [31], [32] tested the fractal decoded image to justify its quality. We show all numerical test results in Table 5; we tried all seventeen quality tools to evaluate the decoded fractal image and obtained a set of competitive results.
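For reference, the two simplest FR metrics in this list, MSE and PSNR, can be computed as in the following sketch (these are the standard textbook definitions; the helper names are ours):

```python
import numpy as np

def mse(ref: np.ndarray, dec: np.ndarray) -> float:
    """Mean squared error between reference and decoded images."""
    return float(np.mean((ref.astype(float) - dec.astype(float)) ** 2))

def psnr(ref: np.ndarray, dec: np.ndarray, peak: float = 255.0) -> float:
    """PSNR in dB for 8-bit images; higher means the decoded image is closer."""
    e = mse(ref, dec)
    return float("inf") if e == 0 else 10.0 * np.log10(peak ** 2 / e)

ref = np.zeros((8, 8))
dec = ref + 5.0                        # uniform error of 5 intensity levels
print(round(psnr(ref, dec), 2))       # → 34.15
```

The remaining metrics (SSIM, FSIM, GMSD, and so on) follow the same full-reference pattern but compare local structure rather than raw pixel differences.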

2) BITS ALLOCATION TECHNIQUES (BATS)
During fractal image encoding, bits per pixel (bpp) is very significant: the bit allocation must be confirmed for both the original and the fractal decoded image. It varies with research needs; for example, Jacquin used 6 bpp for the original image while Fisher used 8 bpp. The allocation determines the compression ratio [22], [24], [89].

D. GENERAL SUMMARY AND COMMENTS ON LITERATURE REVIEW
The key result of the above literature review is that fractal image encoding struggles to balance encoding time and image quality optimally and competitively in the spatial domain with even-size pixel blocks and contraction factors. The possible explanations are the block dimensionality, the contraction factors and methods, and the time and space required between two blocks for error evaluation in full search. Table 9 shows the average PSNR, bpp, and ET(s) with the respective image and block sizes of the proposed methods. On the other hand, for most recent research in the literature review, Tables 6 and 7 show that encoding time is expensive relative to PSNR and bpp. All results taken from the literature review are influential for future work on compressing images using fractal theory. We discuss a solution to these issues in Section IV.

III. THEORETICAL CONSIDERATIONS
In this section, we present the mathematical ideas behind fractal theory; for more details, see [2], [33], [37]. Since I_M is an image vector, it has spatial coordinates and pixel intensities. If we introduce some extra properties to make (I_M, f) a Hausdorff space, we need to confirm that no two open sets in the space (I_M, f) have common elements. Let P and Q be two compact sets in I_M such that if p_x, q_y ∈ (I_M, f), then ∃ p_x ∈ P and q_y ∈ Q with p_x ≠ q_y. The distance between the two sets P and Q in the space (I_M, f) can be calculated using the Hausdorff metric,

h(P, Q) = max{ sup_{p_x ∈ P} inf_{q_y ∈ Q} d(p_x, q_y), sup_{q_y ∈ Q} inf_{p_x ∈ P} d(p_x, q_y) }.

References [104] and [7] state that for an object to be represented by IFS codes, three essential steps can be written mathematically. First, average pooling: the smoothing process is an averaging of the pixel intensity I_{x,y} block by block, where x and y are the pixel coordinates of a square block, I_{x,y} = f(x, y), and the pooling window is κ × κ (for details, see [12] and [13]). Second, the block intensity transformation,

T_i(I_{x,y}) = s I_{x,y} + o,

where s and o are the contrast and brightness, respectively. Finally, the geometric transformation,

T_g(I_{x,y}) = a_g I_{x,y} + b_g,   (6)

where T_g is the transformation function, a_g and b_g are the coefficients of the isometric transformation and the translation vector, respectively, and g = 1, 2, ..., 8 is a translation index. Thus, [7] and [104] show an overall transformation T (7) as the composition of these three steps. Now, if we apply this overall transformation T iteratively on all domain blocks separately and combine them in a single plot, we reach a fixed point according to the Banach contraction mapping principle. This collection of transformed domain blocks, matched with the corresponding range blocks, can be used to build a decoded image by the collage theorem.
Mathematically, using Eq. (3) we obtain Eq. (9); then, with the help of Eqs. (9) and (10), we combine and establish Eq. (11). Finally, combining Eqs. (8) and (11), we recognize the overall relation of Eq. (12).
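The Hausdorff metric defined above can be illustrated for finite point sets with a short NumPy sketch (our own illustrative code; `hausdorff` is an assumed helper name, and points are rows of an array):

```python
import numpy as np

def hausdorff(P: np.ndarray, Q: np.ndarray) -> float:
    """Hausdorff distance between two finite point sets P, Q (rows are points):
    h(P, Q) = max( sup_p inf_q d(p, q), sup_q inf_p d(p, q) )."""
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)  # pairwise distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())

P = np.array([[0.0, 0.0], [1.0, 0.0]])
Q = np.array([[0.0, 0.0], [3.0, 0.0]])
print(hausdorff(P, Q))   # → 2.0
```

In the proposed method, this metric is applied to the binarized pixel patterns of a range block and a candidate domain block to measure how far apart they are.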

IV. PROPOSED MODIFICATION ON FRACTAL CODING METHOD
In this section, the entire fractal coding process based on the proposed modifications is discussed, following the encoding and decoding process diagrams shown in Figures 3 and 14.

A. BLOCKS PREPARATION
First, we chose the standard Zelda gray-scale image (ID-32 of Set-A) and resized it as shown in Figures 8 and 7, where the range and domain blocks are center- and diagonally symmetric. Reference [10] mentions that thinning algorithms yield suitable results when considering the 8 neighbors around any pixel regarded as a central pixel. Furthermore, [79] states that researchers mainly use thinning algorithms to extract significant features, layer by layer, from a digital image. Following this, for the purpose of fractal image coding, we take p_c = P(x + 1, y + 1) as the central pixel of the eight neighbors of any block in a pixel coordinate structure. That is, p_c has two vertical and two horizontal neighbors, N_4(p), and four diagonal neighbors, N_D(p), as shown in Figure 6. In an image block, the one CP and the eight neighbors around it each carry a pixel intensity in [0, 2^b − 1 | b = 1, 2, ..., 8] representing the intensity at that position within the block. In spatial coordinates, their position vectors are as follows: N_4(p): (x, y + 1), (x + 2, y + 1), (x + 1, y), (x + 1, y + 2); N_D(p): (x, y), (x + 2, y + 2), (x + 2, y), (x, y + 2).
Subsequently, the 8-neighborhood is N_4(p) ∪ N_D(p) = N_8(p). A spatial function in the image plane relates pixel intensity and position. Mathematically, the central pixel p_c ∈ [0, 2^b − 1], with p_c = f(x + 1, y + 1), helps to binarize the block intensity using its own intensity as the threshold. We made two copies of the resized Zelda standard image and partitioned them into range and domain blocks of sizes α × β and (ps × α) × (ps × β), respectively, as shown in Figures 7 and 8, where α = β = 2d + 1 and ps = 2p + 1. We applied 3 × 3 pixels as the range block size to get the central pixel of each block, with d = 1, 2, 3, 4 and p = 1, 2, 3, 4 depending on image size. Algorithm 1 supports the code to prepare the initial blocks. Besides that, we require five more steps to process the blocks, according to Algorithms 2, 3, 4, 5, and 6.
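The initial block preparation can be sketched roughly as follows, assuming non-overlapping partitions; the helper name `partition` and the 18 × 18 toy image are our own illustrative assumptions, not the exact setup of Algorithm 1:

```python
import numpy as np

def partition(img: np.ndarray, size: int):
    """Split an image into non-overlapping size x size blocks (row-major order)."""
    H, W = img.shape
    return [img[r:r + size, c:c + size]
            for r in range(0, H - size + 1, size)
            for c in range(0, W - size + 1, size)]

img = np.arange(18 * 18, dtype=float).reshape(18, 18)
ranges = partition(img, 3)          # alpha = beta = 2d + 1 = 3  (d = 1)
domains = partition(img, 9)         # ps * alpha = 3 * 3 = 9     (ps = 3)
print(len(ranges), len(domains))    # → 36 4
```

Odd block sizes (3, 5, 7, ...) guarantee that every block has exactly one central pixel at index (size // 2, size // 2), which is what the symmetric-CP construction relies on.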

Construct R(new): apply the value of g_q to each block to construct new range blocks, R(new) = R(old) − g_q × ones(size(R)).

B. SPATIAL CONTRACTION METHOD
1) MAX-POOLING
Max-pooling takes the maximum for every feature map. Reference [11] claims that max-pooling has recently become popular because of its better performance and suitability for sparse images. All the comprehensive research on Jacquin's fractal transform code, however, used average pooling, which is what first prompted us to move to max-pooling, encouraged by its benefits. We use Algorithm 2 to obtain the spatial contraction of a domain block.

2) SIZE OF MAX-POOLING
Reference [112] analyzes both even and odd pooling sizes based on location information, turning it into a displacement function. Zheng observes that in the even case all units have displacements in both horizontal and vertical directions, whereas in the odd case the translation is zero in the horizontal and vertical directions whenever the maximum value lies in the central row or column. What Zheng describes is nothing other than symmetric pixels and their consistency. An odd pooling size returns image data with a peak centered on the central pixel, whereas an even pooling size is not consistent when the researcher needs to maintain peak positions during pooling. So, if researchers think of convolution as an interpolation between the given pixels and a center pixel, with even pooling sizes they cannot interpolate to a center pixel; for odd-sized pooling in a CNN (ConvNet), each of the previous layer's pixels lies symmetrically around the output pixel. In fractal image coding, the average pooling size has been even (size 2) since the baseline method was introduced. In our study, however, we used an odd size, ps = 2p + 1, and reduced a [ps × (2d + 1)] × [ps × (2d + 1)] block to a [2d + 1] × [2d + 1] block using max-pooling; for illustration, every 3 × 3 pixel tile condenses to 1 × 1, which means the spatial contraction factor is 1/3 rather than the 1/2 commonly used by researchers. After the contraction is done, we transform the blocks as follows. The pooling size (ps) as a contraction factor can be used in Algorithm 2. Besides, some further theoretical ideas on building an automated self-regulating system are demonstrated below.
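The odd-size contraction described above can be sketched in NumPy as follows (an illustrative stand-in for Algorithm 2; the function name `max_pool` is ours):

```python
import numpy as np

def max_pool(block: np.ndarray, ps: int = 3) -> np.ndarray:
    """Contract a (ps*m, ps*n) block to (m, n): every ps x ps tile keeps its
    maximum, so strong edge pixels survive (contraction factor 1/ps)."""
    m, n = block.shape[0] // ps, block.shape[1] // ps
    return block[:m * ps, :n * ps].reshape(m, ps, n, ps).max(axis=(1, 3))

d = np.arange(81, dtype=float).reshape(9, 9)   # a 9x9 domain block (ps = 3, d = 1)
print(max_pool(d, 3).shape)                    # → (3, 3)
print(max_pool(d, 3)[0, 0])                    # → 20.0 (max of the top-left 3x3 tile)
```

Compared with averaging, taking the maximum preserves the strongest (often edge) intensity in each tile, which is the motivation stated above for switching pooling operators.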

C. DOMAIN BLOCK's TRANSFORMATIONS
The overall transformation expressed in (7) can be shown in the form of an automated self-regulating system using (14), where (x, y) are the arbitrary spatial coordinates of pixels and z is the pixel intensity at the corresponding spatial coordinates. The parameters a, b, c, ρ are responsible for geometric transformations, s for contrast adaptation, and e and f are the translation vectors for luminance adjustment; Figures 12 and 10 show the processes for a single domain and the full image, respectively.

1) GEOMETRIC TRANSFORMATION
To transform domain blocks, one can use eight Isometric Transformations (ITs) based on the alternate form of (14). Equations (15) and (16) are the vector operators used to get the transformed blocks and are implemented by Algorithm 3. According to [7] and [104], (14) should have at least the three properties discussed in III-A. In this section, we address only the geometric aspect, through Equations (18) and (19), which use (15) and (16). The magnified transformed domain block is shown in Figure 12.
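The eight isometries form the dihedral group of the square and can be enumerated with rotations and a mirror, as in this sketch (our own illustrative code, not Algorithm 3 itself):

```python
import numpy as np

def isometries(block: np.ndarray):
    """The eight isometric transforms of a square block: the 4 rotations of the
    block plus the 4 rotations of its mirror image (the dihedral group D4)."""
    out = []
    for b in (block, np.fliplr(block)):
        for k in range(4):
            out.append(np.rot90(b, k))
    return out

b = np.array([[1, 2], [3, 4]])
ts = isometries(b)
print(len(ts))                              # → 8
print(len({t.tobytes() for t in ts}))       # → 8 (all distinct for this block)
```

During matching, each candidate domain block is tried in all eight orientations against the range block, and the orientation index g is stored in the fractal code.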

2) IMAGE CONTRAST AND BRIGHTNESS ADJUSTMENT
From (14), the contrast s and brightness g of a gray-scale image can be calculated by introducing a third dimension, z, into the spatial coordinates. Equation (14) can be rearranged as follows:

VOLUME 9, 2021
It is clear that we now have an equation depicting the intensity function, where the variables z and T(z) are the domain and range block intensities, respectively. We can estimate a linear regression line, T̂(z) = ŝ_q z + g_q (22), and thus obtain the error in (23), where z = f(x, y). To optimize the error in (23), one takes partial derivatives with respect to g_q and ŝ_q and solves the resulting equations for ŝ_q and g_q, where λ is the number of pixels in each block. The minimum error in (23) determines the match between the block pair D_{2d+1} and R_{2d+1}, where the pixel intensity of the range block follows accordingly. Algorithms 4 and 5 bring this theoretical aspect into effect.
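Setting the partial derivatives of the squared error to zero yields the standard closed-form least-squares solution, which can be sketched as follows (an illustrative Python sketch; the function name is an assumption, and the degenerate-block fallback is a common convention rather than a detail from the paper):

```python
import numpy as np

def contrast_brightness(d_block, r_block):
    """Least-squares estimates of contrast s_q and brightness g_q that
    minimize sum((s*z + g - T(z))**2) over the block pixels, where z are
    the domain intensities and T(z) the range intensities."""
    z = np.asarray(d_block, dtype=float).ravel()
    t = np.asarray(r_block, dtype=float).ravel()
    lam = z.size                      # lambda: number of pixels per block
    denom = lam * (z * z).sum() - z.sum() ** 2
    if denom == 0:                    # flat domain block: fall back to s = 0
        return 0.0, t.mean()
    s = (lam * (z * t).sum() - z.sum() * t.sum()) / denom
    g = (t.sum() - s * z.sum()) / lam
    return s, g
```

For a range block that is exactly a linear map of the domain block, the estimates recover the true contrast and brightness.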

D. FINDING CENTRAL PIXEL OF EACH BLOCK
First, during the block-feed process, we must find the central pixel of each block; this is achieved by the function C_p = B((α+1)/2, (β+1)/2), where α = β = 2d + 1 and d ∈ {1, 2, 3, 4}. Algorithm 7 executes this approach.
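Because the blocks are odd-sized, the central pixel is uniquely defined; a minimal Python sketch (the function name is an assumption) is:

```python
import numpy as np

def central_pixel(block):
    """Central pixel of an odd-sized alpha x beta block,
    C_p = B((alpha+1)/2, (beta+1)/2) in the paper's 1-based indexing."""
    alpha, beta = block.shape
    assert alpha % 2 == 1 and beta % 2 == 1, "block must be odd-sized"
    # Convert the 1-based position (alpha+1)/2 to 0-based array indexing.
    return block[(alpha - 1) // 2, (beta - 1) // 2]
```

With an even-sized block no such symmetric center exists, which is precisely the motivation for the odd-size blocks used here.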

E. THE BLOCK PIXEL BINARIZING PRINCIPLE
Reference [66] notes that, since optical character analysis, identification, and classification of natural images involve a prior image binarization, classical global thresholding methods make it difficult to maintain the visibility of all characters in such situations, and claims that regional binarization is therefore substantially important. They state that image binarization is one of the most important preprocessing measures, contributing both to a substantial decrease in the quantity of information passed on for further analysis and to an improvement in its speed. Following this, we set a function to convert the intensity values of the pixels in each individual block to logical binary digits, as follows.
Read: new domain and range blocks from Algorithms 4 and 6. If the block is a domain block, find its central pixel C_p(D) = B((α+1)/2, (β+1)/2), where α = β = 2d + 1; otherwise, find C_p(R) = B((α+1)/2, (β+1)/2) as in (28). Here I_{x×y} denotes the local pixel intensity of the block over the neighborhood N_8(p), inclusive of the central pixel p_c, and x × y is the array of pixels in the block. Algorithm 8 executes this strategy.
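A minimal sketch of this per-block binarization, assuming the central pixel C_p serves as the local threshold (the exact comparison used in (27)-(28) may differ):

```python
import numpy as np

def binarize_block(block):
    """Binarize a block against its central pixel: pixels with intensity
    >= the central pixel C_p map to 1, all others to 0.  This is an
    illustrative sketch of the paper's regional binarization step."""
    alpha, beta = block.shape
    cp = block[(alpha - 1) // 2, (beta - 1) // 2]   # central pixel C_p
    return (block >= cp).astype(np.uint8)
```

This converts each eight-bit pixel to a single logical bit relative to its own block, so the subsequent block matching operates on far less data than a full intensity comparison.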

F. DEMONSTRATION OF BLOCK INTENSITY SHIFT USING CENTRAL PIXEL
Ψ_{B_{α×β}} is a logical binary-valued matrix of order α × β, formed by using (27) and (28).

G. HUNTING DOMAIN BLOCKS FOR CORRESPONDING RANGE BLOCKS
When Algorithm 9 begins to hunt domain blocks for corresponding range blocks, it converts the pixel intensities of both blocks into binary values and then calculates the optimal error between the domain and range blocks. This is performed by employing the most favorable values of o and s. Thus we obtain a collection of the most advantageous domain blocks for every corresponding range block, as shown in Figure 13. The search proceeds as follows: read the new domain and range blocks and their corresponding CPs from Algorithms 4, 6, and 7; if the block set is a domain block, binarize its pixels as shown in (28) and demonstrated in the example of (29); with ε any suitable standard error, preserve the corresponding IFS data x_ifs, y_ifs, ŝ_q, g_q, and t_ifs; otherwise, go to a new search. Finally, write the complete set of IFS data [x_ifs, y_ifs, ŝ_q, g_q, t_ifs]. FIGURE 13. Blocks mapping using the Hausdorff metric.

V. ALGORITHM OF THE METHOD, MPOBPBCPM
We addressed the following algorithms for the proposed method, and we achieved a series of results by running Matlab code through the Algorithms in Section V. The decoder proceeds as follows: read AI, any arbitrary image (AI) of the same size M × M as the original image, shown in Figure 11; prepare D_p, the initial domain blocks, from the dummy image; transform the spatial pixel coordinates x_ifs, y_ifs into D_p; apply the block-wise intensity transform ŝ_q to each D_p; add the offset g_q as a vector addition into D_p; apply the isometric transforms t_ifs to build each D_p; finally, if the input is [x_ifs, y_ifs, ŝ_q, g_q, t_ifs], calculate R_q = ŝ_q * D_p + g_q * ones(size(D_p)). The IFS data set provides all of this information.

VI. DETAILING OF THE EXPERIMENTAL RESULTS
This section demonstrates the encoding speed of the proposed algorithm together with objective quality assessments of the decoded images. In the experiment, we used all sixty-six images from Sets A and B, shown in Figures 4 and 5.

A. THE EXPERIMENTAL RESULTS ON ENCODING TIME
We have exhibited the encoding times of thirty-six images of different sizes, with several odd-pixel block sizes, in Figure 15. Table 3 shows the encoding speed, while Figure 16 presents a box plot of the processing time. The improvement in processing-time complexity of the proposed method demonstrates the efficiency and overall effectiveness of the scheme, as shown in Table 8. The encoding times for the Set B images are displayed in Table 4.

B. THE EXPERIMENTAL RESULTS ON OBJECTIVE QUALITY MEASURES
The decoder, Algorithm 10, generated the fractal images of Figure 20 using the corresponding IFS data, with the competitive objective image-quality values shown in Table 5. These competitive results indicate the efficiency of the proposed method's algorithms.

C. ANALYSIS, COMPARISONS, AND DISCUSSIONS
Thirty-six images were used by the authors, each with three sizes and two block groups, shown in Table 2, where we compare image sizes, total blocks, and the percentage difference in total blocks. Figure 15 shows the compatibility of image size with block size by displaying how the encoding time fluctuates. For instance, 135−5 and 270−9 are more compatible than 540−9. Figure 16 shows that 270−5 attains the minimum interquartile range (IQR) with three outliers, while 135−5 has no outlier and 270−9 has two, both very close to the maximum value. Thus, we conclude that pixel optimization could be reprocessed further. Table 3 displays the raw encoding times of images of different sizes: the minimum and maximum average times are 0.040 s and 0.829 s for 135−5 and 540−5, respectively. It also depicts the encoding times (ET) of all thirty-six images, where the Lena image (Image ID-18) at different sizes is first compared with the methods proposed by [4], [34], [44], [89], and [43]; according to Table 8, we find a definite improvement for the sizes declared in Table 2. Table 6 presents further encoding results alongside Table 5. Figure 17 displays the ET (s) versus PSNR (dB) plot; the graph shows a linear fit with a negative correlation, indicating that better pixel optimization between the fractal and original images affects the time complexity. Figure 18 shows ET (s) versus CPPD, NLSE, and ED plots, where we use linear, spline, and kernel fits. A simple linear fit models a relationship between two continuous variables, while a spline fit is a smoothing spline whose smoothness varies with the lambda value, here chosen as 0.01. The smoothing spline helps to see the expected value of the dependent variable across the independent variable. The kernel smoother, in turn, produces a curve formed by repeatedly finding a locally weighted fit of a simple curve at sampled points in the domain.
Applying this method, we can observe the relationship between variables and determine the type of analysis or fit to perform [50]. The mean deviation similarity index (MDSI) metric indicates a competitive balance between prediction efficiency and model complexity. At the same time, MDSI is effective, active, and reliable, performing consistently for both natural and synthetic images [69]. Figure 19 presents the Mean Deviation Similarity Index Metric (MDSIM); the negative correlation indicates an inverse relation. For further comparison, Table 9 lists very recent methods in terms of ET (s), PSNR, and bpp. We compared four sample images against all the methods mentioned in Table 9. The bpp and ET (s) of the proposed method are superior, and the PSNR is competitive, while the average of IFM, SFF, ESSIM, and NSSIM is 0.99, indicating quite good image quality. Table 8 reflects the percentage change in encoding time, where the minimum gain is approximately 10.71% compared to [43]. Hu utilizes a 4FFT algorithm for an image-compression coding scheme based on Fourier-transform energy concentration, achieving 0.280 s, as shown in Table 8. On the other hand, [113] reports 0.38 s on the Lena image and an average encoding time of 0.52 s over four images, while the proposed method achieves 0.078 s and 0.073 s. Reference [113] removed the most inappropriate domain blocks for each range block to decrease the search space; before the best-matching search, Zhou optimized the mapping error by adjusting the mapping scheme for the sub-blocks based on an image feature, whereas we propose max pooling, odd pixel blocks, the symmetric central pixel, and binarization of block intensity before the best-matching search, which yields competitive outcomes. Reference [107] uses four images, Pirate, Boat, Peppers, and Living-room, of size 512 × 512 with block size 8 × 8, resulting in an SSIM of 0.89.
Reference [55] reports average SSIM and FSIM values of 0.959 and 0.984, respectively. The proposed method achieved averages of 0.84 and 0.89 for the same measures at size 540−5. Besides, the values of MSSIM, NSSIM, SRSIM, and ESSIM are 0.92004, 0.99211, 0.93837, and 0.99211, respectively. These values suggest that the decoded fractal images are outstanding in terms of features and structure. For image sizes smaller than 243 × 243 pixels, MSSIM returns a value of −∞.

VII. CONCLUSIONS AND FURTHER DIRECTIONS OF IMPROVEMENT
After successfully developing the process through multiple changes, the authors have lifted this scheme to a point where it is substantially comparable to the standard fixed-size block coding method based on full search. Operating on the ISO standard image Sets A and B, the proposed coding scheme compresses to 0.1513 bits per pixel on average, with a compression ratio of 53, which is still suitable for preserving the image quality shown in Table 7. Unlike the method of Jacquin, which comprises an exhaustive search strategy with simplified transformation classes, our lightweight search procedure requires no domain classes. However, regarding the improvement and implementation of research on extensive domain search, the authors believe that a great deal of work remains to be explored to produce even better outcomes, given the success of this research. For example, [81] used Lena, Peppers, and Cameraman (C-man) images of size 256 × 256 pixels with block size 4 × 4 pixels and achieved 0.049 s, while the proposed method achieves 0.167 s. Thus, the authors' proposed algorithm could further increase its efficiency in combination with other rapid fractal algorithms.