A Deep Probabilistic Sensing and Learning Model for Brain Tumor Classification With Fusion-Net and HFCMIK Segmentation

Goal: To implement an artificial intelligence-based medical diagnosis tool for brain tumor classification, called BTFSC-Net. Methods: Medical images are first preprocessed using a hybrid probabilistic Wiener filter (HPWF). A deep learning convolutional neural network (DLCNN) then fuses MRI and CT images using robust edge analysis (REA) properties, which identify the slopes and edges of the source images. Next, hybrid fuzzy c-means integrated k-means (HFCMIK) clustering segments the disease-affected region from the fused image. Hybrid features such as texture, color, and low-level features are extracted from the fused image using gray-level co-occurrence matrix (GLCM) and redundant discrete wavelet transform (RDWT) descriptors. Finally, a deep learning based probabilistic neural network (DLPNN) classifies tumors as malignant or benign. The BTFSC-Net attained 99.21% segmentation accuracy and 99.46% classification accuracy. Conclusions: The simulations showed that BTFSC-Net outperformed existing methods.

A brain tumor is a life-threatening disease, as it is an uncontrollable growth of malignant brain cells. Brain tumor incidence has risen sharply in the last 30 years, accounting for millions of deaths globally.
For example, 241,037 deaths occurred in 2020 [4]. Early diagnosis therefore increases treatment options and survival chances [5]. Tumors may be treated with radiation, surgery, chemotherapy, or a combination of clinical methods [6]. Brain imaging is used to identify tumors, and the most frequently utilized brain imaging methods [7] are MRI and CT. Distinguishing a tumor from normal brain tissue is crucial, and the ability to derive lesion properties relative to normal tissue aids diagnosis [8]. Lesion size, texture, shape, and placement vary from person to person. To provide a diagnosis, images must be classified and categorized with tissue segmentation in manual, semi-automated, or fully automated forms. In manual segmentation, a radiologist delineates the tumor [9]; expert segmentation requires several enhanced datasets and pixel profiles to identify the lesion border. Fully automated machine learning methods are therefore preferable, and naive Bayes, decision tree, random forest, and support vector machine (SVM) classifiers have been used as supervised glioma classifiers. Traditional machine learning, however, often fails to segment and classify tumors accurately.
Multimodal image fusion [10], including pixel-to-pixel fusion, is performed in the spatial domain. Fusion has also been carried out in the nonsubsampled shearlet transform (NSST) domain using maximum and minimum rules with weighted pixels [11]. Pixel activity sets the weights so that the most active pixels are chosen, where local Laplacian energy and phase congruency are utilized [12]. Further, the source images have been fused by utilizing a parameter-adaptive pulse-coupled neural network with average weighting [13]; this image fusion method uses maximum average mutual information. Several works [14] present morphological component analysis-based convolutional sparsity (MCA-CS) for image fusion, in which the first MCA-CS component of each input image is processed independently so that color information and brightness can be extracted. Several methods are used for the preprocessing operation, including laws of texture energy measures (LTEM) [15], graph filter and sparse representation (GFSR) [16], dual-branch CNN (DB-CNN) [17], Gabor filtering based separable dictionary learning (GF-SDL) [18], and local difference in non-subsampled domain (LDNSD) [19], [20].
Wavelets operate poorly near edges and textured areas, where no phase information is available for the segmentation operation.
This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/ (VOLUME 3, 2022)

Further, an improved U-Net [21] employed considerable VGG-16 based data augmentation to improve segmentation accuracy. A Dice-based loss function was used to predict output labels from both local and contextual information extracted by the triple intersecting U-Nets (TIU-Net) architecture in [22]. Cross-modality deep feature learning (CMDFL) [23] with a bat-optimized loss function was used to improve segmentation performance. In addition, an efficient 3D residual neural network (ERV-Net) [24] and a multi-encoder network (MENet) [25] were developed for better segmentation at low computational cost. Deep multi-task learning (DMTL) [26] with small kernels was also proposed using a CNN encoder-decoder. Due to the small training dataset, MENet and DMTL used an autoencoder branch to guide and regularize the encoder component. In addition, a latent correlation representation learning (LCRL) based variational autoencoder with endpoint clustering is proposed in [27]. A few more segmentation approaches are fast level set-based CNN (FLS-CNN) [28], Bayesian fuzzy clustering with hybrid deep autoencoder (BFC-HDA) [29], and the symmetric-driven adversarial network (SDAN) [30]. The SDAN method, however, suffers from reduced accuracy.
Further, multiscale CNN (MS-CNN) [31] and deep learning with synthetic data augmentation (DLSDA) [32] have been proposed for brain tumor classification, but tumor detection accuracy must still be enhanced. Moreover, fine-tuning-based transfer learning (FTTL) [33] is used to differentiate meningioma from non-meningioma brain images using the Dice coefficient index [34]. The transfer learning CNN (TL-CNN) [35] model was used to classify brain tumors into three types. In [36], the tumor region was expanded and ring-divided, and T1-weighted contrast-enhanced MRI was used to fine-tune an existing pre-trained network; this approach can automatically classify glioma brain tumors. The deep-CNN (DD-CNN) [37] model was developed for identifying brain tumors in MRI images. A 3D-CNN [38] was developed using MRI images of meningioma and pituitary tumors based on CNN multi-scale analysis. Furthermore, generative adversarial networks based on variational autoencoders (GAN-VE) [39] and hybrid deep neural networks (HDNN) [40] were proposed to improve classification performance. These traditional methods still fall short in segmentation and classification performance.
Problem statement: As the size of the dataset grows, conventional models suffer from high computational complexity. Further, the fusion, segmentation, and classification performance of conventional approaches needs to be enhanced. The novelty is that, to date, no common method covers fusion, segmentation, feature extraction, and classification, and these combinations of hybrid methods have not been published in the literature. To overcome these problems, the novel contributions of this article are as follows:
- A novel BTFSC-Net model is developed with preprocessing, fusion, segmentation, and classification stages; this design has not been developed by other authors yet.
- A DLCNN-based fusion network is used to fuse the preprocessed MRI and CT scans with REA analysis, which improves the tumor region.
- HFCMIK is used to segment the tumor region from the fused outcome, so an accurate brain tumor area is detected.
- DLPNN is used to classify benign and malignant tumors from the GLCM and RDWT trained features.
- The simulation results show that the proposed strategy outperformed the alternatives.

The rest of the manuscript is organized as follows: Section II presents the proposed BTFSC-Net analysis. Section III presents results and discussion, along with performance comparisons. Section IV discusses the conclusion and future research.

II. MATERIALS AND METHODS
This section gives a detailed analysis of brain tumor fusion with segmentation and classification methods. Fig. 1 shows the workflow of the proposed BTFSC-Net methodology, and Table I presents the proposed algorithm. Prior to any further processing, medical images are subjected to an HPWF, which eliminates noise from the images. The DLCNN-based fusion network is developed for the fusion of MRI and CT scans while maintaining REA capabilities. REA is utilized here because it is necessary to detect the slopes and borders of the brain images. To segregate the disease-affected area from the fused image, HFCMIK clustering is applied. Furthermore, hybrid features are derived by combining GLCM and RDWT based statistical color features from the fused image. A DLPNN is utilized to categorize benign and malignant tumors.

A. Hybrid Probabilistic Wiener Filter-Based Pre-Processing
The HPWF enhances the image by removing noise using Gaussian mask kernels. Table 2 shows the HPWF algorithm. An image is made up of pixels.
The image is broken into numerous groups. Each pixel group is applied to the HPWF, and the output pixel is an enhanced version of the original pixel. Various noise sources contaminate each image pixel. Consider the original image O_ij with its ith-row, jth-column pixel values; the noisy image X_ij is generated from it. Convolution with a mask is used to estimate the Gaussian noise standard deviation (σ_GN): the convolution between X_ij and the mask generates σ_GN. The mask size is chosen based on a standard noise threshold, and 3x3 and 5x5 kernels are selected. Then, mean filtering with a local mask is applied as a convolution between X_ij and W_ij. In addition, a difference image is estimated by performing a sliding-window subtraction operation. Moreover, a uniform distribution is formed using the variance properties of the distributed outcomes. Finally, a denoised image is generated as the average of the uniform distribution outcomes.
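The filtering steps above (local mean, variance-based shrinkage, averaging) can be sketched in NumPy. This is an illustrative local adaptive Wiener filter under standard assumptions, not the paper's exact HPWF (whose Gaussian-mask and uniform-distribution details are given in Table 2):

```python
import numpy as np

def local_wiener(x, ksize=3, noise_var=None):
    """Illustrative local (adaptive) Wiener filter sketch.

    For each pixel, estimate the local mean and variance over a
    ksize x ksize window, then shrink the pixel toward the local
    mean in proportion to the estimated noise variance.
    """
    pad = ksize // 2
    xp = np.pad(x.astype(float), pad, mode="reflect")
    mean = np.zeros(x.shape, dtype=float)
    sq = np.zeros(x.shape, dtype=float)
    # accumulate sliding-window sums of values and squared values
    for di in range(ksize):
        for dj in range(ksize):
            win = xp[di:di + x.shape[0], dj:dj + x.shape[1]]
            mean += win
            sq += win ** 2
    n = ksize * ksize
    mean /= n
    var = sq / n - mean ** 2
    if noise_var is None:
        noise_var = var.mean()          # crude global noise estimate
    # Wiener gain: 0 in flat regions (pure smoothing), near 1 on edges
    gain = np.maximum(var - noise_var, 0.0) / np.maximum(var, 1e-12)
    return mean + gain * (x - mean)
```

In flat regions the gain collapses to zero and the output is the local mean, while strong edges (high local variance) pass through largely unchanged, which is the behavior the preprocessing stage relies on.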

B. Fusion-Network
The proposed fusion approach can fuse multiple image modalities, such as CT and MRI, PET and MRI, and SPECT and MRI scans, by utilizing two distinct structures. The REA approach is used for image decomposition. It recovers the original slopes and edges of the source features, which enables better slope analysis even if the images are misaligned. In most cases, the slopes of medical features are piecewise smooth, and the analysis represents the edges of the image. When aligning these features, it is important that the edge positions match those in the CT data. This shows that the analysis attribute is dynamic and adapts depending on the medical features it is referenced to; such a feature is called an active slope analysis feature. Blurry input feature maps are then removed from the HPWF-preprocessed input images to generate clear images. In addition, the weighted coefficients are derived by the Fusion-Net. Finally, the HPWF and REA results are combined using the weighted mean fusion rule to obtain the fused output. Table 3 illustrates the proposed fusion algorithm.
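The weighted mean fusion rule at the end of the pipeline can be illustrated with a toy NumPy sketch. The per-pixel weight maps here are hypothetical (derived from gradient magnitude rather than the trained Fusion-Net coefficients):

```python
import numpy as np

def weighted_mean_fusion(a, b, w_a, w_b):
    """Toy weighted-mean fusion rule: per-pixel weight maps w_a and
    w_b are normalized and used to blend the two source images."""
    w_a = np.asarray(w_a, dtype=float)
    w_b = np.asarray(w_b, dtype=float)
    total = w_a + w_b + 1e-12            # avoid division by zero
    return (w_a * a + w_b * b) / total

def grad_mag(x):
    """Hypothetical weight source: local gradient magnitude, so
    pixels with stronger edges contribute more to the fused image."""
    gy, gx = np.gradient(np.asarray(x, dtype=float))
    return np.hypot(gx, gy)
```

In the actual Fusion-Net the weights are learned by the DLCNN; the gradient-based weights above merely show how an edge-aware weight map would steer the blend.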

C. Hybrid Fuzzy C-Means Integrated K-Means Segmentation
Brain image segmentation is critical in tumor segmentation and analysis. Fig. 4 shows the proposed HFCMIK tumor segmentation technique. The limitations of traditional k-means clustering are addressed by adaptive cluster index localization, which introduces the mean property of cluster centers and similarity matching. This adaptive clustering technique is adaptive k-means clustering (AKMC). Each cluster is then used in the fuzzy kernel c-means (FKCM) clustering method for effective brain tumor segmentation.
The HFCMIK algorithm consists of two phases. In the first phase, the AKMC method chooses the initial centroids, which are then fixed for the whole operation. This reduces the number of iterations needed to group related items, and by selecting unique starting centroids the AKMC algorithm avoids poor local optima, leading to a globally better solution. The second phase employs the Euclidean distance-based weighted FKCM technique. In both phases, weights associated with each attribute value range in the data set are processed. The upgraded weighted HFCMIK algorithm uses a weighted ranking technique, which calculates the distance from the origin to each weighted attribute of a data item, as in (8): U = √(Σ_i (W_i X_i)²), where W_i denotes the weight of attribute X_i. For n data points, n values of U are estimated in the format given in (9).
The weighted data points U_i are then sorted by distance and separated into k equal sets, where k is the number of clusters; the middle point (or mean value) of each set becomes an initial centroid. These initial centroids lead to consistent cluster membership. Because HFCMIK uses distance measures to choose group members, the proposed weighted ranking algorithm uses the same distance formula to pick starting centroids; the weighted ranking concept thus optimizes initial center selection. The unique initial centroid selection and the attribute value-based weights are the two major elements of the first phase. HFCMIK then partitions the image into clusters, using centroids to represent each constructed cluster, and re-estimates the segmented output. Table 4 presents the HFCMIK algorithm for brain tumor segmentation.
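The two phases can be sketched in NumPy. The weighted ranking initialization and the fuzzy membership updates below are a simplified stand-in for the full HFCMIK algorithm of Table 4 (the kernelized FKCM variant is replaced by plain Euclidean fuzzy c-means for brevity):

```python
import numpy as np

def weighted_rank_init(X, w, k):
    """Phase 1 (sketch): rank points by their weighted distance from
    the origin (cf. Eq. (8)), split the ranking into k equal groups,
    and use each group's mean as an initial centroid."""
    u = np.sqrt(((w * X) ** 2).sum(axis=1))
    order = np.argsort(u)
    groups = np.array_split(order, k)
    return np.array([X[g].mean(axis=0) for g in groups])

def fuzzy_cmeans(X, centroids, m=2.0, iters=20):
    """Phase 2 (sketch): standard fuzzy c-means updates starting
    from the phase-1 centroids."""
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)     # memberships
        Um = U ** m
        centroids = (Um.T @ X) / Um.sum(axis=0)[:, None]
    return U, centroids
```

Because the ranking-based initialization already places one centroid per ordered group, the fuzzy phase typically needs only a few iterations to converge, which is the motivation given for the two-phase design.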

D. Hybrid Feature Extraction
Certain features of the brain tumor can be extracted and then used as a basis for categorizing the different lesions. Several essential characteristics that help distinguish brain tumors are recovered, including statistical color features, RDWT-based low-level features, and GLCM-based texture features. GLCM is a texture-assessment method that takes into account the spatial relationship between the pixels that make up the brain tumor: it characterizes texture by counting frequently repeated pixel pairs with specific values and spatial relationships. Once the GLCM is constructed, statistical texture characteristics can be extracted from it; this probability matrix gives the likelihood of a certain gray level occurring next to another gray level. Several GLCM characteristics are employed. The next step, extracting the low-level characteristics, uses a two-level RDWT. When RDWT is first applied to the segmented outcome, the output consists of the HH1, HL1, LH1, and LL1 bands. The LL1 band is then subjected to correlation, energy, and entropy calculations. RDWT is then applied again to the LL1 band, producing the LL2, LH2, HL2, and HH2 bands, and the energy, entropy, and correlation characteristics are computed once more on the LL2 band, as shown in Fig. 5.
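A minimal NumPy illustration of the GLCM construction and two of the texture statistics (energy and entropy) follows; the offset, level count, and feature choice here are illustrative, not the paper's exact configuration:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Minimal GLCM sketch: count co-occurring gray-level pairs at
    offset (dx, dy) and normalize to a joint probability P(i, j)."""
    h, w = img.shape
    P = np.zeros((levels, levels))
    src = img[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    dst = img[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    for i, j in zip(src.ravel(), dst.ravel()):
        P[i, j] += 1
    return P / P.sum()

def glcm_energy(P):
    """Energy (angular second moment): sum of squared probabilities."""
    return (P ** 2).sum()

def glcm_entropy(P):
    """Shannon entropy of the co-occurrence distribution, in bits."""
    nz = P[P > 0]
    return -(nz * np.log2(nz)).sum()
```

A uniform texture concentrates P in few cells (high energy, low entropy), while a busy texture spreads it out, which is why these two statistics help separate tumor from normal tissue.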
Finally, statistical color characteristics based on the mean and standard deviation are computed from the segmented feature. All of these features are then merged by array concatenation, producing a hybrid feature matrix as output.
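The color statistics and the array concatenation can be sketched as follows (the feature-vector lengths in the example are hypothetical):

```python
import numpy as np

def color_stats(region):
    """Statistical color features: per-channel mean and standard
    deviation of the segmented region."""
    flat = region.reshape(-1, region.shape[-1]).astype(float)
    return np.concatenate([flat.mean(axis=0), flat.std(axis=0)])

def hybrid_features(glcm_feats, rdwt_feats, color_feats):
    """Array concatenation of the three feature groups into one
    hybrid feature vector."""
    return np.concatenate([glcm_feats, rdwt_feats, color_feats])
```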

E. Deep Learning Based Probabilistic Neural Network Classification
Deep learning models have recently become more important in feature extraction and classification. DLPNN models can use the segmented images to extract highly correlated, detailed spatial, spectral, texture, and color information. They can also recognize the connections between pixels in distinct segments of the segmented images and extract those connections as features.
Once trained with these characteristics, the DLPNN models carry out the classification operation. DLPNN builds more complex features at lower levels by incorporating local characteristics from higher levels of input, making it one of the best choices for classification. By tuning the weights and kernel sizes together with the local connections, the DLPNN can also be made faster. Fig. 6 depicts the DLPNN model for feature extraction and classification, with a thorough examination of each layer, including its dimension, filter or kernel size, number of filters, and parameters. A DLPNN model is created by combining all the layers; it performs the combined feature extraction and classification of brain malignancies.
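For intuition, the probabilistic decision rule underlying a PNN can be sketched with the classical Parzen-window formulation. This is a simplified stand-in for the deep DLPNN of Fig. 6, whose convolutional feature extraction is not reproduced here:

```python
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=1.0):
    """Classical probabilistic neural network (Parzen-window) sketch:
    each test point is scored by a Gaussian kernel sum over the
    training samples of each class, and the class with the larger
    class-conditional density wins."""
    classes = np.unique(y_train)
    # squared distances from every test point to every training point
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    K = np.exp(-d2 / (2.0 * sigma ** 2))          # pattern-layer activations
    # summation layer: average kernel response per class
    scores = np.stack([K[:, y_train == c].mean(axis=1) for c in classes], axis=1)
    return classes[np.argmax(scores, axis=1)]     # decision layer
```

In the paper's model the inputs to this decision stage are the hybrid GLCM/RDWT/color features rather than raw pixels.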

III. RESULTS
This section provides the detailed results analysis and performance comparison with existing approaches using the BraTS-2020 dataset. Furthermore, evaluations of various objective performances and visual objective performances are also presented.

A. Simulation Environment
This work was simulated using MATLAB R2021a with GPU acceleration. The models are trained under realistic conditions on the dataset. The simulations are conducted on an NVIDIA Tesla P100 GPU under the Windows 10 operating system. The 10-fold cross-validation technique is used during training: all models are trained in MATLAB R2021a with 10-fold cross-validation for 1000 epochs at a 0.02 learning rate.
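Although the paper's experiments were run in MATLAB, the 10-fold split logic generalizes; a minimal sketch:

```python
import numpy as np

def kfold_indices(n, k=10, seed=0):
    """Sketch of a 10-fold cross-validation split: shuffle the sample
    indices once, cut them into k folds, and yield (train, validation)
    index pairs, one per fold."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val
```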

B. Dataset
The performance of the proposed network is evaluated using the BraTS-2020 dataset [40]. The multi-modal brain MRI studies include 369 images for training, 125 images for validation, and 169 images for testing. T1-weighted (T1), T1ce-weighted (T1ce), and T2-weighted (T2) sequences each contain 80 images, and Flair sequences contain 209 images. The annotations for the training studies are made available for online assessment and the final segmentation competition, but not for the validation and test trials.

C. Performance Evaluation of Proposed Fusion Process
The performance is compared using entropy, structural similarity index metric (SSIM), mutual information (MI), standard deviation (STD), and peak signal-to-noise ratio (PSNR) metrics. Fig. 7 presents the visual comparison of the proposed MRI-CT fusion outcome with conventional approaches such as MCA-CS [14], LTEM [15], DB-CNN [17], GF-SDL [18], and LDNSD [19]. The proposed method produced higher contrast and brightness with an accurate tumor highlight compared to the other fusion algorithms. Table 5 presents the objective comparison of the proposed method with the existing methods; for all performance metrics, the proposed approach outperformed conventional approaches in quantitative performance, as shown in Fig. 8. Fig. 9 compares the proposed MRI-PET fusion result with the visual performance of the same standard methods. Again, the proposed approach produced stronger contrast and brightness, together with an accurate highlight of the tumor, compared to the existing methods. An objective comparison with the same existing methods is shown in Table 6, and the proposed strategy produced better objective performance than traditional methods for every performance parameter, as depicted graphically in Fig. 10.
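Two of the objective metrics, PSNR and entropy, are simple to state; a NumPy sketch follows (SSIM and MI are omitted for brevity):

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio between a reference image and a
    fused/processed image, in dB."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def image_entropy(img, bins=256):
    """Shannon entropy of the gray-level histogram, in bits; higher
    entropy indicates the fused image carries more information."""
    hist, _ = np.histogram(img, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()
```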

D. Performance Evaluation of Proposed Segmentation Approach
This section provides a detailed simulation analysis of the proposed HFCMIK segmentation approach against conventional approaches. Fig. 12 shows the segmentation performance of various approaches on MR, CT, and fused images. The tumor location is not accurately identified by MRI-only segmentation, and CT-only segmentation also results in poor localization. The tumor location is, however, correctly identified when segmentation is performed on the fused MR-CT image. Fig. 11 shows the visual segmentation performance of the HFCMIK approach and the conventional U-Net [21], k-means, ERV-Net [24], and BFC-HDA [29] approaches: the proposed HFCMIK approach localizes tumor areas in MR, CT, and fused images better than the existing algorithms, all of which localize the tumor area improperly.

E. Performance Evaluation of Proposed Classification Approach
This section provides a detailed simulation analysis of the proposed DLPNN classification approach against conventional methods, using multiple metrics: classification accuracy (CACC), sensitivity (CSEN), specificity (CSPEC), precision (CPR), negative predictive value (CNPV), false positive rate (CFPR), false discovery rate (CFDR), false negative rate (CFNR), F1-score (CF1), and Matthews correlation coefficient (CMCC). Table 8 shows the classification performance of the proposed DLPNN approach against other methods, namely TL-CNN [35], DD-CNN [37], FTTL [33], and GAN-VE [39]. Table 8 shows that DLPNN outperformed the existing methods; the graphical representation of the table is presented in Fig. 13.
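All the metrics listed above follow directly from the binary confusion matrix counts (TP, TN, FP, FN); a sketch:

```python
import numpy as np

def classification_metrics(tp, tn, fp, fn):
    """Binary classification metrics of the kind reported in Table 8,
    computed from confusion-matrix counts."""
    acc  = (tp + tn) / (tp + tn + fp + fn)     # accuracy (CACC)
    sen  = tp / (tp + fn)                      # sensitivity / recall (CSEN)
    spec = tn / (tn + fp)                      # specificity (CSPEC)
    prec = tp / (tp + fp)                      # precision (CPR)
    npv  = tn / (tn + fn)                      # negative predictive value (CNPV)
    fpr  = fp / (fp + tn)                      # false positive rate (CFPR)
    fdr  = fp / (fp + tp)                      # false discovery rate (CFDR)
    fnr  = fn / (fn + tp)                      # false negative rate (CFNR)
    f1   = 2 * prec * sen / (prec + sen)       # F1-score (CF1)
    mcc  = (tp * tn - fp * fn) / np.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))  # Matthews corr. (CMCC)
    return dict(acc=acc, sen=sen, spec=spec, prec=prec, npv=npv,
                fpr=fpr, fdr=fdr, fnr=fnr, f1=f1, mcc=mcc)
```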

IV. DISCUSSION
As the size of the dataset grows, conventional models suffer from high computational complexity, and the fusion, segmentation, and classification performance of conventional approaches needs to be enhanced. Multiple deep learning models have been published in the literature, but their complexity is high. This work therefore adopted custom deep learning models with unique combinations of layers (convolutional, MaxPooling, ReLU, flatten, SoftMax), numbers of filters, filter sizes, and stride factors; together, these combinations make the proposed fusion classification model a novel architecture. Furthermore, no common method currently covers fusion, segmentation, feature extraction, and classification, and these combinations of hybrid methods have not been published in the literature. Therefore, a novel BTFSC-Net model with preprocessing, fusion, segmentation, and classification stages has been developed. Initially, HPWF was developed to remove various noises from MRI and CT medical images, which also enhanced the contrast, brightness, and color properties. Then, a DLCNN-based fusion network fuses the preprocessed MRI and CT scans with REA analysis, which improves the tumor region. In addition, HFCMIK segments the tumor region from the fused outcome, so an accurate brain tumor area is detected. Finally, DLPNN classifies benign and malignant tumors from the GLCM and RDWT trained features. The simulation results show that the proposed strategy outperformed the alternatives.

V. CONCLUSION
This work implemented hybrid fusion with segmentation and classification models, which helps radiologists localize the tumor accurately and is also useful in hospitals for computer-aided classification of brain tumors. Initially, the HPWF filtering approach was implemented for noise removal and preprocessing of the source images. In addition, the DLCNN-based Fusion-Net was utilized to fuse the multi-modality source images. Then, HFCMIK-based advanced segmentation was used to localize the brain tumor area. Furthermore, hybrid features were generated from the segmented images using GLCM and RDWT approaches, and DLPNN was used to classify benign and malignant tumors from the trained features. The simulation-based findings proved that the proposed fusion, segmentation, and classification approaches delivered superior performance compared to several conventional methods and that the proposed method can be adopted for real-time applications. This work can be extended with advanced optimization methods for more detailed feature extraction.

A. Robust Edge Analysis
A REA approach is utilized to accomplish contemporaneous fusion of the source medical features throughout the fusion process. Combining MRI and CT images requires accurate fusion, since misalignment is difficult to eradicate. The MRI is first focalized to increase resolution, allowing accurate feature fusion; similarly, progressively lowering the mismatch allows exact image fusion. These two steps are repeated until convergence. The approach also considers the inherent correlation of distinct bands, which was previously overlooked.
A unique active slope approach has been designed for total energy function optimization in these bands. The subproblems are solved efficiently using REA with rapid iterations, and the slope extraction is performed via backtracking. Unlike typical variational approaches, REA has just one non-sensitive parameter. The first step in medical image fusion is aligning the input medical images. Large medical images need reliable similarity assessment without increasing computational complexity, so REA is used to keep the data in the spatial domain. In addition, any mismatch in the medical images would exacerbate the slope analysis; thus, REA is used to determine similarity.
Here, E is the pixel energy and R represents the edge with its slope levels. Since the energy term reflects the cost of each pixel, a low resolution would result in a large shift, causing image overlapping. To avoid such a superficial solution, the slope extraction approach is used, which prevents iteratively shifting an image in the incorrect direction. The algorithm's goal is to minimize the energy function in (18): the problem is first solved for R, and ξ then denotes the optimized slope levels. Table 9 illustrates the proposed REA process.
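As a rough illustration of per-pixel energy and slope (not the paper's optimized E, R, and ξ), gradient magnitude and orientation can serve as a simple edge-energy map:

```python
import numpy as np

def edge_energy(img):
    """Illustrative pixel-energy map in the spirit of REA: gradient
    magnitude as edge strength (E) plus the local slope direction (R).
    Simplified stand-in; the energy minimization and backtracking
    slope extraction of Table 9 are not reproduced here."""
    gy, gx = np.gradient(np.asarray(img, dtype=float))
    E = np.hypot(gx, gy)          # edge strength per pixel
    R = np.arctan2(gy, gx)        # slope (gradient orientation)
    return E, R
```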

B. Deep Learning Convolutional Neural Network Based Fusion Network
Deep learning is frequently employed in image segmentation, classification, and fusion applications. Fig. 14 shows the DLCNN-based Fusion-Net architecture for fusing several image modalities based on features. The Fusion-Net architecture uses convolutional layers to generate precise features; the primary purpose of these layers is to perform convolution between the feature patch and kernel-based weight fusion. The convolution layers applied to the MRI and CT features are represented by (19) and (20).
Here, W1 is the kernel matrix whose weight values are updated, and B1 is the bias followed by the rectified linear unit (ReLU). The convolutional kernel is 3x3, and the first-stage convolutional layer produces 64 feature maps of size 16x16. Equation (21) represents the ReLU operation.
In addition, the features generated by the convolutional layers are used as input to the MaxPooling layer, which extracts the different types of features. The primary purpose of the MaxPooling layer is to identify the intra- and inter-dependencies that exist among the characteristics of the MRI and CT scans.
Utilizing inter- and intra-dependencies allows retrieval of both the effect of CT on MRI and the impact of MRI on CT. The MaxPooling operation is shown in (22) and (23). Here, F3 and F4 represent the feature maps output by the MaxPooling layer for the combined MRI and CT features. The second-stage MaxPooling layers produce 128 feature maps of size 16x16, and the MaxPooling kernel size is 2x2.
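The convolution-ReLU and MaxPooling operations of (19)-(23) can be sketched naively in NumPy (single channel, single kernel; the real layers use 64 and 128 feature maps):

```python
import numpy as np

def conv2d_relu(x, w, b=0.0):
    """Valid 2-D convolution of a single-channel map with one kernel,
    followed by a ReLU-activated bias, mirroring Eqs. (19)-(21)."""
    kh, kw = w.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = (x[i:i + kh, j:j + kw] * w).sum() + b
    return np.maximum(out, 0.0)   # ReLU, Eq. (21)

def maxpool2d(x, k=2):
    """k x k max pooling (k=2 matches the MaxPooling kernel size
    stated in the text), cf. Eqs. (22)-(23)."""
    h, w = x.shape[0] // k, x.shape[1] // k
    return x[:h * k, :w * k].reshape(h, k, w, k).max(axis=(1, 3))
```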
Here, F_R refers to the fused feature maps produced by the concatenation layer, W3 denotes the kernel matrix with its updated weight values, and B3 refers to the ReLU-based bias. Fig. 15 shows the receiver operating characteristic (ROC) curve of the proposed method, which plots the true positive rate against the false positive rate of the system; the area under the ROC curve (AUC) is also calculated to summarize the two-dimensional classification performance, and the proposed system achieves an AUC of 0.94. Table 10 presents the feature values for the various methods. The GLCM method calculated the energy and entropy features, the RDWT approach calculated the low-low (LL) and high-high (HH) features, and mean and standard deviation (STD) based color features were calculated; all the features are then combined to form the fused feature map. Feature ranking is carried out using a genetic algorithm-based optimal maximization (GAOM) procedure: the individual RDWT, GLCM, and color features are applied to GAOM, which selects the best features through genetic algorithm-based mutations, so GAOM also acts as a feature selection model in the fusion process. Table 11 presents the performance of the proposed method with and without the different feature extraction models: the second column reports performance with and without GLCM features, the third with and without RDWT features, the fourth with and without color features, and the fifth with and without feature fusion. Table 11 shows that the proposed method with feature fusion outperforms the other combinations.