Cervical Cancer Diagnosis Using Intelligent Living Behavior of Artificial Jellyfish Optimized With Artificial Neural Network

Cervical cancer affects nearly 4% of women across the globe and can be fatal if not treated at an early stage. A few decades ago, the mortality rate was much higher than today's statistics. This improvement has come about because most women are now aware of the disease and undergo regular health examinations, mainly for cervical cancer screening. However, only an accurate diagnosis is helpful for further treatment. Many works have been carried out on accurate diagnosis, but they still show limitations in prediction accuracy. In this work, an efficient algorithm is proposed for the accurate diagnosis of cervical cancer. A meta-heuristic called the artificial Jellyfish Search (JS) optimizer is combined with an artificial neural network (ANN) to tackle this problem. The proposed algorithm, called JellyfishSearch_ANN, is employed to classify the cervical cancer dataset with four target variables based on the examination performed. JellyfishSearch_ANN provides outstanding results compared with the other classifiers taken for comparison; in particular, its classification accuracy is found to be above 98.87% for all targets.


I. INTRODUCTION
Cervical cancer is one of the gynecological cancers, which occur in the reproductive organs of women; other sorts are ovarian cancer, uterine cancers, vaginal cancers and others. The cervix is the lower part of the uterus that ends at the vagina. When DNA mutations take place in the healthy cells of the cervix, this mostly ends in cervical cancer. There is a possibility of the cancer spreading to various nearby organs such as the vagina, lungs, liver and others. Middle-aged women are most prone to this disease. It is of various types, namely adenocarcinoma, squamous cell carcinoma and adenosquamous carcinoma, and it may spread in four different stages: stage 1, stage 2, stage 3 and stage 4. The disease is caused by the sexual transmission of human papillomavirus (HPV) from an infected person to a healthy one [1]. As the symptoms are hard to diagnose at the early stage, most affected women become victims of this disease.

(The associate editor coordinating the review of this manuscript and approving it for publication was Shuihua Wang.)
Every female should undergo a regular checkup at the right period of time, but the symptoms are hard to analyse. The disease may be averted by means of the Pap smear screening technique and with the help of the HPV vaccine. This cancer may be prevented by a correct analysis with proper assistance; it is one of the most preventable and treatable cancers if identified early and handled effectively [2], [3]. The proposed approach, JellyfishSearch_ANN, serves this purpose.

VOLUME 10, 2022. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/

A number of works carried out on cervical cancer are mentioned below. In [4], strategies including filters and wrappers were used to choose attributes from the dataset utilized in that work. The missing and imbalanced data were dealt with using oversampling, undersampling and other sampling strategies. Among the classifiers considered, namely logistic regression, support vector machine (SVM), neural networks (NN) and Decision tree, the Decision tree classifier yielded 97.5% accuracy. The attributes patient_age, smoking, hormonal_contraception, first_sexual_intercourse, pregnancy_numbers and STDs were considered the relevant features [4]. In [5], the Pap smear images of three datasets were classified with the fuzzy c-means algorithm, with simulated annealing together with a wrapper filter applied for feature selection; the maximum accuracy produced for the single-cell image dataset was 98.88%.
In [6], an SVM was utilized with magnetic resonance images to find the existing condition or stage of persons affected with cervical cancer. An ensemble classifier [7] was developed by combining techniques such as linear discriminant, support vector machine, K-nearest neighbor, and boosted and bagged trees. This was considered a computer-assisted screening system and was applied to the SIPaKMeD dataset, where it furnished an accuracy of 98.27% in classifying unhealthy cells from healthy cells. The long short-term memory recurrent neural network (RNN) has been optimized using meta-heuristic techniques including cuckoo search, genetic algorithm, gravitational search, particle swarm optimization and grey wolf optimizer. The test was carried out on 668 records and 34 attributes of the dataset [11] after preprocessing, and a maximum accuracy of 96% was achieved [8].
A deep convolutional neural network (CNN) approach was employed for classifying the four targets of the cervical cancer dataset, and an accuracy above 90% was obtained for all the targets [9]. The Gauss Newton representation-based algorithm (GNRBA) [10] was employed for classifying the cervical cancer data with four targets; this approach produced 95.35% accuracy for the Hinselmann target variable.
Transfer learning models such as VGG-19, ResNet-50, InceptionV3, AlexNet and SqueezeNet were used in [12] to detect non-cancerous and cancerous cells in the cervix. Preprocessed Pap smear images were used to analyze the structural completeness of these cervical cells. Among these pretrained models, SqueezeNet produced the best validation accuracy of 96.9%. In [13], the ResNet-50 model was employed on Pap smear images to classify cervical cells into cancerous cells with three classes (superficial squamous, columnar epithelial and intermediate squamous) and non-cancerous cells with four classes (mild, moderate and severe dysplasia, and carcinoma in situ). In that work, ResNet yielded 74.04% accuracy, compared with only 44% for its CNN competitor.
Methods such as Random forest, Decision tree, Logistic regression and AdaBoost were applied to the dataset [11], including a weighted version of it [14]. The performance of each model was assessed on both the weighted and the unweighted version of the dataset, and the maximum accuracy observed from this work is below 86%. In [15], the Random forest classifier combined with a recursive feature elimination technique was applied to the dataset [11], along with other classifiers such as Naïve Bayes, KNN, SVM and Decision trees. The performance was assessed using cross validation techniques, and the Random forest algorithm produced the highest accuracy of 93% among all of them.
Cervical cancer classification was conducted using a total of 915 images with four classes, namely normal, adenocarcinoma, precancer and squamous cell cancer. These images were preprocessed and classified using EfficientNetBD, a pretrained model, which was found to be 94.5% accurate in testing [16]. In [17], a dataset was built from optical measurements, and machine learning algorithms such as eXtreme Gradient Boosting, Naïve Bayes, Random forest and a convolutional neural network were used along with an optoelectronic sensor to predict cervical cancer at an early stage. For this optically derived dataset, these machine learning algorithms were observed to produce accuracies of about 95%.
From the literature, it is found that several works have been carried out so far on the diagnosis of cervical cancer. Overall, it is noted that they have limitations in achieving better accuracy and have not focused on reducing computation time. To address both, that is, to increase the cervical cancer classification accuracy and decrease the computation time, a classifier named JellyfishSearch_ANN (JSA-ANN) is proposed. Here the JS optimizer is combined with the ANN algorithm for the first time and applied for discriminating cervical cancer samples from normal ones. The Jellyfish optimizer is chosen for this work because of its simpler calculations and quicker response compared with other optimizers. The efficiency of this algorithm is compared with that of other similar hybrids, and the details are discussed in the following sections. This paper is structured as follows: a few related works from the literature are discussed above in the introduction. Section 2 describes the materials and methods deployed in this work. The experimental settings are elaborated in section 3. The results of the proposed work and of the comparison algorithms are analyzed in section 4. Section 5 finally concludes the work.

II. MATERIALS AND METHODS
The details of the algorithms used in this proposed work are discussed in this section. The working of ANN classifier, the concepts behind JSA and the procedure of fusing JSA with ANN are elaborated below.

A. THE ANN CLASSIFIER
The ANN is a parallel distributed processing system. Its architecture consists of a huge number of simple, densely connected processors. The multi-layer perceptron architecture is the most widely used for major applications in the current era. Figure 1 displays a multilayer feed-forward network with three layers. In this architecture, each neuron has a unidirectional connection to all neurons of its adjacent layer, strictly without forming a loop; that is, the input information is propagated in one direction, the ''forward direction''. Each connection bears a weight that is adjusted, based on the learning rule, at the time of backward error propagation [18], [19]. Mostly, ANNs are trained on a dataset using a gradient descent optimizer called the back propagation (BP) algorithm, which minimizes the error generated during each training phase. During the training procedure, a non-linear mapping of the input variables to the output variables is attained. Thus, the non-linear relationship between these variables is recognized by adjusting the weights using BP [20]. The steps for training an ANN using BP for classification are given as follows; for the sigmoid activation function, the hidden-layer output is calculated as in (3).
4: Output layer computation
4.1: Net input at the output layer: this is calculated for every output neuron m using (4), where w2_mh is the synaptic weight between output neuron m and hidden neuron h, b2_m is the bias of output neuron m, and M is the number of output nodes.

4.2: Actual output calculation
The network output of each output neuron m is calculated using (5).

5: Error value estimation:
The error value, obtained using (6), is the difference between the target output and the network output, where T^j_m and y^j_m are respectively the target and the network output of output neuron m for the j-th training sample.
6: Weight adjustment: minimize the error and improve accuracy.
6.1: Weight update: Equation (7) gives the synaptic weight modification from the hidden layer to the output layer as Δw2^j_mh = η δ_m h_h, where η is the learning rate, δ_m is the output-layer error term and h_h is the output of hidden neuron h.
From step 1 to step 4, the signals flow in the forward direction; the errors are then calculated at step 5 and propagated along the backward path for weight adaptation. The set of weights and biases that exists at the end of the training process is used to discriminate unseen data. This final model is achieved by minimizing the error calculated at every iteration. The BP algorithm is easily prone to trapping in local minima and to slow convergence [21], [22]. Metaheuristic algorithms are therefore used to train the ANN to overcome these problems.
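The BP steps above can be sketched as a single training step; the layer sizes, learning rate and the single sample below are illustrative placeholders, not the paper's settings.

```python
import numpy as np

# One BP training step for a 3-layer sigmoid network (steps 1-6 above).
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_in, n_hid, n_out, eta = 4, 3, 1, 0.5
W1, b1 = rng.normal(size=(n_hid, n_in)), np.zeros(n_hid)   # input -> hidden
W2, b2 = rng.normal(size=(n_out, n_hid)), np.zeros(n_out)  # hidden -> output
x, t = rng.normal(size=n_in), np.array([1.0])              # one training sample

# forward pass (steps 1-4)
h = sigmoid(W1 @ x + b1)
y = sigmoid(W2 @ h + b2)
mse_before = ((t - y) ** 2).item()                         # error estimation (step 5)

# backward pass and weight adjustment (step 6)
delta_out = (t - y) * y * (1 - y)                          # output-layer error term
delta_hid = (W2.T @ delta_out) * h * (1 - h)               # hidden-layer error term
W2 += eta * np.outer(delta_out, h)
b2 += eta * delta_out
W1 += eta * np.outer(delta_hid, x)
b1 += eta * delta_hid
```

Repeating this step over all training samples until the error stops decreasing yields the final set of weights and biases used on unseen data.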

B. THE JSA CONCEPTS
The Jellyfish search optimizer is an algorithm recently developed by Chou and Truong in 2020 [23]. It is devised based on the movement and behavior of artificial jellyfish in the ocean current. The interesting facts behind the algorithm and the structure of its model are briefly discussed in this section to express its contribution to this work. Jellyfish adapt themselves to a wide range of temperatures and occur in various shapes, sizes and colours; their preying behavior also differs. Their umbrella-like bottom part helps their movement by pushing water outward in order to move forward. A large mass of jellyfish is termed a jellyfish bloom; this swarm formation happens under favorable conditions such as availability of oxygen, food and a suitable temperature. Three idealized rules are defined for this algorithm:
• The movement of a jellyfish is based either on following the ocean current or on the jellyfish swarm; a time control mechanism governs the switch between these movements.
• The jellyfish move towards locations with abundant food inside the ocean current.
• The location of a jellyfish and the corresponding objective function determine the quality of food. The jellyfish population JF is a set of NP individuals, where NP is the population size. Each jellyfish is given as a D-dimensional vector whose element jf_{i,d} is the d-th dimension of the i-th jellyfish, where d = 1, 2, ..., D. The population is not initialized with random values, in order to avoid slow convergence; instead, a simple chaotic map, the logistic map, is used for initializing the population: JF_{i+1} = ω JF_i (1 − JF_i), where ω = 4.0, JF_i is the logistic chaotic value for the i-th jellyfish, and JF_0 ∈ [0, 1] (JF_0 ∉ {0.0, 0.25, 0.5, 0.75, 1.0}) is the initial value.
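The logistic-map initialization described above can be sketched as follows; ω = 4.0 puts the map in its chaotic regime. The population shape is illustrative, and seeding each dimension with a distinct value in (0, 1) is an assumption.

```python
import numpy as np

def logistic_init(NP, D, omega=4.0):
    """Build an NP x D population where row i holds the i-th logistic iterate."""
    pop = np.empty((NP, D))
    x = np.linspace(0.13, 0.87, D)      # JF_0 values, avoiding {0, 0.25, 0.5, 0.75, 1}
    for i in range(NP):
        pop[i] = x
        x = omega * x * (1.0 - x)       # JF_{i+1} = omega * JF_i * (1 - JF_i)
    return pop

population = logistic_init(NP=50, D=10)
```

Chaotic initialization spreads the starting locations over the search space more evenly than a plain uniform draw, which is the stated reason for its use here.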
The jellyfish are attracted to the abundant nutrients available in the ocean current. The direction of the ocean current is given by (12), where JF* is the jellyfish currently at the best location, A_c is the attraction governing factor, μ is the mean location of the jellyfish population, and γ > 0 is a distribution coefficient, experimentally found to be equal to 3.
The new location of the i-th jellyfish at time t+1 is given in (13).
The jellyfish perform two motions, namely passive motion (Type A) and active motion (Type B). Initially they start with passive motion and, over time, switch to active motion. The passive motion is based on a jellyfish's own location, and the updated location at time t+1 is given in (14),
where UB_SS is the upper bound of the search space, LB_SS is the lower bound of the search space, and α > 0 is a motion coefficient, experimentally found to be 0.1. Type B motion is based on the quantity of food found by the jellyfish; the food quantity is decided by the objective function f evaluated at the corresponding jellyfish. A jellyfish JF_j is randomly selected to find the direction of movement of the jellyfish of interest, JF_int. If better food is found at the location of jellyfish JF_j, then JF_int moves towards it; otherwise it moves away. This movement towards the better direction helps in effective exploitation of the local search space. The new location of JF_int and the direction of movement are given in (15) and (16) respectively. The time control mechanism helps a jellyfish to decide its type of motion inside the swarm over time, and also its movement towards the ocean current. This mechanism includes a time control function t_c ∈ [0, 1] and a constant t_0,
where t and iter_max are respectively the current iteration number and the maximum number of iterations. If t_c > t_0, the jellyfish follows the ocean current; otherwise it moves inside the swarm. The function (1 − t_c) then determines the type of motion inside the swarm: if rand(0, 1) > (1 − t_c), the jellyfish performs Type A motion, else Type B motion. After every location update, the boundary conditions are checked based on the bounds of the search space. The main loop of the algorithm proceeds as follows:
3.3: The ocean current direction is calculated using (12)
3.4: The new location of jellyfish i is determined by (13)
3.5: else the jellyfish i moves inside the jellyfish swarm
3.6: if rand(0, 1) > (1 − t_c), the jellyfish exhibits passive motion (Type A)
3.7: The new location of jellyfish i is calculated using (14)
3.8: else the jellyfish exhibits active motion (Type B)
3.9: The direction of jellyfish i is evaluated using (16)
3.10: The new location of jellyfish i is calculated using (15)
3.11: end if
3.12: end if
3.13: The boundary conditions are checked
3.14: The food quantity at the new location is estimated
3.15: The location of jellyfish i, JF_i, is updated
3.16: The jellyfish with the current best food is updated (JF*)
3.17: end for
4: Increment t, t = t + 1
5: Check the stopping criterion, t > iter_max
6: end for
7: Display the training MSE, best fitness value and computation time
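The update rules and the time-control mechanism above can be sketched as follows, under a minimization convention. The hyperparameter values are the ones reported in the text, and the |(1 − t/iter_max)(2·rand − 1)| form of the time-control function is taken from the original JSA paper; the rest is an illustrative fragment, not the paper's implementation.

```python
import numpy as np

BETA, GAMMA, T0 = 3.0, 0.1, 0.5   # distribution coeff., motion coeff., time-control constant
rng = np.random.default_rng(0)

def ocean_current_move(jf, best, mean_pop):
    trend = best - BETA * rng.random() * mean_pop        # ocean-current direction, eq. (12)
    return jf + rng.random(jf.size) * trend              # eq. (13)

def passive_move(jf, lb, ub):
    return jf + GAMMA * rng.random(jf.size) * (ub - lb)  # Type A motion, eq. (14)

def active_move(jf_i, jf_j, f_i, f_j):
    # move towards jf_j if it holds better (lower) fitness, else away: eq. (16)
    direction = (jf_j - jf_i) if f_j < f_i else (jf_i - jf_j)
    return jf_i + rng.random(jf_i.size) * direction      # eq. (15)

def time_control(t, iter_max):
    # t_c in [0, 1], decaying with t so that late iterations favor swarm motion
    return abs((1 - t / iter_max) * (2 * rng.random() - 1))

def choose_motion(t, iter_max):
    tc = time_control(t, iter_max)
    if tc > T0:
        return "ocean_current"
    return "type_A" if rng.random() > (1 - tc) else "type_B"
```

Because t_c shrinks as t grows, the algorithm drifts from current-following exploration towards swarm-based exploitation, matching the loop in steps 3.3-3.16.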

C. CLASSIFIER JELLYFISHSEARCH_ANN (JSA_ANN)
Optimization algorithms are used in neural networks for various tasks such as feature selection, tuning of hyperparameters, optimizing the synaptic weights and biases, and many others. In this work, the JSA algorithm is employed for optimizing the synaptic weights and the bias values of the ANN. This is defined in Algorithm 1 (JSA_ANN), covering both training and testing.

D. THE JSA-ANN CLASSIFIER METHODOLOGY
The ANN is loaded with the training data and the jellyfish population is initialized with the logistic map. The hyperparameters such as the attraction factor, distribution coefficient and motion coefficient are also initialized. Each jellyfish in the population is a vector treated as a set of synaptic weights and biases, so every jellyfish is used to parameterize the ANN, and the fitness (MSE) of every jellyfish is calculated by the ANN. The fitness is calculated for all jellyfish through a fixed number of iterations. The framework of the JSA-ANN classifier is shown in Figure 2.
The movement of a jellyfish towards a location is based on the available food quantity. Its movement, either along the ocean current or into the jellyfish swarm, is decided by the time control function, after which the location of each jellyfish is updated. These steps constitute one iteration, and the process is iterated up to the maximum number of iterations. The JSA thus generates the ANN's synaptic weights and biases in the forward direction, and in the reverse direction the MSE calculated by the ANN is returned to the JSA as the jellyfish's fitness value. The optimal solution is the best jellyfish found over the entire process; this jellyfish is deployed in the ANN to test the unseen data. The JSA uses the logistic map for population initialization to avoid a purely random population, which helps it escape slow convergence and avoid getting trapped in local minima. Compared with other algorithms, the JSA involves much simpler calculations.
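The JSA-ANN coupling above can be sketched as follows: each jellyfish is a flat vector holding every synaptic weight and bias of the 30-61-1 network used in this paper, and its fitness is the training MSE returned by the ANN. The decoding layout (weights first, then biases, layer by layer) is an assumption for illustration.

```python
import numpy as np

N_IN, N_HID, N_OUT = 30, 61, 1
DIM = N_HID * N_IN + N_HID + N_OUT * N_HID + N_OUT   # parameters per jellyfish

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def decode(jf):
    """Split one jellyfish vector into the ANN's (W1, b1, W2, b2)."""
    i = 0
    W1 = jf[i:i + N_HID * N_IN].reshape(N_HID, N_IN); i += N_HID * N_IN
    b1 = jf[i:i + N_HID]; i += N_HID
    W2 = jf[i:i + N_OUT * N_HID].reshape(N_OUT, N_HID); i += N_OUT * N_HID
    b2 = jf[i:i + N_OUT]
    return W1, b1, W2, b2

def fitness(jf, X, T):
    """Training MSE of the decoded network -- the 'food quantity' of this jellyfish."""
    W1, b1, W2, b2 = decode(jf)
    Y = sigmoid(sigmoid(X @ W1.T + b1) @ W2.T + b2)
    return float(np.mean((T - Y) ** 2))
```

The JSA only ever sees `fitness(jf, ...)`, so it can optimize the ANN without any gradient information.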
The computational complexity of the proposed JSA_ANN classifier with population size n is found to be O(n).

III. EXPERIMENTAL SETTING
All the algorithms used in this work are built with MATLAB 2020a, installed on a system with an Intel Core i5 processor at 2.7 GHz and 8 GB of RAM. The randomness in the results is handled by conducting the experiments with 10-fold cross validation over 10 independent trials; the averaged outputs are used for discussion. For comparison with JSA_ANN's performance, a few classifiers are built from scratch by combining the ANN with different nature-inspired metaheuristic optimization algorithms, namely Bacterial foraging optimization (BFO) [24], Genetic algorithm (GA) [25], Particle swarm optimization (PSO) [26], Invasive weed optimization (IWO) [27] and Firefly optimization (FA) [28], [29], to obtain the Bacterialforaging_ANN (BFO_ANN), Genetic_ANN (GA_ANN), ParticleSwarm_ANN (PSO_ANN), InvasiveWeed_ANN (IWO_ANN) and Firefly_ANN (FA_ANN) classifiers respectively. The performance is evaluated based on the classifiers' learning capability and their generalization ability on the cervical cancer dataset with four targets. The fitness function used in this work is the mean square error.
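The evaluation protocol above (10-fold cross validation repeated over 10 independent trials, with results averaged) can be sketched as follows; the fold splitter, the constant dummy predictor and the synthetic data below are illustrative stand-ins, not the paper's MATLAB implementation.

```python
import numpy as np

def kfold_indices(n, k, rng):
    """Shuffle sample indices and split them into k disjoint test folds."""
    return np.array_split(rng.permutation(n), k)

def evaluate(X, y, predict, folds=10, trials=10, seed=0):
    """Average test accuracy over `trials` independent runs of k-fold CV."""
    rng = np.random.default_rng(seed)
    accs = []
    for _ in range(trials):
        for test_idx in kfold_indices(len(y), folds, rng):
            accs.append(np.mean(predict(X[test_idx]) == y[test_idx]))
    return float(np.mean(accs))
```

Averaging over 10 × 10 = 100 test folds smooths out the run-to-run variance introduced by the stochastic optimizers.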

A. DATASET DETAILS
The cervical cancer dataset [11] taken from the UCI repository is employed in this study. This dataset contains 858 records with thirty-two input attributes and four target variables, namely Hinselmann, Schiller, Cytology and Biopsy. Hinselmann is a methodology for checking the existence of cervical cancer in a patient by examining their cells with a device named a colposcope. Schiller is another test that makes use of iodine applied to the cervix: if the iodine turns brown, this is an indication of healthy cells; if it appears yellow or white, or remains unstained, this indicates the presence of abnormal cells. Cytology is a microscopic examination of cervical cells carried out with a Pap smear under a microscope. Biopsy is the surgical extraction used to collect tissue or cell samples for further examination to find the existence of an abnormality. This dataset contains many missing values, which could bias a classifier's results. Therefore, the records are preprocessed and the samples with missing values are discarded. As a result of preprocessing, two attributes are ignored (only 30 attributes are considered) and a total of 443 samples are taken for experimentation. The numbers of healthy and abnormal samples considered for the four targets are given in Table 1. Binary classification is conducted for the four target variables separately.
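A minimal sketch of the preprocessing step described above: drop the attributes dominated by missing values, then discard every record that still has a missing entry. The toy array and the choice of dropped column are placeholders for the actual 858-record UCI table.

```python
import numpy as np

def preprocess(data, drop_cols):
    """Remove sparse attribute columns, then keep only fully observed records."""
    kept = np.delete(data, drop_cols, axis=1)
    complete = ~np.isnan(kept).any(axis=1)
    return kept[complete]

raw = np.array([[1.0, np.nan, 3.0, 0.0],
                [4.0, np.nan, np.nan, 1.0],
                [7.0, np.nan, 9.0, 0.0]])
clean = preprocess(raw, drop_cols=[1])   # column 1 is mostly missing in this toy example
```

Dropping the sparsest columns first preserves more records than row-wise deletion alone, which is how 30 attributes and 443 complete samples remain from the original table.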

B. NETWORK SETTINGS
The ANN is known as a universal approximator. Function approximation is the mapping of inputs to outputs by updating the parameters of the network, such as the synaptic weights and bias values, over a required number of iterations. A huge number of hidden neurons increases complexity and causes overfitting, while too few neurons degrade the function approximation, which in turn decreases the generalization ability.
It is known from the literature that for most function approximations a 3-layered ANN is sufficient. The attributes present in the dataset decide the numbers of nodes in the input and output layers, and the number of hidden-layer neurons is taken as 2N±1 [30], [31]. Hence, in this work the numbers of nodes in the input, hidden and output layers are taken as 30, 61 and 1 respectively; that is, the ANN architecture is 30-61-1. The sigmoid function is used as the activation function in all nodes of the network.
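The sizing rule above can be written as a one-liner: with N = 30 input attributes, 2N + 1 gives the 61 hidden neurons of the 30-61-1 architecture (the +1 variant of 2N±1 is assumed here).

```python
def hidden_neurons(n_inputs):
    # 2N + 1 rule of thumb for the hidden layer of a 3-layered ANN
    return 2 * n_inputs + 1

architecture = (30, hidden_neurons(30), 1)   # input, hidden, output node counts
```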

C. FITNESS FUNCTION
For any sample j, the error value for each output neuron m is evaluated as the difference between the target output T^j_m and the network output y^j_m, obtained using (19). This is the mean square error (MSE) [32] for each training sample. The error is then computed over the whole training data of size S, which gives the overall performance of the classifier. The fitness function F of the metaheuristic optimization algorithm is the average MSE over all training records, as formulated in (20). Minimization of the MSE is the fitness objective in ANN training.
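Equations (19)-(20) amount to the following computation; the tiny target/output arrays are placeholders, and averaging the per-sample error over the M output neurons is an assumption about the exact normalization.

```python
import numpy as np

T = np.array([[1.0], [0.0], [1.0]])    # targets: S = 3 samples, M = 1 output neuron
Y = np.array([[0.9], [0.2], [0.7]])    # corresponding network outputs

per_sample_mse = np.mean((T - Y) ** 2, axis=1)   # eq. (19), one MSE per sample
F = float(np.mean(per_sample_mse))               # eq. (20), fitness = average MSE
```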

D. SEARCH AGENTS ENCODING
The search agents, i.e., the jellyfish, are encoded as real-valued vectors in which each element corresponds to one synaptic weight or bias value of the ANN, as described in Section II.

E. HYPERPARAMETER DETAILS
The hyperparameter details of the metaheuristic optimization algorithms used in this work (listed in Section III), along with those of the JSA algorithm, are tabulated in Table 2. The population size and the maximum number of generations are set to 50 and 250 respectively for all the experiments conducted, and every experiment is run for the full number of generations.

IV. RESULT ANALYSIS
The JellyfishSearch_ANN classifier is analyzed by comparing its performance with the other algorithms considered for this study. The results are analyzed in this section in terms of the classifiers' learning capability and generalization capability.

A. CLASSIFIERS' LEARNING CAPABILITY
The proposed algorithm and its comparison algorithms are trained, and the details of their learning are visually depicted as convergence graphs in Fig. 3(a-d), with the number of generations on the X-axis and the fitness value (MSE) generated in every generation on the Y-axis. Table 3 provides the learning details in terms of fitness value (mean MSE ± SD, best fitness) and training computation time (Tr_Ct). It is observed that the learning capability of the proposed JSA_ANN improves with training, as the best fitness value is achieved. In this work, convergence means the minimization of the MSE (fitness value) without getting stuck in local minima, and this is achieved as expected after some iterations. It is observed from Table 3 that the JellyfishSearch_ANN has converged without falling into local minima, with fitness values ± standard deviation (SD) for target_1, target_2, target_3 and target_4 of 0.00125 ± 0.00032, 0.00129 ± 0.00039, 0.00131 ± 0.00044 and 0.00119 ± 0.00050 respectively. These are the minimum mean square errors among all the compared techniques. It can also be noted that Genetic_ANN and JellyfishSearch_ANN are competitive, as they have produced relatively similar fitness for the targets. ParticleSwarm_ANN comes next after Genetic_ANN, except for target_1, where Bacterialforaging_ANN takes that place. InvasiveWeed_ANN has contributed the worst fitness value for target_1, and Firefly_ANN has produced the worst fitness for target_2, target_3 and target_4. For target_1, target_2, target_3 and target_4, JellyfishSearch_ANN has yielded the best fitness of 0.00063, 0.00060, 0.00064 and 0.00059 respectively.
The JellyfishSearch_ANN has also taken less computation time than all the others: 167.85 s, 171.83 s, 171.4 s and 174.61 s for target_1, target_2, target_3 and target_4 respectively. A slightly higher time is taken by Genetic_ANN. Firefly_ANN has taken the largest time but failed to converge to an optimal solution. From the convergence graphs in Fig. 3(a-d), it is graphically determined that Bacterialforaging_ANN, InvasiveWeed_ANN and Firefly_ANN show no improvement after some generations; they got stuck in local minima and lost their ability to transition from exploration to exploitation.

B. CLASSIFIERS' GENERALIZATION CAPABILITY
After training, a classifier's generalization capability is evaluated: its ability to correctly categorize unseen data. The testing details of all the classifiers are presented in Table 4 for all four targets. The testing outputs produced by the classifiers are analyzed in terms of performance metrics such as specificity (Spec%), sensitivity (Sens%), precision (Prec%), misclassification rate (MCR%) and accuracy (Acc%). As the dataset is imbalanced, the Gmean and F-score measures are also calculated to evaluate the classifiers' performance [32]. The confusion matrices representing the performance of the classifiers in the testing phase for all the targets are displayed in Figures 4-7.
It is inferred from Table 4 that the JellyfishSearch_ANN has yielded accuracies of 99.324%, 99.097%, 98.871% and 99.323% for target_1, target_2, target_3 and target_4 respectively. The next highest accuracy for all targets is achieved by the competitive algorithm, Genetic_ANN. The proposed JellyfishSearch_ANN has likewise produced the maximum specificity, sensitivity and precision scores among all the comparison methods, with Genetic_ANN producing the next highest scores on all measures. The performance metrics discussed above give the overall performance of the algorithms irrespective of the classes.
To analyze the performance of the classifiers with respect to the individual classes, the two metrics Gmean and F-score are included. This is because of the skewness in the dataset: the normal samples outnumber the unhealthy samples, a condition that commonly prevails in datasets built for binary classification. When a classifier yields a high accuracy while wrongly classifying the unhealthy samples because of their negligible number, Gmean and F-score drop to zero. It is explicitly seen from the Gmean and F-score produced by JellyfishSearch_ANN that the classifier gives an outstanding performance in discriminating the samples of both classes. The proposed method gives equal preference to both classes, as its Gmean and F-score are greater than 96 and 91 respectively.
Again, Genetic_ANN has generated the next highest values of Gmean and F-score, followed by InvasiveWeed_ANN and Firefly_ANN with some reasonable discrimination of the samples. The ParticleSwarm_ANN and Bacterialforaging_ANN classifiers, however, have concentrated only on correctly classifying the healthy samples; these classifiers have also produced lower values for all other metrics, performing roughly equally. Firefly_ANN is the lowest performer compared with all the others.
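The class-sensitive behaviour discussed above can be reproduced from a binary confusion matrix; the zero-division guards and the toy counts below are illustrative.

```python
import math

def metrics(tp, tn, fp, fn):
    """Accuracy, Gmean and F-score from a binary confusion matrix."""
    sens = tp / (tp + fn) if tp + fn else 0.0    # recall of the unhealthy class
    spec = tn / (tn + fp) if tn + fp else 0.0    # recall of the healthy class
    prec = tp / (tp + fp) if tp + fp else 0.0
    acc = (tp + tn) / (tp + tn + fp + fn)
    gmean = math.sqrt(sens * spec)
    f_score = 2 * prec * sens / (prec + sens) if prec + sens else 0.0
    return acc, gmean, f_score

# An "always healthy" classifier on a skewed test set: high accuracy, zero Gmean/F-score.
acc, gmean, f_score = metrics(tp=0, tn=95, fp=0, fn=5)
```

Because Gmean multiplies the per-class recalls, a classifier that ignores the minority class scores zero regardless of how high its accuracy is.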
The accuracies produced by the traditional ANN in the recent works [33] and [34] for the dataset [11] are 97.67% and 95% respectively, which is lower than that of our proposed approach.

V. CONCLUSION
The intelligent living behaviour of the artificial jellyfish is employed to optimize the ANN. This classifier, referred to as JellyfishSearch_ANN, is applied for the accurate and efficient classification of the cervical cancer dataset with four targets. To validate the efficiency of the proposed work, five similar ANN classifiers are implemented using other metaheuristic algorithms, namely PSO, IWO, GA, FA and BFO. The learning capability is analysed in terms of mean fitness, best fitness and computation time. Only Genetic_ANN produced reasonable results, though still below those of the proposed approach; the other comparison algorithms got stuck in local minima without further improvement even when the population size and number of generations were increased, whereas a considerable progress is seen in the proposed approach. The performance evaluation metrics precision, specificity, sensitivity, Gmean, MCR, F-score and accuracy are used to assess the classifiers' generalization capability. From the experimental results, it is confirmed that JSA-ANN yields outstanding results for all targets compared with all other classifiers. Among the targets, the proposed classifier achieves its best result for target_1, i.e., the target values based on the Hinselmann test. A limitation of this work is that no particular feature selection algorithm is used. In future, it is planned to use a suitable feature selection algorithm with this work to further increase the accuracy. This algorithm can also be used for the effective diagnosis of other diseases that challenge the medical domain.

He has more than 17 years of teaching experience. He has organized various international conferences and delivered keynote addresses. He has published more than 60 research papers in various peer-reviewed journals and conferences. He is a Life Member of ISTE and a member of ACM and MIR Labs.
He was a recipient of the Best Advisor Award from the IEEE Hyderabad Section as well as the IUCEE Faculty Fellow Award (2018). He has also been the Co-ordinator of the AICTE 'Margadarshan' Scheme, a Google Cloud Facilitator, an Editor of the IJDMMM journal published by Inderscience, and the Academic Editor of the Computer Science (PeerJ) journal, in addition to being a reviewer for several Scopus-indexed and SCI-indexed journals.

V. UMA MAHESWARI (Member, IEEE) received the Ph.D. degree in image analytics and data science from Visvesvaraya Technological University, Belgaum. She is currently working as an Associate Professor and the Head of CSE (AI and ML), KG Reddy College of Engineering and Technology, Hyderabad. She has published more than 20 research papers in SCI, ESCI, WoS, DBLP, and SCOPUS indexed journals and conferences. She has also published four Indian patents on facial expression analysis in the fields of medical, e-commerce, education, and security. She has made substantial contributions to facial expression analysis and its applications. She constructed a feature vector for a given image based on edge directions and introduced dynamic threshold values for comparing images, which helps to analyze any image. She has researched the similarity of images in a given database to retrieve the relevant images. She has also worked with convolutional neural networks, improving accuracy by supplying pre-processed input images, and showed that the maximum edge intensity values are sufficient to retrieve the required feature from an image instead of working on the total image data. She is the Co-ordinator for TEDxVCE. She has organized various technical programs, served as a technical committee member and a reviewer for various conferences, and delivered sessions in various capacities. She received the Best Faculty Award under the innovation category from the CSI Mumbai Chapter for the year 2019.

S. SHITHARTH (Senior Member, IEEE) received the B.Tech.
degree in information technology from the KGiSL Institute of Technology, Coimbatore, India, in affiliation with Anna University, Chennai, India, in 2012, the M.E. degree in computer science and engineering from the Thiagarajar College of Engineering, Madurai, India, in affiliation with Anna University, in 2014, and the Ph.D. degree from the Department of Computer Science and Engineering, Anna University. He is currently pursuing the Ph.D. degree with the University of Essex. He has worked in various institutions, with a teaching experience of seven years, and is also working as an Associate Professor with Kebri Dehar University, Ethiopia. He has published more than 45 papers in international journals along with 20 papers in international and national conferences, and has published four patents in IPR. His current research interests include cyber security, blockchain, critical infrastructure and systems, network security, and ethical hacking. He is an Active Member of the IEEE Computer Society and five more professional bodies, as well as a member of the International Blockchain Organization. He is a Certified Hyperledger Expert and a Certified Blockchain Developer, and an active researcher, reviewer, and editor for many international journals.