A Deep Learning Approach for Brain Computer Interaction-Motor Execution EEG Signal Classification

Recently, noninvasive electroencephalogram (EEG) systems have been gaining much attention. Brain-Computer Interface (BCI) systems rely on EEG analysis to identify the mental state of the user, changes in cognitive state, and responses to events. Motor Execution (ME) is a very important control paradigm. This paper introduces a robust and useful User-Independent Hybrid Brain-Computer Interface (UIHBCI) model to classify signals from fourteen EEG channels used to record the reactions of the brain neurons of nine subjects. Through this study the researchers identified relevant multisensory features of multi-channel EEG that represent specific mental processes, based on two different evaluation models: (Audio/Video) and (Male/Female). The Deep Belief Network (DBN) was applied independently to the two models, and the overall classification rates achieved were better in ME classification than the state of the art. For evaluation, four models were tested in addition to the proposed model: Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), Brain-Computer Interface Lower-Limb Motor Recovery (BCI LLMR), and Hybrid Steady-State Visual Evoked Potential Rapid Serial Visual Presentation Brain-Computer Interface (Hybrid SSVEP-RSVP BCI). Results indicated that the proposed model, LDA, SVM, BCI LLMR, and Hybrid SSVEP-RSVP BCI accuracies for the (A/V) model are 94.44%, 66.67%, 61.11%, 83.33%, and 89.67% respectively, while for the (M/F) model the overall accuracies are 94.44%, 88.89%, 83.33%, 85.44%, and 89.45%. Finally, the proposed model achieved superiority over the state-of-the-art algorithms in both the (A/V) and (M/F) models.


I. INTRODUCTION
A Brain-Computer Interface (BCI) is an interface that interacts directly between brain signals and a machine. A BCI is a system that analyses the operation of the Central Nervous System (CNS) and transforms it into an artificial output that replaces, restores, or improves the natural CNS output, and consequently changes the continuous interactions between the CNS and its internal or external environment [1].
Recently, BCI systems have become an area of great interest, as they can serve different applications that are essential to people's everyday lives, particularly for people with physical disabilities and psychological diseases [2].
(The associate editor coordinating the review of this manuscript and approving it for publication was Ludovico Minati.)
By controlling brain waves as a form of human-computer interaction, a person can improve modeling, research, assistance, security, entertainment, and identification [3]. Additionally, BCIs can support useful applications for mobility-impaired people, such as sensorimotor tasks and wheelchair control.
In addition, BCIs can be useful in healthcare, as they are an efficient technology for individuals to communicate with the outside world through real thoughts that a machine recognizes as an identifiable pattern. These actions are performed using external devices in many important fields, such as cars, drones, and robots. These thoughts have specific neural patterns that are recorded by scalp electrodes, and the recorded signals are processed by computer algorithms [4]. A BCI helps manage applications by measuring brain activity and then classifying it to control tools such as spelling applications, computer games, and creative expression; finally, it provides feedback on behavior.
(VOLUME 9, 2021. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/)
Significantly, brain signals underlie the occurrence of any feeling, such as a comfortable state, focus, concern, and awareness [3]. Brainwaves, known as neural oscillations of membrane potential and rhythmic patterns of post-synaptic action potentials, can be monitored by EEG.
EEG analysis is a method to both monitor the brain's electrical activity and evaluate the voltage changes arising from ionic fields at the brain's neurons over time.
In clinical terms, the EEG graph is the result of the continuous electrical activity of the brain, measured over time from different electrodes on the scalp [1]. EEG is a non-invasive BCI method for measuring brain waves [5] that is common compared with other signal mechanisms such as Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET). It is utilized in all modern BCI applications and is more feasible than Electrocorticography (ECoG), which needs direct access to the brain tissue. EEG requires only portable and cheap equipment, offers easy access, and gives a high-time-resolution signal when applied to and examined on healthy subjects for research and applications [6].
In this study, we propose a UIHBCI signal processing model to classify signals from fourteen EEG channels used to record the reactions of the brain neurons of nine human subjects. The proposed model mainly consists of three stages: signal processing, feature extraction using Independent Component Analysis with an automatic EEG artifact detector algorithm (ICA-ADJUST), and classification using a DBN. Feature extraction is the key process, which determines the significant classes of brain signals and performs dimension reduction. In this research, multisensory features and classifiers have been tested. The main aim of this paper is to design and implement the best model for feature extraction and classification of BCI signals, and to introduce a Hybrid BCI (HBCI) classification model based on ME (left/right hand movement) using Audio/Video (A/V) stimuli. Two models were introduced and tested: the first BCI model spotted the ME based on the (A/V) stimulus, while the second used recordings of ME under (A/V) stimuli to classify Male/Female (M/F) classes. The evaluation of these BCI models has been performed using both holdout validation (HO) and k-fold cross-validation.
The rest of this paper is organized as follows. Section 2 provides a background on related work in Deep Neural Networks (DNN) and HBCI. Section 3 discusses the proposed UIHBCI model in detail, with each of its processes, and explains the BCI evaluation classification algorithms. Section 4 explains the experiments. Section 5 presents results and discussion, and finally, Section 6 states the conclusion.

II. RELATED WORK
Implementation of BCI models is always constrained by both poor classification precision and limited generalization capability [2]. Deep Learning (DL) represents a specific area of machine learning algorithms influenced by the brain's structure and operation. DL architectures include Deep Neural Networks (DNN), Generative Adversarial Networks (GAN), Convolutional Neural Networks (CNN), and Recurrent Neural Networks (RNN) [7]. DL is an excellent technique for classifying and precisely representing, with high learning ability, the rapid changes in brain signals.
This section reviews some of the relevant studies introduced in the BCI field. In [8], [9], Kumar et al. and Lu et al. applied DBN to Motor Imagery (MI) EEG classification, while Loo et al. [10] analyzed EEG signals with a discrete wavelet transformation based on a denoising Autoencoder (AE) and then implemented a DBN-AE with an accuracy of 73.86%. For EEG mental-disease diagnosis, DBN is one widely used method [11].
Al-kaysi et al. [12] introduced a multi-view DBN-Restricted Boltzmann Machine (RBM) framework to analyze EEG signals of depression patients. For Alzheimer's disease diagnosis, Jawahar and Shan [13] designed a DBN-RBM with three RBMs to obtain informative representations. The paper in [14] combined DBN-RBM with a Hidden Markov Model (HMM), reaching 87.62% accuracy. For sleep EEG models, Tan et al. [15] applied a DBN-RBM algorithm to predict sleep spindles from sleep EEG signal features. Deep-RBM (D-RBM) appears in comparatively few studies. Zheng et al. [16] and Zheng and Lu [17] implemented a D-RBM with five hidden RBM layers to analyze the frequency bands and channels important for emotion detection. Jia et al. [18] proposed RBMs for channel selection and classification.
In emotional EEG, DBN-RBM is commonly utilized for unsupervised emotion recognition [19]-[21]. Furthermore, Xu and Plataniotis [22] implemented a DBN-RBM model with three RBMs and an RBM-AE that infer a person's affective states. Xu and Plataniotis [23] used DBN-RBM for classification. Sarkar et al. [24] constructed two DL models to determine whether EEG signals were produced by audio or visual stimuli, based on Visual Evoked Potentials (VEP) and Auditory Evoked Potentials (AEP); for this binary classification task, the implemented DBN-RBM with three RBMs obtained an accuracy of 91.75%. In [25], Ma et al. obtained DBN-based motion-onset VEP BCI features to construct a BCI system that achieved 97% accuracy. Liu et al. [26], based on the VEP signals, integrated the DBN-RBM with an SVM classifier for the held-out test data and obtained an accuracy of 97.3%. Hajinoroozi et al. [27] implemented a DBN-RBM study on EEG signals preprocessed by ICA, with 85% classification accuracy in detecting the driver's cognitive state ('drowsy' or 'alert'). San et al. [28] integrated DL models to determine driver fatigue, using a DBN-RBM and an SVM classifier, with a proposed system accuracy of 73.29%. Lastly, a few papers studied Event-Related Desynchronization/Synchronization (ERD/ERS) signals. Kulasingham et al. [29] improved the Guilty Knowledge Test using P300 signals, applying the DBN technique with a mean accuracy of 86.9%.
Antelis et al. [30] used Dendrite Morphological Neural Networks (DMNN) for the recognition of voluntary movements from EEG signals. Their target was to evaluate the DMNN recognition performance on ME and MI tasks against SVM and LDA. The results showed that DMNN provided decoding accuracies of 80% for ME and 77% for MI, with a precision of 0.70, recall of 0.68, and F1 of 0.69. The mean ± standard deviation for ME using SVM is 74.61 ± 6.04, and for ME using LDA, 71.50 ± 4.84. Virgilio et al. [31] presented studies of Spiking Neural Networks (SNN) using temporal features in the recognition of MI tasks from EEG signals and compared their performance with other classifiers commonly used in this application. To summarize, Table 1 lists some of the recent studies that discuss the DL approach for BCI systems, offering a comprehensive overview of these studies and relating DL-based DBNs to recently applied systems together with their classification accuracies. Many of these studies concern non-invasive EEG signals, including spontaneous EEG, Event-Related Potentials (ERP), ERD/ERS, driver cognitive state, and HBCI. Some EEG features, such as Steady-State Visual Evoked Potentials (SSVEP), VEP, and P300, as well as recent HBCI paradigms such as transient MI+VEP, MI/ME+SSVEP, and P300+SSVEP, are also introduced in Table 1.

III. UIHBCI PROPOSED MODEL
In this research, BCI systems are practically divided into three main categories based on the stimulus model: auditory, visual, and ME. The proposed UIHBCI system combines more than one type of BCI. ERPs have three kinds in comprehensive research and clinical use: P300 (VEP and AEP) and ERD/ERS. Here, subjects perform an ME task to provide continuous effective output [32]. The output of the proposed UIHBCI is based on hybrid multisensory feature-related brain signals. These features improve accuracy by providing the classifier with more information. The hybrid system shows better performance when compared with SSVEP, P300, or ERD/ERS alone [6]. Existing studies improve the classification accuracy of ME-BCI by introducing EEG features [33]. EEG signals are used to classify each subject's signal depending on the auditory and visual stimulus based on ME. As a result, the BCI becomes a multiple system. Furthermore, it combines stimulations to improve classification accuracy [34].
As mentioned before, the purpose of this study is to introduce a UIHBCI model to classify signals from fourteen EEG channels that are used to record the reactions of the brain neurons of nine subjects.
Three main stages together with the proposed model framework of the current study are described in Fig. 1 that shows the structure of UIHBCI which includes: EEG signals processing, feature extraction, and finally the classification stage. Each of these stages will be described extensively in the next sub-sections.

A. SIGNAL PROCESSING
In this paper, the main EEG signal paradigm types have been used as HBCIs to increase the accuracy and robustness of the BCI system and to reduce noise effects in the EEG signals. These signals are ERPs, which are divided into P300 (VEP and AEP) and ERD/ERS [6], [35].
1. ERPs are electrocortical signals evaluated and recorded using EEG during or after a psychological, motor, or sensory event. The two most popular ERPs are the visually and auditorily evoked P300 responses:
a) The P300 wave is a form of event-related potential that appears in the human brain as a positive deflection with a time delay of nearly 300 ms after the occurrence of a particular event. BCI researchers have developed electrical potentials that can be derived from the scalp after a visual or auditory stimulus [36]; these visual and auditory stimuli are used to produce a P300 signal for different systems and applications (VEP and AEP) [6].
2. The Sensorimotor Rhythm (SMR) is supported by some somatosensory areas and detected over the motor cortex; SMR activity is reduced or increased during movement or its imagination. The MI is divided into two types [37]:
a) ERD: the signal measured during movement is lower than the baseline, resulting from de-synchronization of the activity of particular brain regions.
b) ERS: the signal measured during movement is greater than the baseline. The direction of the signal varies according to the side of the moving limb (right or left). Identical EEG signals can be produced by imagining a movement without necessarily executing it [6].
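As an illustration of the ERD/ERS convention above, the relative band-power change during movement can be expressed as a percentage of the baseline; negative values correspond to ERD and positive values to ERS. This is a minimal sketch (the function name and toy power values are ours, not from the paper):

```python
import numpy as np

def erd_percent(power_move, power_base):
    """Relative band-power change during movement vs. baseline.
    Negative -> ERD (power drop), positive -> ERS (power rise)."""
    return 100.0 * (power_move - power_base) / power_base

# Toy example: alpha-band power drops from 10 to 7 (arbitrary units)
# during movement, and rises to 13 in another epoch.
erd = erd_percent(7.0, 10.0)    # -30.0 -> desynchronization (ERD)
ers = erd_percent(13.0, 10.0)   #  30.0 -> synchronization (ERS)
```

The same formula is conventionally applied per frequency band and per channel over averaged epochs.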
One advantage of the hybrid multisensory feature-related brain signals method used throughout this paper is that using more than one brain signal helps clinicians handle more knowledge per unit of time relevant to patient intent. HBCIs are executed in different ways: distinguishing two signals consecutively, where the first activates the system and the second permits the choice of the output; or simultaneously, where the user is asked to perform several tasks at the same time [38].
The algorithm and the user are the two BCI learning factors. The various feature extraction and classification algorithms implemented for BCI signal-paradigm processing fall into three categories: training-free methods, user-specific training methods, and user-independent (UI) methods. First, training-free algorithms do not need training data, and the BCI user can use the system right away. Second, user-specific or user-dependent training methods need each user to provide training data from which a user-specific model is created. Finally, UI methods involve training data from several individuals, after which a generalized model, called a ''UI'' model, is developed to be applied to unseen users [39].
The proposed model is a generalized model in which the EEG signals for training and testing represent response data sets from different persons. Moreover, the proposed UIHBCI does not depend on any one person's recorded data: our results on the two evaluation models, (A/V) and (M/F), showed that the brain activity representing specific mental processes follows identical patterns in the corresponding cortex across human subjects.
Each subject's raw EEG data is processed with EEGLAB scripts in MATLAB. Linear trends are removed, and the power spectral density is obtained using the Fast Fourier Transform (FFT) to calculate the rhythmic components of the EEG signals [40]. Fig. 2 draws the signal's power spectrum over the frequency range 2-26 Hz. It plots the channel spectra and associated topographical maps, using the first 15.0% of the data for analysis. In addition, it shows the spectrum and scalp map for some channels with alpha (7-13 Hz) and beta (14-25 Hz) rhythm components, reflecting where movement-based signal transmission happens (F3, F4, AF3, AF4) and where the auditory response is extracted (O1).
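The FFT-based spectral step above can be sketched as follows. This is a hedged stand-in for EEGLAB's channel-spectra computation, applied to synthetic single-channel data with a 10 Hz alpha component (the sampling rate matches the paper; everything else is illustrative):

```python
import numpy as np

np.random.seed(0)
fs = 128                      # Emotiv sampling rate used in the paper
t = np.arange(fs * 4) / fs    # 4 s of synthetic single-channel "EEG"
x = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)

# One-sided power spectrum via FFT with a Hann window
X = np.fft.rfft(x * np.hanning(x.size))
psd = np.abs(X) ** 2 / x.size
freqs = np.fft.rfftfreq(x.size, d=1 / fs)

# Inspect the 2-26 Hz range shown in Fig. 2; the alpha peak dominates
band = (freqs >= 2) & (freqs <= 26)
peak_hz = freqs[band][np.argmax(psd[band])]
```

In EEGLAB the analogous plot additionally maps band power onto scalp topographies per channel.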
The frequency-versus-time domain for the Female/Audio stimulus is given in Fig. 3, which scrolls through the continuous EEG data; the values of each data point are shown continuously at the bottom of the figure, and the post-stimulus events appear as bold blue vertical lines. Fig. 4 shows multi-dimensional EEG channels with five data epochs plotted at 14 electrodes after de-noising using linear Finite Impulse Response (FIR) filtering for noise reduction, with a cut-off frequency of 1 Hz as the lower edge of the band-pass and the 50-60 Hz line-noise region as the upper edge. Fig. 5 shows the EEG signals after artifact elimination: to study event-related EEG dynamics, the data were recorded continuously, time-locked data epochs were then extracted around a significant auditory stimulus with left/right hand movement, and finally the mean baseline values were removed from each epoch.
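A zero-phase FIR band-pass comparable to the de-noising step above can be sketched with SciPy. The filter length and the synthetic signal are our illustrative choices, not the paper's exact settings; the 1 Hz lower and line-noise-region upper edges follow the text:

```python
import numpy as np
from scipy.signal import firwin, filtfilt

fs = 128
t = np.arange(8 * fs) / fs                 # 8 s of synthetic data
drift = np.sin(2 * np.pi * 0.2 * t)        # slow drift to be removed
alpha = np.sin(2 * np.pi * 10 * t)         # 10 Hz rhythm to be kept
x = drift + alpha

# Linear-phase FIR band-pass (1-50 Hz); filtfilt runs it forward and
# backward so the net filter has zero phase distortion.
b = firwin(255, [1.0, 50.0], pass_zero=False, fs=fs)
y = filtfilt(b, [1.0], x)
```

After filtering, the drift is strongly attenuated while the 10 Hz component survives almost unchanged.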
In BCI systems, EEG signals need pre-processing to improve the Signal-to-Noise Ratio (SNR) and to prepare them for constructing an effective BCI model [41], followed by feature extraction and classification. The most widely used techniques include ICA and Principal Component Analysis (PCA). ICA algorithms can spontaneously separate brain activity from muscle and blink artifacts, producing EEG sources that are as independent of one another as possible.

B. ICA DECOMPOSITION AND REJECTION
In the research community, ICA is used to identify and eliminate stereotyped artifacts from eye, muscle, and line-noise sources [42]. Many ICA approaches attempt to achieve data projections that have minimal temporal overlap. The basic mathematical principle of ICA is to distinguish the sub-Gaussian and super-Gaussian distributions of the sources in order to reduce the mutual information between the data projections, or equivalently to maximize their joint entropy [43]. Regarding source information in EEG data, ICA plays an active role and separates those activities from the recorded channel mixture. ICA is a widely understood method of decomposing multivariate EEG data into statistically independent, non-Gaussian components [44].
The efficiency of applying ICA decomposition rests on the following assumptions: 1. The number of recorded signals is greater than the number of independent components (ICs). 2. The cerebral and artifact sources in the recorded signals are statistically independent and linearly mixed. 3. The characteristic time lags are trivial around the mixing center. The EEG mixing approach has several issues, including integrating and avoiding functionally separate and independent source inputs and loss of data information. These information sources can imply synchronous or partially synchronous behavior in cortical or non-cortical sources [45]. In the ICA decomposition, IC filters using extended-infomax ICA are implemented to generate optimally temporally independent channel EEG signals, using the EEGLAB toolbox function runica [46].
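The blind-separation principle behind this step can be illustrated on toy data. The paper uses EEGLAB's extended-infomax runica; below we substitute a compact FastICA iteration (whitening, tanh contrast, symmetric decorrelation) as a stand-in, with two invented "channel" mixtures of two sources:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
k = np.arange(n)
s1 = np.sign(np.sin(2 * np.pi * 7 * k / 128))   # square "artifact-like" source
s2 = np.sin(2 * np.pi * 10 * k / 128)           # sinusoidal "neural-like" source
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.6], [0.4, 1.0]])          # unknown channel mixing
X = A @ S                                       # observed "EEG channels"

# Whiten the observations
Xc = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(Xc @ Xc.T / n)
Z = np.diag(d ** -0.5) @ E.T @ Xc

# FastICA with tanh nonlinearity and symmetric orthogonalization
W = rng.standard_normal((2, 2))
for _ in range(200):
    G = np.tanh(W @ Z)
    W_new = (G @ Z.T) / n - np.diag((1 - G ** 2).mean(axis=1)) @ W
    u, _, vt = np.linalg.svd(W_new)
    W = u @ vt                                  # decorrelate the rows
S_hat = W @ Z                                   # recovered components
```

Each recovered component should correlate (up to sign and order) with one true source, which is exactly the property that lets artifact ICs be isolated from neural ones.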
Studying and determining the artifact type of the EEG components in this paper is based on the scalp map and on investigating each EEG component in both the time and power-spectrum domains. These properties determine the artifact type. In Fig. 6, the 14 EEG-ICA scalp maps (channels) are plotted to describe each component's properties and to label the rejected components to be subtracted from the recorded signals. The figure shows the various activities of the EEG records for the 14 independent components (ICs) produced by the ICA algorithm.
EEG faces a major problem when the artifact activity amplitude is higher than that generated by neural sources. ICA is a successful method for eliminating artifacts measured in EEG recordings [47], [48] when used as a feature extraction technique.
ADJUST is a powerful automated process for characterizing IC artifacts in an ICA decomposition by integrating the stereotyped spatial and temporal features described above that each artifact provides. Enhanced features are used to detect eye movements, blinks, and generic discontinuities. When artifact ICs are identified, they are removed from the data while the neural sources are kept unaffected [47].
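ADJUST's decision rule pairs one spatial with one temporal feature per artifact class; for example, an eye blink is flagged only when both SAD and TK cross their thresholds. A toy sketch of that conjunction (the feature values and thresholds here are invented for illustration, not ADJUST's calibrated ones):

```python
import numpy as np

# Hypothetical per-IC feature table (rows index ICs 0..3).
features = {
    "SAD": np.array([0.1, 0.9, 0.2, 0.8]),  # spatial average difference
    "TK":  np.array([0.2, 0.7, 0.1, 0.9]),  # temporal kurtosis
}
thresholds = {"SAD": 0.5, "TK": 0.5}

# An IC is flagged as this artifact class only when BOTH its spatial and
# its temporal feature pass their thresholds, mirroring ADJUST's logic.
flagged = np.flatnonzero(
    (features["SAD"] > thresholds["SAD"]) & (features["TK"] > thresholds["TK"])
)
```

With these toy numbers, ICs 1 and 3 would be flagged; in ADJUST the thresholds are derived automatically from the bimodal distribution of each feature across ICs.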

C. FEATURE EXTRACTION
ADJUST was applied to evaluate each IC's spatial and temporal features. Each class of artifacts has one spatial and one temporal feature [49]. After the feature extraction computations are applied, Fig. 6 displays the topographies of all ICs ordered by the 14-IC channel-variance percentage, while the ICs classified by ADJUST are labeled with a red indicator number panel in Fig. 6b. Fig. 6a illustrates the fourteen IC channels of the dataset; ICs 3, 4, 6, and 13 have been classified as artifacts.
The IC topographies with channel classification using ADJUST are shown in Fig. 7. These results are explained as follows. IC component 3 (IC3): IC3 detects HEM from the non-symmetric frontal topography and the contrasting negative-positive signals, while its activity spectrum is smoothly decreasing in the ERP picture. This component is classified as a HEM by ADJUST since both SED and MEV pass their thresholds. Because the features TK, MEV, and SAD pass their specific thresholds and the frontal topography is asymmetrical, as shown in Fig. 7a, the component is not marked as EB or VEM. IC component 4 (IC4): IC4 also captures HEM, as shown in Fig. 7b. IC component 6 (IC6): IC6 captures HEM. It is labeled as a HEM and is clearer here than IC3 and IC4. The non-symmetric frontal topography and the contrasting negative-positive signals in the ERP images are displayed in Fig. 7c. IC component 13 (IC13): IC13 captures GD and peaks with large amplitude in the ERP plot. It is classified as GDSF because GD is a spatial feature associated with MEV and both cross the threshold. Both SAD and MEV features cross the threshold, as shown in Fig. 7d, so it is also labeled as VEM.
ADJUST also identifies clean ICs such as IC7, IC12, and IC14, as shown and explained in Fig. 7(e, f, g). IC7 visibly captures brain activity and in this state cannot be classified as an artifact: the value of TK passes the threshold, but the spatial feature SAD associated with TK does not cross the threshold, as shown in Fig. 7e. In addition, IC12 captures an activity of neural origin with a high SAD, but this component cannot be an eye blink because TK is clearly below the threshold. On the other hand, IC12 is not VEM because MEV, associated with SAD, does not exceed the threshold, as described in Fig. 7f. IC14 explicitly detects brain activity in Fig. 7g because the value of the spatial feature GDSF exceeds the threshold but the MEV temporal feature is below it. The source signal in all ICs, when plotting the scrolling EEG, is not changed or affected by the ICA process itself; hence the source signal is not distorted, and this is the robustness of ICA. Table 2 shows the feature values and lists all artifact ICs. In Fig. 7, the bar value illustrated for each feature is (feature value − median of the first peak of the bimodal feature distribution) / (threshold − median of the first peak of the bimodal feature distribution) [48].
After applying the ADJUST algorithm, components 3, 4, 6, and 13 are marked as artifacts (as shown in Table 2) and then rejected. Fig. 8a shows the continuous EEG data scroll and channels before ADJUST's rejection: the blue EEG plot indicates the channels before rejection, while the red plot represents the channels after the rejection process. Fig. 8b displays the optimally independent components, describing the 10 ICs that result from ICA-ADJUST after subtracting the 4 artifact ICs, while Fig. 9 displays each ERP component's data.
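The rejection step amounts to back-projecting only the retained ICs: with mixing matrix A and IC activations S, the cleaned channel data is A[:, keep] @ S[keep]. A sketch with random stand-in matrices (in practice A and S come from the runica decomposition):

```python
import numpy as np

rng = np.random.default_rng(1)
n_ch, n_ic, n_t = 14, 14, 1000
A = rng.standard_normal((n_ch, n_ic))   # ICA mixing matrix (channels x ICs)
S = rng.standard_normal((n_ic, n_t))    # IC activations (ICs x samples)
X = A @ S                               # recorded channel data

artifacts = [2, 3, 5, 12]               # 0-based stand-ins for ICs 3, 4, 6, 13
keep = [i for i in range(n_ic) if i not in artifacts]

# Back-project only the retained ICs; the neural sources are untouched.
X_clean = A[:, keep] @ S[keep, :]
```

By linearity, what is removed is exactly the artifact subspace A[:, artifacts] @ S[artifacts], so the remaining source signals are not distorted.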

D. CLASSIFICATION PHASE
Deep Learning (DL) is a model that manipulates a layer-by-layer learning method, in which neurons perform the nonlinear multi-layer processing of the data [50]. DL thus genuinely extracts embedded, latent, sophisticated features of signals that help and enhance classification [50]. A DBN consists of several layers of simple, unsupervised learning architectures, namely Restricted Boltzmann Machines (RBMs), plus one backpropagation (BP) network layer [51]. The RBM represents the main element of DNN training.
The RBM is a stochastic generative artificial neural network whose learning depends on learning the probability distribution of its inputs [50]. This paper applied the DBN-DNN to the two evaluation models, (A/V) and (M/F). This evaluation aims to prove that the signal processing and feature extraction methods used produce high performance with the DBN in the classification phase. The DBN-DNN is pre-trained by a stacked RBM using an unsupervised training algorithm called Contrastive Divergence (CD) [52]. In an RBM, the visible and hidden layers are constructed without intra-layer relations: nodes are connected across the visible and hidden layers, but nodes in the same layer are independent of each other, as in Fig. 10. Each node is a computation point that processes input and makes stochastic decisions about it [53]. A DBN utilizes a greedy layer-wise learning scheme: the binary hidden-unit outputs of a trained RBM are used as the training data for the next RBM layer [50]. The DBN-DNN is built by adding a decision layer to the stacked RBM [50], [52], and the implemented DBN-DNN is fine-tuned by the error backpropagation algorithm [52]. The loss function of the fine-tuning is the cross-entropy, and the sigmoid function is the activation function. Fig. 10 shows the architecture of the implemented DBN-DNN, a five-hidden-layer neural network built with 4096-1024-512-128-32 nodes. It is constructed with two visible layers (input and output): one input layer consisting of 215433 nodes and one output layer consisting of 2 neurons. The experiment was run 10 times and the best of the resulting experiment settings was chosen. During pre-training, the weights are initialized according to the given samples instead of relying on random initialization.
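The CD pre-training of a single RBM layer can be sketched as follows: a minimal Bernoulli RBM trained with one Gibbs step (CD-1) on a toy two-pattern batch. The sizes and data are ours, far smaller than the paper's 4096-1024-512-128-32 stack:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_vis, n_hid, lr = 6, 3, 0.1
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)

# Toy binary batch: two repeating visible patterns
data = np.repeat(np.array([[1, 1, 1, 0, 0, 0],
                           [0, 0, 0, 1, 1, 1]], float), 16, axis=0)

for _ in range(500):
    ph = sigmoid(data @ W + b_h)                   # positive phase
    h = (rng.random(ph.shape) < ph).astype(float)  # sample hidden states
    pv = sigmoid(h @ W.T + b_v)                    # one Gibbs step back
    ph2 = sigmoid(pv @ W + b_h)                    # negative phase
    W += lr * ((data.T @ ph) - (pv.T @ ph2)) / data.shape[0]
    b_v += lr * (data - pv).mean(axis=0)
    b_h += lr * (ph - ph2).mean(axis=0)

recon_err = float(np.mean((data - pv) ** 2))       # reconstruction error
```

In a DBN, the hidden activations of this trained layer would become the "visible" training data for the next RBM, and a decision layer plus backpropagation fine-tuning would be added on top.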
During the pre-training phase of the DBN model, the learning rate is 0.01 and the batch size equals 4, with 100 iterations. Table 3 illustrates the experiment settings. One-hot encoding is used to represent the labels, so the problem is a binary classification. The classification phase uses CD-based training to recognize whether a particular person has been influenced by an Audio or Video stimulus in the first experiment, or to determine whether the response came from a male or female subject in the second experiment. The gender classification model has been conducted using the same DBN network, where the visible input layer is followed by hidden layers of 4096, 1024, 512, 128, and 32 units. The pre-training phase is essential for initializing the weights of the DBN, besides developing the RBM architecture that is cascaded and stacked before the DBN architecture. The constructed RBM architecture depends on the number of input and output neurons. Since we already applied a one-hot encoding schema, the output is 1 or 0 at each neuron of the output layer. The input signals, on the other hand, are not binary values; they are transformed during the feedforward pass into the range [0, 1] through probabilities, normalization, and dropout. Following [54], the dropout procedure randomly drops out (zeroes) hidden units and input features of the DBN architecture during feedforward training. The performance is evaluated with two metrics: Root Mean Square Error (RMSE) and Error Rate (Err).
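The label encoding and dropout steps above can be sketched directly; the inverted-dropout rescaling shown is one common convention, and the paper does not specify its exact variant:

```python
import numpy as np

rng = np.random.default_rng(0)

# One-hot labels for the two binary tasks (e.g. 0 = Audio, 1 = Video)
labels = np.array([0, 1, 1, 0])
one_hot = np.eye(2)[labels]            # rows like [1, 0] or [0, 1]

def dropout(a, p_drop, rng):
    """Zero a random subset of units; rescale survivors (inverted dropout)."""
    mask = (rng.random(a.shape) >= p_drop).astype(a.dtype)
    return a * mask / (1.0 - p_drop)

h = np.ones(10)                        # stand-in hidden activation vector
h_train = dropout(h, 0.5, rng)         # entries become 0.0 or 2.0 in training
```

At test time no units are dropped, and the rescaling during training keeps the expected activation unchanged.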

IV. SIMULATION SETTING
A. DATABASE AND SIMULTANEOUS EEG RECORDING
BCI signals were recorded using the Emotiv EEG headset and Simulink MATLAB, with a 128 Hz sampling rate from 14 electrodes placed according to the international 10-20 system sensor locations [55] (AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8, and AF4). For each visual and auditory response, the data signal was recorded from [4] and saved for nine healthy volunteers (aged 12-24 years; 4 males and 5 females). Data signal processing was applied to 18 European Data Format (EDF) signal recordings capturing the reactions of the neurons within the brain to two forms of stimuli based on ME in nine participants. The signal recording experiment proceeds as follows: 20 left and 20 right sounds are played through a speaker in random order by the computer, acting as the auditory stimulus. The subjects were asked to raise their left/right hands based on the left/right stimulation beep from the speaker. Similarly, 20 (left/right) arrows appear randomly on the computer screen.
The Emotiv EEG neuroheadset was placed so that the ground sensors were positioned behind the ears over the temporal bone of the skull, and the front sensors were placed three finger-widths above the eyebrows. This setup ensured a suitable connection between the sensors and each individual's scalp. The headset was fitted with 14 active sensors, and each sensor was wetted with saline solution before the headset was put on.
During the experiment, the human subject raises the right arm when a right arrow appears and the left arm when a left arrow appears, continuing throughout the session as a simulation of the arrows' movements. This procedure acts as the visual stimulus. Each visual and auditory signal recording session took eight minutes, with a 10-second interval between sessions. Each testing session took 35 minutes.

B. EXPERIMENTS
Two experiments were performed to evaluate the (A/V) and (M/F) proposed models based on ME. In the first experiment, the evaluation models use the HO validation technique, splitting the data signals for the classification phase into a training part of seven subjects and a test part of two subjects. The second classification experiment consists of two classes depending on gender, (M/F), as well as (A/V), and used a 10-fold cross-validation model applied to the proposed DBN algorithm and two classifiers, LDA and SVM. In the repeated 10-fold cross-validation, each iteration excludes one fold of the data for testing, either across users (inter-user) or within users (intra-user), while the remaining folds are used for training. This representative sampling underlies the validation process that forms part of the results evaluation and monitoring in Section 5. The training and testing EEG recording data splits for inter- and intra-user evaluation are explained in Fig. 11, where each block represents a run or no-run fold of the EEG recordings for the two models, (A/V) or (M/F). Every run fold was used as the testing dataset once; thus, 10 cross-validation iterations were performed for each model, (A/V) or (M/F). The average performance results are shown in Tables 5 and 6 in terms of correct classifications and errors.
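The two validation schemes above can be sketched as index bookkeeping. The subject counts (7 train / 2 test out of nine) follow the paper; the epoch count is a toy value:

```python
import numpy as np

rng = np.random.default_rng(0)

# Holdout at the subject level (user-independent): 7 train / 2 test subjects
subjects = rng.permutation(np.arange(9))
train_subj, test_subj = subjects[:7], subjects[7:]

# 10-fold cross-validation over pooled recordings (toy epoch count)
n_epochs = 40
folds = np.array_split(rng.permutation(n_epochs), 10)
for test_idx in folds:
    train_idx = np.setdiff1d(np.arange(n_epochs), test_idx)
    # ... fit the classifier on train_idx and score it on test_idx
```

Holding out whole subjects (rather than random epochs) is what makes the evaluation user-independent, since no data from a test subject ever appears in training.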
The implemented system was trained and evaluated on the EEG-ME data set from [4], which is based on the Audio/Video stimulus. The training process was run on an HP Z800 workstation (Intel Xeon 4 cores, 3.4 GHz, 12 MB cache, 64 GB RAM, Windows 10 Professional 64-bit). All algorithms were implemented using MATLAB R2018 (x64). EEGLAB scripts were used for signal processing [56].
Romero-Laiseca et al. [57] proposed an EEG-based BCI model for lower-limb motor recovery of post-stroke patients, implemented using Riemannian geometry for feature extraction, pair-wise feature proximity for feature selection, and LDA for pedaling-imagery recognition. The mean accuracy achieved by that model is 69.00%. Tables 5 and 6 show the more favorable results obtained when we applied the DBN algorithm, compared with the model developed in [57]. Ko et al. [58] developed a hybrid SSVEP-RSVP BCI model to improve the performance of classifying target/non-target objects in a multi-target scenario using 12 EEG channels. The experimental results in Tables 5 and 6 show that our model also performs better than [58], which used the Bagging Tree algorithm classifier with a mean accuracy of 83.45%.

C. EVALUATION METRICS
In this paper, the two models (A/V) and (M/F) were evaluated using two performance metrics defined in [52], the root-mean-square error (RMSE) and the error rate (Err), given in (1) and (2):

RMSE = sqrt( (1 / (D·C)) Σ_{k=1}^{D} ||V(x_k) − y_k||² )   (1)

Err = (D_inc / D) × 100   (2)

where C is the number of output classes, D is the number of data samples, D_inc is the number of incorrectly classified samples, x_k is the k-th input, y_k is the corresponding true output, and V is the output of the DBN-DNN model. The proposed DBN-DNN models were evaluated by applying RMSE and Err; Table 4 lists the RMSE and Err for the training and test data. The second experiment uses three classifiers, whose performance is measured with common evaluation metrics: Precision [59], Recall (Sensitivity) [60], Specificity [59], Accuracy [60], F1-score [60], AUC/Balanced Accuracy (BA) [61], Matthews Correlation Coefficient (MCC) [59], Kappa index [62], Critical Success Index (CSI) [63], Bookmaker Informedness (BM) [64], and Markedness (MK) [64]. The Audio/Video and Male/Female classification models are evaluated with these performance metrics, as illustrated in Tables 5 and 6.
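Several of the listed metrics can be computed directly from a binary confusion matrix. The sketch below is illustrative Python (the paper's implementation is MATLAB); the counts tp/fn/fp/tn are hypothetical and only serve to show the formulas, including Err as the percentage of misclassified samples.

```python
import math

# Hypothetical binary confusion-matrix counts (18 test samples, 2 classes):
# tp = true positives, fn = false negatives, fp = false positives, tn = true negatives.
tp, fn, fp, tn = 9, 0, 1, 8

accuracy    = (tp + tn) / (tp + tn + fp + fn)
precision   = tp / (tp + fp)
recall      = tp / (tp + fn)               # a.k.a. sensitivity
specificity = tn / (tn + fp)
f1          = 2 * precision * recall / (precision + recall)

# Matthews Correlation Coefficient (MCC) from the same four counts.
mcc = (tp * tn - fp * fn) / math.sqrt(
    (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))

# Err as in Eq. (2): percentage of incorrectly classified samples.
err = (fp + fn) / (tp + tn + fp + fn) * 100

print(round(accuracy, 4), round(err, 2))  # 0.9444 5.56
```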

A. RESULTS
Two experiments, as described in the experiment section, were applied, comparing the proposed DBN against two of the most popular algorithms for BCI applications, SVM and LDA; the results of the two classifiers alongside the DBN are illustrated in Tables 5 and 6. The two experiments assess whether EEG data related to the (A/V) stimulus can be identified; in addition, differentiating between genders (M/F) is a worthwhile perspective to consider. The impact of gender on the EEG signal has been studied in clinical psychophysiology [65]; EEG gender classification helps build automatic gender-recognition systems based on a person's EEG features. EEG data indexing is also useful for individual scans, recognition or verification, and the enhancement of BCIs [66]. In [67], deep convolutional neural networks demonstrate that brain rhythms carry sex-specific information that can be obtained from scalp EEG. Moreover, the proposed model could also help specialists in neurology, cardiology, and neuropsychology using temporal and spectral characteristics.

B. DISCUSSION
After computing the confusion matrix for the first (A/V) model, as in Fig. 12, we conclude that our proposed model achieved the best results, with an accuracy of 94.4%. The confusion matrix confirms that the DBN correctly classifies 100% of the video data signals (0.0% misclassified as audio) and correctly classifies 88.9% of the audio data signals (11.1% misclassified as video).
In the first (A/V) model, we conclude that our proposed model is the most efficient by comparing the classification results as follows: • The proposed method obtained 94.44% accuracy, while SVM, LDA, BCI Lower-Limb motor recovery, and Hybrid SSVEP-RSVP BCI obtained 61.11%, 66.67%, 83.33%, and 89.67%, respectively, as illustrated in Table 5.
• Analysis of the confusion matrix reveals that the DBN classifies Audio/Video data signals more accurately than SVM, LDA, BCI Lower-Limb motor recovery, and Hybrid SSVEP-RSVP BCI. The results clarify that the proposed DBN is the most efficient classifier, reflecting its superior performance in classifying ME signals.
In the second (M/F) model, as in Fig. 13, the DBN achieves an accuracy of 94.4%. The confusion matrix shows that the optimized proposed DBN model correctly classifies 100% of the male data signals (0.0% misclassified as female) and correctly classifies 90.0% of the female data signals (10.0% misclassified as male). In the second (M/F) model, the results also prove that the method is the most efficient when compared with the other classification methods: the proposed method obtained 94.44% accuracy, whereas SVM, LDA, BCI Lower-Limb motor recovery, and Hybrid SSVEP-RSVP BCI obtained 83.3%, 88.9%, 85.44%, and 89.45% accuracy, respectively (Table 6). Analysis of the confusion matrix indicates that the DBN-DNN classifies Male/Female data signals more accurately than the state of the art. Among these classifiers, the proposed DBN is the most optimized, with high performance in gender classification based on ME. Table 4 shows that the proposed DBN algorithm improves efficiency according to the two evaluation metrics, RMSE and Err, and reports the per-subject training and test times for both models. Tables 5 and 6 illustrate the Audio/Video and Male/Female classification models evaluated with the performance metrics explained in the evaluation metrics section.
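The per-class percentages quoted above are read off a confusion matrix by normalizing each row (true class) to 100%. The sketch below is illustrative Python; the raw counts are hypothetical, chosen only so that the row percentages reproduce the reported (M/F) figures.

```python
import numpy as np

# Hypothetical 2x2 confusion matrix: rows = true class, columns = predicted.
cm = np.array([[8, 0],    # true male:   8 predicted male, 0 predicted female
               [1, 9]])   # true female: 1 predicted male, 9 predicted female

# Normalize each row to percent: row i gives per-class accuracy for class i.
row_pct = cm / cm.sum(axis=1, keepdims=True) * 100
print(row_pct)  # rows: [100, 0] and [10, 90] (percent)
```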

VI. CONCLUSION
Two experiments for BCI evaluation were presented for the ME-EEG classification models (A/V) and (M/F). Signals were acquired under a combination of stimulations: ERP P300 (VEP and AEP) and ERD/ERS. A new method (ICA-ADJUST) was used for signal preprocessing; it characterizes artifacts through the independent components (ICs) of an ICA decomposition by integrating stereotyped-artifact detection based on spatial and temporal features. This operation considerably improves the EEG signal features. Then, a robust structure of UIHBCI models was optimized and built by introducing a DNN constructed and pre-trained with RBMs, known as a DBN.
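The DBN idea summarized above, a deep network whose layers are pre-trained greedily as stacked RBMs before a supervised stage, can be sketched in a few lines. This is a minimal illustrative Python sketch using scikit-learn, not the paper's MATLAB implementation; the layer sizes, hyperparameters, and random feature data are placeholders, and the supervised stage here only trains a logistic-regression head rather than fine-tuning the whole stack by backpropagation.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.RandomState(0)
X = rng.rand(180, 64)               # placeholder EEG feature vectors in [0, 1]
y = rng.randint(0, 2, size=180)     # two classes, e.g., (A/V) or (M/F)

# Greedy layer-wise pre-training: each RBM is fit unsupervised on the
# previous layer's output, then a supervised classifier head is trained.
dbn = Pipeline([
    ("rbm1", BernoulliRBM(n_components=32, learning_rate=0.05,
                          n_iter=10, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=16, learning_rate=0.05,
                          n_iter=10, random_state=0)),
    ("clf",  LogisticRegression(max_iter=1000)),
])
dbn.fit(X, y)
print(dbn.predict(X[:5]).shape)
```

A full DBN-DNN as used in the paper would additionally fine-tune all pre-trained weights with supervised backpropagation, which scikit-learn's pipeline does not do; frameworks with trainable layers would be needed for that step.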
The proposed algorithm shows that the adopted signal processing and feature extraction methods result in high-performance classification across multiple BCI models. Performance metrics were used to evaluate four popular BCI algorithms in addition to the proposed model: SVM, LDA, BCI Lower-Limb motor recovery, and Hybrid SSVEP-RSVP BCI. The evaluation results indicated that the proposed model outperforms the state-of-the-art models, demonstrating the efficiency of the proposed DBN algorithm in improving ME-EEG classification in UIHBCI systems.
NESMA E. ELSAYED received the B.S. degree in computer science from the Faculty of Computers and Information Systems, Mansoura University, Egypt, in 2011. She is currently pursuing the Ph.D. degree in computer science. She is an Assistant Lecturer with Delta University, Egypt. Her current research interests include brain-computer interaction multimodality and applications. MAGDI Z. RASHAD received the Ph.D. degree in computer science from the Faculty of Engineering, Cairo University, Egypt. He has worked as the Head of the Computer Science Department and the Vice Dean of the Faculty of Computers and Information Systems, Mansoura University, Egypt. He is currently a Professor of computer science with Mansoura University. He is the author of over 160 articles published in refereed international journals. His interests include artificial intelligence, pattern recognition, machine learning, image processing, cloud computing, and the Internet of Things (IoT). He has served as a Reviewer for various international journals, such as IEEE JOURNAL OF INTERNET OF THINGS (IoT) and Elsevier.
TAMER BELAL received the Ph.D. degree in neurology from the Faculty of Medicine, Mansoura University, in February 2009. He is currently a Professor of neurology with the Faculty of Medicine, Mansoura University. His interests include acute stroke management and intervention neurology transcranial magnetic stimulation, neuroimmunology, pediatric neurology, epilepsy and neurophysiology, neurodegenerative diseases, and dementia.
SHAHENDA SARHAN received the B.Sc., M.Sc., and Ph.D. degrees in computer sciences from Mansoura University, Egypt. She has been an Associate Professor in computer science with the Faculty of Computers and Information, Mansoura University, since 2012. She is currently delegated as an Associate Professor with the Faculty of Computing and Information Technology, King Abdul Aziz University. Her interests include artificial intelligence and computer networks. VOLUME 9, 2021