Discrimination of Genuine and Acted Emotional Expressions Using EEG Signal and Machine Learning

We present one of the first studies that attempt to differentiate between genuine and acted emotional expressions using EEG data, and we introduce the first EEG dataset (available here) with recordings of subjects producing genuine and fake emotional expressions. We built our experimental paradigm around the classification of smiles: genuine smiles, fake/acted smiles, and neutral expressions. We propose multiple methods to extract intrinsic features from the EEG signals of the three expression types. EEG features were extracted using three time-frequency analysis methods, namely the discrete wavelet transform (DWT), empirical mode decomposition (EMD), and DWT incorporated into EMD (DWT-EMD), in three frequency bands. We then evaluated the proposed methods using several classifiers, including k-nearest neighbors (KNN), support vector machines (SVM), and artificial neural networks (ANN). Twenty-eight subjects underwent an experimental paradigm eliciting three types of emotional expression: genuine, neutral, and fake/acted. The results showed that incorporating DWT into EMD extracted more hidden features than the DWT or EMD method alone. The power spectral features extracted by DWT, EMD, and DWT-EMD showed different neural patterns across the three emotional expressions in all frequency bands. We performed binary classification experiments and achieved acceptable accuracy, reaching a maximum of 84% across all emotion pairs, classifiers, and bands when using DWT or EMD alone. Meanwhile, the DWT-EMD combination with an ANN achieved the highest classification accuracy in discriminating true from fake emotional expressions in the alpha and beta bands, with average accuracies of 94.3% and 84.1%, respectively. Our results suggest combining DWT with EMD in future emotion studies and highlight the association of the alpha and beta frequency bands with emotions.


I. INTRODUCTION
Emotions are mental processes that are triggered by conscious or unconscious experiences [1]. Researchers have suggested that emotions and their dynamics influence cognition and behavior [2]. In particular, emotions tend to profoundly influence both the physical and psychological behavior of an individual [2]. Thus, emotion recognition is a critical factor for several domains, such as human-robot interaction, characterizing the level of interest in learning, measuring happiness and satisfaction, identifying the level of vigilance in road safety, quantifying stress, and detecting patients' mental and physical states [1], [3]. Several methods have been proposed in the literature to evaluate emotions [4]-[7]. Self-reporting may be the most straightforward approach to assess emotions and emotional behavior, but such methods remain subjective and require the full attention of the user. More objective measures include facial expression, speech analysis, and analysis of physiological responses. In our day-to-day interactions with computers or people, we express our feelings in the form of various emotional behaviors, and our spontaneous expressive behavior tells a lot about how we feel. Facial expressions and speech tone analysis are the most widely used non-physiological signals for emotion detection. Hoque and his colleagues [8] investigated automated facial expression analysis to discriminate between frustrated and delighted smiles and were able to distinguish smiles under frustrating and delightful stimuli with 92% accuracy. However, at times social circumstances may intimidate people into concealing the felt emotion. Thus, interpreting emotions from facial expressions or audio signatures may fail to reveal the true natural mental state of people [9].

(The associate editor coordinating the review of this manuscript and approving it for publication was Gang Wang.)
In particular, false alarms are likely to occur, since these facial attributes are not always accompanied by emotions. Besides, analyzing facial expressions requires a fixed camera, and analyzing audio signatures requires a microphone, which raises privacy concerns. Some studies have used multimodal approaches that combine speech, facial, and physiological signals for emotion recognition [6], [10]-[12]. One thing to note is that facial and audio expressions may not reflect the intrinsic mental state of human beings, whereas physiological signals reflect real emotions. In particular, changes in physiological signals related to emotional states are involuntary, and people are often unaware of them. One may argue that accurate measurement of felt emotions needs a more reliable assessment method. The question that arises here is whether we can differentiate between actual and acted emotional expressions using other objective means.
Basic human interactions rely upon expressional behaviors. For instance, smiling behavior generally implies happiness. However, a smiling person may not be happy; according to Hoque et al. [8], smiling may even signify frustration. In particular, the smile is a universal and multifaceted expression, and the emotions underlying expressions like smiling need to be explored. Modalities that can recognize human emotional states at a deeper and more accurate level help us build better and more reliable human-computer interface systems. In this study, we attempt to use electroencephalography (EEG) to find out whether an acted emotion can be differentiated from a genuine one. EEG is one of the most commonly used techniques to study brain functions and conditions, with millisecond temporal resolution.
In the literature, many datasets have been used for emotion recognition studies. Publicly available emotion databases include SEED [13], DEAP [14], MAHNOB-HCI [15], and MPED [9]. Many studies have analyzed EEG data from these datasets to identify and classify positive, neutral, negative, pleasant, and unpleasant emotions. However, analyzing EEG signals to obtain the most discriminative features for representing different emotions remains a challenging problem for emotion recognition systems.
There are many techniques to analyze EEG signals. Previous studies have used feature extraction techniques in the time domain, frequency domain, and time-frequency domain, as well as functional connectivity networks, to analyze EEG data [16]-[18]. Among the signal analysis methods used for studying the primary rhythms in the signals, spectral analysis is the most popular. However, EEG signals are highly non-stationary, and traditional methods like the Fourier transform fail to accommodate this non-stationary characteristic. This is due to the global assumption of harmonic components when analyzing real-life signals, which in turn results in a broader Fourier spectrum and a misleading energy-frequency distribution [19]. Signal analysis in time-frequency space, unlike pure spectrum analysis, may help capture the fast dynamic changes in the neural spectra.
Several research groups have reported various techniques for classifying emotions using EEG time-frequency methods. The discrete wavelet transform (DWT) and empirical mode decomposition (EMD) have been used to analyze EEG data in many works in the literature [2], [20]. DWT is particularly useful for capturing neural-specific domains in signals with regular frequency change [19], [21]. Emotion recognition based on the EMD method has not been investigated much, even though EMD has been widely used for seizure detection and motor imagery classification. One recent study on emotion recognition from multidimensional information in EMD evaluated its performance on the Database for Emotion Analysis using Physiological Signals (DEAP) [22]. The results showed that EMD outperforms time-domain methods by benefiting from higher-frequency content information [22]. One of the significant challenges in applying EMD-based methods in a human-computer interface is their high computational time requirement [18]. Sweeney-Reed et al. [18] suggested that formulating a mathematical basis for EMD could give new insights into the underlying neural processes present in the EEG signal. They pointed out that the frequency of oscillations related to a particular cognitive behavior varies over time, which implies that EMD is useful for reflecting the underlying physiological processes specific to the study of emotions.
It may be noted that direct application of EMD to raw EEG data may lead to intermittencies [23]. Besides, results from previous studies suggest that a combination of the EMD and DWT methods can retrieve useful characteristics present in nonlinear signals [24]. Using the wavelet transform prior to EMD on the EEG signal is advantageous, as it can detect and characterize singularities [23]. Munoz et al. [21] reported that EMD techniques result in signal softening and noise reduction; thus, EMD can suppress noise accumulated from the wavelet transformation [23]. Conversely, the lack of adaptability of wavelet analysis can be compensated by incorporating EMD analysis. Ji et al. [25] used DWT- and EMD-based techniques for extracting non-linear features to improve the classification accuracy of motor imagery from EEG signals and showed promising results.
In the present work, we aim to study emotional expressions using EEG signals and machine learning approaches. Thus, we developed a novel protocol to induce genuine and acted emotional expressions within the two-dimensional (arousal-valence) model of emotions. Then, we analyzed these emotional expressions using three different feature extraction methods utilizing DWT, EMD, and DWT incorporated into EMD (DWT-EMD). The data analysis presented in this study involves three frequency bands, theta, alpha, and beta, due to their strong association with emotions. Furthermore, we assessed the feasibility of emotional expression detection through three different classifiers: k-nearest neighbors (KNN), support vector machines (SVM), and artificial neural networks (ANN). To the best of our knowledge, this is the first work on discriminating acted from actual expressions using three time-frequency feature analysis methods on EEG data. Hence, our key contributions in this work are as follows:
• Using EEG signals to differentiate genuine from acted emotional expressions.
• Experimental protocol and design to achieve the above.
• Introduction of a new database that contains EEG recordings of 28 subjects with acted and genuine smiles, which can be used by other researchers in the area.

The rest of the paper is organized as follows. Section II describes our experimental stimuli, data collection, and preprocessing methods. Section III presents the proposed feature extraction methods, statistical analysis, and classification models. Section IV presents the results and classification analysis. Section V discusses the results and provides suggestions for future research. Finally, Section VI concludes this study.

II. MATERIALS AND METHODS

A. SUBJECTS
Twenty-eight healthy students (20 males and 8 females; age 20±2 years) participated in this study. All participants had normal or corrected-to-normal vision and no history of neurological or psychiatric illness. The study procedures were explained to the participants, and they signed an informed consent form prior to the experiment. All procedures followed the Declaration of Helsinki, and the experimental protocol was approved by the institutional review board (IRB) of the American University of Sharjah.

B. EXPERIMENT PROTOCOL
In this study, the emotion-eliciting stimuli comprised 246 still images obtained from two online public image datasets: the Open Affective Standardized Image Set (OASIS) [26] and the Geneva Affective Picture Database (GAPED) [27]. Three types of image sets were chosen for this study (116 funny images, 70 neutral images, and 60 plain images). The funny pictures involved human and animal babies; the neutral pictures included nature scenes; and the plain images mainly depicted a plain book. All images used in this study were rated on the valence-arousal scale [26], [27]. Images were presented on a 19-inch LCD screen kept 50 cm away from the participant. The image presentation order was semi-randomized, with the condition that no currently viewed picture belonged to the same category as the previously rated one. Three different event markers were sent to mark the epochs/trials of each type of image stimulus. The participants were instructed to pose an acted smile once the target image (the plain image) appeared on the screen and to hit the corresponding key ('Q'); this was done to invoke an acted/fake emotion in the subject. In addition, all participants were asked to hit 'P' or 'N' once, and only when they felt their feeling had changed, to indicate the corresponding emotional expression: a true smile (hitting 'P') or a neutral expression (hitting 'N'), respectively. There were a total of 246 trials in this experiment, as shown in FIGURE 1. Each trial had a one-second drift check followed by two seconds of an emotion-stimulating image. The entire experiment lasted about 13 minutes, and the number of completed trials varied between subjects depending on their response speed. All trials were labeled according to the participant's response, and only successful trials that induced genuine and acted smiles were considered for the analysis.

C. DATA ACQUISITION AND PREPROCESSING
The EEG data was recorded using 64 Ag/AgCl scalp electrodes arranged according to the standard 10-20 system (ANT waveguard system and ASA Lab 4.9.2 acquisition software, ANT Neuro, the Netherlands). The EEG data was sampled at 500 Hz. The impedances of all EEG electrodes were maintained below 10 kΩ, and the electrodes were referenced to the left and right mastoids, M1 and M2. FIGURE 2 shows an example of the experimental setup and data acquisition layout.
The acquired EEG data was preprocessed using the EEGLAB toolbox (9.0.4) [28] with custom scripts developed in [29], [30]. Eye blinks were detected through visual inspection and discarded both manually and using the independent component analysis (ICA) method available in EEGLAB. The components representing artifacts, such as eye blinks, eye movements, and muscular activity, were removed, and the remaining components were used to reconstruct the clean EEG signals. Typically, only one or two independent components relevant to eye blinks or eye movements were removed for each subject. All EEG signals were bandpass filtered using a finite impulse response (FIR) filter with a 0.1 Hz to 40 Hz passband, and the power line interference was removed using a 50 Hz notch filter. The EEG data was re-referenced to the computed average reference, and mean subtraction was performed to remove the baseline and DC offset. The clean signals were then segmented into epochs with a length of 1100 ms, resulting in 230 epochs corresponding to the three types of image stimuli. For all three types of emotion, features were extracted using an equal number of epochs (60 epochs per type of emotion, for a total of 180 epochs).
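The filtering, re-referencing, and epoching steps above can be sketched in Python with SciPy. This is a minimal illustration under stated assumptions: the FIR filter order, the zero-phase filtering via `filtfilt`, and the `epoch` helper are our own choices, and the ICA-based artifact removal performed in EEGLAB is omitted here.

```python
import numpy as np
from scipy.signal import firwin, filtfilt, iirnotch

FS = 500  # sampling rate (Hz), as used in the recordings

def preprocess(eeg):
    """eeg: (n_channels, n_samples) array of raw EEG."""
    # 0.1-40 Hz band-pass FIR filter (zero-phase via filtfilt);
    # 1651 taps is an illustrative choice, not the paper's setting
    taps = firwin(numtaps=1651, cutoff=[0.1, 40.0], pass_zero=False, fs=FS)
    eeg = filtfilt(taps, [1.0], eeg, axis=-1)
    # 50 Hz notch for power-line interference
    b, a = iirnotch(w0=50.0, Q=30.0, fs=FS)
    eeg = filtfilt(b, a, eeg, axis=-1)
    # re-reference to the common average, then remove each channel's DC offset
    eeg = eeg - eeg.mean(axis=0, keepdims=True)
    eeg = eeg - eeg.mean(axis=-1, keepdims=True)
    return eeg

def epoch(eeg, onsets, length_ms=1100):
    """Cut fixed-length epochs (1100 ms = 550 samples) at the given onsets."""
    n = int(FS * length_ms / 1000)
    return np.stack([eeg[:, s:s + n] for s in onsets])
```

In practice the event markers described in Section II-B would supply the `onsets` list.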

III. FEATURE EXTRACTION
In this work, we utilize three different algorithms for feature extraction: (1) the discrete wavelet transform (DWT), (2) empirical mode decomposition (EMD), and (3) a combination of the two (DWT-EMD). The following subsections describe the implementation of these methods. To study the cognitive processes, we analyze oscillatory activity in the 4-30 Hz frequency range (i.e., the theta, alpha, and beta bands), where the EEG frequency components are found to reflect changes in their power spectra. FIGURE 3 shows the proposed framework for feature extraction and emotion classification.

1) DISCRETE WAVELET TRANSFORM (DWT)
The clean EEG signals were decomposed into seven levels using the discrete wavelet transform (DWT). The Daubechies 4 (Db4) wavelet family was employed in this study due to its near-optimal time-frequency localization and its similarity to the EEG signal waveform [31]. At each level, the signal is downsampled by a factor of two.

The subsets of wavelet coefficients corresponding to the theta (DWT decomposition level 6), alpha (level 5), and beta (level 4) bands were used for feature extraction. The mean power of the EEG signals in the three frequency bands for each electrode was computed using a moving time-window of 1.1 seconds, according to (1), as suggested by [30]:

P_j = (1/N) * sum_{n=1}^{N} |x_j(n)|^2    (1)

where P_j is the EEG mean power, x_j(n) represents the segmented EEG signal in the theta band at j = 1, the alpha band at j = 2, and the beta band at j = 3, and N is the length of the EEG signal. This gives a total of 11160 features per subject (corresponding to 60 epochs * 62 electrodes * 3 frequency bands). The significant features in each frequency band were then used as input to the machine learning classifiers.
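As a sketch of this step, the band powers of (1) can be computed from the Db4 detail coefficients using the PyWavelets package. The mapping of decomposition levels to bands follows the text (level 6 for theta, 5 for alpha, 4 for beta at a 500 Hz sampling rate); computing the power directly on the detail coefficients, rather than on reconstructed sub-band signals, is an assumption of this sketch.

```python
import numpy as np
import pywt

# pywt.wavedec at level 7 returns [cA7, cD7, cD6, cD5, cD4, cD3, cD2, cD1];
# at fs = 500 Hz, cD6 covers roughly 3.9-7.8 Hz (theta),
# cD5 roughly 7.8-15.6 Hz (alpha), and cD4 roughly 15.6-31.3 Hz (beta)
BAND_LEVEL = {"theta": 6, "alpha": 5, "beta": 4}

def dwt_band_power(x, wavelet="db4", level=7):
    """Eq. (1) mean power, P_j = (1/N) sum |x_j(n)|^2, per frequency band,
    computed here on the detail coefficients of one electrode's signal."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # detail coefficients cD_k sit at index (level + 1) - k
    return {band: float(np.mean(coeffs[level + 1 - lvl] ** 2))
            for band, lvl in BAND_LEVEL.items()}
```

For a pure 10 Hz test tone, the alpha-band coefficients should carry most of the energy, which is a quick sanity check on the level-to-band mapping.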

2) EMPIRICAL MODE DECOMPOSITION
EMD is used to decompose the clean EEG signals into a finite number of intrinsic mode functions (IMFs) without any prior basis definition, unlike predictive methods [20]. The EMD technique is based on direct energy extraction in the time scale. Each IMF represents a different frequency component of the original signal and satisfies two conditions:
• The difference between the number of extrema and the number of zero crossings is at most one.
• At any point, the mean value of the envelope defined by the local maxima and the local minima is zero.
Once the iterative sifting process of EMD satisfies these two conditions, the final EMD representation in the time domain is given in (2). The local energy and the instantaneous frequency derived from the IMFs through the Hilbert transform give the full energy distribution of the data, which is ideal for nonlinear and non-stationary data analysis [19].
The EMD of the EEG signal X for each frequency band (theta, alpha, and beta) is given as:

X(t) = sum_{i=1}^{N} c_i(t) + r(t)    (2)

where c_i is the i-th IMF component, N is the number of IMFs, and r is the residue component.
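A minimal, self-contained EMD sketch follows, assuming cubic-spline envelopes pinned at the signal endpoints and a fixed number of sifting iterations (production implementations use more careful stopping criteria). By construction, the IMFs and residue sum back to the input signal, as in (2).

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def _envelope(idx, h, t):
    # pin the spline at the signal endpoints to avoid extrapolation blow-up
    idx = np.r_[0, idx, len(h) - 1]
    return CubicSpline(idx, h[idx])(t)

def sift(x, n_iter=10):
    """Repeatedly subtract the mean of the upper/lower envelopes so the
    candidate approaches the two IMF conditions (extrema and zero-crossing
    counts differing by at most one; zero local envelope mean)."""
    t = np.arange(len(x))
    h = x.astype(float).copy()
    for _ in range(n_iter):
        maxima = argrelextrema(h, np.greater)[0]
        minima = argrelextrema(h, np.less)[0]
        if len(maxima) < 2 or len(minima) < 2:
            break  # too few extrema left: h is (close to) a residue
        h = h - (_envelope(maxima, h, t) + _envelope(minima, h, t)) / 2
    return h

def emd(x, n_imfs=3):
    """Decompose x into IMFs plus a residue, x = sum_i c_i + r, as in (2)."""
    imfs, r = [], np.asarray(x, dtype=float).copy()
    for _ in range(n_imfs):
        c = sift(r)
        imfs.append(c)
        r = r - c
    return np.array(imfs), r
```

The Hilbert-transform step for instantaneous frequency is omitted; only the decomposition needed for the power features of this study is shown.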

3) DWT-EMD METHOD
In this method, prior to EMD, the EEG data is first subjected to the wavelet transform to split it into a set of narrowband signals. The appropriate sub-band signal is then selected and decomposed into intrinsic mode functions with more concentrated frequency content [32]. Using the wavelet transform prior to EMD on a clean EEG signal is advantageous, as it can detect and characterize singularities [24]. In line with this, we used the Daubechies 4 (Db4) wavelet to decompose the clean EEG signal into the levels corresponding to the theta, alpha, and beta bands. We then applied EMD to the three decomposed frequency bands to obtain their IMF components, as given in (2). From the first three IMF components, we extracted the mean power features using a time-window of 1.1 seconds, similar to (1). This resulted in 11160 features per subject (corresponding to 60 epochs * 62 electrodes * 3 frequency bands). Significant features were then used as input to the machine learning classifiers to distinguish between the three types of emotion. Meanwhile, the averages of all features across subjects were represented as topographies covering the scalp at the 62 electrodes.
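The DWT-EMD pipeline can be sketched as follows, with PyWavelets for the band isolation and a deliberately small sifting-based EMD stage. The sub-band is reconstructed by zeroing all other coefficient arrays, which is one common way to realize this step; the spline-envelope details are the same illustrative assumptions as in the EMD sketch above.

```python
import numpy as np
import pywt
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def band_signal(x, band_level, wavelet="db4", level=7):
    """Isolate one sub-band by zeroing every coefficient array except cD_band_level."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    keep = level + 1 - band_level  # cD_k sits at index (level + 1) - k
    coeffs = [c if i == keep else np.zeros_like(c) for i, c in enumerate(coeffs)]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

def _env(idx, h, t):
    idx = np.r_[0, idx, len(h) - 1]  # pin envelopes at the endpoints
    return CubicSpline(idx, h[idx])(t)

def first_imfs(x, n_imfs=3, sift_iters=8):
    """Very small EMD: repeated sifting with cubic-spline envelopes."""
    t, r, imfs = np.arange(len(x)), x.copy(), []
    for _ in range(n_imfs):
        h = r.copy()
        for _ in range(sift_iters):
            mx = argrelextrema(h, np.greater)[0]
            mn = argrelextrema(h, np.less)[0]
            if len(mx) < 2 or len(mn) < 2:
                break
            h = h - (_env(mx, h, t) + _env(mn, h, t)) / 2
        imfs.append(h)
        r = r - h
    return imfs

def dwt_emd_power(epoch_1ch, band_level):
    """Eq. (1) mean power of the first three IMFs of one DWT sub-band."""
    imfs = first_imfs(band_signal(epoch_1ch, band_level))
    return [float(np.mean(c ** 2)) for c in imfs]
```

For a 10 Hz test tone at 500 Hz sampling, the alpha sub-band (level 5) should carry far more IMF power than the beta sub-band (level 4).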

A. FEATURE DIMENSION REDUCTION
Feature dimension reduction is used to select the most relevant features from the feature set, usually prior to machine learning classification, to achieve optimum performance. A variety of feature dimension reduction methods have been introduced in the literature, such as the paired-sample t-test, Fisher distance, and mutual information [33], [34].

In our study, the paired-sample t-test is used as the feature dimension reduction technique. The paired-sample t-test was performed to test the significance of the power features in the EEG signals between the true, neutral, and fake emotions. Before conducting the t-test, we used the Kolmogorov-Smirnov test to check whether the data is normally distributed [35]. The p-value of the paired-sample t-test indicates the significance of the difference between two sample groups; we selected features with p-values of less than 0.05 for emotion recognition.
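A sketch of this selection step with SciPy, assuming paired epochs between two conditions and using the Kolmogorov-Smirnov test as a simple normality gate on the paired differences before the t-test (the exact gating used in the study may differ):

```python
import numpy as np
from scipy import stats

def select_features(feats_a, feats_b, alpha=0.05):
    """Paired-sample t-test feature selection between two emotion conditions.
    feats_a, feats_b: (n_epochs, n_features) mean-power features with paired
    epochs. Returns the indices of features with p < alpha."""
    keep = []
    for j in range(feats_a.shape[1]):
        d = feats_a[:, j] - feats_b[:, j]
        # Kolmogorov-Smirnov check that the paired differences look normal
        z = (d - d.mean()) / (d.std() + 1e-12)
        if stats.kstest(z, "norm").pvalue < alpha:
            continue  # t-test normality assumption violated; skip this feature
        if stats.ttest_rel(feats_a[:, j], feats_b[:, j]).pvalue < alpha:
            keep.append(j)
    return np.array(keep, dtype=int)
```

The returned index set is what would be fed to the classifiers of the next subsection.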

B. CLASSIFICATION AND PERFORMANCE ANALYSIS
To distinguish between the three emotions, we employed multiple classifiers. According to the literature [36], SVM is the most commonly applied classifier when PSD features serve as the input; hence, we used SVM as one of the methods to classify emotions. However, the study in [37], which applied artificial neural networks (ANN) to recognize positive, neutral, and negative emotions, yielded higher accuracies than SVM and KNN techniques. Thus, we chose to deploy all three classifier techniques, KNN [38], SVM [39], and ANN [40], and to test which technique works best for our EEG dataset. The parameters used for each classifier are summarized in TABLE 1; these parameters were obtained with validation on the training dataset. At this level, we classified true smile from fake smile, true smile from neutral expression, and fake smile from neutral expression. With each classifier, we performed subject-dependent classification with 5-fold cross-validation. The selected features of each subject were randomly split into five equally sized subsets. We trained the classifiers using four subsets and tested on the remaining one, and we repeated this procedure five times so that each subset was used once for testing and predicted labels were obtained for all samples. We used the following metrics for evaluating the performance of the classifiers: accuracy, sensitivity, and specificity. The classification accuracy was calculated as the percentage ratio of correctly predicted samples to all samples in the dataset. The sensitivity is the percentage of true positive (TP) cases that are correctly predicted out of all actual positives, and the specificity is the percentage of true negative (TN) cases that are correctly predicted out of all actual negatives.

IV. RESULTS

A. TOPOGRAPHICAL ANALYSIS

Fake emotion-The topographical maps for fake emotion show a much lower cortical activation pattern compared to the true and neutral emotions. Nevertheless, higher activation is observed in the right prefrontal, right temporal, and left parietal regions.
Using features from the EMD method, the observations from the corresponding topographical maps in FIGURE 5 are as follows: True emotion-In the beta band, most of the electrodes in the right hemisphere, specifically the right parietal and right occipital regions, show higher activation than those in the left hemisphere. Meanwhile, the alpha band shows high activation over the right hemisphere and the left prefrontal electrodes, with the highest alpha activation located in the prefrontal areas. In the theta band, only a few electrodes show the highest activations, located at the right frontal and right parietal regions.
Neutral emotion-In the beta band, the right hemisphere and central parietal electrodes show higher activations compared to other regions. In the alpha band, the left prefrontal, right parietal, and central electrodes show higher activations. In the theta band, the right frontal and left fronto-temporal regions show high activations compared to other regions.
Fake emotion-Across the beta band, only the right parietal and occipital electrodes show higher activation energy. In the alpha band, the left prefrontal and right temporal electrodes demonstrate higher activation energy.

The overall result from the three feature extraction methods is that DWT-EMD showed higher activations in all types of emotion and frequency bands compared to the DWT or EMD method alone. These higher activations indicate that DWT-EMD extracted hidden features that cannot be extracted by the DWT or EMD method alone. It was also shown that the cortical activations shifted between brain regions with the type of emotion, indicating the effectiveness of the image stimuli in inducing different emotions. The higher cortical activations within each type of emotion and frequency band are consistent across the three methods of analysis.

B. CLASSIFICATION PERFORMANCE
We evaluated the performance of the three proposed feature extraction methods using KNN, SVM, and ANN. The mean classification accuracy, sensitivity, and specificity, with the standard deviation across subjects, in all frequency bands and types of emotion are summarized in TABLE 2. As mentioned earlier, we conducted binary classification across three cases (case 1: true smiles vs. fake smiles; case 2: true smiles vs. neutral expression; case 3: fake smiles vs. neutral expression). The features extracted by the DWT technique exhibited maximum accuracy in classifying a true smile from a fake one with the ANN classifier in the beta band (accuracy 68.6%, sensitivity 68.5%, and specificity 68.7%). Meanwhile, the highest accuracy attained by the empirical mode decomposition method was in case 1 with the SVM classifier in the beta band (accuracy 84.1%, sensitivity 81.3%, and specificity 86.8%). Finally, applying the discrete wavelet transform prior to empirical mode decomposition significantly improved the classification performance: the maximum accuracy was again found in case 1, using the ANN classifier in the alpha band (accuracy 94.3%, sensitivity 92.9%, and specificity 95.6%).
The classifier performance across the three methods was validated statistically by comparing (i) DWT to EMD, (ii) DWT to DWT-EMD, and (iii) EMD to DWT-EMD. It was found that DWT-EMD significantly outperforms both the DWT and EMD methods alone (p < 0.01) in most bands and classifiers.
Besides, EMD outperformed DWT for most classifiers and bands, as shown in TABLE 2. Among the classifiers, ANN performed best, and the highest accuracy was observed in the alpha frequency band. It should be noted that emotion recognition is mostly associated with higher frequency bands such as alpha and beta, which may explain why the performance was high in the alpha band. It is also worth noting that ANN tends to outperform SVM or KNN when sufficient training data is available, as suggested in [16]; this may be one reason why the ANN classifier outperformed the other classifiers in this study.
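For reference, the subject-dependent 5-fold cross-validation and metric computation described in Section III-B can be sketched with scikit-learn as follows; the hyperparameters shown are placeholders and do not reproduce the settings of TABLE 1.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix

def evaluate(X, y, make_clf, n_splits=5, seed=0):
    """Subject-dependent 5-fold CV returning accuracy/sensitivity/specificity."""
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    y_pred = np.empty_like(y)
    for train, test in cv.split(X, y):
        model = make_clf()          # fresh classifier per fold
        model.fit(X[train], y[train])
        y_pred[test] = model.predict(X[test])
    tn, fp, fn, tp = confusion_matrix(y, y_pred).ravel()
    return {"accuracy": (tp + tn) / len(y),
            "sensitivity": tp / (tp + fn),   # TP out of all actual positives
            "specificity": tn / (tn + fp)}   # TN out of all actual negatives

# Placeholder hyperparameters; the paper's TABLE 1 settings are not reproduced here.
classifiers = {
    "KNN": lambda: KNeighborsClassifier(n_neighbors=5),
    "SVM": lambda: SVC(kernel="rbf", C=1.0),
    "ANN": lambda: MLPClassifier(hidden_layer_sizes=(50,), max_iter=500,
                                 random_state=0),
}
```

Each binary case (true vs. fake, true vs. neutral, fake vs. neutral) would be run through `evaluate` once per classifier and frequency band.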

V. DISCUSSIONS
The goal of this study was to discriminate genuine and acted expressions using EEG signals. For this purpose, an experimental paradigm was designed to acquire the brain signals associated with three emotional expressions: true, fake, and neutral. Emotion-specific studies rely greatly on the choice of emotional stimuli; thus, we chose static images as visual stimuli. The induced emotional expressions were investigated by extracting features from the acquired EEG signals using three different time-frequency analysis methods. Moreover, how well these emotions could be distinguished was evaluated using machine learning with multiple classifiers. The achieved results indicate that it is possible to distinguish between true, neutral, and fake emotional expressions using EEG signals. To the best of our knowledge, this is the first study investigating emotions in the form of smile expressions using EEG signals and machine learning approaches.
In this paper, we reported the neural activation patterns associated with the three different emotional expressions, represented by their topographical maps. These distribution maps were plotted to give a clear idea of the active EEG electrodes under each type of emotion.
Besides, we performed subject-dependent classification between the three emotions using ANN, SVM, and KNN. The topographical maps in FIGURES 4 to 6 reveal that there exists a specific neural pattern associated with each type of emotional expression. Our results showed that the mental processes and cognitive activities in response to emotional stimuli were related to the prefrontal region of the brain. In particular, we found that the topographical maps showed higher frontal and parietal activity for true emotion. Our findings agree with the results of previous emotion studies that highlighted the associations of specific neural patterns with different emotions [13], [22], [41]. This is also in line with previous functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) studies, which demonstrated that remembering happy events is primarily associated with the activation of many areas, including the anterior cingulate cortex and the prefrontal cortex [42].
It was also noted that, across all the time-frequency analysis methods utilized for feature extraction, the maximum classification accuracy was achieved in the case of a true smile versus a fake smile. This indicates that the type of image stimulus significantly modulated the brain responses. Analyzing the classification performance across the three EEG frequency bands, the classifiers performed slightly better in the alpha and beta bands compared to the theta band. This is consistent with previous studies, which revealed that emotional processes are better reflected in the higher frequency bands [13], [43].
Looking closely at the three feature extraction methods employed, the DWT-EMD method yielded the highest classification accuracies in the alpha band, with 94.3%, 92.4%, and 83.8% for classifying true emotional expressions from fake ones using ANN, SVM, and KNN, respectively. Likewise, DWT-EMD in the beta band showed comparable accuracy in classifying neutral emotional expressions from true and fake emotions, and DWT-EMD in the theta band obtained its highest accuracy when classifying true from fake emotional expressions. The higher accuracy obtained in the alpha and beta bands is consistent across all subjects, as shown by the small standard deviations. This finding is also consistent with previous studies on emotion classification that demonstrated high accuracy in the higher frequency bands [41], [44]. Thus, we suggest that the higher frequency bands, alpha and beta, are more useful for predicting emotional expressions from EEG signals.
Another noteworthy point is that there were significant improvements in the classification accuracies when incorporating the DWT prior to the EMD method. The features extracted by applying a discrete wavelet transform followed by empirical mode decomposition yielded better results than applying EMD or DWT alone. In the alpha band, we found that DWT-EMD outperformed DWT by 27.9%, 19.7%, and 24.7% using ANN, KNN, and SVM, respectively; similar improvements were found in the beta and theta bands. Likewise, DWT-EMD outperformed EMD in the alpha band by 13.2%, 14.1%, and 12.9% using ANN, KNN, and SVM, respectively. According to [45], applying DWT before decomposing the signal into IMFs helps avoid the wide frequency band coverage experienced when the EMD technique is applied directly to non-stationary signals like EEG. This may be one of the reasons for the performance enhancement in the classification of EEG signals in our work. Thus, it can be inferred that the application of DWT prior to EMD indeed enhances classifier performance and is a useful method for feature extraction from EEG signals.
This study has improved the emotion recognition rate significantly with the DWT-EMD method. However, there are some limitations. First, our analysis is based entirely on static images. Many previous works have used multiple stimuli for emotion recognition, and incorporating different types of stimuli could provide a better understanding of the emotional expressions. Second, this study was conducted in a single session; the possibility of monitoring the stability of EEG responses over different sessions should be explored, as this would provide more insight into emotion recognition. Third, the feature dimensionality reduction/selection method used in this study was based on a simple statistical t-test. This method is univariate and does not consider multiple variables together or their possible interactions. Future studies should consider more robust feature selection methods, such as correlation-based channel selection [46], bispectrum-based selection [47], and the internal feature selection of the common spatial pattern method [48]. In addition, to further improve the classification accuracy, researchers may combine multiple modalities, such as EEG with eye tracking, EEG with functional near-infrared spectroscopy, or a combination of the three; these modalities contain complementary information and can be integrated to construct a more robust emotion estimation model. Finally, our study focused on positive emotions; the application of the DWT-EMD method to other types of emotions, such as sadness and amusement, can be evaluated in future studies.

VI. CONCLUSION
In this study, we attempted to differentiate between acted and actual emotions using EEG signals. We developed an experimental paradigm to elicit three different emotional expressions: true, fake/acted, and neutral. To extract useful information for distinguishing between these three types of expressions, we used time-frequency techniques on the EEG signals and further applied three machine learning algorithms: ANN, SVM, and KNN. The attained classification performance and power distributions suggest that there exists a difference in the way humans express genuine and acted expressions. The prefrontal electrodes exhibited higher activation patterns in the power distribution maps. We achieved a maximum classification accuracy of 94.3% using the ANN classifier in the alpha frequency band with the DWT-EMD method. Of the three classifier techniques used for emotion recognition in our study, the ANN and KNN classifiers yielded the best results. In short, we present the first work of its kind on differentiating fake from genuine expressions using EEG signals. We designed human subject experiments and collected a valuable dataset that can be used by other researchers to advance the state of the art in this new area.
HASAN AL NASHASH (Senior Member, IEEE) has been with several biomedical engineering departments and hospitals, including the National University of Singapore, Johns Hopkins University, and Rashid Hospital, Dubai. He is currently a Professor, the Interim Director of the Biosciences and Bioengineering Research Institute, and a former Chair of the Department of Electrical Engineering, American University of Sharjah. He has designed and developed several electronic instruments to measure various biodynamic parameters. He led the effort to establish the M.S. graduate program in biomedical engineering and is leading the effort to establish the Biosciences and Bioengineering Research Institute, AUS. He has played an active role in organizing several biomedical and electrical engineering conferences. He is the author of more than 100 journal and conference papers and five book chapters, and he holds two issued U.S. patents. His main research interests include neuroengineering, signal processing, and microelectronics. He is a former Middle East and Africa Representative with the IEEE-EMBS Administrative Committee.