Neurofeedback Training With an Electroencephalogram-Based Brain-Computer Interface Enhances Emotion Regulation


Abstract-Emotion regulation plays a vital role in people's daily lives by helping them deal with social problems and protecting mental and physical health. However, objective evaluation of the efficacy of emotion regulation and assessment of the improvement in emotion regulation ability at the individual level remain challenging. In this study, we leveraged neurofeedback training to design a real-time EEG-based brain-computer interface (BCI) system for users to effectively regulate their emotions. Twenty healthy subjects performed 10 BCI-based neurofeedback training sessions to regulate their emotions toward a specific emotional state (positive, negative, or neutral), while their EEG signals were analyzed in real time via machine learning to predict their emotional states. The prediction results were presented as feedback on the screen to inform the subjects of their immediate emotional state, based on which the subjects could update their strategies for emotion regulation. The experimental results indicated that the subjects improved their ability to regulate these emotions through our BCI neurofeedback training. Further EEG-based spectrum analysis revealed how each emotional state was related to specific EEG patterns, which were progressively enhanced through long-term training. Together, these results suggest that long-term EEG-based neurofeedback training could be a promising tool for helping people with emotional or mental disorders.
Index Terms-Emotion regulation, neurofeedback, electroencephalogram (EEG), brain-computer interface (BCI), neural pattern

1 INTRODUCTION

Emotion is essential to the human experience and helps people adapt to various challenges and needs during their daily lives [1], [2]. Emotion regulation refers to the processes by which individuals modulate the trajectory of their emotions, consciously or unconsciously, to respond to specific demands [3], [4]. Maintaining an emotion suited to a particular situation plays a significant role in protecting mental and physical health and improves the quality of social relationships [5], [6], [7]; conversely, individuals with difficulties in emotion regulation are at high risk of suffering from mental disorders [8]. As levels of social stress have increased in recent years, a growing number of people are experiencing emotional disorders [9]. Therefore, it is critical to establish an effective way for these people to enhance their emotion regulation abilities. Additionally, the neural mechanisms underlying emotion regulation remain unclear and require further research.
In recent years, neurofeedback has arisen as an innovative tool for teaching brain self-regulation, including emotion regulation. To date, the majority of electroencephalography (EEG)-based neurofeedback training methods for emotion therapy function by regulating single-frequency-band EEG activity in specific brain regions or EEG power differences between hemispheres [10], [11], [12], [13]. In these studies, no clear emotional states were displayed, and individuals simply regulated specific biomarkers. EEG-based brain-computer interfaces (BCIs) provide a feasible way to decode personal emotional states in real time and help individuals regulate their emotions. Previous studies have demonstrated that EEG-based BCIs can be used to recognize two types of emotions, positive and negative, in both healthy subjects and patients with disorders of consciousness [14], [15]. Closed-loop BCI training has also been implemented to help mediate these two emotional states, with subjects achieving better regulation performance over two training sessions [16]. Although BCIs have great potential for improving emotion regulation, further development is needed to make EEG-based BCIs capable of regulating multiple emotional states. First, since emotional presentation is a highly complex cognitive process, methods need to focus on modulating neural activities in multiple brain regions. Second, evaluation methods that promptly measure the efficiency of emotion regulation need to be improved, and the training effect of these systems requires verification through additional experiments and long-term observation. Finally, the neural patterns underlying different emotional states and the related regulatory mechanisms remain poorly understood and need to be further explored.
In the present study, we proposed a novel EEG-based BCI system involving a long-term neurofeedback training method that teaches subjects how to regulate their emotional states. Specifically, we describe an experiment that includes 10 BCI training sessions, during which subjects are asked to regulate their emotion to a specific state (positive, neutral, or negative). In certain sessions, corresponding video clips are displayed to help the subjects maintain their target emotions; in other sessions, the subjects evoke the corresponding emotions by themselves without external stimuli. Throughout all sessions, the subjects' current emotional states are provided to them through visual feedback, which is calculated in real time using collected EEG signals. Based on this feedback, the subjects could effectively regulate their emotions and observe the effects of this regulation in real time. After a training session, online accuracy is calculated based on the results of the subject's emotion regulation in each trial. Twenty subjects participated in our experiment and achieved a better average online accuracy over the last several training sessions than in the first session, indicating that our system was effective in helping the subjects learn to regulate their emotions. Furthermore, the topographies of neural power for each frequency band showed that proper emotional patterns were evoked and enhanced through this long-term training process, which may be related to the underlying brain mechanisms of emotion regulation.
This paper is organized into six sections. Related work is reviewed in Section 2. The method and materials are described in Section 3. Experimental results are presented in Section 4. The discussion is provided in Section 5, and conclusions are presented in Section 6.

2 RELATED WORK
Emotion regulation training can include psychological therapy and neurofeedback treatment, both of which are sometimes accompanied by pharmacotherapy. Psychological therapies, such as cognitive behavioral therapy (CBT) and psychodynamic interpersonal therapy (PIT), are broadly used in clinical settings and have certain therapeutic effects on patients with depression [17]. CBT treatments establish connections between an individual's thoughts or attitudes and unpleasant emotions and teach individuals to better discriminate their emotional reactions [18]. PIT treatments teach patients with psychological disorders to increase their awareness of underlying emotions through insight and self-understanding; these treatments are more effective in producing enduring changes in patients [19]. Other studies promote treatment with pharmacotherapy, although the side effects of long-term medication use still need to be studied [20]. Furthermore, these methods may not provide sufficient feedback for patients to objectively evaluate the improvement in their abilities. Feedback can provide effective guidance to subjects and help them take a more active role in achieving self-control [21].
Functional magnetic resonance imaging (fMRI) is commonly used as the basis of neurofeedback training due to its high spatial resolution [22]. Target sites such as the anterior insula, amygdala, and anterior cingulate cortex (ACC), which have been suggested to be associated with cognitive emotion regulation, have been successfully upregulated or downregulated in both healthy people and patients with depression. In addition, regulation of regions in the frontal lobe, such as the lateral prefrontal cortex (LPFC) and orbitofrontal cortex (OFC), has been found to cause significant long-term decreases in anxiety, especially under neurofeedback conditions [23], [24], [25]. Asymmetry of the prefrontal cortex (PFC) and its connection with the amygdala are also common biomarkers for emotion regulation feedback training and have yielded improved performance in previous studies [26]. Furthermore, dysfunction of the PFC may contribute to attention biases and stimulate negative emotions in individuals with anxiety [27]. Based on these findings, functional near-infrared spectroscopy (fNIRS)-based neurofeedback training has also been implemented for patients with mental disorders [28]. However, both fMRI and fNIRS signals have low temporal resolution and may therefore be insensitive to transient changes in brain activity during the emotion regulation process. In addition, due to the poor portability and high cost of MRI scanners, it is difficult to conduct long-term fMRI-based treatments with a large number of participants and to translate the findings to clinical settings and daily life.
Electroencephalography (EEG)-based neurofeedback training methods also provide a feasible way to decode personal emotional states in real time and help individuals regulate their emotions. Some EEG-based neurofeedback training methods for emotion therapy function by regulating single-frequency band EEG activity in specific brain regions, such as alpha activities in the frontal cortex or occipital cortex and theta activities [10], [11], [13]. Alpha and theta enhancement have been shown to be useful for improving anxiety, whereas alpha suppression might increase anxiety [29]. Individuals can use these training methods to reduce the impact of their negative emotions in their daily lives, and some patients with mental disorders can improve their clinical symptoms. In addition, EEG frontal asymmetry has been found in emotional presentation, such that a decrease in right hemisphere activities leads to a reduction in negative affect and anxiety. Based on this neural pattern, neurofeedback training for emotion regulation has been implemented for healthy people and patients with mental disorders [12], [30]. However, in these studies, no clear emotional states were displayed, and individuals simply regulated specific biomarkers.

3 METHOD AND MATERIALS

Subjects
Twenty healthy male adults (average age 22.1 years) with normal or corrected-to-normal visual acuity and normal hearing participated in the experiment. The subjects were all right-handed according to the Edinburgh Handedness Inventory. None of the subjects had prior experience with emotion-related BCI experiments. Each subject was provided with a description of the whole experimental procedure and signed an informed consent form before the experiment. This study was approved by the Ethics Committee of Sichuan Provincial Rehabilitation Hospital (approval number: CKLL-2018008).

Stimulus Materials
In our experiment, we first selected 450 video clips, each lasting more than 1 minute (150 clips each for positive, neutral, and negative emotions). The criteria for selecting the clips were as follows: (a) the videos were easy to understand without additional explanation; (b) each video clip could effectively elicit a single desired emotion; and (c) each video clip had a relatively complete plot. These video clips were all encoded in the WMV format with a frame rate of 30 frames per second and a resolution of 1920 × 1080 pixels. Fifty volunteers, none of whom were subjects in the BCI experiment, were asked to rate their emotional arousal and valence on a modified Self-Assessment Manikin (SAM) with a 9-point scale [31] and to select a keyword (i.e., positive, neutral, or negative) while watching each preliminarily selected video clip. The SAM rating scales for arousal and valence are shown in Figs. 1a and 1b, respectively. In addition, these volunteers identified the timestamps in each video clip that could easily evoke the corresponding emotion. We chose the video clips with a relatively large number of timestamps and discriminative SAM ratings among the three emotional states (i.e., high valence for positive emotion, low valence for negative emotion, and low arousal with balanced valence for neutral emotion) and edited them to be either 30 or 60 seconds long according to the distribution of the timestamps. The average SAM ratings of the selected video clips are plotted in the valence-arousal space in Fig. 1c. Finally, we obtained 300 video clips (100 clips per emotion) lasting 30 seconds for the calibration runs and 180 video clips (60 clips per emotion) lasting 60 seconds for the regulation runs. Moreover, to eliminate the interference of sound, we matched the audio power levels across all video clips by adjusting for total energy.
After the experiments, the twenty subjects also rated their emotional arousal and valence on the SAM scales for each video clip; the averaged assessment results are shown in Fig. 1d. The ratings did not differ significantly between the subjects participating in our experiments and the volunteers who selected the video clips (p > 0.05 for all comparisons, T-test, df = 68; t = 0.0550 for positive valence, t = 1.0002 for positive arousal, t = 1.3811 for neutral valence, t = 0.7334 for neutral arousal, t = -0.4832 for negative valence, and t = 0.9992 for negative arousal), indicating that these video clips were suitable for evoking the corresponding emotional states.

Data Acquisition
During the experiment, EEG signals were acquired with a NuAmps amplifier (Neuroscan Inc., Australia). A 32-channel EEG cap was used, with electrodes placed according to the extended international 10-20 system. The right mastoid (A2) served as the reference, and the ground electrode was positioned on the forehead. The impedances of all electrodes were kept under 5 kΩ. The raw EEG signals were sampled at 250 Hz. EEG signals from 30 channels (excluding the bilateral mastoid channels A1 and A2) were used for data processing.

Graphical User Interface (GUI) and Experimental Paradigm
The experimental protocol is displayed in Fig. 2; our experiment included 10 BCI training sessions, each of which consisted of a calibration run and a regulation run. Each subject participated in a 10-day experiment conducted over 5 weeks (2 days of training sessions per week, 1 session per day).
Prior to the experiment, the subjects were asked to practice switching their emotional states over a short period of time to adapt to the experimental process. During the BCI training sessions, the subjects were seated in front of a 23-in LED computer monitor positioned 50 cm away and were asked to gaze at the screen and follow the instructions displayed on it. In addition, the subjects were instructed to stay as still as possible during the experimental tasks to minimize body movement artifacts. At the end of each training session, we asked the subjects what strategies they had employed to evoke their emotions and whether their emotional states had truly been evoked. In each calibration run, the subjects were exposed to three types of emotional stimuli. Thirty video clips, each lasting 30 seconds (10 positive, 10 neutral, and 10 negative), were presented in random order, and the subjects were asked to maintain the target emotional state according to a cue displayed before each video clip. Between videos, the subjects were asked to relax and then press the space key to start the next video. After the presentation of all 30 calibration video clips, a three-class support vector machine (SVM) classifier was trained on the recorded EEG data and used in the subsequent regulation run.
Each regulation run consisted of 30 trials in which the subjects were asked to complete specific emotion regulation tasks. The order of the 30 emotional tasks (10 positive, 10 neutral, and 10 negative) was randomly generated. To test whether the subjects could improve their emotion regulation in a way that generalized beyond the specific movie-viewing task, we designed training sessions with and without external video stimuli. Specifically, in sessions 1 and 8-10, the subjects were tasked with regulating their emotions in the absence of video stimuli. As shown in Fig. 3a, each trial began with a cue displayed on the screen for 5 seconds, which informed the subjects of the emotional state to be maintained in that trial. A fixation cross ("+") was then displayed in the center of the screen for 60 seconds, during which time the subjects regulated their emotional states in accordance with the cue. Each trial ended with a 10-second rest that allowed the subjects to relax and prepare for the next trial. The GUI for the regulation run is shown in Fig. 3b, where an emotional face and three color bars are displayed on the right side to represent the feedback. From left to right, the red, green, and blue bars represent the positive, neutral, and negative emotional states, respectively. The emotional face presented corresponds to the tallest color bar below it. In each trial, the heights of the bars were updated every 2 seconds, starting 20 seconds after the initial presentation of the fixation cross. The bar heights changed according to the SVM predictions computed from the EEG data recorded over the prior 20 seconds, which reflected the subject's current emotional state. Prior to the regulation run, the subjects were asked to maximize the height of the target bar without having been given specific strategies for doing so. As shown in Fig. 3c, the protocol of the regulation run in sessions 2-7 was the same as that in sessions 1 and 8-10, except that a 60-second emotional video clip was presented on the screen in place of the "+" symbol in each trial. As shown in Fig. 3d, the corresponding video clips were displayed to help the subjects maintain their target emotional states, and the feedback was presented in the same manner as described above.

Control Group
Twenty healthy male adults (average age 23.6 years) with normal or corrected-to-normal visual acuity and normal hearing participated in the control experiment. These subjects were all right-handed according to the Edinburgh Handedness Inventory, and none of them had prior experience with emotion-related BCI experiments. They were also asked to complete ten experimental sessions, with an experimental procedure identical to that of the experimental group described above, except that no feedback was provided to the subjects. The data acquisition protocol and data analysis pipeline were also the same as for the experimental group.

Preprocessing and Feature Extraction
The raw EEG signals were first filtered with a 10th-order Butterworth bandpass filter between 0.1 and 70 Hz, followed by 50 Hz notch filtering to remove power-line noise. The filtered EEG signal of each trial was then baseline corrected by subtracting the mean value of the 3-second EEG signal prior to stimulus onset. After that, a moving time window was used to partition each filtered trial into multiple data segments. The width of this window was set to 20 seconds, and the stride size was set to 2 seconds. As a result, we obtained 150 data segments (30 trials × 5 segments per trial) for each calibration run and 600 data segments (30 trials × 20 segments per trial) for each regulation run.
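The windowing step can be sketched as follows (in Python rather than the authors' MATLAB pipeline). The exclusive end-point handling is inferred from the reported segment counts (5 per 30-second calibration trial, 20 per 60-second regulation trial), and the channels-by-samples array layout is an assumption:

```python
import numpy as np

def segment_trial(trial, fs=250, win_s=20, stride_s=2):
    """Slide a 20-second window with a 2-second stride over one filtered
    trial of shape (channels, samples). The final window position is
    excluded so that a 30-s calibration trial yields 5 segments and a
    60-s regulation trial yields 20, matching the counts in the text."""
    win, stride = win_s * fs, stride_s * fs
    return [trial[:, s:s + win]
            for s in range(0, trial.shape[1] - win, stride)]

# A 30-second, 30-channel calibration trial at 250 Hz -> 5 segments
segments = segment_trial(np.zeros((30, 30 * 250)))
```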
We extracted the differential entropy (DE) [32] as the feature from the EEG signals. The calculation method was described in previous studies [15]. For fixed-length time series, the DE is equivalent to the logarithmic power spectral density (PSD). Specifically, for each data segment, we first calculated the spectral power using a 512-point short-time Fourier transform (STFT) with a nonoverlapping 1-second Hanning window for the data in each channel. Next, we computed the band-power values by averaging the power within each of the five canonical EEG frequency bands (delta 1-3 Hz, theta 4-7 Hz, alpha 8-13 Hz, beta 14-30 Hz, and gamma 31-50 Hz). The DE features were then calculated by taking the logarithmic transform of the band-power values. Subsequently, we concatenated the features of all 30 channels and obtained a 150-dimensional feature vector for each data segment. Each feature vector was then z-normalized such that the mean was 0 and the standard deviation was 1. This feature extraction process resulted in 150 feature vectors (50 feature vectors per emotional state) for each calibration run and 600 feature vectors (200 feature vectors per emotional state) for each regulation run.
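As a rough illustration of the DE computation (log band power via a 512-point STFT with non-overlapping 1-second Hanning windows), the following Python sketch reproduces the steps; the function name, the exact averaging order, and the small stability constant are assumptions not taken from the paper:

```python
import numpy as np

# Canonical frequency bands used in the paper (Hz)
BANDS = {"delta": (1, 3), "theta": (4, 7), "alpha": (8, 13),
         "beta": (14, 30), "gamma": (31, 50)}

def de_features(segment, fs=250, n_fft=512):
    """Differential-entropy features (log band power) for one segment of
    shape (channels, samples): average the power spectra of non-overlapping
    1-second Hanning windows, then take the log of the mean power in each
    band and concatenate over channels."""
    n_ch, n_samp = segment.shape
    win = fs  # 1-second windows
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
    hann = np.hanning(win)
    feats = np.zeros((n_ch, len(BANDS)))
    for ch in range(n_ch):
        n_win = n_samp // win
        psd = np.zeros(freqs.size)
        for w in range(n_win):  # average power spectra over the 1-s windows
            x = segment[ch, w * win:(w + 1) * win] * hann
            psd += np.abs(np.fft.rfft(x, n=n_fft)) ** 2
        psd /= n_win
        for b, (lo, hi) in enumerate(BANDS.values()):
            # log of the mean band power (small constant for stability)
            feats[ch, b] = np.log(psd[(freqs >= lo) & (freqs <= hi)].mean() + 1e-12)
    return feats.ravel()  # 30 channels x 5 bands -> 150 dimensions
```

The subsequent per-vector z-normalization described in the text would be applied on top of these raw features.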

Machine Learning Model Training
To calibrate the three emotional states of each subject, we trained a three-class SVM classifier with a linear kernel using the function "svmtrain" in the LIBSVM toolbox with the one-versus-one strategy [33]. This function integrates the predicted labels of three binary classifiers and obtains the final three-class prediction by voting, where the class with the maximum number of votes from the binary classifiers is designated the final predicted label. In the event that two classes have an identical number of votes, the function selects the class appearing first in the array storing the class names, i.e., positive, negative, neutral. The 150 feature vectors extracted from each calibration run were used to train the classifier, which was subsequently used for online prediction in the following regulation run. In addition, we obtained two reference values from the classifier output that were used to determine the feedback strength during the regulation run. More specifically, as shown in Fig. 4a, for each feature vector, the discriminant function value of each of the two binary SVM classifiers whose class pair included the corresponding emotional state was computed as a decision value, denoted w^T x + b, where x represents the feature vector, w is the normal vector of the separating hyperplane, and b is the bias. The two decision values were then combined to yield a single decision value. For instance, to determine the decision value for the neutral emotion, we first obtained the decision value D3 of the neutral-versus-negative classifier and the decision value D1 of the positive-versus-neutral classifier; the final decision value was then calculated as (D3 - D1)/2. As a result, 50 final decision values were obtained for each emotional state, and the median and 95th percentile of these 50 decision values were taken as the first and second reference values for each emotional state, respectively.
These reference values were used to decide the amount of increase for the feedback bars during the online regulation runs.
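As a concrete illustration of how the two reference values could be derived (the paper's pipeline is in MATLAB/LIBSVM; this Python sketch uses hypothetical function names), the neutral-state combination rule (D3 - D1)/2 and the median and 95th-percentile thresholds can be written as:

```python
import numpy as np

def neutral_decision_value(d_pos_vs_neu, d_neu_vs_neg):
    """Combined decision value for the neutral state, following the paper's
    rule (D3 - D1)/2: the neutral-vs-negative value D3 minus the
    positive-vs-neutral value D1, halved."""
    return (d_neu_vs_neg - d_pos_vs_neu) / 2.0

def reference_values(decision_values):
    """Median and 95th percentile of the per-segment calibration decision
    values; these serve as the two feedback thresholds in the regulation run."""
    dv = np.asarray(decision_values, dtype=float)
    return np.median(dv), np.percentile(dv, 95)

# 50 calibration decision values for one emotional state -> two thresholds
ref1, ref2 = reference_values(np.arange(1, 51))
```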

Online Prediction and Feedback
During the regulation run with online feedback, the EEG signals were preprocessed and analyzed via machine learning in real time. As shown in Fig. 4b, each new feature vector was fed into the classifier trained in the calibration run, yielding a predicted class label and a corresponding decision value. These outputs were then used to determine the feedback: at each time step, the feedback bar corresponding to the predicted label increased. Thus, for each trial, 20 feedback updates were generated, corresponding to the 20 data segments. We recorded the results of all 600 predictions (20 predictions per trial × 30 trials) and counted the number of correct predictions, that is, predictions whose labels matched the emotion that the subject needed to maintain. We then calculated the online classification accuracy by dividing the number of correct predictions by the total number of predictions. Moreover, the amount of increase for the bar was determined by comparing the decision value to the two reference values for the corresponding emotional state: if the decision value was less than the first reference value, the height of the bar was increased by one grade; if the decision value was greater than the first reference value but less than the second reference value, the height of the bar was increased by two grades; otherwise, if the decision value was greater than the second reference value, the height of the bar was increased by three grades.
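The grading rule for the bar increments and the online-accuracy computation can be sketched in Python as follows; the function names are illustrative, and the behavior at exact threshold ties is not specified in the text:

```python
def bar_increment(decision_value, ref1, ref2):
    """Grade of the feedback-bar increase: one grade below the median
    reference, two between the median and the 95th-percentile reference,
    three above it (thresholds come from the calibration run)."""
    if decision_value < ref1:
        return 1
    elif decision_value < ref2:
        return 2
    return 3

def online_accuracy(predicted, target):
    """Online accuracy: fraction of per-segment predictions whose label
    matches the emotion the subject was asked to maintain."""
    correct = sum(p == t for p, t in zip(predicted, target))
    return correct / len(predicted)
```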

Offline Analysis
To assess how the ability to discriminate between different emotional states varied across training sessions, we performed offline classification for every pair of emotional states based on the EEG signals collected in all 10 training sessions. Specifically, for each training session, the EEG signals in the calibration run and regulation run were preprocessed and feature vectors were obtained using the same methods as described above. This resulted in a 150-dimensional feature vector per 20-second data segment. Then, for every pair of emotional states (i.e., positive versus neutral, positive versus negative, and neutral versus negative), the corresponding feature vectors extracted from the calibration run were used to train a binary SVM classifier, and the feature vectors in the regulation run were used for prediction. As such, for each subject and session, we obtained three decoding accuracies corresponding to the three pairs of emotional states.
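To sketch this per-pair offline evaluation (train on calibration-run features, test on regulation-run features), the following Python snippet uses a nearest-class-mean classifier as a lightweight stand-in for the paper's linear SVM; the function name and array layout are illustrative assumptions:

```python
import numpy as np

def pairwise_accuracy(calib_X, calib_y, reg_X, reg_y, class_a, class_b):
    """Offline decoding accuracy for one pair of emotional states.
    calib_X/reg_X: (segments, features) arrays; calib_y/reg_y: labels.
    A nearest-class-mean rule stands in for the binary linear SVM."""
    tr = np.isin(calib_y, [class_a, class_b])
    te = np.isin(reg_y, [class_a, class_b])
    Xtr, ytr = calib_X[tr], calib_y[tr]
    Xte, yte = reg_X[te], reg_y[te]
    mu_a = Xtr[ytr == class_a].mean(axis=0)
    mu_b = Xtr[ytr == class_b].mean(axis=0)
    # assign each test segment to the nearer class mean
    da = np.linalg.norm(Xte - mu_a, axis=1)
    db = np.linalg.norm(Xte - mu_b, axis=1)
    pred = np.where(da < db, class_a, class_b)
    return (pred == yte).mean()
```

Running this for the three pairs (positive/neutral, positive/negative, neutral/negative) per subject and session yields the three decoding accuracies described above.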
In addition to aggregating features across frequency bands, we also performed three-class and binary classification using features from each individual frequency band. Specifically, a 30-dimensional feature vector was extracted for each 20-second data segment per frequency band. The feature vectors from the calibration run were then used to train the three-class and binary classifiers, which were applied to the regulation runs to obtain the decoding accuracies for each frequency band.

Confusion Matrix
The confusion matrix reports the false positive rate (FPR), false negative rate (FNR), true positive rate (TPR), and true negative rate (TNR), allowing a more detailed analysis than the classification accuracy alone. Based on the offline classification results, we calculated the confusion matrix according to the three-class classification for the five individual frequency bands of each experimental session as well as the binary classifications for each experimental session.
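A row-normalized confusion matrix of this kind, whose diagonal entries give the per-state TPR, can be sketched as follows (Python; the 0-2 label encoding is an assumption for illustration):

```python
import numpy as np

def confusion_matrix(true_labels, pred_labels, n_classes=3):
    """Row-normalized confusion matrix: entry (i, j) is the fraction of
    segments with true class i that were predicted as class j, so the
    diagonal holds each state's true positive rate (TPR)."""
    counts = np.zeros((n_classes, n_classes))
    for t, p in zip(true_labels, pred_labels):
        counts[t, p] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return counts / np.maximum(row_sums, 1)  # guard against empty rows
```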

EEG Topographies for Emotional Patterns
To further illustrate the changes in emotional neural patterns that occurred through emotion regulation in the long-term EEG-based neurofeedback training, the spatial topographies of the band-power features were assessed based on the different frequency bands using data from the regulation run in each experimental session. For example, to obtain the neural pattern of positive emotion in session 1, we averaged the feature vectors of the positive emotional state across all 20 subjects and all 20-second data segments from the positive emotion trials. This was repeated for each of the 3 emotional states and each of the 10 sessions. Finally, we normalized all of the averaged features to range from -1 to 1. More specifically, we first separated each feature vector into five parts corresponding to the five canonical EEG frequency bands. Then, for each frequency band, we implemented a min-max normalization method to normalize all features corresponding to the same frequency band across the 3 emotional states and 10 experimental sessions. Therefore, we obtained normalized features for all 3 emotions and 10 sessions in 5 EEG frequency bands. We plotted the topographies using the FieldTrip toolbox running on MATLAB (MathWorks, MA, USA) [34].
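The per-band min-max normalization across all emotions and sessions might look like the following Python sketch; the 4-D array layout (emotions, sessions, bands, channels) is an assumption for illustration:

```python
import numpy as np

def normalize_per_band(avg_feats):
    """Min-max normalize averaged topography features to [-1, 1] jointly
    across all emotions and sessions, separately for each frequency band.
    avg_feats: array of shape (emotions, sessions, bands, channels)."""
    out = np.empty_like(avg_feats, dtype=float)
    for b in range(avg_feats.shape[2]):
        band = avg_feats[:, :, b, :]
        lo, hi = band.min(), band.max()
        # map [lo, hi] linearly onto [-1, 1] for this band
        out[:, :, b, :] = 2 * (band - lo) / (hi - lo) - 1
    return out
```

Because the scaling is shared across emotions and sessions within a band, the normalized topographies remain directly comparable along both of those axes.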

4 RESULTS
In this section, we present the results of the 10 online BCI training sessions and further offline analysis. During the BCI training sessions, the subjects were asked to regulate their emotional states while watching or not watching the corresponding emotional video clips. Meanwhile, our BCI system monitored their emotional states in real time, calculated the accuracy with which the subjects maintained the correct emotional states, and provided corresponding feedback. At the end of each training session, the subjects reported that the corresponding emotional states had truly been evoked through the external video stimuli or through imagination. Together with our quantitative results, this suggests that the subjects could evoke their emotions more effectively as the experiment progressed, indicating that their emotion regulation abilities were enhanced. We first present the online decoding accuracies for the BCI training sessions and the offline classification accuracies. Then, confusion matrices are provided for both the three-class classification and the binary classifications. Finally, the neural patterns associated with emotion regulation are presented at the end of the section.

Accuracies of Online Experiments and Offline Analysis
The accuracies of the 20 subjects in the experimental group in the regulation runs of the 10 training sessions are shown in Fig. 5. We also performed an offline analysis for the experimental group to decode every pairing of the three emotional states, where the classifiers were trained using the data from the calibration run. The classification results for the positive versus neutral, positive versus negative, and neutral versus negative states are depicted in Figs. 6a, 6b, and 6c, respectively. Analogous to the online results, the accuracies of the binary classification in the last three sessions were significantly higher than those in the first session (p < 0.05 for all comparisons, paired-sample T-test).

Confusion Matrix
To further study the frequency-based decoding performance in the three emotional states, we calculated the confusion matrix for the experimental group and the 10 sessions using features from each individual frequency band, as shown in Fig. 7. The diagonal entries in the confusion matrix represent the TPR of the corresponding emotional state. Two-way ANOVA with session and emotional state as factors was then performed on the TPRs for each frequency band. For the delta frequency band (Fig. 7a), there were no significant changes in the TPR across experimental sessions (df = 9, F = 0.47, p = 0.8963, FDR corrected) and no significant differences among the TPRs of the three emotional states (df = 2, F = 1.84, p = 0.1595, FDR corrected). However, the TPRs for the other four bands were significantly different among the emotional states (df = 2; F = 21.63 for the theta band, F = 11.29 for the alpha band, F = 11.55 for the beta band, F = 23.27 for the gamma band; p < 0.05 for all comparisons, FDR corrected). Moreover, the TPRs calculated using the alpha, beta, and gamma bands were significantly different among the experimental sessions (df = 9; F = 3.11 for the alpha band, F = 3.60 for the beta band, F = 3.43 for the gamma band; p < 0.05 for all comparisons, FDR corrected), but the TPRs for the theta band were not significantly different (df = 9, F = 1.45, p = 0.1651, FDR corrected). Specifically, as shown in Figs. 7b and 7c, the TPRs of the neutral emotional state using features from the theta band and alpha band were higher than those of the other two emotional states in the majority of experimental sessions. Moreover, as depicted in Figs. 7d and 7e, the TPRs of the positive emotional state using features from the beta band and the gamma band were significantly higher than those of the other two emotional states in the majority of experimental sessions (p < 0.05, paired-sample T-test, FDR corrected).
We also calculated the confusion matrices of the three binary classifications (positive versus neutral, positive versus negative, and neutral versus negative) for the 10 experimental sessions. As shown in Fig. 8, the diagonal entries in each confusion matrix represent the TPR of the corresponding emotional state. For each classification result, the TPRs of the last session were higher than those of the first session, both for sessions with stimuli and for sessions without stimuli.

Neural Patterns of Emotion Regulation
We calculated topographical maps of power features by averaging the power features over all trials and all subjects for each frequency band, emotional state, and experimental session. The topographical maps of the power features corresponding to the positive, neutral, and negative emotional states are depicted in Figs. 9a, 9b, and 9c, respectively.
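The band-power features behind these maps can be sketched as follows (a simplified illustration assuming Welch PSD estimation, a 250 Hz sampling rate, and canonical band edges — none of these details are specified above, so treat them as placeholders):

```python
import numpy as np
from scipy.signal import welch

# Canonical band edges in Hz (an assumption, not taken from the paper).
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_power_map(trials, fs=250.0, band=(8, 13)):
    """Mean band power per channel, averaged over trials.

    trials: array of shape (n_trials, n_channels, n_samples).
    Returns one value per channel, ready for topographic plotting.
    """
    nperseg = min(trials.shape[-1], int(2 * fs))
    freqs, psd = welch(trials, fs=fs, nperseg=nperseg, axis=-1)
    mask = (freqs >= band[0]) & (freqs < band[1])
    # Mean PSD within the band, then average across trials.
    return psd[..., mask].mean(axis=-1).mean(axis=0)

# Synthetic check: a 10 Hz (alpha) oscillation on channel 0 only.
rng = np.random.default_rng(0)
t = np.arange(500) / 250.0
trials = 0.1 * rng.standard_normal((5, 2, 500))
trials[:, 0, :] += np.sin(2 * np.pi * 10 * t)
alpha_map = band_power_map(trials, band=BANDS["alpha"])
```

On the synthetic data, the alpha-band power of channel 0 dominates that of channel 1, as expected.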
For the neutral emotional state (Fig. 9b), alpha band activity was higher than that of the other frequency bands across the whole brain (p < 0.05 for all comparisons, df = 19, t = 4.2433 for delta, t = 4.6134 for theta, t = 6.8744 for beta, t = 7.9178 for gamma, paired-sample t-test for the average power of all 10 sessions and all electrodes). In addition, although alpha band power was not significantly increased in the last three sessions relative to the first session (p = 0.0573, df = 19, t = 2.0237, paired-sample t-test for the average power of all electrodes), the activities of the other frequency bands were comparatively lower. Specifically, the power of the gamma band decreased significantly in the last three sessions relative to the first session (p = 0.0474, df = 19, t = −2.1202, paired-sample t-test for the average power of all electrodes).
For the negative emotional state (Fig. 9c), the beta and gamma band activities were significantly higher in the prefrontal cortex (Fp1, Fp2) and occipital cortex (O1, Oz, O2) than in other regions (p < 0.001, df = 19, t = 5.0695 for beta and t = 5.5185 for gamma, paired-sample t-test between the average power of all sessions in Fp1, Fp2, O1, Oz, and O2 and the power in the remaining channels), although the power in these regions did not significantly increase as BCI feedback training proceeded (p > 0.05, paired-sample t-test between the average power in the last three sessions and that in the first session). Moreover, compared with sessions with video stimuli, alpha band power in the occipital cortex was comparatively higher in sessions without video stimuli, with significantly higher power in channels O1 (p = 0.0139, df = 19, t = 2.7087), Oz (p = 0.0103, df = 19, t = 2.8493), and O2 (p = 0.0019, df = 19, t = 3.6049).
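The FDR correction applied throughout these comparisons is commonly the Benjamini-Hochberg procedure; the paper does not give its implementation, so the following is our own minimal sketch:

```python
import numpy as np

def fdr_bh(pvals, alpha=0.05):
    """Benjamini-Hochberg FDR: return adjusted p-values and a reject mask."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    # Scale each sorted p-value by m / rank.
    ranked = p[order] * m / np.arange(1, m + 1)
    # Enforce monotonicity from the largest p-value down.
    adj = np.minimum.accumulate(ranked[::-1])[::-1]
    adjusted = np.empty_like(adj)
    adjusted[order] = np.clip(adj, 0.0, 1.0)
    return adjusted, adjusted <= alpha

# Example: three raw p-values from hypothetical comparisons.
adjusted, reject = fdr_bh([0.005, 0.03, 0.5])
```

Here the first two adjusted p-values (0.015 and 0.045) survive at alpha = 0.05 while the third (0.5) does not.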
Taken together, certain neural patterns for the three emotional states were clearly induced and strengthened through our long-term BCI training. Significantly higher alpha activity was found in the neutral emotional patterns than in those of the other emotional states. For the beta band and gamma band, positive emotion was associated with more activity in the bilateral temporal gyrus, and negative emotion led to more activity in the frontal gyrus and occipital gyrus.

Efficiency of Our EEG-Based BCI for Emotion Regulation
Neurofeedback is an innovative tool for emotion regulation and has become the subject of broad study in recent years [12], [13], [25], [35]. Real-time feedback based on both EEG and fMRI has been shown to be efficient in improving emotion regulation [12], [35], [36]. In neurofeedback training, neural biomarkers can be measured and presented back to subjects to help them take a more active role in self-regulating activity in specific brain regions [21]. In this study, we developed a novel method to help subjects regulate their emotional states through neurofeedback training with an EEG-based BCI. Compared with the control group without feedback, the experimental group achieved better performance as the experiment progressed, especially in sessions without video stimuli (Fig. 5). These results suggested that neurofeedback played an important role in enabling subjects to improve their emotion regulation ability. Moreover, the decoding accuracies of each binary classifier also increased as the experiments progressed (Figs. 6 and 8). In addition, the TPRs of each emotional state in each frequency band increased from the first session to the last several sessions (Fig. 7). These results demonstrated that EEG-based BCI neurofeedback training is an effective way to help subjects regulate their emotional states and has potential as a form of emotion regulation therapy.

Currently, the majority of EEG-based neurofeedback training methods for emotion regulation focus on band power in a specific brain region, such as alpha or theta band power in the occipital cortex [11], [29]. Other neurofeedback training methods are based on the power ratio between different brain regions, such as frontal asymmetry patterns in the alpha and high-beta bands [13], [30], which have been suggested as biomarkers distinguishing positive from negative emotions. In these studies, subjects focused on modulating biomarkers rather than aiming at a specific emotional state.
In contrast, in this study, we labeled three types of emotions and separated the tasks for each emotion, presenting the feedback in language more familiar to the participants. This also allowed us to isolate the specific neural changes associated with the distinct emotional states.
In addition, our BCI algorithm predicted the subjects' emotional states online based on wideband features from all scalp areas. Subsequent offline analysis showed that the decoding accuracies also increased when using features from each individual frequency band. This design allowed for the tracking of more complex changes over time, rather than focusing on a single feature as in previous studies. Moreover, the subjects could change their brain activity in multiple ways to regulate their emotions, providing greater flexibility while maintaining accuracy throughout the training process. Our results further demonstrated that emotional decoding can be achieved with wideband EEG activity, which is consistent with previous studies [37], [38].
Our neurofeedback training also demonstrated the ability to support longer-term improvement in emotion regulation ability. The subjects in our study learned to regulate three emotional states under visual feedback through a long-term training process (10 sessions). The improvement in the subjects' emotion regulation abilities was reflected by the increasing accuracies in both the three-emotional state classification and the pairwise classifications. In contrast, a BCI study similar to ours used features from multiple canonical frequency bands to train subjects to regulate two emotional states (positive and negative) for only 2 training sessions, and 5 participants successfully modulated their emotions under musical feedback [16]. Comparatively, our study evaluated improvement over a longer time scale. Our results further implied that the subjects could employ proper strategies and improve their regulation skills for all three emotional states over the course of each session. Furthermore, we designed training sessions with and without external video stimuli, which allowed us to test whether the subjects had improved their emotional regulation in a way that could be generalized outside of the specific movie viewing task.

Neural Patterns of Emotion Regulation
Alpha activity has been reported to be associated with relaxation or idleness in previous studies and plays a role in emotion modulation [39], [40]. In our study, subjects achieved the highest TPRs in the neutral emotional task using features from this frequency band, indicating that the neutral task evoked more alpha activity than the other tasks. This result further demonstrates that the alpha band plays an important role in mediating emotions and maintaining a relaxed state, in line with prior studies [41], [42]. Although alpha band power was not gradually enhanced as the number of training sessions increased, the power of the other four frequency bands decreased as the experiment progressed. This suggests that subjects might learn to maintain a neutral emotion by inhibiting activity in the other bands rather than by enhancing alpha band activity.
Compared with the alpha band, the differences in neural patterns in the delta and theta bands were comparatively small among the three emotional states, despite previous studies noting that activity in these bands correlates with emotional presentation [43]. In our study, theta activity in the occipital gyrus increased during positive emotional tasks in sessions with video stimuli; however, this phenomenon was not found during the neutral and negative tasks. Previous studies suggested that auditory and visual stimuli, including emotional stimuli, can generate theta oscillatory responses [44], [45]. This result implies that positive emotional videos might be more effective at stimulating brain activity than the other two types of emotional stimuli.
The beta and gamma bands reflect emotional processing when receiving emotional stimuli [46], [47], and experimental tasks thought to induce emotional experience have been shown to increase gamma activity [48]. In our study, the subjects achieved higher TPRs using features from the beta or gamma band than from the other three bands. This finding is consistent with results from previous studies showing that power-based features were mainly related to high-frequency bands rather than low-frequency bands [32], [49]. This implies that high-frequency bands might play important roles in emotion regulation, and that subjects improved at controlling their emotional state by altering their brain activity in high-frequency bands. These high decoding accuracies in the high-frequency bands might also stem from pronounced differences in the neural patterns of the three emotional states. We found that these patterns gradually strengthened throughout the course of the experiment: positive emotion evoked greater beta and gamma activity in the bilateral temporal cortex, while negative emotion evoked activity in the frontal and occipital cortices. A similar positive emotional pattern was found in previous studies when subjects were exposed to emotional stimuli [32], which further supports the validity of our positive emotional patterns. However, earlier studies were unable to find consistent high-frequency patterns for negative emotion states. Frontal EEG oscillations were found during the emotion regulation process, and the role of the fronto-occipital network in emotion representation was demonstrated through transcranial magnetic stimulation and EEG studies [50]. Our study suggests that this network is more active in processing negative emotion than positive emotion.
Watching movie clips may introduce eye artifacts that could contribute to the reported neural patterns. To rule out this possibility, we performed offline artifact rejection on the EEG data from session 10 for all 20 subjects. Specifically, Independent Component Analysis (ICA) was run on the filtered data using the EEGLAB toolbox (Version 2021.1, [51]) with default parameters. Subsequently, all 30 independent components (ICs) for each subject were automatically classified by the ICLabel plugin (https://labeling.ucsd.edu/tutorial/overview) to identify eye artifacts. Components labeled as eye artifacts with greater than 70% confidence were then removed: 3 components for three subjects, 2 components for five subjects, and 1 component for nine subjects. The topographical maps of the neural patterns after eye artifact removal are shown in Fig. S1. Although the band power in frontal areas for negative emotions decreased after eye artifact removal, statistical analysis indicated that the power-based features for the three emotions in each frequency band and channel did not significantly change (p > 0.05 for all comparisons, paired-sample t-test, FDR corrected for multiple comparisons). Therefore, these results demonstrated that our neural patterns were not due to eye artifacts and more likely reflected emotional states.
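The artifact-rejection pipeline above used EEGLAB's ICA and the ICLabel classifier in MATLAB. The same idea can be sketched in Python on synthetic data (a hypothetical illustration: correlation with a known blink time course stands in here for ICLabel's automatic labeling, and the mixing matrix is invented):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_samples = 2000
t = np.arange(n_samples) / 250.0

# Simulated sources: a periodic eye-blink artifact plus two neural-like signals.
blink = np.exp(-(((t % 2.0) - 1.0) ** 2) / 0.005)  # narrow pulse every 2 s
alpha = np.sin(2 * np.pi * 10 * t)
noise = rng.standard_normal(n_samples)
S = np.c_[blink, alpha, noise]

# Mix into 3 "channels"; the blink loads heavily on the frontal channel (row 0).
A = np.array([[2.0, 0.3, 0.2],
              [0.2, 1.0, 0.3],
              [0.1, 0.8, 1.0]])
X = S @ A.T

ica = FastICA(n_components=3, random_state=0)
sources = ica.fit_transform(X)  # estimated independent components

# Flag the component most correlated with the known blink time course
# (stand-in for ICLabel's classifier), zero it out, and reconstruct.
corr = [abs(np.corrcoef(sources[:, k], blink)[0, 1]) for k in range(3)]
bad = int(np.argmax(corr))
sources[:, bad] = 0.0
X_clean = ica.inverse_transform(sources)

raw_corr = abs(np.corrcoef(X[:, 0], blink)[0, 1])
clean_corr = abs(np.corrcoef(X_clean[:, 0], blink)[0, 1])
```

After removing the flagged component, the frontal channel's correlation with the blink time course drops markedly, mirroring the intent of the EEGLAB pipeline.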
In addition, although subjects reported that corresponding emotions were indeed evoked during our experiments, the degree to which each emotion was evoked was not assessed. Along this line, questionnaires and scales such as the SAM rating were utilized in prior studies for subjects to evaluate their emotional states during experiments [52]. These more quantitative methods allow us to characterize emotional states from a dimensional perspective and will be employed in our future studies to rate subjects' emotional states. Furthermore, only male subjects were recruited in our experiments. Previous studies have shown significant gender differences in processing emotional stimuli [53], and gender differences in the EEG emotional response have also been reported [54], [55]. Therefore, our EEG-based neural patterns might be specific to male subjects. Further studies are warranted to determine whether our BCI-based neurofeedback training method could help female subjects regulate their emotions.

Limitations and Future Works
Our experimental results demonstrated the efficacy of our BCI system, which could help subjects regulate their emotions. Nonetheless, the current study has a few limitations. The subjects in this study were all healthy adults with normal emotion-related brain functions. In the future, we will extend our method to patients with emotion-related mental disorders, including anxiety and depression, and the potential therapeutic utility will be studied in relation to any changes observed in clinical symptoms. The effects of long-term BCI neurofeedback training on patients' emotion regulation abilities, daily lives and rehabilitation also need to be extensively assessed via longitudinal clinical and neuroimaging data. In addition, emotion presentation and regulation showed considerable individual differences. Therefore, our proposed method for emotion regulation warrants further validation in larger samples.

CONCLUSION
Emotion regulation is vital for humans' social lives; however, efficient emotion regulation tools and assessment of the improvement in emotion regulation ability at the individual level are still relatively lacking. In addition, the neural mechanisms and EEG patterns underlying emotion regulation remain poorly understood. In this study, a real-time EEG-based BCI system for neurofeedback training was designed for subjects to effectively regulate their emotions. Our results showed that the subjects were able to improve their ability to regulate emotions after neurofeedback training, thus validating the efficacy of the proposed BCI neurofeedback approach. Further EEG-based spectrum analysis revealed that emotion-related EEG patterns were progressively enhanced through long-term training.
Weichen Huang received the BS degree in information engineering from the South China University of Technology, Guangzhou, China, in 2017, where he is currently working toward the PhD degree in control science and engineering. His research interests include brain signal processing, pattern recognition, and BCIs.
Wei Wu (Senior Member, IEEE) received the PhD degree in biomedical engineering from Tsinghua University, China. He was a full professor with the School of Automation Science and Engineering, South China University of Technology, China, and a co-founder of Alto Neuroscience Inc., Los Altos, CA, USA. His research interests include neural signal processing, neural engineering and computational psychiatry, specializing in particular on developing statistical models and algorithms for the analysis and decoding of brain signals, and taking a multimodal approach to develop biomarkers for psychiatric disorders. He is an associate editor for Neurocomputing (Elsevier) and Neural Processing Letters (Springer), and a member of IEEE Biomedical Signal Processing Technical Committee.
Molly V. Lucas received the BS degree in psychology from Yale University, the MS degree in bioethics from Columbia University, and the PhD degree in neuroscience from Stanford University with a focus on machine learning analysis and TMS-EEG signal processing in psychiatric research. During her undergraduate and masters degrees, she worked using a range of neuroimaging techniques, especially fMRI and PET imaging. She is currently a data scientist in AI and digital health with Janssen Pharmaceuticals (Johnson & Johnson). She is currently a lecturer with Columbia, where she teaches graduate-level ethics courses covering AI, data science, and biomedical research.
Haiyun Huang received the BE degree in automation science and engineering from the South China University of Technology in 2015 and the PhD degree from the School of Automation Science and Engineering, South China University of Technology, Guangzhou, in 2021. His research interests include emotion recognition, brain signal processing, and noninvasive brain-computer interfaces.
Zhenfu Wen received the BE degree in automation science and engineering from the South China University of Technology in 2014 and the PhD degree from the School of Automation Science and Engineering, South China University of Technology, Guangzhou, in 2019. His research interests include machine learning and brain signal processing.
Yuanqing Li (Fellow, IEEE) received the PhD degree in control theory and applications from the South China University of Technology, Guangzhou, in 1997. Since 1997, he has been with the South China University of Technology, where he became a full professor in 2004. From 2002 to 2004, he was a researcher with the Laboratory for Advanced Brain Signal Processing, RIKEN Brain Science Institute, Saitama, Japan. From 2004 to 2008, he was a research scientist with the Laboratory for Neural Signal Processing, Institute for Infocomm Research, Singapore. His research interests include blind signal processing, sparse representation, machine learning, brain-computer interface, EEG, and fMRI data analysis.