Explainable Deep Learning Model for EMG-Based Finger Angle Estimation Using Attention

Abstract—Electromyography (EMG) is one of the most common methods to detect muscle activities and intentions. However, it has been difficult to estimate accurate hand motions, represented by the finger joint angles, using EMG signals. We propose an encoder-decoder network with an attention mechanism, an explainable deep learning model that estimates 14 finger joint angles from forearm EMG signals. This study demonstrates that a model trained on single-finger motion data can be generalized to estimate complex motions of random fingers. The color map of the attention matrix after training shows that the proposed attention algorithm enables the model to learn the nonlinear relationship between the EMG signals and the finger joint angles in an explainable way. The highly activated entries in the color map of the attention matrix derived from model training are consistent with the experimental observation that certain EMG sensors are highly activated when a particular finger moves. In summary, this study proposes an explainable deep learning model that estimates finger joint angles from forearm EMG signals using the attention mechanism.

activities and intentions [4]. Decoding EMG signals into finger or arm motions is one of the essential elements in the development of robotic prostheses, and there have been many studies in this area [5]-[9]. It has been shown that EMG signals can be used to classify hand gestures with an estimation accuracy higher than 95% [10]-[12]. Ramien et al. proposed a method of decoding EMG signals to classify individual and combined finger motions using a Bayesian data fusion approach that yielded an accuracy of 90% [13]. Recent advances in machine learning have made it possible to extract more complex EMG features, allowing for successful classification of more diverse hand gestures [14]-[16]. However, current machine learning models are limited in predicting hand gestures that were not learned before. To address this issue, there have been studies that predict finger joint angles directly from EMG signals rather than classifying discrete hand gestures [17]-[21]. Afshar and Matusoka showed that the joint angles of the index finger could be estimated from the EMG signals of seven forearm muscles that contribute to flexion of the index finger [17]. Shrirao et al. also decoded forearm EMG signals to predict index finger joint angles [18]. Further studies on predicting the angles of multiple fingers have been conducted [19]-[21]. Smith et al. succeeded in predicting the metacarpophalangeal (MCP) joint angles of the five fingers using a simple artificial neural network [19]. Hioki et al. decoded muscle signals using only four EMG sensors and predicted the proximal interphalangeal (PIP) joint angles of all five fingers [20]. Ngeo et al. predicted 15 different finger joint angles (three joint angles in each finger) from eight sEMG sensors using a Gaussian process (GP) method [21].
However, these methods still have two major limitations. First, the machine learning algorithms do not provide an explainable artificial intelligence (AI) model, i.e., a mechanism to understand and interpret the predictions made by learning, making it difficult to trust the accuracy in general cases. Although predictive modeling is motivated by improving model performance, a lack of explainability means that the model is not sufficient for a qualitative evaluation of the task for which it was intended. Furthermore, AI models without explainability in robotic systems cannot provide insight into how the results will be utilized to improve the users' mobility [22], [23]. Second, most previous machine learning studies on EMG signal analysis presented performances based on discrete hand gestures that were already used for training the model, which is not sufficient for the relatively dexterous manipulation motions of robotic prostheses. Therefore, it is necessary not only to build a machine learning model that estimates finger joint angles for hand gestures that the model has not learned before, but also to build an explainable model.
In this study, we propose an explainable AI model that estimates the angles of finger joints by decoding EMG signals using the attention mechanism [24]. In particular, we show that the inner attention mechanism explains how the proposed AI model learns the complex and nonlinear relationship between the EMG signals and the finger joint angles. This study makes two major contributions that, to the best of our knowledge, have not been addressed before:
• Models are trained with simple data sets of individual finger movements but can predict more complex data sets of random fingers with relatively high accuracy.
• The attention matrix after training demonstrates that the model can learn the relationship between the EMG signals and the finger joint angles, supporting the reason why the proposed model yields higher estimation accuracy than the models in previous studies.

A. Experimental Setup
Eight healthy participants (two females and six males, aged 24.75 ± 1.75 years) who consented in advance to the experimental protocol participated in the experiment. The participants had no history of injuries or surgeries and did not feel any abnormalities when moving their fingers, hands, and wrists freely. The experiment was approved by the Institutional Review Board of Seoul National University (IRB No. 2106/004-019), and all the experiments were conducted in accordance with the approved protocol. The participants were asked to move their fingers without moving their wrists. Their forearm EMG signals were recorded using four wireless multi-channel EMG sensors (Trigno Avanti, Delsys, USA), and the movements of the hands and the fingers were simultaneously recorded with a camera at 240 frames per second (fps) as references for the finger joint angles. Each participant was asked to perform two different tasks: moving one finger at a time and moving multiple random fingers simultaneously. In the first task (Task 1), each participant flexed and extended all five fingers individually in the order of thumb, index, middle, ring, and little finger, one by one, which is called "th2pi" in this paper. During the experiments, the participants were asked to maintain a constant speed (one second for flexion and one second for extension). The "th2pi" sequence takes 10 seconds, and Task 1 consists of 20 sets of "th2pi" sequences, taking a total of 200 seconds. In the second task (Task 2), each participant was allowed to flex and extend one or more fingers at the same flexion-extension speed as in Task 1. Each set consists of five randomized flexion-extension movements and takes 10 seconds. Task 2 also contains 20 sets of random finger motions, taking 200 seconds. An additional experiment was conducted in which the participants were

B. Data Collection
Four sEMG sensors were attached to different locations on the skin of the forearm, identified as the locations of three extrinsic muscles known to contribute to flexion of at least one of the five fingers: the flexor digitorum superficialis, the flexor pollicis longus, and the flexor digitorum profundus. The flexor pollicis longus is mainly involved in thumb flexion, and the other two muscles are involved in flexion of the remaining four fingers. The EMG signals were obtained at a frequency of 1.26 kHz. The locations of the EMG sensors and the main functions of the three muscles are shown in Fig. 2.

C. Data Processing
1) EMG Signal Processing:
The raw EMG signals were offset to make their mean value zero, and the absolute values of the modified signals were taken. To eliminate high-frequency noise, the EMG signals were filtered with a second-order Butterworth filter (cutoff frequency: 5 Hz) and normalized to fit into the range between 0 and 1 [25]. Finally, the EMG signals sampled at 1.26 kHz were downsampled to 1 kHz. Downsampling and low-pass filtering are common preprocessing methods for EMG data [26], [27].
2) Video Processing: The hand movements of the participants were recorded using a 240 fps mobile phone camera (iPhone 7, Apple). The finger joint angles were estimated using an open-source framework (MediaPipe Hands, Google Inc., USA) that can calculate the positions and motions of the fingers [28]. Then, the angles of each finger were normalized to the range of 0-1: each joint angle was normalized by subtracting the minimum estimated joint angle and dividing by the difference between the maximum and minimum estimated joint angles. Fig. 3 shows examples of applying the MediaPipe Hands library to the recorded videos.
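The EMG preprocessing chain described above (offset removal, rectification, second-order 5 Hz Butterworth low-pass, 0-1 normalization, 1.26 kHz to 1 kHz downsampling) can be sketched as follows. This is a minimal illustration: the function name and the synthetic test signal are assumptions, not from the study.

```python
# Sketch of the EMG preprocessing pipeline (illustrative; the
# function name and the synthetic signal are not from the paper).
import numpy as np
from scipy.signal import butter, filtfilt, resample

def preprocess_emg(raw, fs=1260, fs_out=1000, cutoff=5.0):
    """Zero-mean, rectify, low-pass filter, normalize to [0, 1], downsample."""
    x = raw - raw.mean()                 # remove DC offset
    x = np.abs(x)                        # full-wave rectification
    b, a = butter(2, cutoff / (fs / 2))  # 2nd-order Butterworth, 5 Hz cutoff
    x = filtfilt(b, a, x)                # zero-phase low-pass filtering
    x = (x - x.min()) / (x.max() - x.min() + 1e-12)  # scale to [0, 1]
    n_out = int(len(x) * fs_out / fs)    # 1.26 kHz -> 1 kHz
    return resample(x, n_out)

t = np.linspace(0, 2, 2520)              # 2 s of synthetic "EMG" at 1.26 kHz
raw = np.sin(2 * np.pi * 80 * t) * (1 + 0.5 * np.sin(2 * np.pi * 1 * t))
env = preprocess_emg(raw)
```

The zero-phase `filtfilt` call avoids shifting the envelope in time, which matters when the envelope is later aligned with 240 fps video frames.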

III. PROPOSED MODEL
The main consideration for the machine learning model in this study is the construction of an algorithm that effectively extracts the characteristics of the EMG signals and the joint angle data. The proposed model considers the following three characteristics of the experiment.
1) The EMG signals are time series data (i.e., sequential data).
2) The EMG signals from the four sensors are not independent.
3) During Task 1, the EMG signals from a specific sensor are strongly activated when the corresponding finger is flexed or extended.
We constructed a machine learning algorithm to reflect the above three features, as explained in the following.

A. Time Series Data
To take advantage of the sequential data, a gated recurrent unit (GRU) was used as the core unit. A GRU is a type of recurrent neural network (RNN) that is useful when dealing with sequential data. A GRU is similar to a long short-term memory (LSTM) with a forget gate but has fewer parameters than an LSTM because it does not have an output gate [29], [30].
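For reference, a single GRU step can be sketched in NumPy following the standard update/reset-gate equations. This is an illustration only: the study itself uses PyTorch GRU modules, and the weight shapes and random inputs here are assumptions.

```python
# Minimal GRU cell in NumPy (illustrative; the paper uses PyTorch GRUs).
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def gru_cell(x, h, params):
    """One GRU step: x is the input, h the previous hidden state."""
    Wz, Uz, Wr, Ur, Wn, Un = params
    z = sigmoid(Wz @ x + Uz @ h)          # update gate
    r = sigmoid(Wr @ x + Ur @ h)          # reset gate
    n = np.tanh(Wn @ x + Un @ (r * h))    # candidate state
    return (1.0 - z) * n + z * h          # interpolate old and new state

rng = np.random.default_rng(0)
d_in, d_h = 4, 256                        # 4 EMG channels, hidden size 256
params = [rng.normal(scale=0.1, size=s)
          for s in [(d_h, d_in), (d_h, d_h)] * 3]
h = np.zeros(d_h)
for t in range(10):                       # run over a short input sequence
    h = gru_cell(rng.normal(size=d_in), h, params)
```

The update gate z interpolates between the old state and the tanh-bounded candidate, which is what lets the GRU carry envelope information across time steps without an LSTM's separate output gate.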

B. Signal Dependency
The encoder-decoder network structure was employed to utilize the dependency of the EMG signals. The encoder consists of four GRUs, with each GRU receiving a single element of the EMG signals (x_t ∈ R^4) at time t together with the result of the previous GRU as input, and delivering the output vector and the hidden vector to the next GRU. The hidden vector h_i ∈ R^256 is calculated as

h_i = GRU(x_i, h_{i-1}),   (1)

where h_0 is a zero vector. The decoder is a stack of several GRUs, each of which estimates an output y_t ∈ R^14 at time t. Each GRU receives the hidden state of the previous GRU as an input and calculates the output and the next hidden state. The hidden state h_t and the output y_t are calculated as

h_t = GRU(h_{t-1}),   y_t = W^(S) h_t,   (2)

where W^(S) ∈ R^{14×256} is a learnable parameter.

C. Sensor-Finger Relations
An encoder-decoder network with a GRU but without attention does not fully reflect the above three features. One useful experimental observation is that certain EMG sensors respond significantly more than the other sensors when a specific finger is flexed or extended. An attention mechanism was used to implement this observation in the model. The main role of the attention mechanism is to induce the model to learn to refer to specific EMG sensor inputs (EMG_i, i ∈ {1, 2, 3, 4}) when predicting the related finger joint angles (θ_i, i ∈ {1, 2, ..., 14}), as described in Fig. 4.
The sub-figures show the activated rectified EMG signals from the four EMG sensors, respectively, when executing three sets of Task 1.
The final model is a single encoder connected to five different decoders with five different attention matrices, respectively. The five decoders predict the joint angles of the five fingers (two joints for the thumb and three joints each for the index, middle, ring, and little fingers) in parallel. Each of the five attention matrices is applied to the corresponding decoder and does not share learning weights with the others. The EMG signals at time t are represented as X_1(t), ..., X_4(t), where each X_i(t), i ∈ {1, 2, 3, 4}, is a one-hot vector with EMG_i(t) as its non-zero entry, and X_1(t), X_2(t), X_3(t), and X_4(t) are fed into the encoder sequentially in the given order. Since the EMG signals are generated from the neural system (the cortex and the spinal cord), the order of the four one-hot vectors can be determined by the arrival time of the electrical impulses at the four sensors, which is the same as the proximal-to-distal order of the sensor positions. However, this particular sequence is not necessary and would not hinder the learning process because the encoder is not deep enough to raise the vanishing problem of the input vectors. Fig. 5 overviews the final model.
Fig. 5 shows the inner structure of the shared encoder (Fig. 5-(a)) and the thumb decoder (Fig. 5-(b)). The shared encoder is composed of four encoder cells that are GRUs, as shown in Fig. 5-(a). The i-th GRU takes X_i(t) ∈ R^4 and h^{i-1}_en(t) ∈ R^256 as inputs, where X_i(t) is the i-th rectified one-hot EMG signal vector and h^{i-1}_en(t) is the hidden vector from the (i-1)-th GRU. It then yields o^i_en(t) ∈ R^256 and h^i_en(t) as outputs. The 1st GRU takes the hidden vector h^0_en(t), which is a zero vector, and X_1(t) as inputs. As the i-th GRU's input includes the (i-1)-th GRU's output, h^i_en(t) contains the previous input information X_1(t), ..., X_i(t). O^1_en(t) contains only the information of EMG_1(t) as a compressed 256-sized vector, and O_en(t) is a matrix that concatenates O^i_en(t), i ∈ {1, 2, 3, 4}, in the column direction. The thumb decoder is composed of two decoder cells, as shown in Fig. 5-(b). The i-th decoder cell takes three inputs: o^{i-1}_de, the estimation vector of the finger joint angle from the (i-1)-th decoder cell; h^{i-1}_de, the hidden vector from the (i-1)-th decoder cell; and O_en, the encoder output. In the case of the thumb decoder, o^i_de, i ∈ {1, 2}, is a one-hot vector, and θ_i(t) is the non-zero entry at the i-th index of the vector o^i_de. The decoder cells are identical across the five decoders. o^{i-1}_de passes through a linear layer (f_linear1, 14 × 256) with a dropout (p = 0.1). This vector (c_1) is concatenated with h^{i-1}_de. Then the concatenated vector c_2 ∈ R^512 passes through a linear layer (f_linear2, 512 × 4) and a softmax layer. This outcome becomes the attention matrix (R^{4×1}).
The key, the value, and the query of the attention mechanism can be defined accordingly. The dot product was used to define the attention score (also known as Luong attention [31]); the attention score Att_score ∈ R is the dot product of the key and the query. The attention matrix is calculated by multiplying the key and the query, and this matrix multiplication is the crux of the attention mechanism: it indicates which output O^i_en the decoder cell focuses on when predicting the finger joint angle o^i_de.
Multiplication of Att_out and O_en gives c_3. Then the vector c_3 ∈ R^256 is concatenated with the vector c_1 ∈ R^256 and passes through a linear layer (f_linear3, 512 × 256) and a rectified linear unit (ReLU) layer [32]. The output vector c_4 can be represented as c_4 = ReLU(f_linear3([c_1; c_3])). The vector c_4 and the previous hidden vector h^{i-1}_de are fed into the GRU, which yields an output c_5 ∈ R^256 and a hidden vector h^i_de ∈ R^256:

c_5, h^i_de = GRU(c_4, h^{i-1}_de).   (8)

Then, the vector c_5 passes through the linear layer (f_linear4, 256 × 14) and yields o^i_de ∈ R^14. The training loss is defined as the root-mean-squared error (RMSE) between the estimated angles at time t (o^i_de ∈ R^14) and the reference angles at time t (y(t) ∈ R^14), where i ∈ {1, 2, ..., 14}.
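The dot-product attention step described above can be sketched in NumPy. This is a minimal illustration: the shapes follow the text (four encoder outputs of size 256), but the variable names and the random query are assumptions.

```python
# Sketch of the dot-product (Luong-style) attention step inside a decoder
# cell: the 4 encoder outputs are scored against a query vector, softmaxed
# into attention weights, and combined into a context vector.
import numpy as np

rng = np.random.default_rng(1)
O_en = rng.normal(size=(4, 256))     # encoder outputs O_en^i, i = 1..4
query = rng.normal(size=256)         # stand-in for the decoder-derived query

scores = O_en @ query                # one dot-product score per sensor
att = np.exp(scores - scores.max())
att /= att.sum()                     # softmax -> attention weights (R^{4x1})
c3 = att @ O_en                      # context vector: weighted sum of O_en
```

Because the weights are softmax-normalized, `c3` stays in the same scale as the encoder outputs, and the four weights can be read directly as "how much each sensor's compressed information contributed" to this joint-angle estimate.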

An L2 regularization term is added with a weight decay λ_reg = 0.00001. Since the i-th entry of o^i_de is θ_i(t), the loss function is expressed as

L = RMSE(o_de, y(t)) + λ_reg ||W||^2_2,

where W is the learnable weight.
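A minimal sketch of this loss (RMSE over the 14 angles plus L2 weight decay with λ_reg = 10^-5); the function name and the toy prediction/reference vectors are illustrative assumptions:

```python
# RMSE training loss with L2 weight decay (illustrative sketch;
# lambda = 1e-5 is from the text, the toy data is not).
import numpy as np

def loss(pred, ref, weights, lam=1e-5):
    rmse = np.sqrt(np.mean((pred - ref) ** 2))        # RMSE over 14 angles
    l2 = lam * sum(np.sum(w ** 2) for w in weights)   # L2 regularization
    return rmse + l2

pred = np.full(14, 0.5)      # toy estimated angles (normalized 0-1)
ref = np.full(14, 0.6)       # toy reference angles
w = [np.ones((14, 256))]     # toy learnable weight matrix
```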
IV. RESULT

A. Data Preparation
The EMG data from the four sensors (X) and their corresponding 14 finger joint angles (Y) were preprocessed to form the input data (X, Y) for training the model.

B. Model Training
To evaluate the performance of the proposed model, a comparison study was conducted in which we checked the effectiveness of the attention mechanism and the RNN module. Three different neural network models were trained and their best results were compared: a simple neural network, the proposed model without the attention mechanism, and the proposed model with the attention mechanism. Each model has nearly five million learnable parameters so that all three have the same model complexity, and each model was trained in the same hyperparameter space. The simple neural network, a multilayer perceptron (MLP) with five hidden layers, was defined and included in the comparison study to assess the benefit of the RNN, the deep learning module for time series data. The size of the hidden layers was fixed to 256 for all three networks. The hyperparameters were the teacher-forcing ratio and the learning rate. The teacher-forcing ratio (γ_tfr) was searched over γ_tfr ∈ {0.5, 0.7, 0.9}, and the learning rate (γ_lr) was searched over γ_lr ∈ {0.0001, 0.0003, 0.0005, 0.0007}. The code was implemented with the deep learning framework PyTorch 1.8. Minibatch stochastic gradient descent with a batch size of 1024 was used to optimize the loss function, using the Adam optimizer (β_1 = 0.9, β_2 = 0.999, and ε = 10^-8) [33].
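The hyperparameter search described above amounts to a small 3 × 4 grid. It can be sketched as a simple loop, where `train_and_eval` is a hypothetical stand-in for the actual training routine (its body here is a placeholder, not the paper's model):

```python
# Grid search over teacher-forcing ratio and learning rate
# (train_and_eval is a hypothetical placeholder returning a toy score).
from itertools import product

tfr_grid = [0.5, 0.7, 0.9]                     # teacher-forcing ratios
lr_grid = [0.0001, 0.0003, 0.0005, 0.0007]     # learning rates

def train_and_eval(tfr, lr):
    # Placeholder: would train the model and return validation RMSE.
    return (tfr - 0.7) ** 2 + (lr - 0.0003) ** 2

results = {(tfr, lr): train_and_eval(tfr, lr)
           for tfr, lr in product(tfr_grid, lr_grid)}
best = min(results, key=results.get)           # configuration with lowest error
```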
The test accuracies of the three models for estimating the 14 finger joint angles are shown in Fig. 6 and Table I. The error was defined as the RMSE between the estimated y_i(t) and reference θ_i(t) values.
The test errors of the encoder-decoder networks with and without attention are lower than those of the simple neural network, except for 1-MCP. In particular, 2-MCP, 3-PIP, and 3-DIP showed relatively large differences in test error between the simple network and the network without attention, and also between the simple network and the network with attention. In addition, the test errors of the network with attention for the thumb angles (1-MCP, 1-IP), the index finger angles (2-MCP, 2-PIP), the ring finger angles (4-MCP, 4-DIP), and the little finger angles (5-MCP, 5-PIP, 5-DIP) are lower than those of the network without attention. However, the test errors of the attention model for the middle finger (3-PIP, 3-DIP) are higher than those of the model without attention.

A. Comparison of Prediction Accuracy of Three Different Models
The encoder-decoder networks showed more accurate and stable estimation than the simple neural network
(Fig. 6 and Table I), implying that it is more appropriate to use a GRU, a general basic unit for sequential data processing, for the analysis of EMG data than to simply stack hidden layers in a simple neural network (Fig. 7). This result also suggests that the GRU-based encoder-decoder model is suitable for complex classification or prediction tasks that interpret complex human biosignals, such as EEG or EMG.
This study also shows the generalization ability of the proposed model. Few studies have implemented machine learning models that estimate finger joint angles when the training data and the test data are completely different. Previous studies on predicting finger joint angles performed both model training and testing within the same motion data of individual fingers [18]-[21]. However, the optimal relationship (f) between the EMG signals (X) and the finger joint angles (Y) found by machine learning shows a strong nonlinearity, indicating that the response to a linear combination of muscle signals, f(x_1 + x_2), is not the same as the linear combination of f(x_1) and f(x_2) [34], [35]. In other words, f(x_1 + x_2) ≠ f(x_1) + f(x_2) holds, where x_1 and x_2 are EMG signals of individual finger motions. This means that a linear combination of the EMG and joint angle data collected during Task 1 cannot fully represent the data from both Task 1 and Task 2. Therefore, the performance of the proposed model validated on different data sets indicates higher generalization accuracy than the previous models. In addition, we calculated the Pearson correlation coefficients between the predicted and the estimated angles to quantify the performance of continuous prediction; the 14 coefficients are shown in Table II. We also conducted a one-way analysis of variance (ANOVA) test between the three different models to check the differences. The 14 p-values from the ANOVA test are less than 0.05, so we can conclude that there are significant differences in the mean values of the predicted angles from the three models.
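The Pearson correlation used for Table II can be computed directly with `np.corrcoef`. A sketch on synthetic traces (the sinusoidal reference and the noise level are illustrative assumptions, not the study's data):

```python
# Pearson correlation between a predicted and a reference joint-angle trace
# (synthetic traces for illustration only).
import numpy as np

rng = np.random.default_rng(2)
ref = np.sin(np.linspace(0, 4 * np.pi, 200))   # reference angle trace
pred = ref + 0.1 * rng.normal(size=200)        # noisy "prediction"
r = np.corrcoef(pred, ref)[0, 1]               # Pearson r
```

Unlike RMSE, Pearson r ignores fixed offsets and scale, so the two metrics together separate "follows the motion shape" from "matches the absolute angle".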

B. Explainable Model by Attention
One of the significant implications of this study is the interpretation of the attention matrices after training (Att_out). The attention matrices and our interpretation provide a reason for the higher estimation accuracy of the encoder-decoder network with attention compared to that without attention. Fig. 8 shows the color map of the attention matrix after training (Att_out) when moving individual fingers (Task 1), composed of the basic attention matrices (Att^p_out, p ∈ {1, 2, ..., 14}) stacked in the column direction; this explains why the proposed network yields higher accuracy than previously proposed machine learning methods. Att^p_out is
an attention matrix, one of the outcomes of the decoder cell that estimates θ_p. The width of Att^p_out depends on the experiment time. For example, if the subject performed only thumb flexion and extension (the total experiment time is two seconds, and the data are resampled at 10 Hz for visualization), we obtain two attention matrices, Att^1_out ∈ R^{4×20} and Att^2_out ∈ R^{4×20}. The x-axis of Att^p_out is time (10^-1 s = 1/f_resample s) and the y-axis is the encoder output values (O^i_en(t), i ∈ {1, 2, 3, 4}) referenced by the decoder cells to predict the joint angle θ_p. For a fixed time t_0 (j = t_0), each Att^p_out(i, j = t_0), i ∈ {1, 2, 3, 4}, indicates the contribution of O^i_en(t) when predicting the angle θ_p. These values are normalized, i.e., Σ^4_{i=1} Att^p_out(i, j = t_0) = 1. Fig. 8 shows the attention matrix after training (Att_out) for the evaluation data (one set of Task 1). Att_out consists of a total of 14 Att^p_out ∈ R^{4×100}, p ∈ {1, 2, ..., 14}.
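The column normalization of Att^p_out, and the "most-referenced sensor per time step" reading used in the interpretation below, can be checked numerically. A sketch with random scores (the logits are illustrative, not trained values; the R^{4×100} shape follows the text):

```python
# Column-wise softmax over an R^{4x100} attention matrix, plus the
# per-time-step dominant sensor index (random logits for illustration).
import numpy as np

rng = np.random.default_rng(3)
logits = rng.normal(size=(4, 100))                    # raw decoder scores
att = np.exp(logits - logits.max(axis=0, keepdims=True))
att /= att.sum(axis=0, keepdims=True)                 # softmax over 4 outputs
dominant = att.argmax(axis=0)                         # brightest row per t0
```

The `dominant` vector is the numerical analogue of reading the brightest row of the color map at each time step.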
The following explains the white areas in the attention matrix and how they are related to the experimental observation.

1) Att^1_out and Att^2_out: They usually show attention (bright color) on O^4_en(t), but for thumb flexion and extension, Att^2_out shows active referencing to O^2_en(t) during 0.4 s ∼ 1.05 s and 1.55 s ∼ 1.75 s, which is the compressed information on EMG_1(t) and EMG_2(t). This matches the experimental observation that EMG_2(t) is highly activated when flexing and extending the thumb (Fig. 4). They also show different patterns, especially on O^1_en(t); in Fig. 4, EMG_1 is slightly activated with the middle finger.

5) Att_out:
The above interpretations show that the attention matrix after training learns the relationship between the EMG signals and the finger joint angles. This means that the attention mechanism induces the encoder-decoder network to refer to the experimental observations for estimating the finger joint angles with higher accuracy. Therefore, the attention mechanism contributes to the high estimation accuracy by providing the model with the opportunity to learn the relationship between the finger joint angles and the corresponding muscle signals. This is consistent with a previous study that reported on the definition of explainable AI (XAI) in the medical field [36]. Although prior research has shown high estimation accuracy in decoding biosignals (EEG and EMG) using machine learning, those studies were not able to effectively explain why their learning algorithms yielded the results [37]-[40].

C. Challenges of Real Time Control of Prosthesis
Real-time control of an upper-limb robotic prosthesis in a clinical setting remains a challenge due to the complex activation of multiple muscles and the multiple DOFs of upper arm movements. The performance of a robotic prosthesis with machine learning is related to the time complexity of the device, which is composed of algorithmic complexity, computational time, and hardware costs [41]. The algorithmic (time) complexity of our model is composed of the complexity of the attention mechanism, O(nd), and the complexity of the GRU, O(nd^2), where n is the number of EMG sensors and d is the hidden size of the attention matrix and the GRU [24], [42]. According to prior research, the time complexities of machine learning models that control a robot arm using EMG signals, such as a convolutional neural network (CNN) or a multilayer perceptron (MLP), are O(knd^2) and O(n), respectively, where n is the number of sensors, d is the hidden size, and k is the kernel size [43], [44]. We think the time complexity of the proposed model will not be a problem if proper n, d, and robot hardware are selected. However, the real-time control strategy is still limited due to the tradeoff between the complexity of the mechanical configuration and the complexity of the control systems [45]-[48].

D. Future Work
One of the immediate areas of future work will be the implementation of transfer learning [49] or incremental learning [50] to train models using the joint angle data and the forearm EMG data collected at specific arm positions, and to predict the finger joint angles using EMG signals from different arm postures. Another important area of future work will be experimental validation of the proposed approach using physical systems, including robotic hands [51]-[53] and wearable robots [54]-[57] that require seamless interfaces between user intentions and actuation. In addition, it would be beneficial to check the practicality of the proposed AI model in real-world applications, such as clinical rehabilitation, where real-time control is critical. In this way, we can also verify the long-term effect of the proposed method.

VI. CONCLUSION
This study proposed a new machine learning model for estimating finger joint angles from raw EMG signals.
The proposed model has a structure of a shared encoder and five parallel decoders with GRUs as the basic units, and an attention mechanism is applied to construct an explainable machine learning model. The encoder consists of four GRU cells, each decoder consists of two or three decoder cells (the same number as that of the estimated angles), and each of the five decoders has an attention matrix. We first predicted complex data (Task 2: random finger movements) using the model trained with only simple data (Task 1: individual finger movements), which demonstrated the generalization ability of the proposed model. In addition, the attention mechanism applied in this study provides the reason why the proposed model allows more accurate and stable estimation than the two other models: a simple neural network and an encoder-decoder network without attention. The color map results of the attention matrix after training showed that the learning proceeded by reflecting the physiological relationship between the forearm EMG signals and the finger angles during model training.

Fig. 1. Experiments composed of two tasks, with Task 1 for training data and Task 2 for testing data.

Fig. 2. Locations of the four EMG sensors and the three muscles on which the sensors are placed. All three muscles are involved in finger flexion.

Fig. 3. Examples of finger joint tracking of five different hand motions in Task 1 and Task 2 using a machine learning solution (MediaPipe Hands). MediaPipe calculates the finger joint positions (x, y, z) from a 2D image.

Fig. 4. EMG activation data from Task 1 showing specific activation of a specific sensor with flexion and extension of a specific finger. (a) Sensor 1, (b) sensor 2, (c) sensor 3, and (d) sensor 4. The dotted box marks EMG activation that is more pronounced for specific finger movements than for the other fingers.

Fig. 5. Overall structure of the proposed network: (a) encoder-decoder model with an attention matrix; (b) the entire model.
The train and test data sets are different. When training the model, the data set (X_train, Y_train) is composed only of the EMG signals and the finger joint angles obtained from Task 1. When testing the model, the data set (X_test, Y_test) is composed of the EMG signals and the finger joint angles obtained from Task 2. The ranges of the angles in the train data and in the test data are the same.

Fig. 6. Error plots for estimating joint angles in Task 2 for the three different models. "*" indicates differences of more than 5%.

Fig. 7. Estimation results of the 14 joint angles (Task 2: random finger flexion and extension). The x-axis is time and the y-axis is the normalized finger angle between 0 and 1. The black lines are the references obtained from the experiment, the red lines are the estimations by the simple neural network, and the green lines are the estimations of the proposed model.

TABLE I
ERRORS IN ESTIMATING ANGLES OF 14 FINGER JOINTS USING THREE DIFFERENT MODELS (TASK 2)

For flexion and extension of the index finger, Att^3_out, Att^4_out, and Att^5_out show attention on O^4_en(t) and especially on O^1_en(t), which is the compressed information of EMG_1(t). This matches the experimental observation that EMG_1(t) is highly activated when flexing and extending the index finger. The relatively bright color of Att^4_out(1, t) (2.3 s ∼ 2.55 s) and Att^5_out(1, t) (2.3 s ∼ 2.6 s) also matches the experimental result that EMG_1(t) is more activated with the index finger than with the middle or the ring finger.