A Residual Movement Classification Based User Interface for Control of Assistive Devices by Persons with Complete Tetraplegia

Objective: Complete tetraplegia can deprive a person of hand function. Assistive technologies may improve autonomy, but the need persists for ergonomic interfaces through which users can pilot these devices. Despite the paralysis of their arms, people with tetraplegia may retain residual shoulder movements. In this work we explored these movements as a means to control assistive devices. Methods: We captured shoulder movement with a single inertial sensor and, by training a support vector machine based classifier, decoded this information into user intent. Results: The setup and training process takes only a few minutes, so the classifiers can be user specific. We tested the algorithm with 10 able-bodied and 2 spinal cord injury participants. The average classification accuracy was 84% and 80%, respectively. Conclusion: The proposed algorithm is easy to set up, its operation is fully automated, and the achieved results are on par with state-of-the-art systems. Significance: Assistive devices for persons without hand function present limitations in their user interfaces. Our work presents a novel method to overcome some of these limitations by classifying user movement and decoding it into user intent, with simple setup and training and no need for manual tuning. We demonstrate its feasibility through experiments with end users, including persons with complete tetraplegia without hand function.


I. INTRODUCTION
Individuals who have suffered a spinal cord injury (SCI) often have severe loss of motor function. Almost 60% of SCIs result in complete or incomplete tetraplegia [1], affecting both lower and upper limbs. For this population, regaining hand function is ranked as the top priority [2]. Different methods have been studied and applied to restore upper limb function after SCI and other injuries, most notably robotic assistance and functional electrical stimulation (FES). However, the control of either exoskeletons or FES systems by individuals with complete tetraplegia and without hand function is still a challenge. Nevertheless, various alternatives based on voluntary movement detection or voluntary muscle contraction have emerged as human-machine interfaces for persons who lack upper limb function, some examples being systems based on mechanomyography (MMG) [3], electrooculographic potential [4], and gaze tracking [5]. All of these present significant challenges yet to be overcome, such as poor artifact rejection and low accuracy [5]-[7].
Voice commands [8] are also an effective alternative, as most people with tetraplegia already use such technology to interface with home devices, phones and computers. However, voice control may involve long response times and be stigmatizing.
Recently, brain-computer interface (BCI) research has gained momentum. BCIs are attractive because they aim to restore motor control in a way very similar to how it was before the injury [9]-[11]. Nevertheless, current BCIs still lack the accuracy needed for upper limb control [12], [13].
The most commonly used methods are based on electromyography (EMG). Several strategies have been used to acquire muscle activity signals from different locations to control neuroprostheses and exoskeletons [14]-[18]. However, EMG is highly sensitive to sensor positioning. Additionally, each channel can usually control a single degree of freedom, and many systems therefore rely on several channels, increasing complexity. Nevertheless, EMG remains the first choice as input for user intent when controlling upper limb orthoses [14]-[17].
An alternative to inferring information directly from muscle activity is intent decoding from residual movements that individuals with tetraplegia may still be able to perform. This can be done with camera-based systems [19], but these require a complex motion capture apparatus. A more convenient approach consists of positioning sensors on the user's body, such as position transducers [20] or inertial measurement units (IMU) [21], [22].
Movement analysis with IMUs is less sensitive to sensor placement than EMG solutions. Several works have investigated strategies for IMU-based user control of assistive devices. In [21], the authors developed a 2D IMU to be used on an active knee prosthesis. They presented a method for controlling its activation based on voluntary movement and mechanical constraints, and tested it in real time on a single able-bodied participant, who was able to walk wearing the system. In [23], wireless wrist-mounted 3D inertial sensors comprising accelerometers, gyroscopes and magnetometers were used to identify activities such as standing, walking, jogging and raising the arms. The authors trained a linear discriminant analysis (LDA) classifier on features from sliding time-domain windows, and evaluated it on 20 able-bodied participants. They tested several parameter configurations, such as window size, overlap and subsets of sensors. The best accuracy achieved by the classifier was 96.33%. The authors of [24] targeted high-performance athletes. They proposed an activity classifier based on wearable IMU data processed with the Discrete Wavelet Transform and Random Forests, reaching 98% accuracy at classifying activities such as walking, running, jumping and kicking. They then used other methods to estimate joint angles and perform fine movement analysis in order to predict injury risk factors, which was their final goal. Experiments were performed on 9 able-bodied participants and 1 with an injury. Although this injury was not described, the participant was able to perform the same activities as the others, but with kinematic differences that were analyzed by the developed algorithm. The authors in [25] evaluated to what extent a system based on IMUs and EMG could substitute for the conventional joystick controllers used in assistive devices.
They developed a custom system that captured movement and muscle activity of the head and upper limbs while 10 able-bodied participants controlled a 6-degrees-of-freedom robotic arm. They found their system to be almost as fast, with a 30% time overhead. In addition, they demonstrated improvement when training was allowed.
In [26], [27], an upper limb prosthesis for trans-humeral amputees was equipped with an IMU-based controller. A Radial Basis Function Network-based regression was performed to model the joint angles and use the participant's residual movements to control the elbow joint. The system was used by an amputee to point at 10 different targets with an average error of 1.5 cm, an improvement over the standard EMG-based control. In [28], the authors built a large dataset comprising recordings from 20 able-bodied and 2 amputee participants executing 40 movements. They collected data with EMG and IMU sensors placed on the upper limbs. Their goal was to evaluate whether the two sensor modalities combined would present any improvement over EMG alone, which is the standard for amputee prostheses. They trained an LDA system to predict intent from 40 classes offline or 6 classes online. Results were 77.8% classification accuracy in offline tests and an 80% completion rate in online experiments with 11 able-bodied and 1 amputee participant. They proposed a system with 4 to 6 multimodal sensors for transradial amputee prosthesis control.
The literature focusing on individuals with SCI is scarce compared to that on amputees. IMUs have been used to track trunk angle as an input to trigger FES for lower limb assistance in sit-to-stand [29] and sitting-pivot [30] transfers in individuals with paraplegia. Both works showed that upper body kinematics can successfully trigger FES and decrease the load on the upper limbs when performing these activities. The authors in [31] developed an LDA-based system that predicted user intent based on upper limb movement. They tested it with an adapted rowing platform and a participant with paraplegia. The lower limb movements were powered by FES, which was activated by the upper limb rowing motions. They report up to 100% successful state transitions when 300 ms delays or advances were tolerated.
However, experiments with individuals with tetraplegia are even more challenging. In [32], advancing the work from [33], the authors used residual shoulder movements captured with IMUs to control a wheelchair. They used Principal Component Analysis for dimensionality reduction and proportional control. The system was tested by 3 users with tetraplegia who were allowed to train before evaluation. After capturing random movements from their shoulders, the authors searched for the 2 most easily classifiable ones. They then asked the participants to train with only those two movements to control a wheelchair. They found this method slower than standard joysticks, but performance seemed to improve with training over time.
Our hypothesis is that users with complete tetraplegia and no hand function, but who can still move their shoulders, may be capable of using these residual movements to control neuroprostheses that restore hand and wrist functions. In previous works [22], [34], we developed a modified k-nearest neighbor classifier with dimensionality reduction to decode shoulder movement, captured by a single IMU, into user intent. Participants with complete tetraplegia used it to control a robotic hand during single-session experiments. After evaluating the system with 9 participants, we found they could correctly trigger the desired command 91% of the time. The system could classify between two different movements which, in conjunction with a finite state machine, could control several actions of the robotic hand. The classifier was able to learn any two movements from a given participant, but it required manual tuning for optimization. This paper presents an intent recognition algorithm, using a single IMU and based on residual movement classification, for controlling assistive devices by persons with tetraplegia. The goal is an interface that acts as a neuroprosthesis controller by triggering its commands, in contrast to systems based on continuous movement control [35]. In comparison to our previous work, the classification algorithm we present here, based on support vector machines, achieves better results and requires no manual tuning. We have also developed a feature with which users can incrementally refine the dataset on which the classifier is trained, effectively improving its results whenever needed. Our goal was to design an interface for the control of assistive devices by persons with complete tetraplegia that is quick to set up and easy to operate. We evaluated the system with a group of 10 able-bodied participants and another of 2 participants with complete tetraplegia and no hand function caused by SCI.
Participants in the second group used the system over several sessions, which allowed us to observe the training effect.

II. MATERIALS AND METHODS
Participants were seated and an IMU was placed on one shoulder (Fig. 1). They were asked to perform 3 shoulder movements: forward, upward, and backward. The IMU captured these movement data and streamed them wirelessly to a computer, on which all data processing was performed. The experimental protocol had three phases: calibration, Dataset Refinement and evaluation. The second phase was optional and did not always take place. Two groups participated in the study: one formed by able-bodied participants, and one by individuals with complete tetraplegia. The three phases were slightly different in each group, and are further described in each group's subsection.
Fig. 1. Participants were asked to perform movements forward (label 1), upward (label 2), and backward (label 3, dashed arrow). Data was streamed wirelessly to a computer.
Kinematic data was collected from a wireless Trigno Avanti IMU sensor (Delsys, USA) at 148 Hz. The global data flow diagram is shown in Fig. 2. The main system is composed of three major subsystems: data pre-processing, movement detection, and movement classification. Although the sensor provides accelerometer, gyroscope and magnetometer data, the latter was not used, in order to avoid dealing with electromagnetic interference, which may be relevant for future use in various types of environments. The three axes of the accelerometer and the three axes of the gyroscope together form the 6-dimensional raw input data.

A. Data pre-processing
First, the difference between the current and previous data points is taken; the result is then band-pass filtered with a window-based (Hamming window) FIR filter of order 25 and cutoff frequencies of 0.3 and 3 Hz. For every new data point processed, a window containing the last 1.35 seconds is considered for feature extraction. This is an empirical value found to work well in pilot tests, and it can be customized for each user. Since the IMU's sample frequency was 148 Hz, 1.35 s represented 200 data points, and every new window fully overlapped the previous one except for the oldest and newest data points. This process is illustrated in Fig. 3.
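As an illustration, the pre-processing stage above can be sketched as follows. This is a minimal sketch, assuming SciPy's FIR design routines (the paper does not name an implementation), with synthetic data standing in for the IMU stream; all names are illustrative.

```python
import numpy as np
from scipy.signal import firwin, lfilter

FS = 148          # IMU sample rate (Hz)
WIN = 200         # 1.35 s window at 148 Hz

# Order-25 FIR band-pass (Hamming window), 0.3-3 Hz pass band.
taps = firwin(25 + 1, [0.3, 3.0], pass_zero=False, window="hamming", fs=FS)

def preprocess(raw):
    # Differentiate each axis (difference between current and previous
    # samples), then band-pass filter the result.
    d = np.diff(raw, axis=0)
    return lfilter(taps, 1.0, d, axis=0)

def sliding_windows(stream, win=WIN):
    # Each new sample yields a window overlapping the previous one in all
    # but the oldest and newest points.
    for end in range(win, len(stream) + 1):
        yield stream[end - win:end]

raw = np.random.randn(1000, 6)        # synthetic 6-D accel + gyro stream
proc = preprocess(raw)
windows = list(sliding_windows(proc))
```

The differentiation step also removes the constant gravity offset from the accelerometer channels, which the Discussion notes reduces sensitivity to sensor placement.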

B. Movement detection
After data from the calibration phase is collected, the accelerometer resultant is calculated for each trial as the norm of the 3D acceleration, forming a 1-D data vector (Fig. 4). This vector is differentiated, rectified and band-pass filtered. A threshold is calculated from an empirical threshold factor and the maximum value found in the processed vector, as seen in eq. 1:

th = th_f · max(|W_p|),   (1)

where th is the calculated threshold, th_f is the threshold factor, set to 0.2, and |W_p| is the resultant acceleration calculated from the processed window W_p as described in eq. 2:

|W_p| = sqrt(W_px^2 + W_py^2 + W_pz^2),   (2)

where W_px, W_py and W_pz are the x, y and z components of acceleration in the processed window W_p. A given window is considered to contain a movement if the beginning of the window has an absolute value greater than this threshold in the processed accelerometer resultant. Preliminary tests showed that this technique was robust within our experimental context, due to the extremely low level of noise in the raw data. After a movement is detected, to avoid multiple detections of it, the following windows are ignored until a time equivalent to 1.5 windows has passed. This process is the same during the calibration, Dataset Refinement and evaluation phases.
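A minimal sketch of this detection logic (threshold from the calibration maximum, refractory period of 1.5 windows) might look as follows; the band-pass filtering step is omitted here for brevity, and all names are illustrative.

```python
import numpy as np

def processed_resultant(acc):
    # Norm of the 3D acceleration, then differentiate and rectify
    # (band-pass filtering omitted in this sketch).
    r = np.linalg.norm(acc, axis=1)
    return np.abs(np.diff(r))

def calibration_threshold(acc, th_factor=0.2):
    # th = th_f * max over the processed resultant, with th_f = 0.2.
    return th_factor * processed_resultant(acc).max()

def detect_movements(acc, th, win=200, refractory=1.5):
    # A window "contains a movement" when its first processed value
    # exceeds th; after a detection, skip 1.5 windows' worth of samples.
    p = processed_resultant(acc)
    onsets, skip_until = [], 0
    for start in range(len(p) - win + 1):
        if start < skip_until:
            continue
        if p[start] > th:
            onsets.append(start)
            skip_until = start + int(refractory * win)
    return onsets
```

The same routine serves both the offline search over calibration data and the online detection used in the evaluation phase.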
After th is calculated, the calibration dataset is searched for movements. This process is illustrated in Fig. 4. Each window found to contain a movement undergoes the feature extraction steps described in subsection II-A and Fig. 5. Each of these movements is a sample S_n, labeled according to the movement requested from the participant in the calibration phase. For consistency, forward movements were labeled 1, upward movements 2 and backward movements 3 for all participants and trials. The training dataset built in the calibration phase contains 20 samples of each of the three required movements, resulting in 60 samples and 60 labels (1, 2, or 3).
Feature extraction and classifier training: For each sample, three features are calculated: root mean square (RMS, eq. 3), average value (AVG, eq. 4), and power spectral density (PSD, eq. 7). The latter is used in this work as the mean square of the power spectrum P_x (eq. 6), which is calculated with the Fast Fourier Transform method and a Hamming window (eq. 5). This is done for each of the 6 axes, resulting in 18 values. A sample is then defined as a vector containing these 18 features extracted from a single movement (see Fig. 5):

RMS = sqrt((1/n) · Σ_i x_i^2),   (3)

AVG = (1/n) · Σ_i x_i,   (4)

X_w,i = X_i · w_i,   (5)

P_x = |FFT(X_w)|^2,   (6)

PSD = mean(P_x^2),   (7)

where n is the size of the window, x is the data value, X is the raw data window, w is the Hamming window, w_i are the Hamming window coefficients, and P_x is the power spectrum. The classifier is a ν-support vector machine (SVM) as implemented by [36], with a linear kernel and a one-vs-all strategy for multiclass classification. After training, the classifier is saved to a file together with the samples used for training and the following parameters: data dimension, filter order, filter cutoff frequencies, wait time after movement detection, threshold, and threshold factor. These parameters are customizable for each participant as needed.

Fig. 4. Left: movement detection diagram. The resultant accelerometer data is differentiated, rectified and filtered, forming the processed resultant data vector. Right: the top graph shows the raw data resultant, formed by the accelerometer data. The bottom graph shows the processed resultant data, calculated as in eq. 2. The orange line represents the threshold used for movement detection. The yellow areas are windows in which movements were detected. Green stars indicate the exact times when data values were greater than the threshold and a movement was detected.
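The 18-dimensional feature extraction and classifier training could be sketched as below. The exact windowing and normalization details are assumptions, scikit-learn's NuSVC stands in for the ν-SVM implementation of [36], the synthetic calibration set replaces real IMU windows, and the feature standardization step is our addition for numerical stability, not something stated in the paper.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import NuSVC

def axis_features(x):
    # RMS (eq. 3), average (eq. 4), and the PSD summary taken as the mean
    # square of the Hamming-windowed FFT power spectrum (eqs. 5-7).
    w = np.hamming(len(x))
    px = np.abs(np.fft.rfft(x * w)) ** 2      # power spectrum P_x
    return [np.sqrt(np.mean(x ** 2)), np.mean(x), np.mean(px ** 2)]

def sample_vector(window):
    # window: (200 samples, 6 axes) -> 18-D vector (3 features x 6 axes).
    return np.concatenate([axis_features(window[:, k])
                           for k in range(window.shape[1])])

# Synthetic calibration set: 20 windows per movement class (labels 1, 2, 3).
rng = np.random.default_rng(0)
X = np.stack([sample_vector(rng.normal(loc=c, size=(200, 6)))
              for c in (0.0, 1.0, 2.0) for _ in range(20)])
y = np.repeat([1, 2, 3], 20)

# nu-SVM with a linear kernel, trained on the 60-sample pool.
clf = make_pipeline(StandardScaler(), NuSVC(nu=0.25, kernel="linear")).fit(X, y)
```

On the synthetic classes above the linear ν-SVM separates the samples easily; real shoulder-movement windows are of course harder.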

C. Dataset Refinement
After a classifier is trained, it is possible to improve it by performing a Dataset Refinement. During this procedure, the participant is asked to perform more repetitions of the movements already calibrated. The algorithm then repeats the steps described in subsections II-A and II-B to build new samples. The new samples are added to the samples pool and the classifier is retrained. The user can choose to keep or discard the oldest sample when adding a new one to the pool. It is also possible to update the samples pool after every new movement, or only after those not correctly classified. Either way, the execution of new random movements is requested. The default Dataset Refinement procedure continues until 10 consecutive random movements are correctly classified. However, the user can choose to further train one specific movement if its classification accuracy is notably worse than the others. In this work, the following default settings were used: random movements were requested, the samples pool was only updated when a classification was incorrect, and the procedure ended when 10 consecutive correct classifications were achieved. The oldest sample was removed when a new one was added to the pool only when we subjectively believed the participant's movements had changed since the calibration.
Fig. 6. Dataset Refinement diagram. Updating the pool can also be done in case of correct classification. Optionally, the oldest sample for that movement can be discarded. This process can be repeated as many times as desired. By default, it continues until 10 consecutive correct classifications are reached.
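The default refinement loop described above (update the pool only on misclassification, stop after 10 consecutive correct classifications) can be expressed schematically. The callables below are placeholders for the real acquisition, prediction and retraining routines; their names are ours.

```python
def refine(retrain, predict, request_movement, pool, labels,
           target_streak=10, drop_oldest=False):
    """Dataset Refinement sketch.

    request_movement() -> (sample, true_label): one new labeled movement.
    predict(sample) -> label: current classifier's prediction.
    retrain(pool, labels): rebuild the classifier from the updated pool.
    """
    streak = 0
    while streak < target_streak:
        sample, true_label = request_movement()
        if predict(sample) == true_label:
            streak += 1           # count consecutive correct classifications
            continue
        streak = 0                # misclassified: update the pool and retrain
        if drop_oldest:
            pool.pop(0)
            labels.pop(0)
        pool.append(sample)
        labels.append(true_label)
        retrain(pool, labels)
    return pool, labels
```

Setting `drop_oldest=True` mirrors the option used when a participant's movements were believed to have changed since calibration.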

D. Evaluation
After the classifier was trained, the evaluation phase consisted of real-time movement classification. A random sequence of at least 20 movements out of the three possible ones was generated. Each movement to be executed was verbally indicated. Online movement detection worked the same way as in the calibration phase: a moving window of 1.35 seconds was considered, repeated for every new IMU data point. Whenever the beginning of that window had a resultant value greater than the threshold previously calculated for the classifier in use, a movement was detected and classified as described in subsection II-B. At each evaluation session, the ratio between correctly classified movements and the number of requested movements was defined as the performance.
When comparing means, one-way ANOVA was used. Data were normally distributed, as assessed with the Shapiro-Wilk test. Homoscedasticity was evaluated with either Bartlett's or Levene's test. Correlation is reported as the Pearson product-moment correlation coefficient.
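In Python, this statistical pipeline could look like the following sketch with SciPy; the performance arrays are hypothetical stand-ins for the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
perf_s = rng.normal(0.85, 0.05, size=8)    # hypothetical subgroup S performances
perf_u = rng.normal(0.83, 0.05, size=10)   # hypothetical subgroup U performances

# Normality (Shapiro-Wilk), homoscedasticity (Levene), then one-way ANOVA.
_, p_norm = stats.shapiro(np.concatenate([perf_s, perf_u]))
_, p_var = stats.levene(perf_s, perf_u)
_, p = stats.f_oneway(perf_s, perf_u)

# Pearson product-moment correlation (here against an arbitrary index).
r, _ = stats.pearsonr(np.arange(perf_u.size), perf_u)
```

Bartlett's test (`stats.bartlett`) can be substituted for Levene's when normality holds, matching the either/or choice described above.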

E. Able-bodied group
Ten participants were recruited and distributed between subgroups S and U. All signed written informed consent, and the protocol was approved by Inria's Operational Committee for the assessment of Legal and Ethical risks (COERLE), protocol number #2020-09, in March 2020. For both subgroups, an IMU was placed on the participant's shoulder as in Fig. 1. Then, for the initial calibration procedure, they performed 3 movements 10 times each: (1) shoulder forward, (2) shoulder upward and (3) shoulder backward. Finally, to evaluate the classifier, participants were asked to perform a sequence of 25 movements for the system to classify. In subgroup S, this was repeated once for each shoulder. The goal here was to establish a baseline for the classifier performance without the Dataset Refinement. In subgroup U, participants were asked to perform the evaluation twice with each shoulder, with a Dataset Refinement session between the two. The classifier performance was defined as the ratio between correctly classified movements and the total of 25 movements.

F. Individuals with tetraplegia
Two individuals with complete tetraplegia were recruited. They provided written informed consent before participating, in accordance with the Declaration of Helsinki. The protocol was approved by the Ethics Committee (CPP Ouest IV Nantes, France, ID-RCB #2019-A00808-49) and the Health Agency (ANSM) in 2020. The study was registered on ClinicalTrials.gov, titled Neural Stimulation for Hand Grasp (AGILIS), registration #NCT04306328. Both participants were AIS A, with a C4-level spinal cord injury, and had remaining shoulder movements but no hand function. Their shoulders' active range of motion was limited, but protraction, elevation and retraction seemed unaffected. In addition, endurance was limited by muscular fatigue. The experiments took place in a hospital environment over a 28-day period, during which participants took part in the experiment once a week.
The calibration phase was similar to that of the able-bodied group. After an IMU was placed on the shoulder, participants were asked to repeat three movements 20 times each. Again, the movements were (1) shoulder forward, (2) shoulder upward and (3) shoulder backward.
For the evaluation phase, participants were asked to perform a random sequence of 20 movements. The trial performance was defined in the same way as for the able-bodied group: the ratio between correctly classified movements and the total number of movements. The initial calibration was performed in session 1, as was the first evaluation trial. Then, in each new session, a new trial was performed using the same classifier as in the previous session. If the performance was not 100%, a Dataset Refinement was performed before a second trial. If the performance was 100% on the first trial of the session, the protocol was ended for that participant. Each participant had a maximum of 4 sessions. Figure 7 shows a diagram of the protocol for a participant, depending on their group.

III. RESULTS

A. Able-bodied group
Of the 10 participants recruited, 4 (s1-s4) took part in the subgroup S experiment, in which they performed the exercise once with each arm. The other 6 (u1-u6) took part in the subgroup U experiment, in which they performed the exercise twice with each arm. The exceptions were u3, who only used the left side due to discomfort in the right arm, and u6, who only performed one trial due to time constraints. Figure 8 shows the performances for the able-bodied group. Average performance was 85% for subgroup S and 83% for subgroup U (Fig. 10). There was no statistical difference between the two (p = 0.83). The average performance of the group was 84%.

B. SCI group
Both participants were able to use the system, with an average performance of 80% (Fig. 9). Participant SCI-1 achieved 100% performance on the second trial of the second session and the first trial of the third session. His average performance was 89%. Participant SCI-2 achieved 100% performance on the last trial, in the fourth session. His average performance was 76%. There was no statistical difference in performance between the two participants (Fig. 10, p = 0.27).

C. Both groups
The average performance of all participants in this study, including both the able-bodied and SCI groups, was 83%. As shown in Figure 11, the classifier with the greatest number of training samples had 62. Still, there was no correlation between the number of samples and performance (r = 0.19). Considering the two clusters in Figure 11, and for analysis purposes, we compared classifiers with more and with fewer samples than the midpoint between the average number in each cluster (46); no performance difference was found (p = 0.12). Note that the total number of samples was considered; for example, if a classifier had 60 samples, roughly 20 of those were from each movement type. Figure 12 shows boxplots of the test performance separated by movement type. In the able-bodied group, the upward movement resulted in better classifications (p = 0.03). In the SCI group, the backward movement was marginally worse (p = 0.06). Comparing the average performance before and after the Dataset Refinement, there are significant differences in both groups (Fig. 13, p < 0.01).

IV. DISCUSSION
A total of 38 trials were performed in the experimental protocol. Although there was no difference between the able-bodied and SCI groups, there was a significant difference between performances achieved before and after the Dataset Refinement. This was the case both for the able-bodied group and for participant SCI-2 (Fig. 13). It was not possible to analyze this aspect for participant SCI-1 due to lack of sufficient data, as he performed the procedure only once. Still, a visual inspection of Fig. 9a indicates improvement. After a single session of Dataset Refinement, performance improved, on average, from 75% to 90% in the able-bodied group and from 60% to 90% for participant SCI-2. These results are on par with [22], where only two movements were classified and manual tuning was necessary after classifier training. This required not only time but also expertise from the system operator, making it unusable by the participant alone. In contrast, in the present work all training and operation can be automated, and thus adapted to a clinical or home environment.
Nevertheless, in many cases performances over 90% were achieved even before the Dataset Refinement. When executed, it required no more than 1-2 minutes. Initial training was also not time consuming: from sensor placement to evaluation it took 5-10 minutes, including initial calibration and Dataset Refinement. Fig. 9b shows worse performances when comparing the first trial of a session to the last trial of the previous session. This may be due to the participant's difficulty in controlling his shoulder. In addition, the exact same sensor positioning could not be guaranteed between sessions. This could affect the classification, but its impact is minimized by the data differentiation step, which removes the gravity component of the acceleration. Moreover, the Dataset Refinement procedure can compensate for small positioning changes and alterations in the participant's movements. Regarding the inevitable changes in movements performed by participants over time, the Dataset Refinement updates the classifier in a similar way to the adaptive body-machine interface proposed in [37]. Nevertheless, in the future a reliable sensor attachment should be planned.

A. Data and setup complexity
Many works have achieved similar performance results, but often with significantly more training data or sensors; small increments in performance may come at high cost. The authors in [38] reported 95% classification accuracy with a sample dimensionality of 665, whereas ours is 18. [28] used 12 inertial and myoelectric sensors to classify 40 classes in offline mode and 6 classes in real time (one of them being rest). During tests with upper limb amputees controlling their prostheses, they were able to reduce the sensor count to 3 and still achieve 80% accuracy in real time.
Recently, the same group achieved 90% accuracy by choosing 2 sensors out of 12 and applying parameter optimization and confidence-based rejection [39]. Again, they used sensors that were both inertial and myoelectric, with amputees who had no other disabilities. This illustrates the trade-off between the number of sensors, the number of classes and accuracy.
The authors in [32] used 2 IMUs on each shoulder of participants with tetraplegia to control their wheelchairs, combining machine learning and proportional control. Participants could successfully control their chairs, but no faster than with the regular control. The authors identified two elements of training: one linked to the machine, and one linked to the user. Their training phase took 24 sessions over four months.
Although our results were achieved much more quickly (90% in three sessions in the worst case), we can see both effects of training: Fig. 13 shows the effect of training on the machine learning system, and Fig. 9 shows the user's improvement over time.
The Dataset Refinement makes it possible for the user and the machine to improve together. Since participants from the SCI group were not used to performing these shoulder movements, it was unsurprising that over the 28 days they spent in the hospital these movements changed from one session to the next. This was accentuated by the physiotherapy they went through during the period. Also, the exact location of the sensors inevitably changed slightly between sessions. In addition, they reported that they trained by themselves to try to get the best results.
The large amount of data that machine learning systems require is a known issue that some works focus on, particularly the difficult problem of generalization [40], which we did not test in this work. In comparison to other works that used many sensors, our results with only one are satisfactory, as sensor count seems to be a crucial limitation of many systems [41]. In addition, each movement could be trained with as few as 10 samples. This number could be increased either in the calibration phase or during the Dataset Refinement. However, according to the data in Fig. 11, there is no correlation between the number of samples and performance, even when this number was doubled, as was the case particularly within the SCI group.

B. Individualized movements & Threshold Optimization
The system developed in this work is theoretically capable of learning and classifying any movement, given appropriate parameters. In the presented protocol, all participants were asked to perform the same three movements: shoulder forward, upward and backward. Although there was no significant difference according to Fig. 12, for the two SCI participants the backward movement was harder to classify. They reported that this movement was harder to perform, a claim that was backed by some able-bodied participants as well. The threshold used in movement detection was automatically generated based on the calibration data and an empirically set parameter, which was unchanged throughout the protocol. Figure 14 shows the automatically calculated threshold versus performance.
Fig. 14. Evaluation performance by threshold for all trials. For analysis purposes, thresholds were separated into two clusters, as illustrated by the dashed vertical line. Each box is horizontally located at the median threshold of its cluster. No difference was found between the two clusters (p = 0.62) and correlation was low (r = 0.07).
Although no relevant correlation was found (r = 0.07), trials with lower thresholds seemed to result in worse performances, though the difference was not significant (p = 0.62).
Higher thresholds were calculated when movements were faster. This could perhaps be linked to better controlled movements, which in turn would indeed facilitate classification. If this is the case, manual tuning of the threshold would not improve classification performance, because the relation is not one of cause and effect. On the contrary, it would probably affect performance negatively by making the movement detection too sensitive or not sensitive enough: unintended movements would be detected, or entire movements would be missed altogether.

C. Further Comments and Study Limitations
The plurality of solutions to the user intent identification problem is an interesting topic. Differences in sensor placement, sensor type, pre-processing steps and classification or model estimation approaches all affect accuracy. In fact, a review of the literature suggests that no single machine learning model is clearly best for movement-based user intent identification [42]. Our study leaves room for improvements such as parameter optimization and confidence-based classification, as proposed by [39]. In addition, the SCI group had only two participants. Participant SCI-2 declared that although he enjoyed using the system, it demanded great physical effort. His shoulder movements were not as clean as participant SCI-1's, so he often compensated with trunk movements. This probably affected the system's overall performance and caused his movements to change from session to session, also affecting the results.
The experimental protocol with the SCI group took place in the context of a larger study. Neural electrical stimulation electrodes were implanted in the contralateral arm of each participant to restore hand functions, as described in [43]. Our interface was then used to trigger hand movements. Both participants successfully activated the stimulation using our algorithm. They also had the choice of other piloting strategies, including EMG or buttons. Participant SCI-1 enjoyed the IMU system, but found it too slow and would have preferred a faster one. Indeed, the stimulator acted at least 1.35 s after the movement started, as this was the length of the moving window. Beyond that window, there was no noticeable delay prior to classification. This participant performed very quick movements and would probably benefit from shorter windows.
Participant SCI-2 would have chosen the IMU as his control method. However, he would activate the system by accident when performing tasks with the (contralateral) stimulated hand. Since the system always classifies any detected movement (there was no "do nothing" class), it required the participant to keep his shoulder still when an activation was not required; therefore he would not be able to use it functionally. Finally, this method was designed to trigger preset commands in a neuroprosthesis [43]. Ideally, it would benefit from a complementary system capable of mapping residual body motions into command outputs in a multidimensional continuous control space, as in [44]. A possible integration of discrete and continuous control strategies has been hypothesized by [35] as a discrete classification system capable of navigating between multiple continuous control maps.

V. CONCLUSION
Assistive devices can dramatically improve the quality of life of persons with tetraplegia. However, those who lack hand function need an adapted interface to operate such devices. Here we presented an interface for controlling assistive devices with residual shoulder movements. We demonstrated, through experiments with able-bodied and SCI participants, that the algorithm is quick to set up and that its performance can be further improved through a Dataset Refinement process. We believe this interface can be easily integrated into upper limb neuroprostheses.