Inertial Sensors for Human Motion Analysis: A Comprehensive Review

Inertial motion analysis has attracted growing interest over the last decades due to its advantages over classical optical systems. Solutions based on inertial measurement units allow the measurement of movements in daily living environments, which is key for a realistic assessment and understanding of movements. This is why research in this field is still developing and different approaches are proposed. This paper presents a systematic review of the different proposals for inertial motion analysis found in the literature. The search strategy was carried out on eight different platforms, covering journal articles and conference proceedings written in English and published until August 2022. The results are analyzed in terms of the publishers, the sensors used, the applications, the monitored units, the algorithms employed, the participants of the studies, and the validation systems. In addition, we delve into the machine learning techniques proposed in recent years and into the approaches to reduce the estimation error. In this way, we provide an overview of the research carried out in this field, with more detail on recent years, and suggest directions for future work.


I. INTRODUCTION
Human motion analysis is an essential support tool for the assessment of movement parameters, which is especially important in the evaluation of workout routines, clinical rehabilitation and preventive treatments [1]. It is also becoming very popular for physical activity monitoring in the elderly. Indeed, as the population of developed countries ages, the demand for home-based rehabilitation and the need to obtain quantitative exercise data remotely will increase [2].
Optical methods are considered the gold standard in the motion analysis field because of their accurate measurements of kinematic and spatio-temporal parameters [3]. However, these systems entail several disadvantages, such as the high cost of equipment, the need for trained personnel, the large spaces required for installation and their restricted margin of maneuverability, which limits their use to controlled indoor environments.
Inertial motion analysis has emerged as a promising alternative to optical methods, attracting great scientific interest. Inertial systems are portable and can be used everywhere, which is an advantage over optical systems, commonly constrained to a limited space. That makes Inertial Measurement Units (IMUs) an affordable and user-friendly alternative for the estimation of human kinematics. These devices allow continuous monitoring of human motions in daily environments, which is crucial in order to obtain more reliable information than that obtained in sporadic laboratory tests. For these reasons, the use of IMUs for continuous monitoring of human motions has increased in the last few decades, as reported in [4].
Previous works extensively review the use of portable sensors. A recent review analyzes the integration of portable sensors in clothes to obtain physiological and motion information [5]. However, inertial sensors are not considered in that analysis, in spite of their frequent use in this field. Reviews that take into account the use of inertial sensors are focused on specific applications, such as sign languages or motion analysis [3], [6]. Works about inertial motion analysis, such as [3], address motion monitoring and kinematic feature extraction, but only consider the specific area of sport-related exercise evaluation, and their analysis covers the literature up to April 2017. A lower-limb-focused study is carried out in [7], but it does not provide a complete overview of the literature on inertial motion analysis. To the best of our knowledge, the last in-depth and generic systematic review on inertial sensors for human motion analysis is reported in [4], published in 2016. Since the number of publications about human motion analysis increases over time, as shown in Fig. 1, we consider there is a need to update the literature review on this topic. According to Fig. 1, the number of publications in the inertial motion analysis field has considerably increased since the previous review was published [4].
Furthermore, during the last years, Machine Learning (ML) methods have arisen and have been applied to inertial motion analysis. Consequently, an update is required to provide an overview of the algorithms analyzed in [4], such as Kalman filters, complementary filters, integration and vector observation, in combination with the novel ML-based methods. For these reasons, the main aim of this work is to review the current state of inertial sensors for human monitoring, especially considering the emergence and evolution of ML methods in this research field. Another objective of this work is to analyze the current trends and provide insights into inertial motion analysis. To do so, we review the published works on human motion analysis using IMUs and analyze the selected ones in terms of: 1) publisher and years, 2) sensors used, 3) type of estimations referred to the dimensions of the estimated magnitudes, 4) the aimed applications of the proposals, 5) the monitored motion units, 6) the algorithmic approaches, with an in-depth analysis of sensor fusion filters, data science algorithms and the approaches for error reduction, 7) the study participants and 8) the validation systems and metrics. Finally, on the basis of the findings, we suggest future research directions.
The rest of this document is structured as follows: Section II describes the search strategy and the eligibility criteria applied in this work; Section III details and analyzes the findings according to the terms explained above; Section IV discusses the general trends of the studied works and analyzes the future directions; and finally, Section V summarizes the main contributions of this work.

II. MATERIALS AND METHODS
In this section, we describe the workflow to search and select the state-of-the-art works included in this review. We describe the paper screening process and analyze the common publishers in this field. Finally, we detail the data extracted from them for the further analysis.

A. Eligibility Criteria
This review focuses on peer-reviewed articles, book chapters and conference papers. Papers are required to be published in English and to describe the methodology employed to obtain human kinematic parameters using only IMUs. Sensor fusion with other devices is not considered. This review only includes papers that validate their results using a reference system. If a journal paper is an extended version of a conference one, only the journal paper is included.

B. Literature Search Strategy
Considering the eligibility criteria, we select eight databases (ACM Digital Library, IEEE Xplore, PubMed, Science Direct, Scopus, Taylor & Francis Online, Web of Science and Wiley Online Library) for the search of related papers, see Table I. Following the strategy of the previous review on this topic [4], we use the same search command, which consists of: ("human motion" OR "human movement") AND ("wearable sensors" OR "inertial sensors" OR "wearable system"), considering their presence in the title, abstract or keywords. The search includes journals, book chapters and conference proceedings. The paper abstract is required to be available during this search. No restriction was imposed on the date of publication. The initial search on the databases in Table I leads to a review of 2 248 papers. The papers found in this search do not include important references from the state-of-the-art, such as [8] or [9], so we expand the search. The new search is only performed on the Scopus website, since it is the largest of all the evaluated databases (see Table I). In this case, we use the following command, which is less restrictive than the previous one: (("human motion" OR "human movement" OR "joint kinematics" OR "body tracking" OR kinematic* OR "joint angle*" OR "joint angle velocity" OR "joint angle acceleration") AND (imu OR "inertial sensors" OR "inertial measurement unit" OR accelerometer OR gyroscope OR magnetometer)).
The search criterion is to find these key phrases in the title, abstract or keywords of articles. This second search adds 1 882 documents, so we finally obtain 4 130 papers to review.
Starting from the results of this search, we carry out a Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) screening process [10] to determine the documents included in this study. Fig. 2 depicts the identification and screening processes used to determine the works included in this review.
After excluding duplicated citations, the number of documents to screen is reduced to 3 775 papers. The results of this search include works in the field of human motion analysis with IMUs. However, IMU-based motion analysis covers a wide range of topics, such as the estimation of kinematic and spatio-temporal parameters or the motion-based evaluation of health, as depicted in Fig. 3.
The topics of kinematic and spatio-temporal parameters refer to the analysis of different motion magnitudes, such as joint angles, trajectory or speed; whereas human body calibration includes the location of joints or the estimation of segment lengths. The last two topics, human monitoring and motion-based evaluation, are focused on qualitative analysis, such as recognizing types of motions or activities and identifying behavior patterns. Our study is focused on wearable inertial sensors and kinematic parameters such as joint rotation angles, so we discard by abstract reading those works that focus on any other topic. In this way, we exclude 3 262 papers for not being related to the topic of this review.
We found 513 potentially relevant studies for quality assessment. To consider a study in this review, we set the inclusion criteria detailed in Fig. 2, which refer to the proposed or applied algorithm, its validation and the sensor system used. Finally, 147 studies meet the inclusion criteria and are analyzed in this review. There is a clearly increasing interest in the research field of inertial motion capture (see Fig. 1). This review is not restricted to any date, in order to analyze all the works related to this topic and provide a general overview and its evolution. Fig. 1 shows the number of papers published during the 5-year periods from 1991 until August 2022. Only 2 works are dated in the first studied decade, 21 works in the second one and 98 works in the third one. There are also 27 works published in the period 2021-2022, the last years studied in this review.

C. Publishers

Since 2016, the last year studied in [4], the number of publications has increased considerably (see Fig. 5). These figures support the need for an update of a systematic review on the research topic of human motion analysis using IMUs.

D. Data Extraction, Analysis and Examination
We categorize the selected papers in terms of a set of relevant details. We classify them into two groups in order to ease the study and its reading. Firstly, we evaluate the details related to the implemented algorithms, the sensors in use and the estimations. Secondly, we study the specific anatomic part of the human body studied in each work, the validation system and metrics used, and the information related to the validation subjects.
Regarding the first set of details, we analyze the following parameters: the fusion algorithm (FA) implemented for the motion analysis, which indicates with "SF" that the work uses sensor fusion approaches, with "ML" the application of ML techniques and with "OA" any other proposal; the use of biomechanical constraints (BC) and their related requirements of anatomical information (ANT), such as the segment lengths or the joint location with respect to the IMU sensors; the implementation of other corrections (OC); the type of sensor used to measure the motions (GS: gyroscope sensor, AS: accelerometer sensor, MS: magnetometer sensor) and the use of external sensors to train ML-based algorithms, but not in the motion prediction (OS); the type of estimation (EST), considering the possible planar (2D) or three-dimensional (3D) estimations; the measured magnitude (ANG: angle, or DIS: displacement, referred to the change in the position of the corresponding point, i.e. the sensor or the monitored joint); and the monitored motion unit (JNT: joint, or SGM: segment). These details are shown in Table III in Appendix A for the selected papers.
With respect to the human body part, we study the lower-group (LG) or upper-group (UG) of segments and joints. We also report the validation system (VS) used as ground truth in the studied works and the metrics employed in the proposed methods (RMSE: root mean square error; nRMSE: normalized RMSE; %RMSE: percentage of RMSE; MAE: mean absolute error; AE: average error; CFC: correlation coefficient; LAM: limits of agreement; MV: maximum variation; Accuracy; and Error rate), labeled as M1 and M2 in Table IV in Appendix A. Finally, we provide the number of subjects (NS) studied and whether the considered population presents a motor-related disease (DSS). Table IV includes the details of these parameters for the selected papers.

III. REVIEW FINDINGS
Based on the categorization of the papers with respect to their relevant characteristics, presented in Table III and Table IV, in this section we describe the main findings.

A. Sensors
IMUs contain tri-axial gyroscopes, accelerometers and, commonly, magnetometers. The information from these sensors is used separately, through the observation of vectors, such as gravity in the accelerometer data or the magnetic field in magnetometers, or by integration of the gyroscope data. Another approach is to gather their measurements in different combinations of two or three sensors with different algorithms, such as sensor fusion filters or ML methods. In order to illustrate the proportion of their utilization, separately or fused, Fig. 6 shows the percentage of use of each sensor or combination.
The integration of the turn rate alone entails inherent errors. In the estimation of kinematic parameters, the turn rate integration results in an error accumulated from the gyroscope bias. For that reason, only 4.8 % of studies use this sensor alone [11]-[17].
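The drift mechanism described above can be illustrated with a minimal sketch (all values are hypothetical, purely for illustration): a constant gyroscope bias, once integrated, produces an orientation error that grows linearly with time.

```python
# Minimal sketch of gyroscope drift (hypothetical values): a constant
# bias of 0.01 rad/s, integrated over time, accumulates into an
# orientation error even when the sensor is perfectly still.
def integrate_gyro(rates, dt):
    """Rectangular integration of angular-rate samples into an angle."""
    angle = 0.0
    for r in rates:
        angle += r * dt
    return angle

dt = 0.01                            # 100 Hz sampling
true_rate = [0.0] * 1000             # sensor is actually still for 10 s
bias = 0.01                          # rad/s gyroscope bias
measured = [r + bias for r in true_rate]

drift = integrate_gyro(measured, dt) # accumulated error after 10 s
```

After only 10 s the estimate is already 0.1 rad off, which is why pure integration is rarely used on its own.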
Accelerometers are more frequently used separately (8.8 %). Their measurement of the specific force allows a direct observation of the gravity vector, used as orientation reference [18]-[30]. However, the direct observation of the gravity vector is only possible when accelerometers are static, as in gait strides during the stance phase.
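A minimal sketch of this gravity-vector observation, under the quasi-static assumption stated above (the numeric values are illustrative):

```python
import math

# Minimal sketch (hypothetical values): when the accelerometer is
# static, the measured specific force is just gravity, so a tilt angle
# can be read directly from the axis components.
def tilt_from_accel(ax, az):
    """Tilt angle (rad) of the sensor x-axis w.r.t. the horizontal,
    valid only while the sensor is quasi-static (e.g. stance phase)."""
    return math.atan2(ax, az)

g = 9.81
true_tilt = math.radians(30.0)                  # ground-truth tilt
ax, az = g * math.sin(true_tilt), g * math.cos(true_tilt)
estimate = tilt_from_accel(ax, az)              # recovers the tilt
```

During dynamic phases the specific force also contains motion acceleration, so this direct observation no longer holds.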
Magnetometers are the most limited of the analyzed sensors because of their sensitivity to magnetic disturbances in the environment. As a consequence, only 1.4 % of studies use this sensor independently [31], [32].

Fig. 6: Sensor type and combination used in the analyzed works (AS: accelerometer sensor; GS: gyroscope sensor; MS: magnetometer sensor).
Sensor fusion techniques are useful to overcome the individual limitations of each sensor. Most of the studies that fuse data from various sensors combine gyroscopes and accelerometers [8], [33]-[115] or both sensors with magnetometers [116]-[152]. Few studies join the accelerometer and magnetometer data [153], [154] and only one uses the gyroscope and magnetometer data [155].
The 3D position of devices can be inferred from the information of the IMU sensors. The combination of the three sensors, gyroscope, accelerometer and magnetometer, includes information about the angular rate of motion and the gravity vector and Earth magnetic field references. However, these 3D positions can also be estimated by sensor fusion techniques using different combinations of the three sensors in IMUs. Most studies give 3D estimations (71.4 %), as shown in Fig. 7. Conversely, only 1.4 % of works use accelerometers and magnetometers, and 25.2 % both sensors complemented with gyroscopes (see Fig. 6). This fact is noticeable because the combination of the first two sensors is required to obtain the references needed to overcome the gyroscope drift and get accurate 3D estimations. It implies that the majority of algorithms that offer 3D space predictions propose methods for error reduction that do not rely on vector references. In this way, the magnetic disturbances that cause errors in the magnetic field measurements are avoided.

Fig. 7: Type of estimations in terms of their dimensionality, divided between 2D and 3D estimations.
The 2D estimations include kinematic parameters in any plane perpendicular to the floor and the two angles with respect to the horizontal plane, in the frontal and sagittal planes. The 3D estimations are undoubtedly more complete, since they provide information about the whole motion even if it is mostly performed in one plane. That is the reason why only 27.9 % of studies focus on obtaining estimations in the 2D space.
Only one study adapts the estimation to 3D or 2D spaces according to the motions [41]. In this proposal, the method gives 2D estimations based on the accelerometer data when the motion is mostly performed in one plane. If deviations from this plane are detected, the method integrates the gyroscope data in order to provide 3D estimations.

B. Application
Healthcare applications are the most common ones (95.2 %) in the inertial motion analysis field (see Fig. 8). These applications include motion capture or analysis, gait and clinical assessment, and rehabilitation. The aim of 33.5 % of studies is motion capture, in order to obtain information about human kinematics for motion analysis or the detection of possible diseases. Gait is the second most common application (19.1 %) due to its relationship with cognitive impairments. The prevalence of specific clinical assessment is similar, being the aim of 14.5 % of works. Rehabilitation and sports are also worth mentioning, since they motivate a considerable share of research works (19.6 % of studies).

C. Monitored Motion Unit
We analyze the anatomical unit measured in the reviewed works. In this work, the anatomical units are called monitored motion units, following the nomenclature of previous studies [4]. We divide these monitored motion units into two groups: segments and joints. Segments usually correspond to elements of the skeletal system, such as thighs (femur), and are modeled as rigid-solid bodies. Joints are the unions between segments. The objective of 64.6 % of studies is to measure the motion of joints, whereas 27.2 % of proposals focus on tracking segments and the remaining 8.2 % combine the monitoring of both segments and joints, as shown in Fig. 9-top.
Studies focus more frequently on the lower-half (61.2 %) of the body than on the upper-half (34.7 %). Compared to the outcomes of the review of Lopez-Nava [4], this trend differs between the most recent works and the previous ones. We found that recent research, dated in the last three years, extends motion analysis to full-body monitoring, which is an important difference with respect to the findings of previous studies [4]. We consider monitoring full-body if both the upper- and lower-halves are monitored, which is done in 4.1 % of studies. Fig. 10-left depicts the percentage of works that monitor each body half or the full body.
We study the groups of segments or joints included in each body half for a deeper analysis. We define the groups as sets of monitoring units. We consider that studies focus on one of the groups if they estimate the orientation or location of one of the included monitored units. For example, a study that tracks the motion of wrists is included in the hand group, since we establish that the hand group includes the wrist among other motion units.

Fig. 8: Analysis of the application of the studied works. Left: percentage of proposals whose aimed field is included in healthcare-related applications. Right: specific application or applications considered in proposals. Some works provide possible uses of their proposals, others focus on a specific application, such as gait, and others refer to the general motion capture field. FES refers to functional electrical stimulation and the research of human-robot interactions (HRI) is labeled as "Human-robot".

Fig. 9: Monitored motion units and the obtained measurement. Top: percentage of studies that measure each motion unit or their combination. Bottom: type of measurement, orientation and location.
With respect to the upper-half of the body, we divide it into hand (hand, wrist and fingers), arm segments (arm and forearm), arm joints (shoulder, elbow and forearm twist), trunk (back, trunk and torso) and head and upper back (head/neck/scapula). The upper-half groups (51 works) are named as follows: arm segments (U1), trunk (U2), arm joints and hand (U3), head and upper back, trunk, arm segments and arm joints (U4), arm joints (U5), head and upper back (U6), arm segments and joints (U7), trunk, arm joints and hand (U8), head and upper back, trunk and arm (U9), head and upper back and arm joints (U10), head and upper back and trunk (U11), head and upper back and arm segments (U12) and head and upper back alone (U13). Fig. 10-right shows the presence of the combination of these groups in the studied works.
The arm joints group, U5, is the one on which most works focus (12/51 studies), followed by the trunk, U2 (9/51 studies). The next three most frequent groups are the arm segments, U1 (7/51 studies), the combination of arm joints and hand, U3 (7/51 studies), and the head and upper back, U6 (6/51 studies). The rest of the groups are only monitored in 1/51 or 2/51 studies, according to the case. Thus, research works commonly focus on the study of the arms more than on the other upper-half body structures.
With regard to the lower-half, we divide it into pelvis, leg segments (thigh and shin), leg joints (hip/knee/ankle) and feet. The names of the groups of their combinations are the following: leg segments and feet (L1), leg joints (L2), leg segments (L3), leg joints and feet (L4), feet (L5), pelvis, leg segments and leg joints (L6), leg segments and joints (L7), pelvis and leg joints (L8) and pelvis (L9). Fig. 10-right shows the number of works focused on each of these groups (the total number of works is 90).
In the lower-half of the body, it is noticeable that the leg joints, L2, are the object of monitoring of most works (63/90 studies). These joints are commonly studied in multiple applications, the most important being gait analysis because of its relevance in health assessment. It is worth mentioning the contrast between the number of studies about the L2 group and the L4 group, which combines the leg joints with the feet, meaning that the motion of feet joints is commonly discarded in studies focused on the lower-limb joints. Besides the leg joints, the following most studied leg-focused groups are: leg segments, L3 (8/90 studies), leg segments and joints, L7 (7/90 studies), and leg segments and feet, L1 (5/90 studies). The rest of the groups, which include leg joints and feet (L4), feet (L5), pelvis, leg segments and leg joints (L6) and pelvis (L9), are studied in few works (2/90 studies).
The monitored units in these groups are measured regarding their orientation or location. Orientation refers to the rotation angles, which are commonly presented as Euler angles or quaternions. Location refers to the spatial coordinates, so it is a measurement of distance. Most studies estimate the orientation of the monitored units (81.6 %), 15.6 % give a combination of orientation and location of units and only 2.7 % focus exclusively on providing locations. Fig. 9-bottom shows the distribution of these types of measurement.

D. Adopted Algorithms
The algorithms used in the estimation of the kinematic parameters can be separated into five different groups: integration, vector observation, sensor fusion filters, ML techniques and other methods.
1) Sensor fusion filters: Sensor fusion filters (SF in Table III), including Kalman Filters (KF), Particle Filters (PF) and Complementary Filters (CF), are the most frequently used algorithms. Specifically, KFs are still the algorithms employed the most in the inertial human motion analysis field, following the trend reported in previous studies [4].
The problem formulation of Bayesian filters consists of the identification of the desired estimations using a series of measurements observed over time that contain statistical noise and different inaccuracies [156]. The inputs and observations form the knowledge about the system's behavior, and both convey errors and uncertainties, namely the measurement noise and the system errors. These filters fuse the information of the sensors with the knowledge of the system in two stages: the estimation stage and the update stage. The first stage uses the information of the previous time instant to estimate the current value of the state vector. The second stage updates this estimation using the measurements from the sensors.
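The two stages above can be sketched for a scalar state, e.g. a single joint angle (the process model, noise values and measurement are illustrative assumptions, not from the reviewed works):

```python
# Minimal scalar Kalman-filter sketch of the two stages described above.
def kf_step(x, P, u, z, Q, R):
    # 1) Estimation (prediction) stage: propagate the previous state
    #    with the process model x_k = x_{k-1} + u (e.g. integrated
    #    turn rate); process noise Q inflates the uncertainty.
    x_pred = x + u
    P_pred = P + Q
    # 2) Update stage: correct the prediction with the measurement z
    #    (e.g. an accelerometer-based angle observation).
    K = P_pred / (P_pred + R)          # Kalman gain
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0                        # initial state and variance
x, P = kf_step(x, P, u=0.1, z=0.12, Q=0.01, R=0.04)
```

Note how the posterior variance `P` shrinks after the update: the measurement reduces the uncertainty inherited from the prediction.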
The motion analysis literature includes proposals with the Extended Kalman Filter (EKF), the KF and the Unscented Kalman Filter (UKF), in descending order of frequency of use. The KF is a sensor fusion technique that estimates the states of a linear system through the minimization of the variance of the estimation error [156]. KFs use a series of measurements observed over time and their statistical noise to produce estimates of unknown variables. EKFs appeared because KFs are limited to linear systems, being their generalization to non-linear systems. EKFs assume that the non-linearities in the dynamic and observation models are smooth, so they expand the state and observation functions in Taylor series and approximate in this way the next estimate of the state vector. However, this approximation can introduce large errors in the true posterior mean and covariance of the variables, which may lead to the divergence of the filter. One of the possible solutions is the use of UKFs, in which the distribution of the state vector is represented by a set of sample points called sigma points. Sigma points capture the actual mean and covariance of the Gaussian random variables and are obtained through the Unscented Transformation (UT). The UT is a method for calculating the statistics of a random variable that undergoes a nonlinear transformation. UKFs are an extension of UTs to recursive estimation, where the UT is applied to the augmented state vector.
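A minimal sketch of the UT for a scalar Gaussian state, pushed through the nonlinearity f(x) = x² (the parameter choices alpha = 1, kappa = 2 are illustrative assumptions):

```python
import math

# Minimal Unscented Transformation sketch for a scalar Gaussian state:
# three sigma points and their weights approximate the statistics of
# the state after a nonlinear transformation.
def unscented_mean(mean, var, f, alpha=1.0, kappa=2.0):
    n = 1                                    # scalar state
    lam = alpha ** 2 * (n + kappa) - n
    spread = math.sqrt((n + lam) * var)
    sigma_points = [mean, mean + spread, mean - spread]
    w0 = lam / (n + lam)                     # weight of the central point
    wi = 1.0 / (2.0 * (n + lam))             # weights of the spread points
    weights = [w0, wi, wi]
    return sum(w * f(p) for w, p in zip(weights, sigma_points))

# E[x^2] for x ~ N(0, 1) is exactly 1; the UT recovers it from 3 points,
# where a first-order (EKF-style) linearization around the mean gives 0.
ut_estimate = unscented_mean(0.0, 1.0, lambda x: x * x)
```

This contrast with the linearized estimate illustrates why UKFs can avoid the divergence issues of Taylor-series approximations mentioned above.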
UKFs appear less frequently in the literature. UKFs are commonly used for the sensor fusion of gyroscopes, accelerometers and magnetometers to estimate the joint locations [137] or orientations [130], [149], the orientation of joints and segments [131], and the orientation and location of both elements, joints and segments [121]. Some works do not use the magnetometer information and only fuse the gyroscope and accelerometer data to estimate joint orientations [99], [106].
PFs are another modification of KFs for their use in nonlinear systems [156]. PFs are close in functioning to UKFs, with a set of differences that make PFs a generalization of UKFs. PFs propagate the estimations with randomly generated noise, sampled according to the prior knowledge of the process noise Probability Density Function (PDF), instead of the deterministic update of UKFs. Another difference with UKFs is that the number of particles in PFs is not related to the length of the state vector. Finally, PFs estimate the PDF of the state instead of its mean and covariance, and this estimate converges to the actual PDF as the number of particles increases.
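The propagate-weight-resample cycle described above can be sketched for a scalar state (all numeric values, noise levels and the simplified multinomial resampling are illustrative assumptions):

```python
import math
import random

# Minimal particle-filter sketch: particles are propagated with randomly
# sampled process noise (the stochastic update described above) and
# re-weighted by a Gaussian likelihood of the measurement.
def pf_step(particles, u, z, q_std, r_std, rng):
    # propagate each particle with random process noise
    moved = [p + u + rng.gauss(0.0, q_std) for p in particles]
    # weight each particle by the likelihood of the measurement z
    weights = [math.exp(-0.5 * ((z - p) / r_std) ** 2) for p in moved]
    total = sum(weights)
    weights = [w / total for w in weights]
    # simplified multinomial resampling
    return rng.choices(moved, weights=weights, k=len(moved))

rng = random.Random(0)
particles = [rng.gauss(0.0, 1.0) for _ in range(500)]
particles = pf_step(particles, u=0.1, z=0.1, q_std=0.05, r_std=0.1, rng=rng)
estimate = sum(particles) / len(particles)   # posterior mean estimate
```

Unlike the KF sketch, the output is a set of samples approximating the full posterior PDF; the mean is just one summary of it.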
PFs are less popular than any kind of KF. PFs are applied to fuse the measurements from gyroscopes, accelerometers and magnetometers to estimate the orientation of segments and joints [125]. PFs are also combined with other KFs, such as EKFs, for the fusion of gyroscope and accelerometer measurements to estimate the orientation of segments [94].
CFs combine the information from different sensors by minimizing the mean-square error instead of the error covariance, which is minimized in KFs [157]. CFs are used to fuse the measurements of gyroscopes, accelerometers and magnetometers to estimate the orientation of joints [123], [126], [133], their orientation and location [119], [144], and their orientation together with the segment orientations [143]. They are also used to estimate the joint orientation either by combining the information of the gyroscope and the accelerometer [45], [47], [62], [65], [101], [109], or with the gyroscope alone [11].
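A common first-order complementary-filter sketch for gyroscope-accelerometer fusion (the blend weight 0.98, the constant bias and the tilt value are illustrative assumptions, not from the reviewed works):

```python
# Minimal complementary-filter sketch: high-pass the integrated gyro
# angle and low-pass the accelerometer angle with a fixed blend weight.
def cf_step(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

angle = 0.0
dt = 0.01
# stationary sensor tilted at 0.2 rad: a biased gyro (0.01 rad/s) alone
# would drift without bound, but the accelerometer observation keeps
# pulling the estimate back toward the true tilt
for _ in range(2000):
    angle = cf_step(angle, gyro_rate=0.01, accel_angle=0.2, dt=dt)
```

The estimate settles near 0.2 rad with only a small steady-state offset caused by the gyro bias, instead of drifting as pure integration would.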
Another alternative to KFs is the Weighted Fourier Linear Combiner (WFLC) filter, which is a model-based adaptive filter. WFLCs exploit prior knowledge of the signal shape and its evolution over time, in those cases when the performed motion is given [93]. These filters are especially effective on periodic signals, but adapt to variations between repetitions. Their applications to human motion analysis include the use of the turn rate measurements to estimate the segment orientation [14] and the combination of the gyroscope data with the accelerometer measurements to estimate the orientation of joints [93].
2) Data science algorithms: ML techniques represent the second group of algorithms most frequently applied to the estimation of human kinematics. Furthermore, supervised learning algorithms are the most widespread in recent years. Supervised learning is one of the most employed learning paradigms, which tries to discover the unknown function f(x, ω) that relates the input space X ⊂ R^n (which, in this work, are the inertial measurements) with the output space Y ⊂ R (which describes the motion kinematics). Each pair (x_i, y_i) is composed of the values x_i ∈ X of a set of n predictive variables, which are measured by the IMUs, and its corresponding output value y_i ∈ R, which is the target value of joint or segment orientation and location. During the process called training, supervised algorithms retrieve the map f ∈ F from the provided training dataset D, typically by establishing an optimization problem that minimizes a loss function L. Different parametric function spaces F with different learning methods correspond to the existing variety of supervised methods, as described in depth in Table II.
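The training setup above can be sketched with the simplest possible function space, a linear map fitted by minimizing the squared loss in closed form (the data are synthetic and the single-feature setting is an illustrative assumption):

```python
# Minimal supervised-regression sketch: learn a linear map f(x) = w * x
# from (inertial feature, joint angle) pairs by minimizing the squared
# loss. The closed-form minimizer is w = sum(x*y) / sum(x*x).
def fit_linear(xs, ys):
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

xs = [0.1, 0.2, 0.3, 0.4]           # e.g. an accelerometer-derived feature
ys = [2.0 * x for x in xs]          # target joint angles (true slope = 2)
w = fit_linear(xs, ys)              # training: recover the map from D
prediction = w * 0.25               # inference on an unseen input
```

Richer function spaces F (trees, kernels, neural networks, discussed next) replace this closed-form step with iterative optimization, but the ingredients, dataset D, loss L and map f, are the same.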
Gaussian Processes (GPs) are kernel-based probabilistic ML models. A Gaussian process is a kind of continuous random process f(t) such that every finite set of its random variables has a multivariate Gaussian distribution [158]. The GP method estimates the output y by introducing a set of latent variables {f(t_k)}, k = 1, ..., n, from a Gaussian process and explicit link functions g(·). Gaussian Process Latent Variable (GPLV) models are used with the gyroscope and accelerometer data to estimate the segment positions [58].
Other classical ML methods are Decision Trees (DTs) and Support Vector Machines (SVMs). A DT is a classical ML method that builds a tree, a particular graph without cycles, by branching decision paths for each considered input variable to make the final classification [159]. During the training process, databases are used to compute the thresholds (the parameters in DTs) that best branch the input variable according to an optimization criterion, usually the best possible information gain in the current node (optimizing the entropy), for a better prediction of the output variable.
SVMs are one of the most used ML methods for classification [160], [161]. They establish an optimization problem to find the so-called support vectors, those training data which are closest to the separation hyperplane and maximize the soft margin. Frequently, this method uses the kernel trick, which consists of choosing an appropriate non-linear mapping ϕ that maps input samples into a higher-dimensional space where they are likely to be linearly separable. In regression problems, the support vectors are used to provide a continuous value through a link function instead of classes.
However, these classical ML methods are less promising than artificial Neural Networks (ANNs) in the human motion analysis field, as proved in [109] for the correction of the joint angles initially obtained from sensor fusion filters. ANNs consist of a set of connected base units known as artificial neurons, which emulate the biological neurons of animal brains [162]. ANNs are usually organized in interconnected layers, creating a huge variety of networks that try to represent the functional relation between the input and output variables. ANNs have revolutionized the ML field due to their ability to model very complex non-linear input-output relations and their capacity to learn them from a huge amount of data. The single Multi-Layer Perceptrons (MLPs) were the first ANNs.
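A forward pass of a minimal MLP can be sketched as below; the weights are arbitrary example values, not learned from data, and the two inputs stand for, e.g., one gyroscope and one accelerometer channel.

```python
import math

# One hidden layer with tanh activations, linear output unit.
def mlp_forward(x, W1, b1, W2, b2):
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return sum(w * h for w, h in zip(W2, hidden)) + b2

# 2 inputs -> 3 hidden units -> 1 output (illustrative weights).
W1 = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]
b1 = [0.0, 0.1, -0.1]
W2 = [1.0, -0.5, 0.2]
b2 = 0.05
y = mlp_forward([0.3, -0.7], W1, b1, W2, b2)
```

Training such a network adjusts W1, b1, W2, b2 by backpropagating the gradient of the loss, following the same loss-minimization scheme described above.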
In the inertial motion capture field, ANNs use the accelerometer data as inputs to estimate the segment orientation and location [27], combine the gyroscope and accelerometer data to estimate joint orientations [68], [71], [84], [91], or fuse the information of the three sensors integrated in IMUs to estimate the segment angles [147]. Other specific types of ANNs merge the gyroscope and accelerometer data to estimate the joint angles, such as the general regression NNs [49], [90] or the Elman Neural Networks [115].
Deep Neural Networks (DNNs) arose later than ANNs and encompass a huge number of modern network architectures with a high number of interconnected layers [163]. Current technology allows massive computation during the training process and, hence, new varieties of interconnections and predictions in real time. DNNs started with the Convolutional Neural Networks (CNNs), a large sequence of convolutional layers configured in cascade, where each layer computes the convolution operation (see [164]) on the output of the previous one.
They are able to extract intrinsic local features, the so-called deep features, which surpass the results of the classical ML methods. VGG [165] and Residual Networks (RESNET) [166] are famous CNNs included in this category. Most DNNs, including CNNs, are feed-forward networks, which means that the information flows forward and they do not include cycles. However, DNNs also include recurrent networks, which memorize internal states and are frequently exploited for temporal sequences, such as the improved Recurrent Neural Network (RNN) [167], which evolved into the novel Long Short-Term Memory (LSTM) [168], Gated Recurrent Unit (GRU) [169] and nonlinear autoregressive neural network with exogenous inputs (NARX) [170].
Among the deep learning algorithms, LSTMs are the most utilized. LSTMs are made of a sequence of cells capable of keeping previous states, specifically two kinds of temporal information: the long- and short-term memory. They have replaced the RNNs, which suffer from the vanishing gradient problem during training, and include forget gates to quickly adapt to new changes in the data. These DNNs can use just the information of accelerometers and the orientation of a set of body segments to estimate the whole-body posture [29], or fuse the information of the specific force with the turn rate to estimate the joint angles [53], [69], [86], [103]. A less common approach includes the fusion of gyroscopes and magnetometers to estimate the joint angles [155]. LSTMs can also be used to estimate the orientation of the whole-body joints using the orientation obtained with sparse commercial sensors [148]. In [69], LSTMs are combined with CNNs to estimate the joint angles. CNNs are also used to obtain the joint angles using only the accelerometer data [21] or fusing the gyroscope and accelerometer data [56], [111], [114]. In [34], Mundt et al. compared these previous methods, CNNs and LSTMs, together with MLPs for the estimation of joint orientation. Using the information of gyroscopes and accelerometers, CNNs provided the most favorable metrics. Other RNNs are also used to estimate the joint orientation. To estimate the joint angles from gyroscopes and accelerometers, [54] proposes a NARX; [134] also includes the magnetometer data with NARX and LSTMs.
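A scalar sketch of one LSTM cell step (the generic formulation, not the exact architecture of any reviewed work) illustrates how the forget (f), input (i) and output (o) gates control the long-term cell state c and the short-term hidden state h; the weights below are illustrative.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One LSTM cell step with scalar states (illustrative weights w).
def lstm_step(x, h_prev, c_prev, w):
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])    # forget gate
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])    # input gate
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])    # output gate
    g = math.tanh(w["wg"] * x + w["ug"] * h_prev + w["bg"])  # candidate
    c = f * c_prev + i * g        # long-term memory update
    h = o * math.tanh(c)          # short-term memory / output
    return h, c

w = dict(wf=0.5, uf=0.1, bf=0.0, wi=0.6, ui=0.2, bi=0.0,
         wo=0.4, uo=0.3, bo=0.0, wg=0.8, ug=0.1, bg=0.0)
h, c = 0.0, 0.0
for x in [0.2, -0.1, 0.5]:        # e.g. a short gyroscope sequence
    h, c = lstm_step(x, h, c, w)
```

The multiplicative gates are what keep the gradient from vanishing across long sequences, which is the property exploited by the reviewed proposals.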
The use of optical systems also allows the generation of data for the training and testing of the algorithms and increases the data available to train the models. These tasks can be performed with simulation software, e.g. OpenSim [171], as in [86], [103], [148], by applying kinematic relationships to the stereophotogrammetric measurements, as in [19], [53], [56], [68], [69], [91], [103], or with data augmentation techniques [68].

3) Other algorithms: Over the years of research on motion analysis with inertial sensors, proposals have been based on various algorithms other than sensor fusion filters and data science methods. These proposals range from the integration of the gyroscope data to estimate the joint orientation [12], [15]-[17], [41], to its combination with the direct use of the data from accelerometers to estimate the orientation of joints [80], [104], [105] and segments [59], [60], [63], [81], and to estimate the orientation and location of segments [110], [112]. The measurements from gyroscopes and accelerometers are also used directly to obtain the orientation and location of joints [35] and segments [46], and to estimate the orientation of both joints and segments [48], [51], [113]. The information of the three sensors in the IMU is also directly used for the estimation of the segment orientation [141].
Different works exploit the observation of the gravity vector by the accelerometer for the estimation of the orientation of joints [18], [22], [24], [30], [61], segments [28] or both [20]. Other works also use the data of the magnetometer to estimate the joint orientation and location [31], or combine this information with the measurements of accelerometers to obtain the joint orientation [153]. The gravity vector can also be observed by eliminating the linear acceleration of the motions, which can be estimated from the turn rate measurements [73].
Besides the gyroscope integration and the direct observation of vectors, the measurements from IMU sensors can be combined through the use of virtual sensors. A virtual sensor estimates the measurements that a sensor would obtain if it were located in the joint. This measurement projection is commonly performed to simulate the measurements in the joints, where it is not possible to place the sensors, while the IMUs are placed on the segments. This approach is used to combine the gyroscope and accelerometer measurements to estimate only the joint orientation [37], [46], [83], or both the joint orientation and location [89]. Another application of virtual sensors is to combine the turn rate, specific force and magnetic field to obtain these magnitudes in the joints and estimate their orientation [129], [132], [150], [151], or to do so without considering the measurements from the gyroscope [154]. Another common method is gradient descent. Gradient descent is applied to obtain the joint orientation by using the measurements from gyroscopes [13], also combined with the specific force measurements [39], [64], [102]. This approach also allows gathering the measurements of the three sensors to estimate the segment orientation [116].
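The virtual-sensor projection for the accelerometer can be sketched with rigid-body kinematics: the acceleration the IMU would measure at the joint centre is a_joint = a_imu + α × r + ω × (ω × r), with r the IMU-to-joint vector, ω the turn rate and α its derivative. This is a generic formulation, and the numerical values below are illustrative only.

```python
# Hedged sketch of a "virtual sensor": project the acceleration
# measured at the IMU mounting point onto the joint centre.
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def virtual_accel(a_imu, omega, alpha, r):
    tang = cross(alpha, r)                 # tangential component
    cent = cross(omega, cross(omega, r))   # centripetal component
    return tuple(a + t + c for a, t, c in zip(a_imu, tang, cent))

a_joint = virtual_accel(a_imu=(0.1, 9.81, 0.0),   # m/s^2, includes gravity
                        omega=(0.0, 0.0, 2.0),    # rad/s
                        alpha=(0.0, 0.0, 0.5),    # rad/s^2
                        r=(0.2, 0.0, 0.0))        # m, IMU -> joint vector
```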
The remaining proposals use a wide variety of methods and approaches to monitor the measurement units. These methods include probabilistic graphical models [122], which are employed with the gyroscope, accelerometer and magnetometer measurements to estimate the orientation of joints, smoothing algorithms [36], bidirectional low-pass filters [19], least squares [75], [87], optimization techniques [25], [67], [107] and modified iterative algorithms [23]. The last approach worth mentioning is the double-sensor difference based algorithm [26], which combines the measurements from two accelerometers placed on the same segment with the knowledge of their positions with respect to the joint.
4) Approaches for error reduction: This section describes the main approaches found in the literature to reduce the errors in the estimation of kinematic parameters. First, we focus on the explanation of the proposals based on biomechanical constraints, and then we summarize the approaches for error reduction based on the properties of the inertial sensors and their motions.
Biomechanical constraints are a promising resource to improve the inertial human motion analysis. A common approach is to model the rotations of different joints with different Degrees Of Freedom (DOF), which are depicted with cylinders in the geometrical model in Fig. 12. The three rotational DOF recorded by IMUs can be modeled as one or two DOF joints, according to the possible anatomical motions.
Another approach related with the simplification of motions to a lower number of DOF is to model the motions as if they occurred in one or two planes. This approach reduces the 3D space by, at least, one dimension. Different motions, such as gait or squats, can be approximated as 2D in the sagittal plane [105], [109], [134] or with the combination of the sagittal and coronal planes [52], [97].

Fig. 12: Scheme of the biomechanical constraints commonly implemented in the kinematic models for the inertial motion analysis. It includes the geometrical model with a reduced number of DOF in the knee and ankle joints and the limitations with respect to the ROM, using the knee as example. The lengths and IMU-joint vectors used in the soft constraints are labeled as d_S, d_T and r_i with i = 0, 1, 2, respectively.
The separation of motions into the DOF available for the joints allows another restriction based on the anatomical ROM of the joints. This constraint is based on the correction of the estimations that are not consistent with the anatomically possible ROM per DOF of the joints. As depicted in Fig. 12, the ROM of a joint, in this case the knee, includes the consistent estimations and the estimations on the limit of the ROM. The values of angles outside this range are wrong estimations of the algorithm, and the objective is to detect and correct them. This approach can be found in several proposals in the literature [59], [79], [87], [120], [136], [143].
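In its simplest form, a ROM constraint clamps the estimated angle back to the anatomically possible range; the 0–140 degree knee flexion range below is an assumed example, not a clinical reference.

```python
# Minimal ROM constraint: estimates outside the anatomically possible
# range are clamped back to its limits (range values are illustrative).
def apply_rom_constraint(angle_deg, rom=(0.0, 140.0)):
    low, high = rom
    return min(max(angle_deg, low), high)

corrected = [apply_rom_constraint(a) for a in [-5.0, 30.0, 150.0]]
```

More elaborate proposals use the constraint inside the estimator itself, e.g. as a bound in the optimization, rather than as a post-hoc clamp.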
An alternative approach is to relate the linear acceleration experienced by the IMUs with the linear acceleration in the common joint between segments. This approach requires removing the gravity influence from the specific force measured by accelerometers. If the gravity influence is eliminated, Eq. (2) can be applied with the derivative of the turn rate.
The IMU-joint vectors combined with the segment lengths or the joint-joint vectors are commonly used to estimate the kinematics of chains of segments. This is frequently performed with the Denavit-Hartenberg (D-H) notation, which uses four angular and distance parameters to relate reference frames with the links of spatial kinematic chains [172]. In order to apply the D-H convention, one reference frame is defined for each DOF included in the biomechanical model. The axes of the consecutive reference frames, i−1 and i, must follow two rules: the x_i axis must be perpendicular to z_{i−1} and the x_i axis must intersect z_{i−1}. In this way, the transformation matrix T_{i−1,i} detailed in Eq. (3) defines the transformation between consecutive frames.
where θ_i is the angle between the x_{i−1} and x_i axes, about the z_{i−1} axis, and β_i is the angle between the z_{i−1} and z_i axes, about the x_i axis. This transformation of consecutive frames allows the estimation of the forward kinematics of a chain of joints by using Eq. (1) and Eq. (2), as performed in [74], [79], [87], [121], [122].
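The chained-transform idea can be sketched with the standard D-H parameterization (θ, d, a, α), which is compatible with the notation above; the two-link planar chain below uses illustrative link lengths, loosely standing for a thigh and a shank.

```python
import math

# Standard Denavit-Hartenberg homogeneous transform between
# consecutive frames (theta, d, a, alpha parameterization).
def dh_transform(theta, d, a, alpha):
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [[ct, -st * ca,  st * sa, a * ct],
            [st,  ct * ca, -ct * sa, a * st],
            [0.0,      sa,       ca,      d],
            [0.0,     0.0,      0.0,    1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Forward kinematics of a 2-link planar chain (illustrative lengths).
T01 = dh_transform(math.pi / 4, 0.0, 0.4, 0.0)   # thigh-like link
T12 = dh_transform(-math.pi / 6, 0.0, 0.4, 0.0)  # shank-like link
T02 = matmul(T01, T12)                            # frame 0 -> frame 2
```

Chaining one such transform per DOF of the biomechanical model yields the forward kinematics of the whole segment chain, as used in the works cited above.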
Unlike the limitations of DOF or ROM, which directly model or correct the estimations, the location of joints and the segment lengths are not imposed on the estimates; instead, they are used to impose constraints on the measured magnitudes. For that reason, the restrictions enforced through these conditions are commonly known as soft constraints. It is worth mentioning that the errors in the estimation of the IMU-joint vector directly influence the estimation of the joint angles that use these soft constraints [87].

E. Participants of the study
This work also analyzes the number of subjects that participate in the studies to validate the methods. Fig. 13-top shows the boxplot of the distribution of subjects in the studies analyzed in this work (147). The boxplot presents the 1st, 2nd, 3rd and 4th quartiles of the studied subjects together with the outliers. In this work, the outliers represent the few studies that test their proposals on more than 18 subjects (8/147 studies). According to Fig. 13-top, most results provided in the studies correspond to a population of less than 10 subjects. It is worth mentioning that the median is 3 subjects per study, which makes the results hardly generalizable to the whole population. Furthermore, more than one third of the studies test their proposals with only one person (34.7 %). The studies that validate their proposal with the highest number of volunteers commonly test ML-based algorithms. This number of volunteers is required by these algorithms because they work with a high amount of data in order to develop generalizable models. However, 65.4 % of these studies use the data from the optical systems to generate simulated inertial data, as shown in Fig. 11.
Studies analyze volunteers with or without diseases related to the motor system. We assume that, if there is no statement about whether unhealthy people are included, the studied population is healthy or has no illness that affects the performance of motions. In this way, only a small percentage (8.2 %) of proposals are tested on populations with these motor limitations (see Fig. 13-bottom). This is remarkable since most proposals claim healthcare applications among their possible uses, as seen in Section III-B (95.2 %).

F. Validation Systems and Evaluation Metrics
For the validation of the reviewed works, researchers use different systems, as shown in Fig. 14. The gold standard is the 3D optical motion capture system, such as the commercial Vicon [175] or Optitrack [176], which is the most widely employed. This system is commonly used for the validation of proposals (68.0 %) and, sometimes (4.8 %), in combination with force platforms, or with a simulation software (0.7 %). 2D optical systems that can obtain a reference in the image plane are also used (4.8 % of the works). In some works, the 2D optical systems are combined with depth sensors (0.7 %). Another approach worth mentioning to validate the algorithms is the use of the output values of commercial inertial systems that provide highly accurate measurements, as done in 6.8 % of the studies. IMUs of the Xsens commercial brand are the most frequently used for the validation of proposals [177]. Four of the eleven works that validate their algorithms against inertial sensor outputs use these sensors [109], [110], [147], [148], whereas the remaining seven works use IMUs of seven different brands.
A less common solution includes the use of analog and electronic goniometers (8.2 %). Other validation systems, which in combination sum up to 6.8 % of the works, include different motion simulation programs, encoders and potentiometers.
The accuracy metric most frequently reported for the validation of proposals is the root mean square error (RMSE). In some works, the correlation coefficient or the mean absolute error (MAE) are also provided. For the case of angle measurements, the studied works report an RMSE between 2.59° and 7.67°. Although on average most studies provide similar metrics, it is worth mentioning that the RMSE range of ML methods is between 2.48° and 5.70°, whereas the RMSE provided by the classical methods is between 2.24° and 7.80°. These results prove that ML methods are promising approaches in the human motion analysis field in spite of their limitations related to data availability.
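For reference, the two metrics named above are computed as follows for a toy set of reference vs. estimated joint angles (values illustrative only).

```python
import math

# RMSE and MAE between reference and estimated angle series.
def rmse(ref, est):
    return math.sqrt(sum((r - e) ** 2 for r, e in zip(ref, est)) / len(ref))

def mae(ref, est):
    return sum(abs(r - e) for r, e in zip(ref, est)) / len(ref)

ref = [10.0, 20.0, 30.0, 40.0]   # reference angles (degrees)
est = [11.0, 18.0, 33.0, 39.0]   # estimated angles (degrees)
err_rmse = rmse(ref, est)
err_mae = mae(ref, est)
```

Because RMSE squares the residuals, it penalizes occasional large errors more strongly than MAE, which is why both are often reported together.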

IV. DISCUSSION
After the previous in-depth study, we discuss the review findings in terms of the general trends and the future guidelines in the inertial motion monitoring field.

A. General Trends
This paper analyzes the current state and the research trends in the inertial motion analysis field. The analyzed works show the interest in developing an alternative to the gold standard system, based on cameras, due to its high cost and the required space of use. As a consequence, the research focused on developing IMU-based systems for human motion analysis has increased over the last years, as seen in Fig. 5.
The sensors integrated in IMUs (accelerometer, gyroscope and magnetometer) are fused in different ways in the analyzed works, as shown in Fig. 6. The fusion of gyroscopes and accelerometers is more common than the use of magnetometers, which is one of the main differences with respect to the findings in previous reviews [4]. However, the use of magnetometers has been spreading during the last years, with 10 works. Bayesian filters and trigonometric approaches are the ones that most frequently employ the data from the magnetometer. ML proposals barely rely on the measurements of the magnetic field and only use them in the training step of the algorithms. In this way, the works focused on ML proposals are not limited by magnetic disturbances.
The common objective in 72.4 % of the studies (see Fig. 7) is to obtain 3D kinematic parameters. The 2D estimations are useful in human motion analysis because some movements can be simplified as motions in one plane, e.g. knee or elbow flexion-extension, or even gait and squats. However, these estimations can miss relevant information about the motions, such as their correctness or symptoms of motion-related diseases. Obtaining the complete kinematic information is especially important in healthcare applications, which are considered in 95.2 % of the works (see Fig. 8).
Most of the analyzed works (33.5 %, as shown in Fig. 8) mention the generic motion capture field for human motion analysis, closely followed by the gait evaluation, as the aimed applications of their work. That reflects the interest in developing more affordable and user-friendly alternatives to the optical systems, as previously discussed. Consequently, the analyzed works frequently propose algorithms to monitor the orientation of joints, which is commonly measured by stereophotogrammetric systems, such as Vicon [175]. A total of 64.6 % of the works focus on joints and 81.6 % on the estimation of the orientation. These percentages imply a great advance in the direction of inertial solutions for human motion analysis, especially compared to the trends reported in [4], where most works studied the orientation of segments.
Another interesting analysis focuses on the distribution of works with respect to the analyzed body part, divided into the upper and lower halves. A total of 61.2 % of the works (see Fig. 10) study the lower half of the body, and most of them focus on the leg joints, which is also consistent with the trend of gait evaluation besides the motion analysis. Conversely, only 34.7 % (see Fig. 10) study the upper half, which includes the arms and trunk, difficult to monitor due to the DOF and the complexity of joints such as the shoulders or the neck. The results in Fig. 10 show that, during the last years, the research has focused mostly on the lower half of the body, the opposite of what happened in [4], where most of the reviewed works analyzed upper limbs. However, monitoring the upper half of the body is crucial for the evaluation of motions, being especially important in the rehabilitation of cognitive alterations or illnesses, such as strokes. The remaining 4.1 % (see Fig. 10) of proposals are aimed at monitoring the whole-body posture, which is the most complete approach for human monitoring. Even though the gait analysis is commonly performed by monitoring the lower limbs, upper limbs are also important to study relevant features, such as balance, in clinical assessments. The rising interest in considering the full body is also another noteworthy difference compared to previous findings.
For full-body monitoring, ML methods are especially attractive. Sensor fusion algorithms use one IMU per segment to monitor the whole-body posture, or model biomechanical relationships between segments to reduce this number with different constraints. Conversely, the approach of ML-based proposals focused on the whole-body posture is to optimize the number of devices with the use of the so-called sparse IMUs, as in [29], [58], [147], [148], [155]. This approach is also used to monitor specific limbs, such as the legs, reducing the number of sensors [21], [100], [134].
The biomechanical approaches for error reduction can restrict motions and might not generalize to populations with motor-related diseases. For instance, the ROM of joints can be different in people with anomalous physical abilities. Likewise, the assumption of a number of DOF can miss relevant information about motions out of the main directions. Also, the knowledge of the segment lengths or the location of the sensors on the body is not always available in practical applications. Different IMU-joint calibration methods have been proposed to address this limitation. The first approach is to obtain an average location of the joints with respect to the sensors, which has been validated for the upper and lower limbs [178], [179], respectively, but requires specific calibration motions. The second method consists in estimating an adaptive position vector, considering the changes of the location of IMUs due to soft tissue artifacts [180]-[183], which has been validated for the calibration of hips performing leg circles [182]. These proposals assume that the joints are fixed, but that is not the case in all the activities of daily living. In [184], the calibration of moving joints with soft tissue artifacts is addressed.
As in previous findings [4], the sensor fusion algorithms are employed more commonly than other approaches. However, their use during the last decade remains stable (around 24 papers), whereas the use of ML techniques has increased from 4 papers to 18 papers. These ML techniques provide a slight improvement in the accuracy metrics, namely a reduction of the maximum RMSE of 2° in angle measurements. However, no study analyzed in this work makes a fair comparison using common data to test both approaches.
Sensor fusion filters and data science algorithms differ in terms of their computational costs. Computational time varies among the different methods depending on their implementations. Sensor fusion filters are faster than data science methods, which require more calculations, especially the DNN-based ones, whose number of parameters is higher. Also, ML- and DNN-based methods usually demand more memory than the sensor fusion solutions, especially in their training stage, making their implementations more expensive.
ML algorithms are more robust to variations in the intrinsic noise of the sensors with which they are trained. In addition, their robustness can be increased by generating synthetic data to which more noise models are added. Conversely, the sensor fusion algorithms include parameter tuning to adapt them to the sensors used, e.g. the covariance matrices of KFs. Thus, sensor fusion algorithms would require a previous study to estimate these sensor-dependent matrices.
ML methods require a high amount of reference data to be trained. Two alternative trends are followed in order to generate reference data: 1) to simulate the inertial data from the optical data to use them as inputs, or 2) to use the orientation data obtained by commercial systems as reference. In the first case, the simulated inertial data might not present the intrinsic errors of IMUs, whereas the inertial data used as reference present an error of around 0.5°, depending on the commercial brand, which is less accurate than optical systems.
With regard to the validation, new reference systems have appeared during the last years. Among them, we find 2D visual systems, encoders and computational models. The 3D optical systems are still the ones most frequently used (68.0 % of studies, see Fig. 14). However, the use of this validation system limits the testing of the proposals in daily activities, and alternative validation methods should be investigated [7].
The reviewed studies generally analyze a low number of participants for the validation of the algorithms. This limitation was detected in [4] and still remains in recent works. Most studies test their results on only one volunteer, and the average number of study subjects is 4 participants. This makes the proposals hard to generalize to the whole population. In the studies that involve more participants, the inertial data are simulated from the optical data or their reference consists of the orientation outputs obtained from the IMUs, including the errors previously indicated.
Most studies analyze healthy participants. That is noticeable since most studies consider healthcare applications as possible uses of their proposals. However, only a few of them (8.2 %) test their proposals on subjects with motor-related diseases.

B. Future Advancements and Developments
This review highlights a set of clear trends. The studies describe the motions in the 3D space more frequently than reducing them to planar motions. This is crucial to describe complex motions that can be performed during daily life, so it is required for an out-of-the-lab analysis. Furthermore, the reduction of the gait, or simpler motions such as knee flexion-extension, to a plane eliminates relevant information about these motions.
The reviewed works focus on the lower limbs, specifically on the orientation of the hip, knee and ankle. Future research should include the upper limbs or even focus on the development of whole-body posture monitoring for a complete description of motions. In this line of work, the proposal of sparse-IMU utilization is promising to decrease the number of sensors in use, which is required for the motion analysis in all environments. Moreover, the monitoring of complex joints, such as shoulders or hips, which are usually modeled as 3-DOF joints, should include all their DOF for a proper kinematic analysis.
With regard to the algorithms in use, the current trend moves from the Bayesian filters, which we consider the classical ones, to ML algorithms, especially deep learning algorithms. For the development of these novel proposals, more data with an accurate reference are required, as described in [185], [186], in order to avoid the use of data from IMUs and simulations from the optical systems as ground truth. One of the main limitations of the biomechanical constraints found in the literature is their generalization to wide and varied populations, where the constraints based on ROM and DOF exclude people with motor diseases. In this way, new proposals should be adaptable to the populations under study. Also, alternatives to obtain the IMU-joint vector based on inertial devices are needed in order to make the proposals that exploit the biomechanical relationships suitable for out-of-the-lab environments.
Common data with inertial measurements and their reference are needed in order to obtain a fair comparison of the existing and new proposals. In that line of research, future proposals should be validated on a larger number of volunteers than is currently the case. This should also ensure the variability of motions and not be focused only on the gait.

V. CONCLUSIONS
This work has reviewed the studies focused on human motion analysis based on IMUs. The date of publication of the reviewed papers is not limited, so we provide an overview of the proposals from the first study to the current date. This overview summarizes the algorithms, the combinations of sensors, the anatomical units monitored, the subjects of study and the validation approaches in the research on inertial monitoring. The review also focuses on the studies of the last decade, so we analyze the latest trends in this research field. Most of the analyzed works focus on obtaining the 3D estimation of the kinematics of lower-limb joints, presenting a lack of studies of the upper half of the body. The Bayesian filters are still the most used methods, but they tend to be applied less frequently, whereas the ML algorithms are now being used with a higher incidence. This review includes the description of the main algorithms used, with their inputs and outputs, for a better understanding of the existing methods. In this way, we show that, nowadays, these groups of algorithms also present differences in the selected sensors: Bayesian filters tend to use the magnetometer more and try to compensate its limitations, whereas ML algorithms commonly rely only on gyroscopes and accelerometers. Both groups of algorithms also present differences in the range of accuracy, with ML methods obtaining slightly lower maximum errors. This work also analyzes the proposed approaches for error reduction, highlighting the need for proposals suitable for the whole population and for IMU-joint calibration methods. Finally, this work remarks the requirement of testing future proposals on a high number of subjects, which would help to create common databases that allow the comparison between existing and new proposals.

TABLE IV: Relevant details about the monitored anatomic unit, the number of subjects of study in the selected studies, the validation system and the metrics for evaluation.
LG: lower-limb group, UG: upper-limb group, NS: number of subjects for the analysis, DSS: presence of subjects with diseases, VS: validation sensory system, M1 and M2: metrics provided for the validation of the proposals.

Fig. 1: Number of publications focused on the inertial motion analysis, referred to obtaining kinematic parameters by using portable inertial sensors, found in the literature.
Most of the reviewed works are journal papers (72.1 %) (see Fig. 4-top). These works are published in 42 journals. 56.7 % of them appear in 7 journals (each of them with at least four papers), as shown in Fig. 4-bottom. The journals that appear with the highest frequency in this search are Sensors, IEEE Sensors Journal (IEEE SJ), IEEE Transactions on Biomedical Engineering (IEEE TBE), Journal of Biomechanics (JBiomech), Gait & Posture (G&P), IEEE Transactions on Instrumentation and Measurement (IEEE TIM) and IEEE Journal of Biomedical and Health Informatics (IEEE JBHI). The remaining 43.3 % of the works are distributed among 35 journals.

Fig. 3: Distribution of the publications related to human motion analysis and IMUs, sorted by the semantic information obtained. This work focuses on the topic included in the left square of the lower row: kinematic parameters with magneto-inertial measurements.

Fig. 4: Distribution of the papers with respect to the type of publication environment in which they were published. Top: conference and journal distribution. Bottom: journals that published the analyzed works.

Fig. 5: Year of publication of the reviewed papers. Top: trend since the first motion analysis related work until nowadays. Bottom: distribution of publications in the last 5-year period.

Fig. 10: Anatomical monitored units. Left: location of the monitored units in the upper- or lower-half part of the body. Right: number of papers per combination of segments and/or joints monitored.

Fig. 11: External sensors used to obtain the reference measurements for the training and test stages of the ML algorithms.

Fig. 13: Characteristics of the population of study in the human motion analysis literature. Top: percentage of works that evaluate their proposal with each number of participants. Bottom: percentage of works that consider populations with (Diseased vols.) or without (Healthy vols.) disease.

Fig. 14: Validation system used to assess the proposals. Force platforms and simulations are abbreviated as force plat. and simul., respectively.

APPENDIX A: TABLES OF THE DATA EXTRACTED

TABLE III: Relevant details related to the implemented algorithms, the sensors in use, and the estimations of the selected studies. FA acronyms: SF: sensor fusion, ML: machine learning, OA: other algorithms. Other acronyms: BC: biomechanical constraints, ANT: anatomical parameters in use, OC: other constraints, ML learns: indicates that these ML-based algorithms learn from real motions and are therefore constrained to anatomical joint limits, EST: refers to the use of anatomical or soft constraints after estimating the required parameters, GS: gyroscope sensor, AS: accelerometer sensor, MS: magnetometer sensor, OS: other sensors for training, EST: type of estimation, ANG: angle, DIS: displacement or position, JNT: joint, SGM: segment. The studies covered by the table (several titles are truncated in the source):

- A basic study on variable-gain Kalman filter based on angle error calculated from acceleration signals for lower limb angle measurement with inertial sensors [33]
- Comparison of Three Neural Network Approaches for Estimating Joint Angles and Moments from Inertial Measurement Units [34]
- quaternion-based orientation optimizer via virtual rotation for human motion tracking [116]
- State Robust Extended Kalman Filter for Orientation Tracking during Long-Duration Dynamic Tasks Using Magnetic and Inertial Measurement Units [117]
- to accurate measurement of uniaxial joint angles based on a combination of accelerometers and gyroscopes [37]
- quaternion-based Kalman filter for human body motion tracking using the second estimator of the optimal quaternion algorithm and the joint angle constraint method with inertial and magnetic sensors [120]
- -based estimator for functional electrical stimulation: Preliminary results from lower-leg extension experiments
- degrees of freedom model for upper limb kinematic reconstruction based on wearable sensors [121]
- A novel approach to motion tracking with wearable sensors based on Probabilistic Graphical Models [122]
- filter for tracking hip angles during cycling using wireless inertial sensors and dynamic acceleration estimation
- glove for fingers motion capture using inertial and magnetic measurement units [124]
- for estimating knee angle using two leg-mounted gyroscopes for continuous monitoring with mobile health devices
- sensor-based assessment of lower limb spasticity in children with cerebral palsy [41]
- of measurement of joint angles and stride length with wireless inertial sensors for wearable gait evaluation system
- of gait analysis with a smartphone for measurement of hip joint angle [12]
- Wearable Magnetometer-Free Motion Capture System: Innovative Solutions for Real-World Applications [44]
- Accuracy of a custom physical activity and knee angle measurement sensor system for patients with neuromuscular disorders and gait abnormalities [45]
- Ambulatory estimation of knee-joint kinematics in anatomical coordinate system using accelerometers and magnetometers [129]
- An adaptive complementary filter for inertial sensor based data fusion to track upper body motion [47]
- An auto-calibrating knee flexion-extension axis estimator using principal component analysis with inertial sensors [130]
- An inertial sensor system for measurements of tibia angle with applications to knee valgus/varus detection [48]
- An instance-based algorithm with auxiliary similarity information for the estimation of gait kinematics from wearable sensors
- An investigation into the accuracy of calculating upper body joint angles using MARG sensors [131]
- An optimized Kalman filter for the estimate of trunk orientation from inertial sensors data during treadmill walking [50]
- Analysis of a mobile system to register the kinematic parameters in ankle, knee, and hip based in inertial sensors [51]
- Angle measurements during 2D and 3D movements of a rigid body model of lower limb: Comparison between integral-based and quaternion-based methods [52]
- Artificial neural networks in motion analysis: applications of unsupervised and heuristic feature selection techniques [53]
- kinematics by single-axis accelerometers: From inverted pendulum to N-Link chain [19]
- Comparison of angle measurements between integral-based and quaternion-based methods using inertial sensors for gait evaluation
- A CNN-RNN Based Deep SuperLearner For Estimating Lower Extremity Sagittal Plane Joint Kinematics Using Shoe-Mounted IMU Sensors In Daily Living [111]
- Deriving kinematic quantities from accelerometer readings for assessment of functional upper limb motions [20]
- Design and validation of an ambulatory inertial system for 3-D measurements of low back movements [59]
- Drift-Free and Self-Aligned IMU-Based Human Gait Tracking System with Augmented Precision and Robustness [62]
- Effect of walking variations on complementary filter based inertial data fusion for ankle angle measurement [65]
- Estimation of gait kinematics and kinetics from inertial sensor data using optimal control of musculoskeletal models [67]
- Estimation of Gait Mechanics Based on Simulated and Measured IMU Data Using an Artificial Neural Network [68]
- Estimation of kinematics from inertial measurement units using a combined deep learning and optimization framework [69]
- Estimation of knee joint angle during gait cycle using inertial measurement unit sensors: a method of sensor-to-clinical bone calibration on the lower limb skeletal model [113]
- Estimation of the continuous walking angle of knee and ankle (talocrural joint, subtalar joint) of a lower-limb exoskeleton robot using a neural network [71]
- Estimation of the knee flexion-extension angle during dynamic sport motions using body-worn inertial sensors [72]
- Evaluation of wearable gyroscope and accelerometer sensor (PocketIMU2) during walking and sit-to-stand motions [15]
- Feasibility study of inertial sensor-based joint moment estimation method during human movements: A test of multi-link modeling of the trunk segment [17]
- Knee joint angle measuring portable embedded system based on inertial measurement units for gait analysis [85]
- Lower body kinematics estimation from wearable sensors for walking and running: A deep learning approach [86]
- Magnetometer robust deep human pose regression with uncertainty prediction using sparse body worn magnetic inertial measurement units
- Monitoring of Hip and Knee Joint Angles Using a Single Inertial Measurement Unit during Lower Limb Rehabilitation [87]
- Novel approach to ambulatory assessment of human segmental orientation on a wearable sensor system [26]
- and Virtual-Sensor Based Method for Estimation of Lower Limb Gait Posture Using Accelerometers and Gyroscopes
- Pose estimation by extended Kalman filter using noise covariance matrices based on sensor output [142]
- Kinematics from Wearable Sensor Data in People with Knee Osteoarthritis and Clinical Considerations for Future Machine Learning Models [115]
- Prediction of lower limb kinetics and kinematics during walking by a single IMU on the lower back using machine learning [27]
- estimate of body kinematics during a planar squat task using a single inertial measurement unit [93]
- Reconstructing an accelerometer-based pelvis segment for three-dimensional kinematic analyses during laboratory simulated tasks with obstructed line-of-sight [28]
- Reconstruction of angular kinematics from wrist-worn inertial sensor data for smart home healthcare [94]
- Rigid body motion capturing by means of a wearable inertial and magnetic MEMS sensor assembly, from reconstitution of the posture toward dead reckoning: An application in biologging
- Sensorial system for obtaining the angles of the human movement in the coronal and sagittal anatomical planes [97]
- Shoulder and elbow joint angle estimation for upper limb rehabilitation tasks using low-cost inertial and optical sensors [98]
- The head mouse: Head gaze estimation 'In-the-Wild' with low-cost inertial sensors for BMI use [146]
- The manumeter: A wearable device for monitoring daily use of the wrist and fingers [32]
- The online estimation of the joint angle based on the gravity acceleration using the accelerometer and gyroscope in the wireless networks
- The use of synthetic IMU signals in the training of deep learning models significantly improves the accuracy of joint kinematic predictions
- Three dimensional gait analysis using wearable acceleration and gyro sensors based on quaternion calculations [104]
- Time coherent full-body poses estimated using only five inertial sensors: Deep versus shallow learning [147]
- Visual and quantitative analysis of lower limb 3D gait posture using accelerometers and magnetometers [154]
- Wearable inertial sensor system towards daily human kinematic gait analysis: Benchmarking analysis to MVN BIOMECH [109]
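Several of the studies cataloged in Table III estimate uniaxial joint angles by fusing gyroscope and accelerometer signals with a complementary filter. The sketch below illustrates the general idea only; it is not the method of any particular study, and the sampling period, gyroscope bias, and angle values are invented:

```python
import math

def inclination_from_accel(ax, az):
    """Sagittal-plane inclination (degrees) from the gravity components
    seen by the accelerometer, under a quasi-static assumption."""
    return math.degrees(math.atan2(ax, az))

def complementary_filter(gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse the integrated gyroscope rate (deg/s, drifts slowly) with
    the accelerometer-derived angle (deg, noisy but drift-free).
    `alpha` weights the gyroscope path."""
    angle = accel_angle[0]              # initialise from the accelerometer
    out = [angle]
    for w, a in zip(gyro_rate[1:], accel_angle[1:]):
        angle = alpha * (angle + w * dt) + (1.0 - alpha) * a
        out.append(angle)
    return out

# Toy check: a static 30-degree joint, a gyroscope with a constant
# +0.5 deg/s bias, and a clean accelerometer angle. Pure integration
# would drift by 0.5 deg over one second; the fused estimate stays
# much closer to 30 degrees.
ax, az = math.sin(math.radians(30.0)), math.cos(math.radians(30.0))
accel = [inclination_from_accel(ax, az)] * 101   # constant 30.0 deg
fused = complementary_filter([0.5] * 101, accel, dt=0.01)
print(round(fused[-1], 3))
```

The accelerometer term bounds the gyroscope drift, which is exactly the trade-off that motivates the Kalman- and complementary-filter variants listed above.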

TABLE I: Databases consulted in the literature search.

TABLE II: Outcomes of the analysis of the inputs used in the ML algorithms, the algorithms applied in each work, and the outputs aimed as targets, together with the capture system employed. SF: specific force, TR: turn rate, OR: orientation, Cap.: capture, SNN: shallow neural network, DNN: deep neural network, vs.: versus.
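Table II catalogs ML pipelines that map inertial inputs (specific force, turn rate, orientation) to kinematic targets. As a toy sketch of the SNN case only, the snippet below trains a one-hidden-layer regressor on synthetic data; the inputs, target function, network size, and hyperparameters are all invented and do not correspond to any reviewed study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: each sample holds a specific-force (SF) and a
# turn-rate (TR) reading; the target is a made-up smooth function
# standing in for a joint angle.
X = rng.uniform(-1.0, 1.0, size=(256, 2))          # columns: [SF, TR]
y = np.sin(X[:, :1]) + 0.5 * X[:, 1:]              # placeholder "angle"

# One-hidden-layer network (an "SNN" in the table's terminology).
W1 = rng.normal(scale=0.5, size=(2, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

lr = 0.1
for _ in range(2000):                              # full-batch gradient descent
    h = np.tanh(X @ W1 + b1)                       # hidden activations
    pred = h @ W2 + b2                             # predicted angle
    err = pred - y                                 # MSE gradient seed
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)               # backprop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
print(f"final training MSE: {mse:.4f}")
```

Deep variants (DNNs in the table) differ mainly in depth, recurrence, and the capture system used to produce the reference targets.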