Training Data Generation Using Human Link Model for State Estimation of Care Robot User

The importance of care robots is growing owing to the increasing number of elderly people and the shortage of caregivers. For a robot to automatically assist the movements of elderly users, their pose state must be estimated. Hence, we proposed a method for estimating the user's state from candidate positions of the user's center of gravity, computed with only a few sensors. In this method, sensor outputs for all states and movements of the user are collected in advance, and the relationship between the outputs and states serves as training data for a state estimation and anomaly detection model created by machine learning. However, collecting such training data from elderly people before the robot is deployed is difficult. A database of human actions could instead serve as training data; rich databases of actions performed without robots already exist, but none cover movements assisted by care robots. Conventional methods that simulate human body movements to estimate human states require actual measurement data and expert adjustment, and they do not address the detailed state estimation needed for physical support. Therefore, this study proposes a method for generating training data using a human link model, which enables a state estimation model to be built without requiring the elderly user to stand up and perform other robot-assisted actions beforehand. Training data are generated using the link model and candidate centers of gravity, a simple method with which the state of the care robot user can be estimated and physical support for standing, walking, and sitting can be provided. The effectiveness of state estimation using the link-model-generated training data is verified off-line, using sensor data obtained independently from actual movements of the robot and user, and through experiments with an actual care robot.
The results validated the state estimation, since the time error was sufficiently short (0.35–1.85 s), and the experiment confirmed that the robot could realize assistive actions with 90% accuracy.


I. INTRODUCTION
The arrival of an aging society combined with a shortage of caregivers has increased the demand for and importance of care robots. Therefore, various robots have been developed to support the physical movements of the elderly.
The associate editor coordinating the review of this manuscript and approving it for publication was Mauro Tucci.
Lift-type [1], [2], wearable [3], and humanoid robots [4] have been developed for standing support. The humanoid robot in [4] can assist an elderly person in standing up and transferring to a wheelchair by holding the person in its arms, a system that takes advantage of the robot's great power. However, if the robot does all the moving, the elderly will be unable to move autonomously, leading to a decrease in both muscle strength and sense of agency, and in turn to increased discomfort and a reduced sense of well-being. Therefore, recent years have seen an increase in the use of lift-type robots, which lift only the upper body [1], [2]. For walking support, wearable [5], walker-type [6], [7], [8], and cane-type [9], [10] robots have been developed. The wearable system in [5] supports walking by utilizing the residual muscle strength of the elderly person with the minimum necessary force, making good use of spring force when the legs are extended. Walker-type and cane-type robots likewise utilize residual muscle strength, with the person taking the initiative in walking and the robot acting as a support. They help counteract muscle weakening and are small and easy to install in ordinary homes.
Although these robots are suitable for supporting each of these movements, other robots are needed to support other movements. However, in daily life, the robot used cannot be changed in the middle of a movement because each movement, such as standing up and walking, is performed in succession. To address this issue, studies on robots that can support walking, standing up, and sitting down are being conducted [11], [12]. The robot in [11] can assist the user in standing up by moving the handles while pulling the vest worn by the user to assist in leaning forward, and afterwards, the same robot can assist in walking. The robot in [12] is similar to the one in [11] but has the advantage that the user does not need to wear a vest.
To support various actions such as standing up and walking with a single robot, the robot should recognize the current state of the user and switch to the appropriate function. Moreover, it should prevent accidents by detecting anomaly states, such as when the robot user is about to fall. To this end, several care robots that operate by estimating the user's state using sensors such as distance sensors, cameras, and force sensors have been developed [13], [14].
However, using numerous sensors to accurately estimate the state of the robot user has several drawbacks. Complex systems with many sensors are expensive and prone to failures and defects. Privacy concerns become serious when many sensors are used, when cameras are involved, or when sensitive information is leaked [15], [16]. Additionally, users may not understand complex systems, which causes anxiety [17], [18].
The center-of-gravity (CoG) position is receiving attention as a way to estimate the human state with fewer sensors, as it yields a variety of information, e.g., fall risk. The CoG position has been measured with large systems such as motion capture systems and force plates, as well as with inertial measurement units (IMUs), which have been used to analyze human movements such as walking [19]. In care robots, devices such as IMUs [20], position-sensitive detectors, and LRFs [21], [22] have been used to calculate the user's CoG position. However, although the CoG position can be measured with relatively few sensors, a certain number are still required for accurate measurement, and a further reduction is desired. Therefore, we proposed a method based on the idea that the state of a robot user can be estimated from the approximate position of the CoG, even when the exact CoG position cannot be calculated, and we calculated candidate CoG positions. By taking the rotation ranges of the joints of the human body into consideration, candidate CoG positions representing the approximate region where the CoG lies can be calculated with fewer sensors than are needed to determine the CoG position uniquely [23]. This method uses a human link model, and [23] shows that candidate CoG positions can be calculated even when the sensor arrangement is changed. Therefore, the method can be applied to various types of robots, as long as they measure the position of some part of the human body.
Human state estimation and anomaly detection based on information about the human body, including the CoG position, have been studied using machine learning and deep learning methods such as the k-nearest neighbor method and artificial neural networks [24], [25]. Machine learning is used in care robots [13], [26] as well as rehabilitation robots [27]. A state estimation method employing support vector machines (SVMs) has been proposed to estimate the user's state from the CoG candidates [24]. When the robot is introduced, data on the user's standing, walking, and sitting movements with the robot are measured in advance. These data are used as training data to enable estimation and support tailored to the user's characteristics. Using these methods, we developed a compact care robot that operates based on state estimation with only a few simple sensors and confirmed its effectiveness. The care robot also provides assistance that draws on the residual muscle strength of the robot user [29].
Measuring the user's movement data in advance for machine-learning-based state estimation is burdensome for the elderly. Prior measurement of anomaly data, as in [29], is undesirable. The burden of pre-measurement can be reduced by using a database of human movements. However, existing databases are limited to movements with no robot support.
If human behavior could be generated by simulation, the problem of insufficient training data would be solved. Hwang et al. [30] propose a method that uses pose data and RGB data generated by simulation together with actual measurement data of various movements of elderly people. This method still requires actual data, and it estimates states such as eating and telephoning; it does not estimate transition states during movements, such as leaning forward while sitting. A study of standing support that simulates the movements of care robot users [31] requires adjustments by a physical therapist and is therefore unsuitable for application to individuals at home. That method uses the normal center-of-gravity position in standing movements to set a range; it neither estimates segmented states such as sitting and standing nor considers using the simulated data as machine-learning training data. Thus, conventional state estimation uses pre-measured or simulation data, combined with actual measurements or expert adjustments, as training data. Therefore, in this study, we propose a method for generating CoG-candidate training data using a human link model. This study aims to estimate the state of a care robot user without measuring the user's movements with the robot beforehand, and to assist in standing, walking, and sitting. The link model can reproduce various postures of the robot user, such as sitting and standing. It can be computed by assuming the user's posture in each state and deciding the range of joint positions. Using the candidate CoG positions obtained from sensor data expected from the link model as training data, an SVM can estimate the user's state. As this method is based on the link model, it can be adapted to changes in the user's physique or robot type by adjusting parameters.
The proposed method can estimate the state of a care robot user and provide physical support without prior measurement of training data. Creating training data from a link model and candidate centers of gravity is simple. The link model and CoG candidates used in this study are generally applicable: they can be used for estimating the human state in care robots, monitoring systems, and industrial robots that work near humans, as long as the system is equipped with sensors that can locate the human body. In this study, we validate the SVM model trained on the data generated from the link model by performing two types of validation tests to confirm the effectiveness of the state estimation. One is an off-line simulation of state estimation using separately measured sensor data as input. The other is an experiment in which the SVM model, trained with data generated by the proposed method, is employed in a care robot to estimate the user's state in real time and provide support based on the estimated results.
The rest of the study is organized as follows: Section II summarizes related works. Section III describes the care robot and the state estimation method used in this study. Section IV describes the proposed method for generating the training data using the link model. Section V describes the two types of validation and their results to verify the effectiveness of the proposed method. Section VI discusses the validation results. Finally, Section VII summarizes the study.

II. RELATED WORKS
Care robots should recognize the user's state and function appropriately to support various movements such as standing, walking, and sitting. Care robot users do not always move predictably and sometimes make anomalous movements such as falling. Therefore, care robots must also perform anomaly detection.
The walker-type robot in [13] measures the user's gait using laser range finders (LRFs). The cane robot in [10], [14], and [32] can estimate whether the user is falling using leg and force information measured with LRFs and force sensors on the handles; it can then prevent the fall by stopping itself. Methods to calculate the CoG position using IMUs and position-sensitive detectors (PSDs) as simple sensors have also been proposed, and human state estimation and anomaly detection have been studied using machine learning and deep learning.
Fall detection using human body information measured with a Kinect was proposed in [24]. It detects human falls with the k-nearest neighbor method, learned from 120 movement data samples. Although falls are missed in some cases, the overall accuracy is over 90%. This method requires measurements from a distance and can be useful in monitoring systems; however, it is unsuitable for care robots that directly support physical movements.
The method in [25] acquires acceleration, angular velocity, and other data from the user's smartphone and combines them with machine learning and deep learning to detect falls. Neural networks, the k-nearest neighbor method, SVMs, and ensemble bagged trees were compared; the SVM and ensemble bagged tree achieved fall detection rates of over 95%.
In the field of care robots that physically assist the user, state estimation based on machine learning has been researched, as in [13]. The walker-type robot in [13] uses the user's body information obtained from LRFs for gait estimation, based on a Viterbi algorithm with a particle filter, probabilistic data association, and an interacting multiple model, and for fall detection based on an SVM. The cane robot in [32] estimates the user's gait with fuzzy systems applied to the measured data.
Machine learning and deep learning methods require sizeable training data, which cannot be acquired without burdening the elderly. While databases of human movements exist, databases of human movements performed with care robots are scarce.
If human movement with care robots could be generated by simulation, the lack of training data could be addressed. Hwang et al. [30] propose a method that uses pose and RGB data generated by simulations of various movements of elderly people together with actual measurement data. By compositing 2D and 3D human posture and RGB data for movements such as eating and talking on the phone, and then combining them with actual measurement data from elderly and young healthy people, state estimation can be performed with high accuracy even when the original measurement data on the elderly are limited. However, this method still requires actual data, even if only a small amount, and its state estimation depends on a wealth of data, including 2D and 3D pose data and RGB data.
Research also exists on simulations focusing on the movements of users of standing-support care robots [31]. Although not usable as training data, the trajectory of the center of gravity in standing movements is generated by simulation and then adjusted by a physical therapist to create a range of appropriate CoG trajectories during standing. This method requires adjustment by a physical therapist and is unsuitable for adapting to individual characteristics at home.

III. CARE ROBOT AND USER-STATE ESTIMATION
This section identifies and defines the movements and states to be assisted in this study and describes the care robot and state estimation method used for this purpose.
A. MOVEMENT TO BE ASSISTED AND STATE TO BE ESTIMATED
Standing up, walking, and sitting down are three frequently performed movements; therefore, these movements of the elderly should be supported. When standing up normally, a person leans forward to shift the CoG over the feet and then lifts the body. However, the elderly have difficulty lifting their bodies because of muscle weakness. In this study, the person leans forward with both arms resting on the robot's armrests, as shown in Fig. 1(b), and the armrests rise, as shown in Fig. 1(c), to assist the person in standing up. To provide this support, the user's normal sitting posture (Fig. 1(a)) and forward-leaning sitting posture (Fig. 1(b)) must be differentiated. In the sit-down motion, the user leans forward (Fig. 1(e)) from the standing posture shown in Fig. 1(d), and the armrest is then lowered (Fig. 1(f)) to support a stable sit-down motion. Therefore, the normal standing posture (Fig. 1(d)) and the forward-leaning standing posture (Fig. 1(e)) must be differentiated.
However, the user may assume an abnormal posture while performing standing, walking, and sitting movements. As shown in Fig. 1(g), the user may be unable to stand up because the armrest is elevated but the hips do not leave the seat; conversely, the user may be unable to sit down because the armrest is lowered but the knees do not bend (Fig. 1(h)). In such cases, the vertical movement of the armrest should be stopped and the armrest returned to its original position, to prevent the user from assuming an unreasonable posture and to allow the user to try standing up or sitting down again. Therefore, normal standing (Fig. 1(c)) and abnormal standing (Fig. 1(g)) must be differentiated, as must normal sitting (Fig. 1(f)) and abnormal sitting (Fig. 1(h)). As shown in Fig. 1(i), the user's legs may be unable to keep up with the robot while walking, and the user may fall over. In this case, the wheels should be stopped to prevent the fall, and the robot should then be moved backward to restore a normal gait. For this assistance, normal standing (Fig. 1(d)) and anomalous walking (Fig. 1(i)) must be differentiated.

B. CARE ROBOT USED IN THIS STUDY
Standing up, walking, and sitting down are frequently performed as a series of actions in daily life; therefore, a single robot should be able to support all of them. As shown in Fig. 2, we use the care robot developed in [28]. Although small, this robot can support standing, walking, and sitting. It is a walker-type robot with an armrest that moves up and down on a linear actuator. As explained in Section III-A, the armrest can be raised while the user is in the forward-leaning sitting state to support the robot user in standing up. When sitting down, the armrest can be lowered while the user is in the forward-leaning standing state to support them in lowering their buttocks stably onto the seat. The armrest is adjusted to the average height of elderly Japanese people.
Two driving wheels and four casters are attached to the base of the robot, enabling the user to walk supported by the robot. The robot is small enough to be easily moved indoors, and its wide base and armrest make it difficult for it to fall in any direction. Brakes are attached to each driving wheel and armrest to stop the actuators when the user is in an abnormal standing or sitting position or in an abnormal walking position where the user is about to fall over. The robot can also present information about what the robot is doing using a display and speaker [17], [33]. A countdown before the armrest movement can be used with a speaker to inform the user of the timing of the movement [34]. These interfaces provide a sense of ease to the user. Notably, some studies [33], [34] have shown that the robot can be used without discomfort even if there is a time lag between the state estimation and the robot's movements. The specifications of the care robot are listed in Table 1. To estimate the user's forward-leaning in a sitting or standing position, as well as anomaly states during standing, sitting, and walking, a few simple sensors are employed.
• Pressure sensors: Four pressure sensors are mounted on the armrest and gripper. The sensors detect whether the users place their hands or arms on the gripper or armrest.
• Distance sensors: One is mounted on the user's side of the armrest, and one on each side at a medium height on the main unit. These sensors measure a point on the user's body or knee. The sensor position is fixed; thus, the measured point is near, rather than exactly at, the knee joint. However, as no significant difference between the two is observed, we treat the measurement as the knee position.

The robot obtains data from these sensors in a 0.1 s cycle. The user should have both hands and arms on the armrests when being assisted; therefore, we assume that care robot users place both hands and arms on the armrests. If no contact is detected by the pressure sensors, the robot stops and does not assist.

Fig. 3 shows the human link model used in this study. This model consists of six rigid links: forearms, upper arms, upper body (including the head), thighs, shanks, and feet. The origin of the coordinate system is the point of contact with the ground at the rearmost end of the robot; the forward direction of the robot is the positive y-axis, and the upward direction is the positive z-axis. If the care robot user falls to the left or right, their hands or arms leave the armrest, so the robot can detect this anomaly with the armrest pressure sensors. The sagittal plane is used in the model, as the supported movements and estimated states are symmetrical. When each robot is used by a single user, the length of each of the user's links should be registered on first use. This operation is relatively easy: for a person of average physique, the link lengths can be set from height alone. Therefore, in this study, we assume that the lengths of these six links are known. For the weight balance of each link, which is required for the CoG calculation, the values reported in [35] and [36] are used. The CoG position can then be obtained from the link model parameters, which are identical to the joint positions.
The wrist and elbow joint positions can be calculated because the robot detects the user's hand and arm placement using the pressure sensors, as mentioned in Section III-B. The positions of the knee joints and one point on the user's body can likewise be obtained from the distance sensors. However, the CoG position cannot be uniquely calculated from these sensors alone because the information is insufficient. Therefore, we propose the following method to calculate CoG position candidates.

C. CALCULATION METHOD OF COG CANDIDATES
An illustration of the computational procedure for the CoG candidates is shown in Fig. 4. The sensors can measure the positions of the wrist, elbow, and knee joints and one point on the upper body link (Fig. 4(a)). The position of the shoulder cannot be determined directly, but it must lie at a distance of one upper-arm length from the elbow. Therefore, considering the rotation range of the elbow joint, the shoulder joint lies on a fan-shaped arc centered on the elbow joint, as shown in Fig. 4(b). Let the elbow joint position be (y_el, z_el), the elbow joint rotation angle be θ_el with maximum and minimum values θ_el,max and θ_el,min, and the upper-arm link length be l_ua; the shoulder joint candidates (y_sd, z_sd) can then be expressed by the following equation.
(y_sd, z_sd) = (y_el + l_ua cos θ_el, z_el + l_ua sin θ_el)

The maximum and minimum joint angle values are determined from literature values [37] and measurements obtained in previous studies [23]. The upper body link has the shoulder and hip joints as endpoints and can be considered a line segment passing through the point on the upper body measured by the sensor, (y_bd, z_bd). Focusing on one shoulder joint candidate, the line through that candidate and the measured point is

(z − z_sd)(y_bd − y_sd) = (y − y_sd)(z_bd − z_sd)

Therefore, the corresponding hip joint candidate (y_hp, z_hp) lies on this line, as shown in Fig. 4(c), and is expressed by the following equation.
(y_hp, z_hp) = (y_sd + l_bd (y_bd − y_sd)/d, z_sd + l_bd (z_bd − z_sd)/d), d = √((y_bd − y_sd)² + (z_bd − z_sd)²)

where l_bd is the length of the upper body link. Like the shoulder candidates, the ankle candidates lie on a fan-shaped arc around the knee joint, as shown in Fig. 4(d). For a given pair of shoulder and ankle candidates, all the link parameters are known. Using the mass m_i of each link i and the position (y_i, z_i) of its center of mass, we can compute the corresponding CoG position candidate (Fig. 4(e)) by the following equation.

(y_G, z_G) = (Σ_i m_i y_i / Σ_i m_i, Σ_i m_i z_i / Σ_i m_i)
By calculating this for the other shoulder and ankle candidates, all the CoG candidates can be calculated (Fig. 4(f)). In this study, the CoG candidates are calculated as a point cloud by discretely considering the range of motion of the joints and are used to estimate the state of the robot user.
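To make the procedure concrete, the two core steps, generating a fan-shaped arc of joint candidates and reducing one candidate pose to a CoG position, can be sketched as below. This is a minimal illustration; the function names, the number of discretization points, and the angle limits are our own, not taken from the paper's implementation.

```python
import numpy as np

def joint_candidates(y_c, z_c, link_len, th_min, th_max, n=20):
    """Discretize a joint's rotation range [th_min, th_max] (rad) and
    return the fan-shaped arc of candidate positions, each at distance
    link_len from the center joint (cf. Fig. 4(b) and 4(d))."""
    th = np.linspace(th_min, th_max, n)
    return np.stack([y_c + link_len * np.cos(th),
                     z_c + link_len * np.sin(th)], axis=1)

def cog_candidate(masses, link_coms):
    """One CoG position candidate: the mass-weighted average of the
    per-link centers of mass (cf. Fig. 4(e))."""
    m = np.asarray(masses, dtype=float)       # m_i for each link i
    p = np.asarray(link_coms, dtype=float)    # rows of (y_i, z_i)
    return (m[:, None] * p).sum(axis=0) / m.sum()
```

Enumerating joint_candidates for the shoulder and ankle and calling cog_candidate for each combination yields the full candidate point cloud of Fig. 4(f).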

D. STATE ESTIMATION METHOD
By estimating the robot user's state, appropriate support can be provided according to the situation. Therefore, in this study, the user's state is divided into the nine states shown in Fig. 1 in Section III-A. As explained there, all nine states need not be differentiated simultaneously; only two need be considered at a time, such as the normal sitting posture and the forward-leaning sitting posture, depending on the situation. Fig. 5 shows the transitions of the user's state and the corresponding support actions. When the armrest is in a low position, the system estimates whether the user is in the sitting or forward-leaning sitting state. The forward-leaning sitting state is the state in which the user has finished the leaning motion, so the leaning motion itself is included in the sitting state. If the user is in the forward-leaning sitting state, the system raises the armrest. While the armrest is rising, the system estimates whether the user is in a normal or anomalous standing state. When the armrest is at its highest position, the system first determines whether the user is standing normally or is about to fall; abnormal walking that may cause a fall must be detected, so the normal standing state includes normal walking. If the user is standing normally, the system then determines whether the user is leaning forward. When sitting down, the system determines whether the user is in a normal or anomalous sitting state. Thus, state estimation is performed by binomial discrimination between the two states relevant to the current situation.
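The situation-dependent pairwise discrimination of Fig. 5 can be sketched as a lookup from the robot's current situation to the two candidate states. All names here are illustrative, and the classifier is assumed to expose a scikit-learn-style predict method; this is not the paper's actual code.

```python
# Which two user states are discriminated in each situation (cf. Fig. 5).
STATE_PAIRS = {
    "armrest_low":      ("sitting", "sitting_forward_leaning"),
    "armrest_rising":   ("standing_normal", "standing_anomaly"),
    "armrest_high":     ("standing", "walking_anomaly"),
    "standing":         ("standing", "standing_forward_leaning"),
    "armrest_lowering": ("sitting_down_normal", "sitting_down_anomaly"),
}

def estimate_state(situation, models, features):
    """Run the binary classifier for the current situation and map its
    0/1 output back to a state label."""
    a, b = STATE_PAIRS[situation]
    label = models[(a, b)].predict([features])[0]
    return a if label == 0 else b
```

The key point is that one small binary model per situation replaces a single nine-way classifier, matching the transition structure of Fig. 5.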
In estimating the user's state from the CoG candidates, an SVM is employed because the number of CoG candidates varies with the user's posture, so the points cannot simply be compared. To capture the distribution of the point cloud geometrically, the maximum, minimum, and average values in the y- and z-directions and the integral value of the range are used as feature values, which are normalized before use. An SVM is suitable because it automatically weights the features that are significant for state discrimination and can learn models even from relatively small amounts of training data. Moreover, earlier studies have shown that it can estimate states with sufficient accuracy without tuning the parameters; therefore, hyperparameters are not adjusted in this study either. The software used for the SVM is libsvm [38], which performs binomial discrimination, and the RBF kernel is used for all state estimations.

This state estimation method using the CoG candidates enables the robot to estimate the user's state and provide motion support with a few sensors. Its effectiveness has been validated in [29] and [39]: an SVM learned the CoG candidate features from ten movements mimicking normal standing, walking, and sitting, as well as each of the corresponding abnormal conditions, and with this SVM model, we confirmed that the robot can estimate the state in real time and assist the user. The CoG candidates can also be applied to other types of robots, as shown in [23]. However, this method requires that the training data for the state estimation model be obtained experimentally for each user. The user's posture differs between robot-assisted and unassisted movement, so a database of behavior without care robots cannot be used as training data. As we assumed one robot per user, data could be measured before the first use.
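The feature extraction described above can be sketched as follows. This is a minimal sketch in which the integral-of-range feature is approximated by the bounding-box area of the candidates, an assumption on our part, since the exact integral is not defined here; the function name is illustrative.

```python
import numpy as np

def cog_features(candidates):
    """Max, min, and mean of the CoG candidates in the y and z
    directions, plus the bounding-box area as a stand-in for the
    integral-of-range feature (an assumption in this sketch)."""
    c = np.asarray(candidates, dtype=float)
    y, z = c[:, 0], c[:, 1]
    area = (y.max() - y.min()) * (z.max() - z.min())
    return np.array([y.max(), y.min(), y.mean(),
                     z.max(), z.min(), z.mean(), area])
```

After normalization, vectors like these form the SVM inputs; note that scikit-learn's SVC with kernel="rbf" uses the same libsvm backend cited in the text, so it can serve as a drop-in for experimentation.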
However, such measurement is burdensome for the user and cannot be obtained for anomalies. Therefore, in Section IV, we propose a method for generating training data using a link model, realizing a user-state estimation method that does not require data collection before the first use of the robot.

IV. METHOD FOR CREATING TRAINING DATA
This section describes the proposed method for obtaining training data. In this study, state estimation is performed using the CoG candidates calculated from a human link model, as described in Section III. We propose a method to create these CoG candidates using the human link model without measuring movements such as standing up in advance.
This study uses the six-link human body link model shown in Fig. 3. The head is assumed to be an extension of the upper body above the shoulders, and the soles of the feet always face the ground. The link lengths are determined according to the target user. To compute this link model, we only need to determine the positions of the joints that are the endpoints of the links. As the forearms are placed on the armrests, the y-positions of the wrists and elbows are fixed, and their z-positions depend on the armrest height. Therefore, by determining the range of values that each of the four joint positions of the shoulder, hip, knee, and ankle can take in each state, data for the human body in each state can be obtained. Figs. 6 and 7 show the joint position ranges in the sitting and sitting forward-leaning states, respectively. In the sitting state, the z-coordinate of the waist does not vary significantly with the height of the chair; however, the y-coordinate has a slight range depending on the sitting position, as shown in Fig. 6. In addition, the y-coordinate of the ankle varies with the amount of leg pull, while its z-coordinate is assumed constant because the feet never lift off the ground. As the link lengths are known, once the positions of the hips (y_hp, z_hp) and elbows (y_el, z_el) are determined, the positions of the shoulders (y_sd, z_sd) are geometrically determined, as expressed by the following equations.
cos θ_sd = (y_21² + z_21² − l_bd² − l_ua²) / (2 l_bd l_ua)   (7)

θ_bd = atan[((l_bd + l_ua cos θ_sd) y_21 + l_ua sin θ_sd z_21) / (−l_ua sin θ_sd y_21 + (l_bd + l_ua cos θ_sd) z_21)]   (8)

(y_sd, z_sd) = (l_bd cos θ_bd, l_bd sin θ_bd)   (9)

where θ_sd is the rotation angle of the shoulder joint, θ_bd is the angle of the upper body link, l_bd is the length of the upper body link, l_ua is the length of the upper arm link, and (y_21, z_21) is the position of the elbow relative to the hip. Moreover, if the positions of the hips and ankles are determined, the positions of the knees can be determined. Through these calculations, the ranges of the hip and ankle positions automatically determine the ranges of the shoulder and knee joint positions. Therefore, these conditions provide the ranges of the four joint positions to be determined, and a data set of the link model for the sitting state can be obtained.

In the sitting forward-leaning state shown in Fig. 7, the hips and shoulders are positioned more anteriorly than in the sitting state, and the knees are therefore also more anterior. Thus, the range of joint positions differs from state to state, and by considering the range of joint positions for each state, a link model can be created for each state. As actual humans have thick bodies, the expected sensor data for each link model are obtained by accounting for the thickness of the human body. The robot in this study uses sensors to measure the position of the knee and the position of one point on the upper body link. As the position of that point changes with posture, the equation of the line representing the upper body link is calculated, and the y-coordinate corresponding to the sensor height is computed, accounting for body thickness. From the pseudo-sensor data obtained in this manner, the CoG candidates are calculated, and the training data are created.
This training data is used for training, as described in Section III-D, to create a state estimation model.
By using the link model, training data corresponding to various states can be created. If sensor data were used directly as training data from the beginning, people with various body types could not easily be reproduced. In contrast, this method can reproduce the human posture in each state for people of various body types by means of the link model, and it can create training data by back-calculating sensor data from the posture. Since link parameters such as link lengths change for different body types, the link model must be recalculated according to the body type to create the training data. For example, in the case of sitting or sitting forward-leaning, the position of the sitter in the chair does not differ greatly, so there is no significant difference in the range of the hip position; however, even if the fixed elbow and hip positions are the same, the shoulder joint position will differ owing to differences in the link lengths of the upper body and upper arm. Similarly, even if the leg lengths differ, there is no significant difference in the position of the feet, because they are somewhat close to the chair, but the position of the knees varies with body size owing to differences in the lengths of the thighs and shanks. Currently, model parameters such as link lengths and appropriate joint position ranges are set according to body type to accommodate multiple body types. However, normalization and other methods would further facilitate the application of this method to people with various body types.
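To make the generation procedure concrete, the following sketch samples hip and ankle positions from per-state ranges and recovers the knee with the same two-link geometry used for the shoulder. The ranges, chair height, and link lengths are illustrative placeholders we chose for the example, not the values used in this study:

```python
import math
import random

def two_link_mid_joint(base, end, l1, l2):
    """Middle joint (e.g. the knee) of a two-link chain whose base (hip)
    and end (ankle) positions are known; one solution branch."""
    dy, dz = end[0] - base[0], end[1] - base[1]
    c = (dy * dy + dz * dz - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    c = max(-1.0, min(1.0, c))
    t2 = math.acos(c)
    k1, k2 = l1 + l2 * math.cos(t2), l2 * math.sin(t2)
    t1 = math.atan2(dz * k1 - dy * k2, dy * k1 + dz * k2)
    return base[0] + l1 * math.cos(t1), base[1] + l1 * math.sin(t1)

# Illustrative per-state y-ranges (m); the actual ranges depend on the
# chair, the armrests, and the user's anthropometry.
STATE_RANGES = {
    "sitting":         {"hip_y": (0.00, 0.05), "ankle_y": (0.10, 0.25)},
    "sitting_forward": {"hip_y": (0.05, 0.15), "ankle_y": (0.10, 0.25)},
}
HIP_Z, ANKLE_Z = 0.42, 0.05    # assumed chair-seat and ground heights (m)
L_THIGH, L_SHANK = 0.40, 0.38  # assumed link lengths (m)

def generate_samples(state, n=100, seed=0):
    """Labelled pseudo-sensor samples (knee y-position) for one state."""
    rng = random.Random(seed)
    r = STATE_RANGES[state]
    samples = []
    for _ in range(n):
        hip = (rng.uniform(*r["hip_y"]), HIP_Z)
        ankle = (rng.uniform(*r["ankle_y"]), ANKLE_Z)
        knee = two_link_mid_joint(hip, ankle, L_THIGH, L_SHANK)
        samples.append({"knee_y": knee[0], "state": state})
    return samples
```

Changing only STATE_RANGES and the link lengths adapts the same pipeline to another state or body type, which is the point made in the paragraph above.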

V. VALIDATION OF STATE ESTIMATION
To confirm the validity of the proposed method for creating training data, two types of validation were conducted using state estimation models trained on these data. The first is validation by off-line state estimation simulation using independently measured sensor data as input, and the second is validation by real-time state estimation and support experiments using a care robot.

A. OFF-LINE VALIDATION USING INDEPENDENTLY MEASURED SENSOR DATA
1) VALIDATION METHOD
This section describes the validation by off-line state estimation simulation using independently measured sensor data as input. We use sensor data from a care robot that enables the user to stand up, walk, or sit down normally, and data from cases in which anomalies occur in the process, as input data for validating the model. The SVM model was trained using the training data created from the link model by the method proposed in Section IV. The measured data were then input into the trained model for state estimation.
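The train-then-classify flow can be illustrated with a deliberately minimal stand-in for the SVM: a nearest-centroid classifier over CoG-candidate features, kept dependency-free for the sketch (the actual model in this study is an SVM; all names here are ours):

```python
def nearest_centroid_fit(samples):
    """Fit one centroid per state label.  samples: list of
    (feature_vector, state_label) pairs, e.g. CoG-candidate coordinates."""
    sums, counts = {}, {}
    for x, y in samples:
        s = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def nearest_centroid_predict(model, x):
    """Predict the state whose centroid is closest to feature vector x."""
    def sq_dist(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(model, key=lambda y: sq_dist(model[y]))
```

Trained on pseudo-sensor features generated from the link model, such a classifier is then fed the measured sensor data at run time, exactly as the SVM is here.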
Verification was performed with four participants. The participants were young healthy subjects in their twenties and thirties, 166-172 cm tall, and weighing about 50-63 kg. The link lengths and body thicknesses of participant A are shown in Table 2. The link lengths and body thicknesses of the link model were determined by measuring each participant's body parts by hand with a tape measure. Although the participants were young healthy subjects, there is no significant difference in movement between young healthy subjects and the elderly, because the robot assists the elderly in performing normal standing, walking, and sitting movements. Therefore, even young healthy persons can simulate the behavior of the elderly and validate the robot's behavior. Since the robot is sized for the elderly, the height of the armrest during walking is low for a young healthy person, but this allows the participants to simulate the bent posture of an elderly person. The state estimation was performed to estimate forward-leaning in the sitting and standing positions and to detect anomalies in standing up, sitting down, and walking. The results of the state estimation were compared with the actual state to evaluate the estimation performance. The actual user state was determined visually from a video of the measurement, which was taken as the true value. The transition from a sitting position to a forward-leaning position is estimated earlier than it actually occurs. This is because the forward-leaning sitting state is defined as the state in which the user has finished leaning, so the normal sitting state includes the leaning motion; even in the sitting state, the end of the forward-leaning movement is very close to the forward-leaning sitting state. Therefore, forward-leaning sitting is thought to have been estimated prior to the actual transition to the forward-leaning sitting state. The time error for the transition from normal sitting to forward-leaning is −1.1 s.
A positive time error indicates that the estimated state transition time is later than the actual state transition time. Similarly, a negative error indicates that the estimated state transition time is earlier than the actual state transition time. Fig. 9 shows the results of forward-leaning estimation in the standing position. The light blue and purple backgrounds indicate the normal standing state and forward-leaning in the standing position, respectively. The time error of the estimation is −1.2 s.
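Given this sign convention, the time error can be computed from the estimated and visually determined label sequences (sampled every 0.1 s, as in this study); a small sketch with our own naming:

```python
def first_transition(labels):
    """Index of the first sample whose label differs from the initial label,
    or None if the sequence never changes state."""
    for i, label in enumerate(labels):
        if label != labels[0]:
            return i
    return None

def transition_time_error(estimated, actual, dt=0.1):
    """Signed transition-time error in seconds: positive if the estimated
    transition occurs later than the actual one, negative if earlier."""
    i_est, i_act = first_transition(estimated), first_transition(actual)
    if i_est is None or i_act is None:
        return None
    return (i_est - i_act) * dt
```

For example, an estimate that switches to forward-leaning two samples before the ground truth yields an error of −0.2 s.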

2) VALIDATION RESULTS
The results of anomaly detection in standing up, sitting down, and walking are shown in Figs. 10-12. The green background in Fig. 10 indicates normal standing up and red indicates abnormal standing up; yellow in Fig. 11 indicates normal sitting down and red-purple indicates abnormal sitting down; light blue in Fig. 12 indicates normal walking and gray indicates anomalies in walking, such as almost falling. Since there is no significant difference between normal and abnormal standing and sitting immediately after the start of the armrest raising and lowering motion, even abnormal standing and sitting are initially treated as normal standing and sitting, as shown in Figs. 10 and 11(b). Notably, no incorrect estimations were recorded except for the discrepancy in the transition times. The estimation time errors were −0.1 s, −0.1 s, and +0.1 s. Since the state estimation is performed in 0.1 s cycles, the estimation is performed with almost no time error. The average time errors of the four participants in the validation are summarized in Table 2. Although a slight time error occurred in each estimation, the estimations were accurate enough for the robot to provide support.

B. STATE ESTIMATION AND SUPPORT EXPERIMENTS USING CARE ROBOT
1) EXPERIMENTAL METHOD
In this section, we describe a validation experiment using an actual care robot. A series of actions of standing up, walking, and sitting down is assumed as one continuous sequence of movements. The participant in this experiment performs these actions continuously using the care robot. In addition, to validate the ability of the state estimation model, trained on the data created by the proposed method, to detect anomalies, the participant simulates anomalies in the middle of each movement. The robot estimates the state of the participant using the model learned from the training data created from the link model by the proposed method and performs support actions according to the state of the participant. For stability, the robot judges that a state transition has occurred, and performs a support action, only if the new state persists for 0.3 s after the state estimation result changes.
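The 0.3 s persistence rule can be sketched as a small debounce filter running at the 0.1 s estimation cycle; this is a simplified sketch with our own naming, not the robot's actual implementation:

```python
class TransitionFilter:
    """Accept a state change only after the new estimate has persisted for
    hold_time seconds, with estimates arriving every dt seconds."""

    def __init__(self, initial_state, hold_time=0.3, dt=0.1):
        self.state = initial_state
        self.candidate = None
        self.count = 0
        self.needed = round(hold_time / dt)   # cycles of persistence required

    def update(self, estimate):
        if estimate == self.state:            # no change: discard candidate
            self.candidate, self.count = None, 0
        elif estimate == self.candidate:      # candidate persists another cycle
            self.count += 1
            if self.count >= self.needed:     # held long enough: commit change
                self.state = estimate
                self.candidate, self.count = None, 0
        else:                                 # a new candidate state appears
            self.candidate, self.count = estimate, 1
        return self.state
```

A single-cycle glitch in the estimate is absorbed, whereas three consecutive cycles (about 0.3 s) of a new state commit the transition and, on the robot, trigger the corresponding support action.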
An overview of the experiment is shown in Fig. 13. The participant and robot perform the following actions continuously.
1) First, the participant begins by sitting in the chair on the left in Fig. 13 with the robot in front of him.
2) To stand up, the participant leans forward with both arms and hands on the armrests.
3) Once the robot estimates the participant's forward-leaning, it counts down audibly and then raises the armrest.
4) At the first standing, the participant does not stand up, to simulate an anomaly, and only the armrest is raised.
5) The robot stops after estimating the anomaly and returns the armrest to its original position.
6) The participant leans forward as in (2), and on the second standing, the participant rises normally from the chair as the armrest rises.
7) After standing up, the participant walks forward (to the right in Fig. 13) while being supported by the robot.
8) While walking, the participant simulates an anomaly: only the robot moves forward while the participant's legs cannot keep up with the robot.
9) The robot stops after estimating the walking anomaly and then backs up to bring the participant to a normal standing posture.
10) The participant then continues walking normally for approximately 1.5 m to the front of another chair, on the right side of Fig. 13.
11) The participant reaches the front of the chair sideways, so the participant turns to the right on the spot while backing slightly so that the participant can sit in the chair.
12) The participant leans forward against the armrest to sit down on the chair. The robot lowers the armrest upon recognizing this movement.
13) At the first sitting, the participant does not sit down, to simulate an anomaly, and only the armrest is lowered. The robot estimates the abnormal sitting and then stops and moves the armrest to its original position.
14) The participant again leans forward from the standing position, and in the second sitting, the participant sits normally as the robot lowers the armrest.
The participant performs the turning and backing motion in procedure 11. Although these movements are strictly different from walking straight ahead, the relative position of the robot and the participant does not differ significantly from that in straight-ahead walking, because the participant's movements are slower and the stride length narrower in this experiment, which simulates an elderly person. Since previous work [29] has shown that these movements can be treated in the same way as straight-ahead gait, this paper also treats the rotation and backward movement in the same way as the straight-ahead case. Through experiments using the actual robot, we evaluated whether the robot could accomplish a series of support actions without malfunctioning.

2) EXPERIMENTAL RESULTS
Two participants performed the above-mentioned experiment five times. The participants were healthy subjects and gave informed consent before the experiments. The proposed system and the experiments using the care robot were reviewed and approved by the ethics board of Toyohashi University of Technology. An example of participant A's state estimation results is shown in Fig. 14. Figs. 14(a), (b), and (c) show the estimated state of the standing-up motion including abnormal standing (procedures 1-6), the walking motion including abnormal walking (procedures 7-11), and the sitting-down motion including abnormal sitting (procedures 12-14), respectively. The view of Fig. 14 is the same as in Figs. 8-12, with the background color representing the actual state determined visually from the video and the black dots representing the estimated state. Although a slight time error occurs, almost no false estimations were recorded. The standing forward-leaning and sitting forward-leaning states were misestimated immediately following the standing and sitting movements. However, this is because of the forward-leaning posture adopted during standing and sitting. Such misestimations were of short duration and did not cause the robot to malfunction. Fig. 15 summarizes an example of the experimental results for a series of movements from standing to sitting. Fig. 15(a) shows the state estimation results; it is a summary of (a)-(c) in Fig. 14, which are continuous operations.
The armrest height is shown in Fig. 15(b), and the robot's speed and braking condition are shown in Fig. 15(c). The orange lines in Figs. 15(b) and 15(c) represent the armrest height and robot speed, respectively. The black dashed lines in Fig. 15(b) represent the maximum and minimum armrest heights. When an anomaly is detected, the armrest stops in the middle of its movement and returns to its original height, as shown in Fig. 15(b). The black dots in Fig. 15(c) represent the braking state; a dot at the top means the brake is applied. Fig. 15(c) shows that the brake is applied during abnormal standing, sitting, and walking. Additionally, the robot moves backward after stopping during the walking anomaly. These results indicate that when an anomaly is detected, the actuator brakes and moves backward as expected. Moreover, when the user is in a normal state, the actuator performs the prescribed operation normally without malfunction.
An example of another experiment is shown in Fig. 16. The view of Fig. 16 is the same as Fig. 15. Both the state estimation and support behavior were as good as in the previous example. In the other three experiments, the state estimation was performed with similar accuracy, and the robot performed the expected support behavior without malfunctioning.
As with participant A, in most of participant B's experiments, the robot performed the state estimation and support actions as expected. However, in the second trial, the robot mistakenly estimated that the participant was walking anomalously during a normal gait and stopped. This misestimation occurred in experimental procedure 11, when the participant rotated and moved backward to sit on the chair on the right side of Fig. 13.
These results confirm that, with 90% accuracy across the experiments, the robot can return the participant to a normal state even if the participant enters an anomalous state midway. Moreover, it can continuously perform a series of support actions such as standing up, walking, and sitting down.

VI. DISCUSSION
Both types of verification confirmed that, overall, the state estimation was accurate enough for the robot to act on.
When using a state estimation model trained on the data created by the proposed method, estimation errors occurred at the boundary times of state transitions. As the human state changes continuously, there is no sharp boundary between states, and the time error of the state transitions is non-zero. Studies [34], [39] have shown that, even with some time error, the robot can provide appropriate assistance through voice guidance. As long as the time errors of the state transitions are small, the robot can provide assistance. Takeda et al. [34] showed that voice guidance can provide effective assistance even for time errors exceeding 1 s. The current time errors are in the range of 0.35-1.85 s, which is sufficiently small. We therefore believe that the proposed method has demonstrated sufficient estimation capability.
Comparing the results of state estimation using the training data created by the proposed method with the case in which actually measured data were used as training data [29], we observed that the time errors in the former were larger. This is because the quality of the training data created using the link model is lower than that of actually measured data, owing to the inability of the model to fully represent the complexity of an actual human body. However, the major advantage of the proposed method is that it requires no prior measurements, whereas the conventional method requires ten prior measurements of each movement, such as standing, and of each anomaly.
The experimental results also confirmed that the robot can assist the user based on state estimation using the training data created from the link model. Multiple experiments also confirmed that the model trained on the data generated by the proposed method can be effectively applied, even if the robot user's behavior changes slightly.
For the experiment in which the robot stopped owing to misestimation, the misestimation occurred during rotational and backward movements that were not included in the training data. In rotation and backward motion, the legs tend to move away from the robot more easily than in a normal straight-line gait; therefore, the robot judged that the legs were further away from it than they should be. Although actual elderly users are unlikely to move that much, we believe this problem can be solved by creating training data that also considers rotation and backward movement. Failure to estimate an anomaly in walking may pose a risk of causing an accident. However, if the user can release the brake when the robot stops accidentally during normal operation, the robot can be used without problems.
Study [30] provides multistate estimation, but with less than 85% accuracy. Studies [14], [31] estimated falls with approximately 90% accuracy. Except for one experiment, this study estimated the state and provided assistance without false estimation. Although the number of experiments is relatively small, the proposed method is considered as effective as state estimation using actually measured data.
While state estimation using a model trained on the data generated by the proposed method is sufficient for assistance, assistance based on estimation from measured data was confirmed to be more accurate. Therefore, we believe it would be suitable to initially use a model trained with data created by the proposed method, and then to acquire measurement data and update the training model as the system is continuously used.

VII. CONCLUSION
In this study, we proposed a method for creating training data using a human body link model and experimentally validated its effectiveness on a state estimation model. For training data, conventional state estimation models use either pre-measured or simulation data, with experts making adjustments. In contrast, in this study, by using the human body link model, human body data corresponding to each state were created without requiring the prior acquisition of sensor data for the standing, walking, sitting, and anomaly states of the robot user. From these data, CoG candidates were calculated to create training data for estimating the care robot user's state. The effectiveness of the proposed method was validated by off-line state estimation using independently measured data as input and by an assistance experiment using an actual care robot. The results confirmed that the system can estimate the user's state and detect anomalies using an estimation model trained on machine-learning training data created with the link model. In addition, based on the state estimated using this model, the robot assisted the user in standing up, walking, and sitting down. These results confirm that the simple method of training data generation, using a link model and CoG candidates, can be used to estimate the state of a care robot user and provide physical support in place of prior measurements.
Although this is a preliminary study, as only young healthy participants were involved in the validation, we confirmed that the proposed method can estimate the state of the user without using measurement data of actual movements. By initially using a model trained with this method and updating the training data with data acquired during continuous use of the system, estimation and assistance can be sufficiently tailored to individual characteristics. As the proposed method uses a human link model and CoG positions, we believe it can be applied simply by changing the model parameters, even when using different types of robots or when the users have different physiques, as the elderly do. In addition to care robots, the proposed system can also be applied to monitoring systems and industrial robots that need to estimate the state of human beings. In future studies, we will apply this method to people of various physiques by adjusting and normalizing the parameters. Although all link lengths in this study were measured, we would like to simplify the method by calculating average link parameters from height and weight. The height of the chair can also be set freely, and by taking environmental conditions into account, state estimation can be better tailored to the actual site where the system will be used. With these improvements, we plan to establish a state estimation method that can be applied generally without prior measurement of movements and to validate it in experiments with the elderly.