Force Sensorless Admittance Control for Teleoperation of Uncertain Robot Manipulator Using Neural Networks

In this paper, a force sensorless control scheme based on neural networks (NNs) is developed for interaction between robot manipulators and human arms in physical contact. In this scheme, the trajectory is generated using a geometry vector method with a Kinect sensor. To comply with the external torque from the environment, this paper presents a sensorless admittance control approach in joint space based on an observer, which is used to estimate external torques applied by the operator. To deal with the tracking problem of the uncertain manipulator, an adaptive controller combined with a radial basis function NN (RBFNN) is designed; the RBFNN compensates for uncertainties in the system. To achieve the prescribed tracking precision, an error transformation algorithm is integrated into the controller. Lyapunov functions are used to analyze the stability of the control system. Experiments on the Baxter robot are carried out to demonstrate the effectiveness and correctness of the proposed control scheme.


I. INTRODUCTION
In the last few decades, robots have become widely used in various fields, such as industry, service and medicine [1]-[5]. Robots can not only improve the quality of life, but also improve work efficiency and complete work that people cannot finish under certain conditions. However, traditional operating methods for robots usually require external devices and software, which bring inconvenience to the operator and reduce production rates. An alternative way to make the robot interact with the human directly is to let the robot learn human skills.
Traditional motion capture methods require operators to fix sensors on each joint of the human body, which brings considerable inconvenience [6]. In recent years, vision-based motion capture schemes for motion recognition have provided another way to achieve this goal [7], [8]. Because of its convenience and accuracy, this vision-based scheme has been widely adopted in robotics [9]. It uses a camera to capture human motion, which saves operators from wearing a large number of wearable accessories. In this paper, the camera used for motion capture is the Kinect (version 2.0) developed by Microsoft [10], [11]. Since an RGB camera and a depth sensor are embedded in the Kinect, we can obtain the 3D coordinates of each joint of the human body. Based on this, we use a geometry vector based method proposed in [12] to calculate each joint angle of the human arm and generate a desired trajectory.
In a practical teleoperation control system, robots may encounter external force from the environment. One approach to achieve compliant behavior is impedance control. The concept of impedance in physical human-robot interaction was introduced by Hogan [13], and this approach has become a classical control method in robotics. The core idea of impedance control is to map generalized positions and velocities to generalized forces. When controlling the impedance of a mechanism, we are controlling the force of resistance to external motions imposed by the environment. From a practical point of view, the behavior of the robot is usually described by the pose of the end-effector, defined in Cartesian coordinates; typically, the Cartesian position and velocity are the inputs of the controller and the motor torque is the output. Another approach is admittance control, which is widely used in industrial robots. As shown in Fig. 1, admittance control is the inverse of impedance control: it defines the motion that results from a force input. An admittance control architecture receives the external force at each joint as input and generates a new motion. Therefore, force sensors used to measure external forces have been widely adopted in admittance control systems. The usual way to measure the external force is to fix force sensors on the manipulator; however, such sensors are fragile and costly. For these reasons, techniques for estimating the external force have received great attention and various schemes have been proposed. In [14], early estimation methods for robot applications are presented. In [15], disturbance observer approaches based on motor torques, joint angles and velocities are analyzed. In [16], a sensorless robot collision detection approach based on the generalized momentum is introduced.
Under admittance control, the desired trajectory is modified according to the measured external force; the modified desired trajectory is then tracked. In teleoperation control systems, tracking precision is of great importance for robotic manipulation. Model-free control and model-based control are the two main categories of robot manipulator control. Compared with model-free methods, model-based methods usually achieve better control performance [17]. However, due to the existence of uncertainties, it is hard to obtain an accurate dynamic model of a robot [18], [19]. How to deal with uncertainties has thus become a core issue in control design [20]. One of the most commonly used methods is adaptive control, which requires no prior information about the system parameters. In [21], adaptation laws are designed to handle parametric uncertainties of the system. In recent years, with the development of neural network technology, adaptive control schemes with neural networks have been widely employed in many systems [22]-[25]. In [26], neural networks are integrated into the control design to solve the control problem of discrete-time systems with deadzone. In [27], adaptive neural control achieves good results without prior knowledge of the system dynamics. In [28], a novel adaptive control scheme is presented for an autonomous helicopter, where a neural network (NN) mechanism identifies the unknown inertia matrix. Among the many neural network models, one widely used model is the radial basis function neural network (RBFNN), which has good generalization ability and fast learning convergence. In [29], an RBFNN is used to estimate unknown functions in a WMR system. In [30], [31], neural networks are applied to handle system uncertainties and obtain the desired results. In [32], an RBFNN approximates unknown dynamics in the robot system. In [33], an RBFNN is employed to approximate unknown functions in nonlinear systems. Neural networks are also used in other areas such as image processing [34], function approximation [35], [36] and optimization [37]. System uncertainties can also be estimated by other intelligent tools such as fuzzy logic systems [38], [39]. In practice, rigorous precision requires that both the transient and the steady-state performance be taken into account. However, most general adaptive control methods can only guarantee the steady-state performance and find it difficult to address the transient behavior [40]. For this purpose, we use the error transformation technique proposed in [41], [42] to keep the tracking errors within a desired level.
The contributions of this paper are presented as follows:
(i) The combination of admittance control and the force observer provides an effective way to give the robot compliant behavior subject to external force.
(ii) The Kinect sensor is used to generate the trajectory to teleoperate the robot. The error transformation technique and the neural network are used in the teleoperation system so that both transient and steady-state tracking performance are guaranteed.
(iii) Analysis of the signals in the admittance control system is given to prove that all signals are bounded.
The rest of the paper is structured as follows. Section II gives the preliminaries of the system. Section III presents the design and analysis of the controller. Experimental results are given in Section IV, before a conclusion is drawn in Section V.

II. PRELIMINARIES
A. System Configuration
The teleoperation control system is shown in Fig. 2. Using the Kinect sensor, a desired trajectory is generated. Without external torque from the environment, the robot follows the trajectory of the operator. If an external torque exists, the desired trajectory is modified, and the robot tracks the modified trajectory affected by the external torque from the environment.

B. Human arm Geometry Vector Approach
Most geometry approaches are based on the locations of the movements. With the Kinect sensor, each joint of the human body is represented by a 3D point in the Kinect coordinate frame, which follows the right-hand rule, as shown in Fig. 3. The Kinect sensor is regarded as the origin of the coordinate frame and the Z axis is consistent with the direction of Kinect induction. The geometry model of the human left arm is built in Fig. 3. Since the vector method is not applicable to the Kinect coordinate frame, we map the coordinate frame of the Kinect to the mathematical coordinate frame. From the skeleton data, we can transform two different points into a vector in the mathematical coordinate frame. The transformation can be written as

$\overrightarrow{AB} = (x_2 - x_1,\; y_2 - y_1,\; z_2 - z_1)$    (1)

where $A(x_1, y_1, z_1)$ and $B(x_2, y_2, z_2)$ are two different points in the coordinate frame of the Kinect.
After the vector in the mathematical coordinate frame is obtained, based on the geometry vector approach [12], we can calculate the desired angle values using the vector angle formula

$\theta = \arccos\left(\frac{\vec{u} \cdot \vec{v}}{\|\vec{u}\|\,\|\vec{v}\|}\right)$    (2)

Calculation of Shoulder Angle: As shown in Fig. 3, the shoulder yaw (∠DEA) can be obtained by calculating the angle between the planes OEA and OED. The shoulder roll is the angle between the planes OEA and EAB.
The shoulder pitch angle (∠OEA) is the angle between the vectors $\overrightarrow{EO}$ and $\overrightarrow{EA}$, which can be calculated by passing $\overrightarrow{EO}$ and $\overrightarrow{EA}$ into (2).
Calculation of Elbow Angle: There are two angles related to the elbow joint. The elbow pitch is the angle ∠EAB. The elbow roll is the angle between the planes EAB and ABI.
Calculation of Wrist Angle: We now solve the angles of the wrist. The wrist yaw angle is the angle between the lower arm and the hand plane. The angle of wrist pitch can be viewed as the angle between the vectors $\overrightarrow{X_5}$ and $\overrightarrow{Y_7}$, which can likewise be calculated with the vector angle formula (2). Until now, we have obtained all seven joint angles: shoulder yaw, shoulder pitch, shoulder roll, elbow pitch, elbow roll, wrist yaw and wrist pitch, defined as q_d1, q_d2, q_d3, q_d4, q_d5, q_d6, q_d7.
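As a concrete illustration of the vector angle formula (2), the following sketch computes a joint angle from three skeleton points. The point names and coordinates are hypothetical examples, not values from the Kinect SDK:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at point B formed by points A and C, in degrees.

    Points are 3D coordinates in a Kinect-style skeleton stream;
    the vector angle formula theta = arccos(u.v / (|u||v|)) is applied
    to u = BA and v = BC.
    """
    a, b, c = np.asarray(a, float), np.asarray(b, float), np.asarray(c, float)
    u, v = a - b, c - b
    cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # clip guards against tiny floating-point overshoot outside [-1, 1]
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

# Hypothetical shoulder/elbow/wrist points with a right-angle elbow:
shoulder, elbow, wrist = [0, 0, 0], [0.3, 0, 0], [0.3, -0.3, 0]
print(joint_angle(shoulder, elbow, wrist))  # ≈ 90 degrees
```

The same helper can be reused for every joint in the list above by passing the appropriate triple of skeleton points.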

C. External Torque Estimation: Observer Approach
In this section, the external torque in joint space is estimated using a force observer based on the generalized momentum approach. Compared with traditional methods requiring the computation of joint accelerations or the inversion of the inertia matrix [43], this observer avoids differentiating the joint velocities, and thus reduces both the computational burden and the noise introduced by joint-angle acceleration estimates.
The system dynamics can be described by

$M(q)\ddot{q} + C(q,\dot{q})\dot{q} + G(q) = \tau + \tau_{ext}$    (4)

where $q \in \mathbb{R}^n$ and $\dot{q} \in \mathbb{R}^n$ denote the joint angle and velocity vectors; $M \in \mathbb{R}^{n\times n}$, $C \in \mathbb{R}^{n\times n}$ and $G \in \mathbb{R}^n$ are the systematic dynamics, representing the inertia matrix, the Coriolis matrix and the gravity load, respectively; $\tau_{ext} \in \mathbb{R}^n$ is the external torque on the joints, and $\tau$ is the motor torque on the robotic arms. In [43], the generalized momentum is expressed as

$p = M(q)\dot{q}$    (5)

Its time-derivative form is

$\dot{p} = \dot{M}(q)\dot{q} + M(q)\ddot{q}$    (6)

Substituting (6) into (4), we have

$\dot{p} = \dot{M}(q)\dot{q} + \tau + \tau_{ext} - C(q,\dot{q})\dot{q} - G(q)$    (7)

Then, the derivative of the inertia matrix $M$ can be written as [44]

$\dot{M}(q) = C(q,\dot{q}) + C^{T}(q,\dot{q})$    (8)

Substituting (8) into (7) results in

$\dot{p} = C^{T}(q,\dot{q})\dot{q} + \tau + \tau_{ext} - G(q)$    (9)

The advantage of this method is that equation (9), based on the generalized momentum, does not involve the joint angle accelerations $\ddot{q}$. Finally, the external torque can be modelled as

$\dot{\tau}_{ext} = A_{\tau}\tau_{ext} + w_{\tau}$

where $w_{\tau}$ is the uncertainty, $w_{\tau} \sim N(0, Q_{\tau})$. Usually, $A_{\tau}$ is defined as $A_{\tau} = 0_{n\times n}$; however, a negative diagonal matrix can reduce the offset of the disturbance estimate. Then, equation (9) can be reformulated as

$\dot{p} = u + \tau_{ext}, \quad u = C^{T}(q,\dot{q})\dot{q} + \tau - G(q)$

The above equations can be combined and reformulated in the state-space form

$\dot{x} = A_c x + B_c u + w, \quad y = C_c x + v$

with state $x = [p^{T}, \tau_{ext}^{T}]^{T}$ and

$A_c = \begin{bmatrix} 0 & I \\ 0 & A_{\tau} \end{bmatrix}, \quad B_c = \begin{bmatrix} I \\ 0 \end{bmatrix}, \quad C_c = \begin{bmatrix} I & 0 \end{bmatrix}$

where $v$ is the measurement noise, $v \sim N(0, R_c)$. It can easily be proved that this system is observable. Since $q$ and $\dot{q}$ can be measured, the generalized momentum defined in (5) can be regarded as a measurement. Then, a state observer is designed as

$\dot{\hat{x}} = A_c \hat{x} + B_c u + L(y - C_c \hat{x})$

Solving for $L$ amounts to designing a gain matrix for the system, and $L$ can be calculated as

$L = P C_c^{T} R_c^{-1}$

where the matrix $P$ is obtained from the algebraic Riccati equation (ARE) [45]

$A_c P + P A_c^{T} - P C_c^{T} R_c^{-1} C_c P + Q_c = 0$

where $Q_c$ is the covariance of the state uncertainty. A schematic overview of the force observer is shown in Fig. 5. As shown in Fig. 5, the output $y = C_c x(t)$ is compared with $C_c \hat{x}(t)$; their difference, passed through the gain matrix $L$, is used as a correcting term. If the gain matrix $L$ is properly designed, this difference drives the estimated state to the actual state. From the above analysis, the estimates of the external torque and the momentum can be read out of the observer state as

$\hat{\tau}_{ext} = \begin{bmatrix} 0 & I \end{bmatrix}\hat{x}, \quad \hat{p} = \begin{bmatrix} I & 0 \end{bmatrix}\hat{x}$
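The observer above can be sketched in discrete time as follows. This is a minimal illustration, not the paper's implementation: the model callables `M`, `C`, `G`, the hand-tuned gain `L` (which in practice should come from the ARE solution), the choice of `a_tau`, and the toy single-joint simulation are all assumptions introduced here:

```python
import numpy as np

def observer_step(x_hat, q, dq, tau, M, C, G, L, dt, a_tau=-0.1):
    """One Euler step of the momentum observer; x_hat = [p_hat; tau_ext_hat]."""
    n = len(q)
    p_meas = M(q) @ dq                        # measured generalized momentum p = M(q) dq
    u = C(q, dq).T @ dq + tau - G(q)          # known part of p_dot from eq. (9)
    A_c = np.block([[np.zeros((n, n)), np.eye(n)],
                    [np.zeros((n, n)), a_tau * np.eye(n)]])
    B_c = np.vstack([np.eye(n), np.zeros((n, n))])
    C_c = np.hstack([np.eye(n), np.zeros((n, n))])
    innovation = p_meas - C_c @ x_hat         # y - C_c x_hat, the correcting term
    x_dot = A_c @ x_hat + B_c @ u + L @ innovation
    return x_hat + dt * x_dot

# Toy single-joint check: unit inertia, no Coriolis/gravity, zero motor
# torque, and a constant external torque of 1 Nm acting on the joint.
M = lambda q: np.eye(1)
C = lambda q, dq: np.zeros((1, 1))
G = lambda q: np.zeros(1)
L = np.array([[2.0], [5.0]])                  # hand-tuned stabilizing gain
x_hat, p, dt = np.zeros(2), 0.0, 0.01
for _ in range(2000):                          # 20 s of simulation
    x_hat = observer_step(x_hat, np.zeros(1), np.array([p]),
                          np.zeros(1), M, C, G, L, dt)
    p += dt * 1.0                              # true p_dot = tau + tau_ext = 1
print(f"estimated external torque: {x_hat[1]:.3f}")
```

The estimate converges near 1 Nm; a small steady offset remains because the slightly negative `a_tau` models a decaying rather than constant disturbance, which is the trade-off the text mentions.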

D. Admittance Control
In this section, the admittance control method using the estimated external torque is presented. We assume that the manipulator modifies its desired trajectory when an external torque is imposed on the robot. In this case, we use admittance control to receive the external torque. Based on the estimate of the external torque $\hat{\tau}_{ext}$ and the initial desired trajectory $q_d$ obtained from the Kinect, a modified trajectory $q_r$ is generated. Therefore, the controller has the causality of mapping $\hat{\tau}_{ext}$ to $q_r$. Generally, an admittance model can be described as $q_r = f(q_d, \hat{\tau}_{ext})$, where $q_d \in \mathbb{R}^7$ is the vector of joint angles obtained from the Kinect, $q_r \in \mathbb{R}^7$ is the vector of joint angles affected by the external torque, and $f(\cdot)$ is the mapping function. A simple admittance model is $K_d(q_r - q_d) = \hat{\tau}_{ext}$, where $K_d$ is a positive constant.
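The simple admittance model $K_d(q_r - q_d) = \hat{\tau}_{ext}$ can be sketched as below; the stiffness value and the torque inputs are illustrative choices, not parameters from the paper:

```python
import numpy as np

def admittance_modify(q_d, tau_ext_hat, K_d=20.0):
    """Return the modified reference q_r from K_d (q_r - q_d) = tau_ext_hat.

    q_d:          Kinect-generated joint reference (rad)
    tau_ext_hat:  observer's external-torque estimate (Nm)
    K_d:          illustrative joint stiffness (Nm/rad)
    """
    return q_d + tau_ext_hat / K_d

q_d = np.array([0.5, -0.2])        # reference for the two active joints
tau = np.array([2.0, 0.0])         # estimated torque: only joint 1 is pushed
q_r = admittance_modify(q_d, tau)
print(q_r)                          # joint 1 yields by 0.1 rad, joint 2 unchanged
```

With zero estimated torque the reference passes through unchanged, matching the behavior described in Section II-A: the robot simply follows the operator until contact occurs.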

E. RBFNN
The radial basis function neural network (RBFNN) is an artificial neural network that has been widely used as a function approximator in control engineering. It has been proved that any smooth function can be approximated by an RBFNN within a compact set $\Omega$ [46]. The network can be expressed as

$\Phi(Z_{NN}) = W^{T} S(Z_{NN})$

where $Z_{NN} \in \Omega \subset \mathbb{R}^m$ is the input vector, $W$ is the weight matrix, and $l$ represents the number of neurons. $S(Z_{NN}) = [s_1(Z_{NN}), s_2(Z_{NN}), \ldots, s_l(Z_{NN})]^{T}$ is the basis function vector of the RBFNN, and $s_i(Z_{NN})$ is commonly chosen as the Gaussian function

$s_i(Z_{NN}) = \exp\left(-\frac{\|Z_{NN} - u_i\|^2}{\sigma_i^2}\right)$

where $u_i$ is the center of the node and $\sigma_i$ denotes the variance.
If the number of neurons $l$ is sufficiently large, there exist an ideal weight matrix $W^{*}$ and an approximation error $\varepsilon^{*}(Z_{NN})$ such that the target function can be written as $W^{*T} S(Z_{NN}) + \varepsilon^{*}(Z_{NN})$. If the centers of the nodes are chosen appropriately, the approximation error $\varepsilon^{*}(Z_{NN})$ is bounded and can be minimized. The ideal weight matrix $W^{*}$ is unknown; in the practical system, it is replaced by the estimate $\hat{W}$, so the approximation becomes $\hat{W}^{T} S(Z_{NN})$. The weight estimation error is $\tilde{W} = W^{*} - \hat{W}$.
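A minimal sketch of the Gaussian-basis approximator follows. The centers, width and target function are illustrative choices (not the controller's actual settings), and the least-squares fit merely demonstrates the approximation property on a scalar function:

```python
import numpy as np

def rbf_basis(z, centers, sigma=0.6):
    """S(Z): Gaussian activation exp(-||Z - u_i||^2 / sigma^2) per neuron."""
    d2 = np.sum((centers - z) ** 2, axis=1)
    return np.exp(-d2 / sigma ** 2)

def rbfnn(z, W, centers, sigma=0.6):
    """Network output W^T S(Z); W has shape (l, output_dim)."""
    return W.T @ rbf_basis(z, centers, sigma)

# Fit a smooth scalar target to illustrate W* S(Z) approximation
centers = np.linspace(-2, 2, 15).reshape(-1, 1)    # node centers u_i
Z = np.linspace(-2, 2, 100).reshape(-1, 1)
S = np.array([rbf_basis(z, centers) for z in Z])   # (100, 15) design matrix
y = np.sin(2 * Z[:, 0])
W = np.linalg.lstsq(S, y, rcond=None)[0]           # least-squares weights
err = np.max(np.abs(S @ W - y))
print(f"max approximation error: {err:.2e}")
```

In the controller, the weights are instead driven by the adaptive updating law rather than a batch fit, but the basis evaluation is the same.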
III. CONTROLLER DESIGN
The controller is designed to enable the robot to follow the desired trajectory in joint space generated by the Kinect, as shown in Fig. 2. The neural network is used to estimate the uncertainties of the model and to ensure the steady state of the system.

A. Error Transformation
We define the tracking errors of the manipulator as $e_q(t) = q(t) - q_d(t)$ and $e_v(t) = \dot{q}(t) - v_d(t)$, where $v_d$ will be defined later. The objective is to make the actual joint trajectory $q$ track the desired joint trajectory $q_d$ effectively. First, we define a smooth and bounded performance function

$\rho(t) = (\rho_0 - \rho_{\infty})e^{-pt} + \rho_{\infty}$

where the parameters $\rho_0$, $\rho_{\infty}$ and $p$ are positive constants. To guarantee that the tracking error meets the transient performance, we introduce transformation functions $R_i(\cdot)$ parameterized by a positive constant $\sigma$. According to the function $R_i(\cdot)$, if $\eta_i(t)$ is bounded, the bounds of the tracking error $e_q(t)$ can be defined: $-\sigma\rho(t) < e_q(t) < \rho(t)$ with $e_q(t) > 0$, and $-\rho(t) < e_q(t) < \sigma\rho(t)$ with $e_q(t) < 0$. Therefore, the overshoot $\Delta$ in the transient phase is bounded, the amplitude of the tracking errors in steady state remains within $\max[\rho_{\infty}, \sigma\rho_{\infty}]$, and the maximum overshoot and undershoot of the transient response are bounded in $[\sigma\rho_{0i}, -\sigma\rho_{0i}]$. Taking the settling time as the shortest time after which the steady-state error stays within the $100\% \pm 5\%$ band, the settling time is less than $(\max(1,\sigma)/p)\ln\left(\frac{\rho_0 - \rho_{\infty}}{0.05\rho_{\infty}}\right)$. Therefore, we can shape the transient and steady state of the system by setting proper parameters. From the transformation, we define $\eta_i(t) = R_i^{-1}(e_{q,i}(t)/\rho(t))$. Then, the desired joint velocity $v_{d,i}(t)$ is designed with a positive constant gain $k_1$, and a Lyapunov function is defined in terms of $\eta(t)$ to analyze its boundedness.

B. Neural Network and Joint Velocity Control
The goal of joint velocity control is to make the velocity error $e_v(t)$ as small as possible. Substituting the differentiation of $\eta_i(t)$ into the dynamics (4), we can obtain the velocity-error dynamics, where $G'(q) = G(q) + \dot{P}(\eta(t))\eta(t)/\rho(t)$. The control torque is designed as

$\tau = -k_2 e_v - \hat{M}(q)\dot{v}_d - \hat{C}(q,\dot{q})v_d - \hat{G}'(q) + \hat{\tau}_{ext}$    (35)

Applying the NN approximation technique, the unknown dynamics are represented with the ideal weight matrices $W_M^{*T}$, $W_C^{*T}$ and $W_G^{*T}$, and the estimates $\hat{M}(q)$, $\hat{C}(q,\dot{q})$ and $\hat{G}(q)$ are obtained from the RBFNN by replacing the ideal weights with their estimates. Then, the closed-loop dynamics can be rewritten in terms of the estimation matrices $\hat{M}(q)$, $\hat{C}(q,\dot{q})$, $\hat{G}(q)$; the right-hand side of the equation can be expressed compactly as $\tilde{W}_{(\cdot)} S_{(\cdot)}$, where $e_f = \tau_{ext} - \hat{\tau}_{ext}$ is the torque estimation error.
The updating law of the weight matrix $\hat{W}$ is designed with positive constants $\Theta$ and $\gamma$ specified by the designer.
We define the state variable $\xi$ comprising $\eta(t)$, $e_v$ and $\tilde{W}_{(\cdot)}$. The time derivative of the Lyapunov function satisfies $\dot{V}(\xi) < 0$ for all $\|\xi\| > \epsilon$, where $\epsilon$ is a positive constant. Let us choose $0 < V(\xi) < \beta < c$, where $\beta$ and $c$ are positive constants. Define $\Omega_b = \{V(\xi) \le \beta\}$ and $\Omega_c = \{V(\xi) \le c\}$, and let $\Xi = \Omega_c \setminus \Omega_b$. We see that the derivative of $V(\xi)$ with respect to time is negative over $\Xi$. In other words, a state $\xi(t)$ that starts outside the set $\Omega_b$ will enter $\Omega_b$ within a finite time and cannot escape it, because $\dot{V}$ is negative on and outside the boundary $V(\xi) = \beta$, as shown in Fig. 6.

Theorem 1: The errors $\eta(t)$, $e_v$ and $\tilde{W}_{(\cdot)}$ are uniformly ultimately bounded (UUB) and fall into the set $\Omega_b$. As shown in Fig. 6, points on each axis of the coordinate frame mark the corresponding bounds.

From the above analysis, we can conclude that $\|\eta(t)\|$, $\|\tilde{W}\|_F$ and $e_v$ are bounded. According to (25) and (26a), the tracking error $e_q$ is bounded, which guarantees the transient performance. Then $q = e_q + q_d$ is bounded.

IV. EXPERIMENT STUDIES
In this section, experimental studies are given to demonstrate the effectiveness and correctness of the proposed control scheme. The experiments are based on the Baxter Research Robot by Rethink Robotics, as shown in Fig. 7. The Baxter robot is a dual-arm robot with 7 degrees of freedom per arm (s_0, s_1, e_0, e_1, w_0, w_1, w_2). Each joint is driven by a Series Elastic Actuator (SEA), which enables the robot to exhibit human-like behaviors. The robot is linked to a computer and runs on the Robot Operating System (ROS).
In the experiment, the robot interacts with the environment and the external torque is applied at the end-effector. The right arm of the robot is initialized in a horizontal posture. Considering simplicity and generality, we use two joints (e_1, w_0), and the positions of the other joints are locked during the experiment. The desired trajectory generated by the Kinect sensor is the input signal of the control system and will be modified by the external torque.

A. Test of Geometry Vector Performance
Two kinds of experiments are first implemented to test the performance of the kinematics geometry vector based approach. During the experiments, the operator stands alone about 3 meters in front of the Kinect. The first experiment requires the operator to keep a static position in front of the Kinect sensor with his left arm in the horizontal state. The experimental result is shown in Fig. 8; it is clear that the variations of the seven joint angles are smooth and accurate. Although there are some small fluctuations, they are small enough to be ignored. The fluctuations arise because the joint points detected by the Kinect are not absolutely stable and the operator cannot keep the arm completely stationary throughout the experiment.
In the second experiment, the operator performs a dynamic action, rotating his elbow back and forth from the origin position to the final position at a low speed. Using the same method, the data are sent to MATLAB for processing.
As shown in Fig. 9, the variation of the elbow pitch angle is clearly periodic and regular, in line with the movement of the arm. Due to the jitter during elbow movement, there are some tiny fluctuations in the curve, but the overall trend of the angle is correct and satisfactory. According to the above two kinds of experimental results, the overall performance is consistent with our expectations, which verifies the correctness of the proposed method.

B. Test of Neural-Learning Tracking Performance
This set of experiments mainly demonstrates the effectiveness of the neural adaptive controller. The desired trajectory is from the Kinect sensor and corresponds to the elbow pitch joint and the wrist roll joint respectively, where t ∈ [0, t_s] and t_s = 20 s, as shown in Fig. 10. The initial joint angles are set to q_1 = 0 rad, q_2 = 0.1 rad and the initial joint velocities to q̇_1 = 0 rad/s, q̇_2 = 0 rad/s. To guarantee the transient performance, the parameters of the performance functions are set: ρ_0 = 0.

C. Test of Admittance Control Performance
The last experiment tests the performance of the admittance control. In the experiment, the external torque is set by the designer and applied at the manipulator from 6 s to 16 s. The admittance control is designed to track the modified trajectory affected by the external torque, which is estimated by the observer based on the generalized momentum approach. The experimental results are presented in Figs. 14-17. As depicted in Fig. 14, the desired trajectory q_d of joint w_0 is modified by the external torque to give the robot a compliant behavior. The desired trajectory of joint e_1 is not modified because the external torque is applied in the vertical direction while the trajectory of joint e_1 is in the horizontal direction. The tracking error under admittance control is shown in Fig. 16 and the estimate of the external torque is presented in Fig. 17. The experimental results demonstrate the effectiveness of the proposed admittance control method.

V. CONCLUSION
In this paper, we proposed a sensorless control scheme for an uncertain robot manipulator using neural networks. We used a kinematics geometry vector based method to calculate each joint angle of a human arm with the Kinect sensor. An observer is used to estimate the external torque, which in turn is the input to the admittance control. The error transformation method is used to ensure both steady-state and transient performance; the settling time, overshoot, and final error can be tuned by changing the parameters of the error transformation functions. The RBFNN is employed to approximate the uncertainties of the manipulator dynamics. Experimental results are provided to demonstrate the effectiveness of the developed methods. In the future, more effort will be devoted to validating the proposed methods.

Fig. 1. The diagram of impedance and admittance control.

Fig. 2. The diagram of the control system.

Fig. 5. The overview of the force observer based on the generalized momentum approach.

Fig. 8. The operator is in a static position and keeps his right arm in the horizontal state. The right figure presents the variation of the joint angles in successive frames.

Fig. 9. The operator rotates his arm circularly at a low speed from the origin position to the final position.
Panels of Fig. 10: the tracking performance of joints 1 and 2 with the proposed controller, under the controller in [48], and under the controller in [49].

Fig. 10. The results of the tracking performance of the joints with three different methods.

Fig. 11. The results of the tracking errors of the joints with three different methods.

Fig. 12. The NN weight norm of each joint.
Fig. 17. The estimation of external torque.