Active Collision Avoidance for Human-Manipulator Safety

This paper proposes a novel method for active collision avoidance to protect humans who enter a robot's workspace in a human-robot collaborative environment. The proposed method uses a somatosensory sensor to monitor the robot's workspace and detect anyone attempting to enter it. When someone enters the workspace, a Kinect detects and calculates the positions of his or her skeleton points in real-time. However, due to the measurement errors and noise of the device, the tracking error increases over time. Therefore, the proposed method applies an improved particle filter (IPF) to accurately estimate the positions of the skeleton points. To detect human-robot collisions in real-time, the proposed method uses cylinders to establish bounding models for human bones and robots, so that human-robot collision detection reduces to collision detection between cylinders, greatly improving its efficiency. Moreover, taking both human safety and productivity into account, robot velocity control is carried out based on the distance between the robot and the human. The proposed method then uses a rule-based logic system to analyze human motion so that the robot can take appropriate measures to avoid humans. Finally, the dynamic roadmap (DRM) approach plans new paths in real-time to allow the robot to bypass humans. By actively avoiding collisions, the proposed method ensures that the robot will never touch the human body. Its significant advantage is that it can detect humans in real-time, analyze their behavior, and protect them without any modification to the robot. The proposed method has been tested in practical applications, and the results show that it can successfully guarantee the safety of people entering the robot's workspace.


I. INTRODUCTION
With the development of Industry 4.0, robots are becoming increasingly intelligent and collaborative [1]-[3]. Human-robot collaboration will bring wider use of robots and promote robots to become indispensable partners in human life [4]. However, traditional methods of industrial production cannot meet enterprises' production safety needs. Safety has become one of the most important factors in the future development of human-robot interaction. Although human-robot collaboration allows robots to perform broader and more complex tasks, it raises safety concerns. Moreover, increasing robot safety often means a compromise in performance, which requires designers to find a balance between safety and performance. At present, collaborative robots can perceive the environment and change their behavior accordingly. For example, when a robotic arm hits a human arm, the robot can perceive the presence of the human arm through a force sensor; it then stops or moves away to protect human safety. This feature limits the performance of collaborative robots in terms of speed, load, and so on. Therefore, an effective method is proposed to predict possible collisions between robots and humans and to provide timely feedback to ensure human safety.
When a robot performs work in an unstructured, dynamic, and populated environment, finding a way to work safely and efficiently becomes more challenging [5]. Safe and efficient human-robot interaction is indispensable to future robotics applications, where humans and robots must collaborate to perform tasks jointly [6]. To meet the strictest safety requirements, robots must have an excellent ability to change motion paths in complex environments. In any human-robot collaborative system, or any environment where humans and robots coexist, ensuring human safety at all times is a considerable challenge. In order to help humans in daily life, several studies have been performed to develop robot systems that include interaction and cooperation with humans [7]. These studies indicated that robots are very useful in a variety of tasks performed in collaboration with humans. For this reason, finding a way to protect human operators in human-robot collaborative systems is highly relevant. This includes not only detecting possible collisions in real-time but also avoiding collisions with humans during runtime.
Active collision avoidance in real time consists of three parts [8]: 1) Environment perception; 2) collision avoidance; 3) robot control. Collision avoidance is one of the most important fields in robotics. Many real-time capable planning concepts are based on the potential field methods that were introduced in two previous works. In these methods, virtual attractive and repulsive fields are used to represent targets and obstacles. In this case, the robot is attracted to targets and moves away from obstacles. Real-time planning algorithms are significant to robotic systems [9].
Despite many achievements in robotic safety, most real-time collision avoidance methods are either too expensive or offer very limited security in practical applications. 1) Some methods, for example [17], are not practical in the real world because they require a dedicated suit with built-in sensors, which can hinder the user's motion and increase cost in practical applications. 2) References [21]-[24] treat the human body as a moving obstacle, so the robot may mistakenly believe that the human body is the object of operation and put people at risk. In addition, due to the lack of independent intelligence, the efficiency of collision avoidance is relatively low. 3) Current collaborative robots are relatively expensive, so replacing a large number of traditional robots with collaborative robots is not realistic. Moreover, collaborative robots tend to adopt post-collision detection, which inevitably puts people at risk. To address these issues, this paper proposes an active collision avoidance method for human-robot collaboration.
In the proposed method, a Kinect is used to detect humans in real-time; instead of the particle filter (PF), an improved particle filter (IPF) is applied to estimate the positions of the skeleton joints accurately, thus improving the accuracy of human tracking. Moreover, robot velocity control is implemented to ensure the safety of humans while improving production efficiency. A rule-based logic system is developed to enable the collision avoidance system to obtain the most appropriate response strategy according to different human behaviors. When humans are immobile, the dynamic roadmap (DRM) is used to plan new paths for the robot directly. When humans move slowly, a bounding box is calculated, and the robot bypasses this box by using DRM to plan a new path. When humans move too fast, the robot immediately stops. Therefore, the proposed collision avoidance method can take different measures according to different human behaviors to protect humans more effectively and safely. The main contributions are listed as follows: 1) Cylinders were adopted to build bounding models for both the human bones and the robots. Hence, collision detection can be achieved by detecting the relative position between the cylinders instead of the point cloud intersection in the depth image, thereby improving the timeliness and effectiveness of the collision avoidance method.
2) The proposed active collision avoidance method can analyze and predict human behavior and choose the best path for the robot so that the flexibility and intelligence of the robot movement can be improved. 3) In the proposed active collision avoidance method, an intelligent system with rule-based logic is proposed for analyzing human behavior so that the system can correctly respond to different human movement speeds, and the DRM was adopted to plan a new path, thereby improving the effectiveness and the independent intelligence of the proposed collision avoidance method.
The remainder of the paper is organized as follows. Section II reviews related work. Section III provides an overview of the proposed method. Section IV presents details about the modeling of humans and robots, which is the basis of the paper. Section V describes the process of estimating skeleton positions using the improved PF. Section VI presents fast path planning using DRM. Section VII details robot velocity control, and Section VIII describes active collision avoidance using a set of rules. The experiments and results are presented in Section IX. Discussion is given in Section X and the conclusion in Section XI.

II. RELATED WORK
Planning algorithms allow robots to avoid obstacles in real environments. Jiang et al. [10] described a planning algorithm that can adaptively coordinate humans and robots in a hybrid assembly system. Zhang et al. [11] proposed an autonomous path planning method based on an improved rapidly-exploring random tree algorithm. Han et al. [12] detected obstacles in real-time and re-planned the path based on distance calculation and discrete detection.
Some studies have assessed obstacle avoidance in dynamic environments [13]. Most of them have treated operators as moving obstacles, which means they only needed to consider collision avoidance for the robot. References [14], [15] computed the distance between the human and the robot by analyzing images of the environment. Most of the collision detection methods cited above require that information about the environment be available. Such collision avoidance algorithms are primarily based on the distance between the industrial robot and the objects detected in the real environment.
In human-robot collaboration systems, two approaches to collision avoidance have been used for safety: vision-based methods, which analyze motion, color, and texture [16], and inertial sensor-based methods, which use special suits for motion capture [17]. Currently, vision-based methods are a better choice for avoiding collisions in robotic control systems because the operator does not have to wear additional equipment. Meanwhile, the surrounding environmental information, not just the motions of the operator, can be obtained. In addition, with the development of various 3D vision sensing systems, new depth cameras such as the Microsoft Kinect sensor [18] have facilitated the development of powerful and easy-to-use sensor systems.
In recent years, studies of vision-based methods have focused on increasing the efficiency of collision detection. One method involved a multi-camera system to detect obstacles [19]. Tan and Arai [20] used a triple stereo-vision system to track the movement of a sitting operator who was wearing a color marker. References [21]-[23] implemented collision detection and avoidance based on depth images. Ahmad and Plapper [24] proposed an automatic path planning algorithm for a robot manipulator, which locates unknown obstacles based on a time-of-flight (TOF) sensor.

III. OVERVIEW
Human safety can be questionable in environments containing working robots. In this paper, an active collision-avoidance method is proposed to protect humans who enter the robot's workspace. Fig. 1 shows the flowchart of the proposed method. First, a Kinect, which has two infrared cameras for depth detection and one visual-spectrum camera for visual recognition, is used to detect the skeletons of humans who enter the workspace. According to the position data of the skeleton joints of the human body from the Kinect, a human entering the robot's workspace can be detected even if the human is static. In this way, the risk that the robot mistakes a human for the object of operation can be reduced, thereby ensuring the safety of the human. After building bounding cylinders for humans and robots, collision detection is achieved by detecting the relative position between the cylinders instead of the point cloud intersections of the depth image, which improves the accuracy and effectiveness of the proposed method.
To improve the precision of skeleton point detection, an IPF was used instead of the traditional PF for estimation. To ensure the safety of humans while improving efficiency, robot velocity control is implemented based on the distance between the robot and the human. Once the robot-to-human distance is less than the safe distance, the speed of the robot begins to decrease linearly with the distance. Furthermore, to achieve real-time obstacle avoidance, the most commonly used strategy is to repeatedly plan the path, but this greatly increases the avoidance time and the number of avoidance maneuvers, reducing the intelligence and flexibility of the system. Therefore, a rule-based logic system was developed to analyze human behavior so that the system can take different measures based on the current motion state of the human. When the human is static, the system plans a new path using DRM. If the human approaches slowly, the system builds a bounding sphere for the human and then plans a new path using DRM. When the human approaches quickly, the robot immediately stops. Because the robot adopts different reaction strategies according to the current motion state of the human, the overall intelligence of the system is improved. In this way, both the avoidance time caused by repetitive path planning and the required cost can be reduced.

IV. MODELING OF HUMANS AND ROBOTS

A. BUILDING BOUNDING CYLINDERS FOR HUMANS USING KINECT
The human movement tracking approach is driven by two key design goals: computational efficiency and robustness.
Nowadays, various depth cameras are widely used in human motion recognition, as the positions of the skeletal joints can be estimated from the depth images captured by these cameras. As illustrated in [25], a single input depth image is segmented into dense body parts with different probabilities and labels, which are used to estimate the 3D skeletal joints. In the proposed method, the Kinect is used as the depth camera for extracting human motion data, considering that the Kinect has a larger infrared sensor size and a wider field of view and produces depth images of higher quality relative to other depth cameras [26]. The Kinect also provides an application program interface (API) for human tracking and positioning. When someone enters the robot's workspace, the API can detect the human according to the characteristics of the three-dimensional data and calculate the locations of the human's skeleton joints.
As mentioned above, the joints of the skeleton can be located using the Kinect. Fig. 2 shows the 15 skeleton joints in the RGB image. The 15 skeleton joint points are numbered from top to bottom and left to right. Cylinders are used to model the human body based on the volumes of the different parts of the body (Fig. 2). Each cylinder is built from a pair of adjacent joint points. The distance between the two adjacent points determines the radius and length of the cylinder, thus accounting for the sizes of different persons. The cylinders differ for each part of the body; for example, the cylinder for the torso is bigger than the cylinder for the arm. The scaling relationship between the size of a cylinder and the distance between the corresponding two adjacent points can be calculated from statistical data.
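As a concrete illustration, the cylinder construction for one body segment can be sketched as follows. The per-segment radius scale factors here are illustrative placeholders, not the statistical values used in the paper.

```python
import numpy as np

# Hypothetical radius-to-length scale factors per body segment (assumed
# values for illustration; the paper derives them from statistical data).
RADIUS_SCALE = {"torso": 0.35, "upper_arm": 0.18, "forearm": 0.15}

def bounding_cylinder(p0, p1, segment="forearm"):
    """Build a bounding cylinder from two adjacent skeleton joints.

    Returns (center, axis, length, radius): the cylinder length is the
    joint-to-joint distance and the radius scales with that length.
    """
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    length = np.linalg.norm(p1 - p0)         # cylinder length = joint distance
    axis = (p1 - p0) / length                # unit axis along the bone
    radius = RADIUS_SCALE[segment] * length  # radius scales with bone length
    center = (p0 + p1) / 2.0
    return center, axis, length, radius
```

In this way, a taller person automatically gets proportionally larger cylinders without any per-user calibration step.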

B. BUILDING BOUNDING CYLINDERS FOR ROBOTS WITH DH PARAMETERS
The standard Denavit-Hartenberg (D-H) convention [27] is the most widely used means of describing robot kinematics. The positions and orientations of the robot and its tools are defined according to the controller conventions. Through consecutive homogeneous transformations from the base coordinate to the robot tool coordinate, the kinematic equation can be defined as follows:

$$^{0}T_{N} = \prod_{i=1}^{N} {}^{i-1}A_{i}, \qquad (1)$$

where ${}^{i-1}A_{i}$ is the transformation matrix from the $i-1$ coordinate to the $i$ coordinate and $N$ is the number of joints (or joint coordinates). The position of each joint $j$ can be calculated using Eq. (1).
Because the size of each link can be obtained from the factory parameters, the radius of the cylinder of each link can be determined. The length of the bounding cylinder of link j is the distance between the positions of joints j − 1 and j. Then, the cylinder of each link can be built after the radius and the length are determined (Fig. 3).
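A minimal sketch of this computation: chaining standard D-H transforms gives each joint's position, and the link cylinder's length is the distance between consecutive joints. The D-H parameters used below are placeholders for a planar two-link arm, not the actual robot's table.

```python
import numpy as np

def dh_matrix(theta, d, a, alpha):
    """Standard D-H homogeneous transform for one joint."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def joint_positions(dh_params):
    """Accumulate transforms base->tool; return each joint's 3D position."""
    T = np.eye(4)
    positions = [T[:3, 3].copy()]
    for theta, d, a, alpha in dh_params:
        T = T @ dh_matrix(theta, d, a, alpha)
        positions.append(T[:3, 3].copy())
    return positions

def link_lengths(positions):
    """Bounding-cylinder length of each link = distance between joints."""
    return [np.linalg.norm(q - p) for p, q in zip(positions, positions[1:])]
```

For example, a planar two-link arm with unit link lengths and zero joint angles places its joints at x = 1 and x = 2 along the base frame.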

C. COLLISION DETECTION FOR CYLINDERS
Because both the robot and the human are bounded by cylinders, cylinder-cylinder collision detection can efficiently approximate the collision case between a human and a robot. Given two colliding cylinders A and B, the relative positions between them can be classified into three categories according to whether or not the cylinders intersect at their bases, as shown in Fig. 4. The case in which one base of A intersects with one base of B is called base-intersection (Fig. 4(a)). The case in which the two cylinders intersect only at their sides is called lateral-quadrature (Fig. 4(b)). And the case in which one base of one cylinder intersects with the side of the other cylinder is called lateral-heterotrophic (Fig. 4(c)). For details about collision detection for cylinder-shaped bodies, please refer to [28].
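For illustration, a common simplification treats each cylinder as a capsule and reports a collision whenever the closest distance between the two axis segments falls below the sum of the radii. This is a conservative stand-in for the exact base/lateral case analysis of [28], sketched under that assumption:

```python
import numpy as np

def segment_distance(p1, q1, p2, q2, eps=1e-9):
    """Closest distance between segments p1-q1 and p2-q2 (Ericson's method)."""
    d1, d2, r = q1 - p1, q2 - p2, p1 - p2
    a, e, f = d1 @ d1, d2 @ d2, d2 @ r
    if a <= eps and e <= eps:                 # both segments degenerate to points
        return np.linalg.norm(r)
    if a <= eps:
        s, t = 0.0, np.clip(f / e, 0.0, 1.0)
    else:
        c = d1 @ r
        if e <= eps:
            t, s = 0.0, np.clip(-c / a, 0.0, 1.0)
        else:
            b = d1 @ d2
            denom = a * e - b * b             # zero when segments are parallel
            s = np.clip((b * f - c * e) / denom, 0.0, 1.0) if denom > eps else 0.0
            t = (b * s + f) / e
            if t < 0.0:
                t, s = 0.0, np.clip(-c / a, 0.0, 1.0)
            elif t > 1.0:
                t, s = 1.0, np.clip((b - c) / a, 0.0, 1.0)
    return np.linalg.norm((p1 + s * d1) - (p2 + t * d2))

def cylinders_collide(axis1, r1, axis2, r2):
    """Capsule-style check: collide if axis distance < sum of radii."""
    p1, q1 = (np.asarray(v, float) for v in axis1)
    p2, q2 = (np.asarray(v, float) for v in axis2)
    return segment_distance(p1, q1, p2, q2) < (r1 + r2)
```

Since the capsule encloses the cylinder, this test never misses a true collision; it can only over-report slightly near the flat end caps.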

V. HUMAN TRACKING USING IMPROVED PARTICLE FILTER

A. IMPROVED PF
Although the human skeleton position can be measured by the Kinect, the inherent noise of the sensor can accumulate over time. To reduce the measurement errors and noise, an improved PF was used to improve the reliability of the proposed methodology. The PF algorithm, which is a state estimator [29], approximates the posterior with a finite number of state samples that carry corresponding normalized weights. At time $t_k$, the approximation of the posterior density can be defined as follows [29]:

$$p\left(x_k \mid z_{1:k}\right) \approx \sum_{i=1}^{N} w_{i,k}\, \delta\left(x_k - x_{i,k}\right), \qquad (2)$$

where $x_{i,k}$ is the position state of the $i$th particle at time $t_k$, $z_{1:k}$ represents the observations, $N$ is the number of samples, $w_{i,k}$ is the normalized weight of the $i$th particle at time $t_k$, and $\delta(\cdot)$ is the Dirac delta function. As in previous studies [30], [31], an ensemble Kalman filter (EnKF) is used to approximate the posterior probability density function (PDF) of the state variables. The forecast particles can be calculated as follows:

$$x^{f}_{i,k} = f\left(x_{i,k-1}\right) + w_k, \qquad w_k \sim N\left(0, Q_{k-1}\right), \qquad (3)$$

where $w_k$ is the model error and $Q_{k-1}$ represents the covariance of the model error. The analyzed particles can be calculated as follows:

$$x^{a}_{i,k} = x^{f}_{i,k} + K_k\left(z_k + v_k - h\left(x^{f}_{i,k}\right)\right), \qquad v_k \sim N\left(0, R_k\right), \qquad (4)$$

where $v_k$ is the observation error, $R_k$ represents the covariance of the observation error, and $h(\cdot)$ is the measurement matrix. The Kalman gain $K_k$ can be obtained as follows:

$$K_k = P^{f}_{k} h^{T}\left(h P^{f}_{k} h^{T} + R_k\right)^{-1}, \qquad (5)$$

where $P^{f}_{k}$ is the forecast error covariance estimated from the ensemble. According to the literature [32], the weight-updating formula is calculated as follows:

$$w_{i,k} \propto w_{i,k-1}\, \frac{p\left(z_k \mid x_{i,k}\right)\, p\left(x_{i,k} \mid x_{i,k-1}\right)}{r\left(x_{i,k} \mid x_{i,k-1}, z_k\right)}, \qquad (6)$$

where $r\left(x_{i,k} \mid x_{i,k-1}, z_k\right)$ is the importance density.
When the importance density is chosen as the transition prior $p\left(x_{i,k} \mid x_{i,k-1}\right)$, Eq. (6) can be expressed as:

$$w_{i,k} \propto w_{i,k-1}\, p\left(z_k \mid x_{i,k}\right). \qquad (7)$$

To avoid particle degradation, the particles must be resampled, and then the same weight ($w_{i,k} = 1/N$) is assigned to all the particles. A Markov chain Monte Carlo (MCMC) method [32] is used to improve the diversity of particles after resampling. The transition process is implemented as follows:

$$x^{*}_{k} \sim k\left(x^{*}_{k} \mid x_{k}\right), \qquad (8)$$

where $k\left(x^{*}_{k} \mid x_{k}\right)$ is the Markov transition kernel. In the Metropolis-Hastings algorithm, a resampled particle moves to the proposed particle only if $u \le a$, where $u$ and the acceptance probability $a$ are defined as

$$u \sim U(0, 1), \qquad a = \min\left(1,\; \frac{p\left(x^{*}_{k} \mid z_{1:k}\right)}{p\left(x_{k} \mid z_{1:k}\right)}\right). \qquad (9)$$

After the MCMC step, the new particles are more diverse because their distribution is closer to the posterior PDF.
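The resample-then-move scheme above can be illustrated with a toy one-dimensional bootstrap filter. This sketch omits the EnKF proposal and uses simple Gaussian models throughout, so it shows only the structure (propagate, weight, resample, MCMC move), not the paper's full IPF.

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, weights, z, q=0.01, r=0.05):
    """One 1D bootstrap-PF step with resampling and an MCMC move.

    q: process-noise variance, r: observation-noise variance (assumed values).
    """
    # Propagate with a random-walk motion model, then weight by the likelihood.
    particles = particles + rng.normal(0.0, np.sqrt(q), particles.size)
    weights = weights * np.exp(-0.5 * (z - particles) ** 2 / r)
    weights /= weights.sum()
    # Resample; all particles get equal weight 1/N afterwards.
    idx = rng.choice(particles.size, particles.size, p=weights)
    particles = particles[idx]
    weights = np.full(particles.size, 1.0 / particles.size)
    # Metropolis-Hastings move step to restore particle diversity: accept the
    # jittered proposal with probability min(1, likelihood ratio).
    proposal = particles + rng.normal(0.0, 0.02, particles.size)
    lik = lambda x: np.exp(-0.5 * (z - x) ** 2 / r)
    accept = rng.uniform(size=particles.size) <= np.minimum(
        1.0, lik(proposal) / lik(particles))
    particles = np.where(accept, proposal, particles)
    return particles, weights
```

Feeding a constant observation repeatedly pulls the particle cloud toward it while the move step keeps the cloud from collapsing onto a few duplicated samples.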

B. ESTIMATION OF SKELETON POSITION USING IMPROVED PF
In the proposed method, the human skeleton position is measured by the Kinect, and an improved PF is used to estimate the position. Each particle has three position states ($x^{i}_{IPF,k} = \left[\, p_x \;\; p_y \;\; p_z \,\right]^{T}$). The deterministic input is set to zero since there is no deterministic input to the system. The accumulated position difference of the $i$th particle at the $s$th position iteration is defined as follows:

$$JD^{i}_{s} = \sum_{k=(s-1)M_s+1}^{sM_s} \left| p^{i}_{p\text{-}axis,k} - p^{i}_{k\text{-}axis,k} \right|, \qquad (10)$$

where $M_s = T_s / t$, $p^{i}_{p\text{-}axis,k}$ is the position state of the $i$th particle, and $p^{i}_{k\text{-}axis,k}$ is the position state of the $i$th particle estimated by the EnKF at time $k$. Through the $JD^{i}_{s}$ values, the weight of each particle is recalculated in every $T_s$ period. Then, $JD^{i}_{s}$ is adopted in the IPF instead of the direct measurements [33]. Therefore, the approximate posterior is represented by:

$$p\left(x_k \mid JD_{1:s}\right) \approx \sum_{i=1}^{N} w_{i,k}\, \delta\left(x_k - x_{i,k}\right). \qquad (11)$$

Then, the following equation is satisfied:

$$\sum_{i=1}^{N} w_{i,k} = 1, \qquad (12)$$

and the weight of the $i$th particle [33] can be expressed as:

$$w_{i,k} \propto w_{i,k-1}\, \frac{p\left(JD_s \mid x_{i,k}\right)\, p\left(x_{i,k} \mid x_{i,k-1}\right)}{r\left(x_{i,k} \mid x_{i,k-1}, JD_s\right)}. \qquad (13)$$

By picking the importance density as the transition prior and resampling, the weight of the $i$th particle can be obtained as follows:

$$w_{i,k} \propto p\left(JD_s \mid x_{i,k}\right). \qquad (14)$$
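A hedged sketch of the accumulated-difference weighting: over a window of M_s samples, sum each particle's per-axis gap to the EnKF estimate and map the result to a normalized weight. The Gaussian mapping from the accumulated difference to a weight, and the sigma value, are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def jd_weights(particle_tracks, enkf_track, sigma=0.05):
    """Weights from accumulated position differences over one window.

    particle_tracks: array (N, M_s, 3) of particle positions over the window.
    enkf_track:      array (M_s, 3) of EnKF position estimates.
    Returns normalized weights of shape (N,): particles whose trajectory stays
    close to the EnKF estimate receive larger weight.
    """
    # JD^i_s: sum of absolute per-axis gaps over the window (assumed form).
    jd = np.abs(particle_tracks - enkf_track[None]).sum(axis=(1, 2))
    w = np.exp(-0.5 * (jd / sigma) ** 2)   # smaller accumulated gap -> larger weight
    return w / w.sum()
```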

VI. FAST PATH PLANNING USING DRM
In dynamic environments, the joint angles and degrees of rotation change considerably. The motions of the robot in three-dimensional space are complex, and the robot's path may be obstructed at any time because obstacles are mostly dynamic. Obstacle avoidance is therefore necessary for the robot to work in changing environments. In this paper, the dynamic roadmap (DRM) [34], an adaptive real-time path planning method that incorporates a pre-processing stage and a planning stage, was adopted for obstacle avoidance. DRM uses pre-computation to increase online efficiency by generating a pre-processed roadmap. At any given time, DRM can quickly determine which parts of the roadmap are blocked before each planning step. An enhanced DRM can be produced through a pre-processing stage and a planning stage [35].
In the preprocessing stage, the first step is to build undirected roadmaps in the q-dimensional configuration space of the robot, where q is the number of robot joints. The second step is to look for conflicting points and edges in the roadmaps. First, the nodes of the roadmaps were created using a uniform sampling method. Then, for each node, its k nearest neighbors under a pseudo-norm on the configuration space were identified, and the segment between two nodes was connected as part of the roadmap provided that it was collision-free. Discrete configurations at roadmap nodes and along edges were checked for self-collision using a related algorithm. The direct workspace metric was defined as follows:

$$\rho\left(p, p'\right) = \max_{a \in A} \left\| a(p) - a\left(p'\right) \right\|, \qquad (16)$$

where A is the set of all reference points on the surface of the robot and a(p) is the location of reference point a in the workspace when the robot is in configuration p. Eq. (16) gives the maximum degree to which any reference point may be displaced between two checked configurations. If the robot is bounded by a convex polyhedron and all of its vertices are selected as reference points, the metric gives the maximum displacement of any point on the robot. Note that self-collision points and edges must be removed when creating the roadmap. For this reason, after creating the roadmaps, in order to look for conflicting points and edges, the DRM method was used to create a mapping between the workspace grid and the configuration space roadmap. The planning stage involves the following steps: invalidating blocked parts of the roadmap, connecting the start and goal configurations to the roadmap, and searching the graph.
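The two stages can be illustrated with a deliberately simplified 2D sketch: a grid roadmap built offline, nodes invalidated by obstacle cells at query time, and a breadth-first search over what remains. The grid resolution, 4-connectivity, and one-node-per-cell mapping are illustrative choices, not the DRM implementation of [34], [35].

```python
import itertools
from collections import defaultdict, deque

def build_roadmap(n=5):
    """Offline stage: nodes on an n x n configuration grid, 4-connected edges.

    In a real DRM, each workspace cell would map to the roadmap nodes/edges
    it invalidates; here each node simply occupies its own cell.
    """
    nodes = list(itertools.product(range(n), range(n)))
    edges = defaultdict(set)
    for (x, y) in nodes:
        for nx, ny in ((x + 1, y), (x, y + 1)):
            if nx < n and ny < n:
                edges[(x, y)].add((nx, ny))
                edges[(nx, ny)].add((x, y))
    return nodes, edges

def plan(start, goal, edges, blocked):
    """Online stage: BFS over the roadmap, skipping invalidated nodes."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in edges[path[-1]]:
            if nxt not in seen and nxt not in blocked:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal unreachable with the current obstacle set
```

Because the roadmap is fixed, each query only pays for invalidation and search, which is what makes the approach viable in real-time.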

VII. ROBOT VELOCITY CONTROL
In a human-robot collaborative environment, human safety can be difficult to guarantee.
In order to ensure the safety of humans while improving the efficiency of the robot during human-robot cooperation, the movement speed of the robot is determined based on the distance between the robot and the human. When the present distance S_present is less than the safety distance S_safe, the speed of the robot decreases linearly with S_present. When the present distance S_present is less than the minimum distance S_min, the robot stops working. As shown in Fig. 5, according to the distance between the robot and the human, the workspace is divided into three regions: the safe zone, the acc/dec zone, and the stopping zone.
In the proposed method, the robot takes different speeds based on the distance between the robot and the human. The allowable speed of the robot in the three areas (Fig. 5) is defined as follows [36]:

$$V_R = \begin{cases} V_{max}, & S_{present} > S_{safe} \\ V_{max}\, \dfrac{S_{present} - S_{min}}{S_{safe} - S_{min}}, & S_{min} < S_{present} \le S_{safe} \\ 0, & S_{present} \le S_{min} \end{cases} \qquad (17)$$

The safe distance S_safe is given by:

$$S_{safe} = V_h \left( T_r + T_b \right) + V_r T_r + S_b + C + Z, \qquad (18)$$

where V_h and V_r represent the speed of the human and the robot, respectively; T_r, T_b, and S_b denote the collision avoidance system reaction time, the braking time with respect to V_r, and the braking distance of the robot, respectively; C and Z represent the resolution distance of the sensor and the uncertainty distance based on the human and the sensor; and V_max is the maximum speed of the robot under the maximum load. The minimum distance S_min is defined as:

$$S_{min} = S_b + C + Z. \qquad (19)$$

From Eq. (17), when the distance between the robot and the person is greater than S_safe, the speed of the robot V_R remains unchanged. When the distance is greater than S_min and less than S_safe, the speed of the robot mainly depends on the speeds of the robot and the person and the braking distance. When the distance is less than S_min, the robot stops moving to ensure safety.
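The three-zone rule can be sketched directly. The threshold values in the signature are illustrative defaults, since S_safe and S_min depend on the robot and sensor parameters defined above.

```python
def allowed_speed(s_present, v_max, s_safe=1.5, s_min=0.5):
    """Distance-based speed rule: full speed, linear slow-down, or stop.

    s_safe and s_min are assumed example thresholds (meters), not values
    computed from the paper's robot/sensor parameters.
    """
    if s_present >= s_safe:
        return v_max                                    # safe zone
    if s_present <= s_min:
        return 0.0                                      # stopping zone
    # acc/dec zone: speed decreases linearly with the distance
    return v_max * (s_present - s_min) / (s_safe - s_min)
```

For instance, halfway through the acc/dec zone the robot runs at half its maximum speed, so the slow-down is smooth rather than an abrupt stop at the zone boundary.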

VIII. ACTIVE COLLISION AVOIDANCE USING RULE-BASED LOGIC SYSTEM
Since human behavior is unpredictable, real-time path planning alone cannot achieve intelligent obstacle avoidance. Our collision avoidance system uses a knowledge-based set of rules for decision making [36]. The knowledge base contains rules that are usually expressed in the form ''IF A THEN B,'' where A is the antecedent condition and B is the consequence.
In the proposed approach, when operators enter the robot's workspace, that is, when the distance D_HR of the operator from the robot is lower than the dangerous distance D_HR_min (D_HR_min refers to the maximum working range of the robot and its tool), the collision avoidance system is triggered. According to the distance of the Kinect from the industrial robot, the maximum detection radius R_max of the sensor can be determined. From the section on robot velocity control above, the safety distance S_safe represents the distance at which the robot can avoid people in time. Let S_safe = R_max; then formula (18) can be rewritten as follows:

$$R_{max} = v_{H\_danger} \left( T_r + T_b \big|_{V_{max}} \right) + V_{max}\, T_r + S_b + C + Z, \qquad (20)$$

where V_max represents the maximum speed of the robot under the maximum load; T_r, T_b|_Vmax, and S_b denote the collision avoidance system reaction time, the braking time with respect to V_max, and the braking distance of the robot, respectively; and C and Z represent the resolution distance of the sensor and the uncertainty distance based on the human and the sensor. Once the allowable maximum speed V_max of the robot is determined, the allowable maximum speed v_H_danger of the people can be obtained. Furthermore, the final value of v_H_danger is verified by simulating the robot obstacle avoidance in the simulation environment. To achieve active collision avoidance, a set of rules is formulated for three cases in which a human may enter the robot's workspace: Case 1: The human is approaching at a fast pace. When the human gets close to the manipulator at a speed v_H > v_H_danger m/s (where v_H_danger is the dangerous speed), the system cannot ensure that a newly planned path will be safe during the next iteration. The wisest choice is to avoid the human rather than plan a new path right away. When the human gets close to the manipulator in the direction D_H, the manipulator should avoid the human along the direction D_H so that they do not collide.
Case 2: The human is slowly approaching the robot. When the human gets close to the manipulator at a speed 0 < v_H < v_H_danger m/s, the system predicts the motion trail of the human and plans a new path to avoid the human using DRM. Because of the randomness of human movement, the system calculates a bounding sphere that should contain all possible motion trails within a period (including cases in which the human suddenly stops). The robot then plans a new path to avoid the calculated bounding sphere rather than the human during that period. When the human suddenly accelerates (v_H > v_H_danger m/s), the system should respond as in Case 1.
Case 3: The human is immobile. When the human stands near the robot, the system judges whether the human impedes the robot's movements. If so, the system plans a new path using DRM to avoid him or her. In this case, the robot does not need to avoid a bounding sphere; instead, the system can plan a shorter, more efficient path. When the human suddenly moves at a speed v_H > v_H_danger m/s, the system should respond as in Case 1; otherwise, when 0 < v_H < v_H_danger m/s, the system should respond as in Case 2. The rules can be expressed as follows:

IF v_H > v_H_danger THEN the robot directly avoids the human (Case 1);
IF 0 < v_H < v_H_danger THEN P_R = P_New,Bounding (Case 2);
IF v_H = 0 and the human impedes the robot THEN P_R = P_New,Static (Case 3);

where P_R is the next position of the robot, P_New,Bounding is the new plan calculated by DRM with which the robot can avoid the bounding sphere, and P_New,Static is the new plan calculated by DRM with which the robot can avoid the human.
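The three cases can be sketched as a simple IF-THEN dispatcher. The action names are illustrative stand-ins for the behaviors described above (direct avoidance, the P_New,Bounding plan, and the P_New,Static plan).

```python
V_H_DANGER = 0.2  # m/s, the dangerous-speed threshold used in the experiments

def select_action(v_human, impedes_robot=True):
    """Map the human's current speed to one of the rule-based responses.

    Action strings are hypothetical labels, not identifiers from the paper.
    """
    if v_human > V_H_DANGER:
        return "avoid_directly"               # Case 1: no time to replan safely
    if v_human > 0.0:
        return "plan_around_bounding_sphere"  # Case 2: DRM path around sphere
    if impedes_robot:
        return "plan_around_human"            # Case 3: shorter direct DRM path
    return "keep_current_path"                # static human outside the path
```

Evaluating the rules in this fixed order guarantees the most conservative response always wins when a human suddenly speeds up between control cycles.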

IX. EXPERIMENTS AND RESULTS

A. EXPERIMENTAL ENVIRONMENT
A series of experiments was performed to assess the effectiveness of the proposed method. The time requirement and efficiency of the proposed method were compared to those of previous methods [19] and [21]. During the experiments, the proposed active collision avoidance system was executed on an eight-core CPU, of which four cores were used for visualization and robot movement control and the other four for the complex position calculations and path planning.
In the proposed method, a GOOGOL GRB3016 robot was used for the experiments. The link parameters in the D-H model of the robot are listed in Table 1. In the robot control system, a working path was designed, and the robot was made to move along this path repetitively by using the inverse kinematics algorithm [37]. An emulation scene including the robot manipulator model and the human model was built into the robot control system. In order to locate the 3D positions of the human and the robot, a position measurement system with a Kinect and a calibration board was designed. The calibration board was tightly attached to the robot near its base, and the Kinect was firmly fixed to a tripod, which was placed 2.1 m away horizontally and 1.6 m away vertically, at a height of 1.6 m relative to the robot base (see Fig. 6). The resolution of the Kinect depth images was 640 × 480 and the capture frequency was 30 Hz. In order to determine the positions of the robot and the human, the viewing angle of the Kinect was adjusted before the experiments so that the human skeleton and the calibration board could be captured in real-time. Suppose that (x_k, y_k, z_k), (x_b, y_b, z_b), and (x_c, y_c, z_c) are coordinates in the frames of the Kinect, the robot base, and the calibration board, respectively. The relative location between the robot base frame X_B Y_B Z_B and the calibration target frame X_C Y_C Z_C could be measured offline using a ruler. A method described in previous work [38] was used to calculate the relative location between the calibration target frame X_C Y_C Z_C and the Kinect frame X_K Y_K Z_K. Then, the Kinect frame X_K Y_K Z_K with respect to the robot base frame X_B Y_B Z_B could be determined using the calibration target (Fig. 7). The transformation from the robot base to the calibration board should be measured and determined before any experiments begin. During the experiments, the positions of the human were obtained from the depth information of the Kinect.
By capturing images of the calibration board with the RGB camera in the Kinect, the position of the robot could be determined by detecting the corners of the calibration board. The goal is to allow the robot to avoid the human who has entered its workspace. In the experiments, to ensure the safety of the human, robot velocity control was introduced, and the movement speed of the robot was determined based on the distance between the robot and the human. When the distance was less than the minimum distance S_min, the robot would stop working. Moreover, in order to further protect the human from accidental injuries in the event of an experimental failure, we set the force threshold to 50 N, which is less than the maximum permissible forces for the various parts of the body specified in the collaborative robot safety standard ISO/TS 15066. When the detected force is greater than the set value, the robot stops working. In this way, when incidental contact between the robot and a human occurs, the contact shall not result in pain or injury. The parameters used for the experiments were as follows: v_H_danger = 0.2 m/s, D_HR_min = 0.2 m.
During the experiments, the robot moved along the designed path, and the position of the robot end-effector (EE) with respect to the Kinect frame was calculated by the robot control system in real-time. When a human entered the workspace, the person's skeleton was detected immediately by the Kinect. With the captured human skeleton, the position of the human relative to the Kinect frame and the speed at which the human moved could be determined by the robot control system. Once the system determined that the human was approaching the robot, the active collision-avoidance system performed collision avoidance according to the speed of the human's movements. Five testers were invited to carry out the experiments to verify the efficiency of the proposed method. The testers were aged from 20 to 26, four males and one female. Note that the testers were asked to approach the robot at different speeds. First, the testers moved toward the robot at a relatively high speed of more than 0.2 m/s. Then, the testers approached the robot at a speed between 0 and 0.2 m/s. Finally, the testers moved away from the robot. Each experiment was repeated three times, for a total of 15 runs.
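One plausible encoding of the rule-based logic exercised by this protocol is sketched below. The rule names and the exact ordering of checks are our assumptions; only the thresholds come from the paper's experimental settings:

```python
V_H_DANGER = 0.2   # m/s, from the experimental parameters above
D_HR_MIN = 0.2     # m, minimum human-robot distance

def choose_strategy(human_speed, approaching, distance):
    """Illustrative per-frame rule set (names are ours, not the paper's):
    stop when too close, resume when the human leaves, evade directly on a
    fast approach, and replan around a bounding sphere on a slow approach."""
    if distance <= D_HR_MIN:
        return "stop"
    if not approaching:
        return "return_to_path"            # human moving away: resume original path
    if human_speed > V_H_DANGER:
        return "avoid_directly"            # fast approach: immediate evasion
    return "plan_around_bounding_sphere"   # slow approach: DRM replanning
```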

B. EXPERIMENTAL RESULTS
In the experiment, the testers attempted to interrupt the robot's work. Fig. 8(a) shows the experimental scene. In order to monitor the robot's environment, a three-dimensional virtual scene was constructed, as shown in Fig. 8(b). During the initialization of the virtual scene, there was only one virtual robot. When the Kinect sensor detected a human, the sensor calculated the position of his or her skeleton in the real environment, and then a 3D skeleton and a 2D skeleton were drawn in the virtual scene (Fig. 8(b)) and on the 2D image (Fig. 8(c)), respectively. When the robot avoided the human directly, its avoidance velocity was set equal to the approach component of the human's velocity, as shown in Fig. 10. The pink dotted line is the movement trajectory of the human, the blue solid line is the new path of the robot, and the red line is the old path of the robot. In the beginning, as the human was approaching the robot quickly, the system found that his or her speed exceeded v_H_danger, so the robot tried to avoid the human directly. When the human slowed down, the system calculated a bounding sphere and planned a new path for the robot, reaching P_R2 at 3.95 s. Then, when the human began to move away from the robot, the robot returned to its previous path (P_R3).
In the experiment, the proposed method was compared with methods [19] and [21]. When a human approached the robot at a high speed, the robot moved to avoid him or her directly, while in these previous methods the robot achieved obstacle avoidance by repeatedly planning a new path. When the human slowed down, the proposed method made the robot avoid the calculated bounding sphere, while in methods [19] and [21] the robot still avoided the human by repeatedly planning new paths. Fig. 12 compares the robot's moving trajectories under the three methods, and Fig. 13 shows the comparative results in the X, Y and Z directions. Compared with methods [19] and [21], in the X direction the path of the proposed method is clearly the closest to the default trajectory, being nearly identical to it. In the Y and Z directions, although all three methods' paths differ noticeably from the default trajectory from the 7th to the 10th second, the difference between the proposed method and the default path is the smallest. In general, the path of the proposed method is the closest to the default trajectory in three-dimensional space. This is because the proposed method can respond correctly to different human movement speeds by analyzing the human's behavior and choose the best path for the robot, improving the flexibility and intelligence of the robot's movement. Fig. 14 shows the comparisons of the proposed method, method [19] and method [21] in terms of the avoidance time, the number of avoidance events and the increase in route length. Table 2 lists the statistics for the data in Fig. 14, including the average, minimum and maximum values of the five testers under the three methods. With the proposed method, the robot had to move to avoid the human 3.4 times on average, which collectively took 4.76 s on average. With method [19], because the system repeatedly planned a new path for the robot, the robot had to move 8.4 times on average to avoid the human and spent more time doing so, at an average of 11.52 s. With method [21], the average number of avoidance events and the average avoidance time were 5.2 and 6.72 s, respectively. Because the robot moved along a new path to avoid the human, the route was longer than the initial one: the proposed method increased the path by an average of 783.2 mm, while the paths built by methods [19] and [21] were increased by 1173.2 mm and 937.4 mm, respectively.
TABLE 2. Comparisons of the proposed method, method [19] and method [21].
Moreover, a series of statistical tests was conducted. The data of the paired t-tests are shown in Table 3. All the two-tailed p-values are less than the alpha level (0.05), indicating that, in our experiments, the proposed method performs significantly better than methods [19] and [21].
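The paired t-test used here compares each tester's result under two methods. A minimal sketch follows; the per-tester avoidance times are made-up illustrative numbers, not the paper's measurements:

```python
import math

def paired_t(a, b):
    """Paired t statistic: mean of the per-pair differences divided by its
    standard error, with n - 1 degrees of freedom."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance of differences
    return mean / math.sqrt(var / n)

# Hypothetical avoidance times (s) per tester: proposed method vs. a baseline.
proposed = [4.5, 4.8, 4.9, 4.6, 5.0]
baseline = [11.0, 11.5, 12.0, 11.3, 11.8]
t = paired_t(proposed, baseline)
# With five testers (df = 4), |t| above the two-tailed critical value
# 2.776 rejects the null hypothesis of equal means at alpha = 0.05.
```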
When the human slowed down, the proposed method calculated a bounding sphere for the robot so that it could avoid the human quickly, which saved a considerable amount of time. In contrast, the previous methods constantly calculated a new path for the robot so that it could move around the human; as a result, not only was the increase in route length larger, but the avoidance time and the number of avoidance events also increased significantly. It can be concluded that the proposed rule-based logic system can formulate reasonable response strategies according to different human behaviors. In addition, the cylindrical bounding box models built for the human bones and the robot help improve the timeliness and effectiveness of the collision avoidance system.

X. DISCUSSIONS
For the sake of safe interaction, humans and robots are usually required to work in separate spaces. In current industrial applications that lack sensor surveillance, robot work cells must be static, and if a human enters the robot's work cell, additional strategies must be used to ensure the safety of both the human and the robot. Applications in which robots and humans collaborate are now relatively common, but they typically allow robots to work in smaller spaces only where there are no physical barriers or where no humans are involved. In complex environments, humans may even be mistaken for operation objects, such as workpieces, and the robots' lack of independent intelligence can place humans in danger. This paper uses a Kinect to detect humans who enter the robot's workspace, and the IPF is applied to improve the accuracy of the estimated skeleton positions. By building bounding cylinders for the human and the robot, human-robot collision detection can be achieved by checking the relative position of the cylinders, which improves the effectiveness and accuracy of the system. In order to formulate a reasonable reaction strategy for the obstacle-avoidance system based on human behavior, a rule-based logic system is proposed, and DRM is applied to plan new paths around humans. When the human is immobile, DRM is used to plan a new path for the robot directly. When the human is moving slowly, a bounding sphere is calculated, and the robot bypasses it by planning a new path using DRM. When the human is moving quickly, the robot stops working immediately. By taking these different measures, the system was found to actively protect humans.
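The cylinder-based collision check reduces to the standard closest-point computation between the two cylinder axes (line segments): the cylinders intersect when that distance is below the sum of their radii. A minimal sketch, with illustrative endpoints and radii:

```python
import numpy as np

def closest_segment_distance(p1, q1, p2, q2, eps=1e-9):
    """Minimum distance between segments p1-q1 and p2-q2, via the standard
    clamped closest-point parameterization."""
    d1, d2 = q1 - p1, q2 - p2
    r = p1 - p2
    a, e, f = d1 @ d1, d2 @ d2, d2 @ r
    if a <= eps and e <= eps:                 # both segments are points
        return np.linalg.norm(r)
    if a <= eps:                              # first segment is a point
        s, t = 0.0, np.clip(f / e, 0.0, 1.0)
    else:
        c = d1 @ r
        if e <= eps:                          # second segment is a point
            t, s = 0.0, np.clip(-c / a, 0.0, 1.0)
        else:
            b = d1 @ d2
            denom = a * e - b * b
            s = np.clip((b * f - c * e) / denom, 0.0, 1.0) if denom > eps else 0.0
            t = (b * s + f) / e
            if t < 0.0:
                t, s = 0.0, np.clip(-c / a, 0.0, 1.0)
            elif t > 1.0:
                t, s = 1.0, np.clip((b - c) / a, 0.0, 1.0)
    return np.linalg.norm((p1 + s * d1) - (p2 + t * d2))

def cylinders_collide(p1, q1, r1, p2, q2, r2):
    """Two capped cylinders (axis segments plus radii) overlap when their
    axes come closer than the sum of the radii."""
    return closest_segment_distance(p1, q1, p2, q2) < r1 + r2
```

Because each bone and each robot link contributes one segment, a full human-robot check is a small fixed number of these segment-distance evaluations per frame, which is what makes the detection fast enough for real-time use.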
In future work, voice control will be added to the system so that the robot can talk to any human who enters its workspace. The robot can be made more clearly aware of humans' intentions. This may make avoidance more efficient.

XI. CONCLUSION
This paper proposes an active collision avoidance method for robots using the Kinect sensor and the IPF. The proposed method can effectively detect humans entering the robot's workspace. Collision detection is achieved by checking the relative position of the bounding cylinders of the human and the robot. To adopt different reaction strategies according to human movement, a rule-based logic system was proposed to analyze the behavior of the human, and DRM was used to plan a new path for the robot to avoid the human. Finally, experiments were performed to show the effectiveness of the proposed algorithm. The experimental results demonstrated that the proposed method can be applied in practical settings, such as protecting humans who enter a robot's workspace in a human-robot collaborative environment.