Research on Cobot Action Decision-Making Method Based on Intuitionistic Fuzzy Set and Game Theory

The bounded rationality of humans in human-robot collaboration (HRC) is a fundamental cause of collisions in proximity HRC. As HRC scenarios in manufacturing become increasingly popular, robot action decision-making needs to account for this property. In previous studies, humans are usually regarded as rational agents whose behaviors are predictable and planned. In reality, however, humans are susceptible to distractions caused by external disturbances, and different cognitive processes of the task can produce unpredictable behaviors. To better simulate human bounded rational behavior, we propose a cobot action decision-making method, based on an intuitionistic fuzzy (IF) multi-attribute decision algorithm, that integrates human intention, safety, and efficiency to produce human-like decisions. We use the IF set to calculate the score and accuracy values of the two Nash equilibria of a static chicken game, which simultaneously predicts human action intentions under collision risk and provides optimal action decisions for the robot. We generated 10,000 sets of data using the Monte Carlo method and validated the effectiveness of our proposed method by comparing it with MDP and POMDP methods. The results show that the proposed method can effectively make action decisions for the robot. Simulation experiments and Turing test results show that our method predicts a human's subjective action decision intention in situations with potential collision risk with 85.62% accuracy. At the same time, the experimental participants rated the robot's action decisions 4.83 out of 5 for satisfaction.

The human assessment of intention is often vague and probabilistic, because a human is not a fully rational intelligence. The judgments and decisions made by humans are often subjective and boundedly rational. When humans and machines work in proximity in scenarios with potential collision risk, the frequent judgment of collision risk significantly increases the cognitive load of the human worker, resulting in errors of judgment and, in turn, increased collision risk. Therefore, only human-like action decision-making by cobots can reduce the cognitive load of human judgment of cobot behavior, reduce human psychological pressure, and improve the efficiency of both the human and the cobot while ensuring safety.

For a proximity HRC scenario such as Figure 1, a rapid decision must be made for the cobot's action. Most importantly, the decision at this point needs to be based on an accurate judgment of the human's action intention. Previous studies on similar scenarios [7], [8], [9] proposed using methods such as game theory to make decisions about the cobot's actions. However, they assumed the other party to be a fully rational agent, which contradicts the bounded rationality characteristic of humans in reality. Although some studies have investigated the bounded rationality problem [10], [11], they have not studied scenarios with potential collision risk in extremely close-proximity HRC. In proximity HRC, human thinking time is very short. Most decisions are made through experience and intuition, with bounded rationality as a vital characteristic. Moreover, if accurate judgments of human intentions cannot be made, safer action decisions cannot be made for the robot, and trust in the robot cannot be improved.
There is a great need for a method that measures fuzzy human decision intentions and formulates decisions to solve the cobot decision problem, so as to improve cobot efficiency and safety, reduce human cognitive load, and enhance comfort and trust.

Most studies on the prediction of human intentions have analyzed the problem with statistical methods and expressed the result in the form of probabilities. However, in real scenarios, human perceptions of such issues are vague; humans can only give vague ranges, which cannot be expressed as a single definite number. Moreover, in the scenario presented in this paper, human intentions expressed as probabilities can only reflect whether one of two intentions (pass or not pass) holds. In reality, humans spend a large proportion of time in a hesitation state, which cannot be expressed in probabilistic form; this results in misjudgment of human intentions and low satisfaction with robot action decisions. The IF set, on the other hand, uses membership and non-membership functions to express the intentions of passing, not passing, and hesitating that appear in the human at this time, which provides more reference for the robot's action decision. The decision is made according to the magnitude of the person's hesitation state at that moment. For example, the cobot decides to pass when the hesitation intention is significant, to improve efficiency, and otherwise decides not to pass, to ensure the person's safety and ultimately enhance the person's satisfaction.

The main contributions of this paper are as follows:

This section presents the application of game theory to robot motion planning and collision avoidance decision-making problems. Different solutions to the issues of collision avoidance and interaction-aware modeling in HRC have been given in autonomous driving and robot motion planning. In particular, the interaction-aware modeling problem is increasingly attracting the attention of researchers in socially aware robot navigation [7], [12]. Interaction-aware modeling is the basis for solving HRC decision-making problems, and decision-making methods model the relationships between agents, actions, environments, and tasks [13]. Decision-making selects the best move for the robot based on the payoff calculated from the utility function of each task.

Probabilistic methods, deep learning, and game theory are among the most widespread decision-making methods. Among probabilistic methods, Markov decision processes (MDP), Bayesian processes, and graph theory are some of the most widely used. For instance, Roveda et al. [10] use Hidden Markov Models (HMM) to teach the robot how to achieve the task based on human demonstrations and use Bayesian optimization-based algorithms to maximize task performance.

However, probabilistic models do not satisfactorily deal with bounded rational behavior and uncertainty. Many studies have addressed similar issues using deep learning or reinforcement learning approaches to enable the correct handling of boundedly rational behavior. For example, Roveda et al. [10] used partially observable Markov decision processes (POMDP) to develop a framework for planning collaborative robot tasks in assembly, considering both the designer's and the operator's intents. A set of potential assembly plans is automatically derived from the designer's CAD data and translated into a state graph from which the operator's intentions follow. However, the drawbacks of deep learning and reinforcement learning methods are their computation cost and slow learning speed. Game-theoretic methods have only recently been exploited in decision-making processes. They can model most tasks of a group of agents (players) in collaboration or competition and have been used in different HRC applications. For example, Gabler et al. [8] proposed a game-theoretic action selection framework for HRC that allows robots to select appropriate actions based on the behavior of their human colleagues during proximity collaboration. The framework models the HRC scenario as a non-cooperative game and selects action strategies for the robot from the Nash equilibrium results. It selects the optimal trajectory from the action set to assign to the robot, completes the work, and avoids collisions. However, this research considers people to be fully rational when building game models, which diverges considerably from the actual situation: people often make boundedly rational decisions in real work scenarios due to their personality, work environment, and fatigue.
In our paper, we focus on the following three aspects: 1. In the process of proximity HRC, there is no optimal path for avoidance, or avoidance is inconvenient, when the human and cobot simultaneously converge on the same target. 2. Human action intention under the combined effect of efficiency requirements, subjective risk perception, and irrational factors. 3. The robot integrates efficiency, safety, and human action intention to make the optimal action strategy.

FIGURE 2. The framework of the proposed cobot action decision-making method based on IF set and game theory.

This section outlines our proposed cobot action decision-making method based on the IF set and game theory. First, we outline the scenario and objects to be studied, simplifying the scenario to a static chicken game and treating the human and cobot as agents. Second, the action decisions of the human and cobot are modeled. We establish four IF sets: the IF set of human for efficiency, the IF set of human for comfort, the IF set of cobot for efficiency, and the IF set of cobot for safety. Third, the IF set of human action intention is established by integrating the IF sets of human for efficiency and comfort. The IF set of collision avoidance is found by integrating the IF set of human action intention and the IF sets of cobot for efficiency and safety. Finally, the cobot collision avoidance decision-making method is established by calculating the exact value of the IF set of each Nash equilibrium solution. The cobot action decision method framework based on the IF set and game theory is shown in Fig. 2.

Our focus is on a scenario similar to that shown in Fig. 1, where a person and a cobot tend to grasp parts towards the same target in a tight space simultaneously. Due to the extremely short reaction time, it is impossible in actual HRC scenarios for the human and cobot to communicate through language to plan their sequence of actions. Such scenes cause a potential collision risk between the human and the cobot. Moreover, it is impossible to avoid collision by optimizing the cobot's trajectories due to space constraints, and one of the two agents must adopt a temporary yielding strategy to avoid a collision. Previous experiments found that people prefer to accelerate through potential collision areas to improve efficiency when working in a seated position or in a state where they cannot move from their current position at will.

The IF set can portray fuzziness [19]. It can simultaneously represent the three states of support, opposition, and neutrality, and can therefore describe the natural properties of objective phenomena more delicately and comprehensively, so IF sets are widely used in economic management decision problems. We introduce the basic concepts of the IF set in the following.

Definition 1: Let X be a universe. If there are two functions on X, $\mu_{\tilde A}: X \to [0,1]$ and $\nu_{\tilde A}: X \to [0,1]$, which define the degree of membership and the degree of non-membership of an element $x \in X$, such that $0 \le \mu_{\tilde A}(x) + \nu_{\tilde A}(x) \le 1$, then $\mu_{\tilde A}$ and $\nu_{\tilde A}$ determine an intuitionistic fuzzy set on universe X, which can be abbreviated as $\tilde A = \{\langle x, \mu_{\tilde A}(x), \nu_{\tilde A}(x)\rangle \mid x \in X\}$.

Definition 2 (Trapezoidal Intuitionistic Fuzzy Number): A trapezoidal intuitionistic fuzzy number (TIFN), denoted by $\tilde a = \langle (a, a_1, a_2, \bar a);\, w_{\tilde a}, u_{\tilde a} \rangle$, is a special IF set on the real number set, whose membership and non-membership functions are defined as follows:

$$\mu_{\tilde a}(x) = \begin{cases} \dfrac{x-a}{a_1-a}\, w_{\tilde a}, & a \le x < a_1 \\ w_{\tilde a}, & a_1 \le x \le a_2 \\ \dfrac{\bar a - x}{\bar a - a_2}\, w_{\tilde a}, & a_2 < x \le \bar a \\ 0, & \text{otherwise} \end{cases}$$

$$\nu_{\tilde a}(x) = \begin{cases} \dfrac{a_1 - x + u_{\tilde a}(x-a)}{a_1-a}, & a \le x < a_1 \\ u_{\tilde a}, & a_1 \le x \le a_2 \\ \dfrac{x - a_2 + u_{\tilde a}(\bar a - x)}{\bar a - a_2}, & a_2 < x \le \bar a \\ 1, & \text{otherwise} \end{cases}$$

where $w_{\tilde a}$ and $u_{\tilde a}$ denote the maximum membership degree and minimum non-membership degree of $\tilde a$, respectively, such that $0 \le w_{\tilde a} \le 1$, $0 \le u_{\tilde a} \le 1$, and $0 \le w_{\tilde a} + u_{\tilde a} \le 1$. The quantity $\pi_{\tilde a}(x) = 1 - \mu_{\tilde a}(x) - \nu_{\tilde a}(x)$ is called the measure of uncertainty.
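As a concrete sketch of the TIFN in Definition 2, the class below (our own illustrative helper, not code from the paper) implements the standard trapezoidal membership and non-membership functions:

```python
from dataclasses import dataclass

@dataclass
class TIFN:
    """Trapezoidal intuitionistic fuzzy number <(a, a1, a2, abar); w, u>."""
    a: float       # left support bound
    a1: float      # left plateau bound
    a2: float      # right plateau bound
    abar: float    # right support bound
    w: float = 1.0   # maximum membership degree
    u: float = 0.0   # minimum non-membership degree

    def mu(self, x: float) -> float:
        """Membership: rises on [a, a1), plateaus at w on [a1, a2], falls on (a2, abar]."""
        if self.a <= x < self.a1:
            return (x - self.a) / (self.a1 - self.a) * self.w
        if self.a1 <= x <= self.a2:
            return self.w
        if self.a2 < x <= self.abar:
            return (self.abar - x) / (self.abar - self.a2) * self.w
        return 0.0

    def nu(self, x: float) -> float:
        """Non-membership: mirrors mu, bounded below by u on the plateau."""
        if self.a <= x < self.a1:
            return (self.a1 - x + self.u * (x - self.a)) / (self.a1 - self.a)
        if self.a1 <= x <= self.a2:
            return self.u
        if self.a2 < x <= self.abar:
            return (x - self.a2 + self.u * (self.abar - x)) / (self.abar - self.a2)
        return 1.0

    def pi(self, x: float) -> float:
        """Hesitation (uncertainty) degree 1 - mu - nu."""
        return 1.0 - self.mu(x) - self.nu(x)
```

For example, `TIFN(0, 1, 2, 3, w=0.9, u=0.05).pi(1.5)` returns the hesitation degree 0.05 on the plateau.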

Definition 3: The sum and product of IF sets (given here in their standard form) are:

$$\tilde A \oplus \tilde B = \langle \mu_A + \mu_B - \mu_A \mu_B,\; \nu_A \nu_B \rangle$$

$$\tilde A \otimes \tilde B = \langle \mu_A \mu_B,\; \nu_A + \nu_B - \nu_A \nu_B \rangle$$

Therefore, we establish a model of human cognition of efficiency based on the CPT and establish the IF set of human for efficiency. Assume the human completes the assembly task alone; the time $t_s$ required at the maximum speed of 1.8 m/s is recorded as the maximum efficiency, at which point the efficiency is 100%. The time $t_0$ needed to reach the target from the current position at the current speed is the reference point, at which the efficiency is $t_s/t_0$. Suppose one's perception of efficiency is entirely rational, and the fact that efficiency decreases with increasing arrival time is well understood; i.e., one maintains a neutral attitude toward the relationship between efficiency and speed. In that case, we construct the expected utility function of the neutral attitude from these two efficiencies.
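A minimal sketch of the operations in Definition 3 on (mu, nu) pairs; these are our own illustrative helpers, with the scalar multiple added as the standard extension used for IF weighting:

```python
def if_sum(a, b):
    """Sum of two IF values: <mu_a + mu_b - mu_a*mu_b, nu_a*nu_b>."""
    return (a[0] + b[0] - a[0] * b[0], a[1] * b[1])

def if_product(a, b):
    """Product of two IF values: <mu_a*mu_b, nu_a + nu_b - nu_a*nu_b>."""
    return (a[0] * b[0], a[1] + b[1] - a[1] * b[1])

def if_scale(lam, a):
    """Scalar multiple lam*a = <1 - (1-mu)^lam, nu^lam>."""
    return (1 - (1 - a[0]) ** lam, a[1] ** lam)
```

Both operations preserve the IF constraint mu + nu <= 1 when the operands satisfy it.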
Based on the expression for the value function in the CPT, we determine the value function of efficiency as perceived by the person.
Following the standard CPT form with reference point $t_0$, the value function (Eq. (2)) and the decision weight functions as perceived by the person (Eqs. (3) and (4)) can be written as:

$$f(t) = \begin{cases} (t_0 - t)^{\alpha}, & t \le t_0 \\ -\lambda\, (t - t_0)^{\beta}, & t > t_0 \end{cases} \tag{2}$$

$$w^{+}(p) = \frac{p^{\gamma}}{\left(p^{\gamma} + (1-p)^{\gamma}\right)^{1/\gamma}} \tag{3}$$

$$w^{-}(p) = \frac{p^{\delta}}{\left(p^{\delta} + (1-p)^{\delta}\right)^{1/\delta}} \tag{4}$$

In Eqs. (2) to (4), α and β reflect the level of risk preference of the decision-maker; in the value function f(t), smaller values indicate higher sensitivity of the decision-maker to risk. p denotes the probability of reaching the target at time t. λ is the loss aversion coefficient; when λ > 1, one values losses more than gains. $t_l$ is the maximum time, corresponding to the time when $f_E(t) = 0$. $w^{+}(\cdot)$ and $w^{-}(\cdot)$ represent the values of the decision weight function in the gain and loss regions, respectively. The decision weight function is inverted-"S" shaped, as shown in Fig. 3.

The smaller the parameters γ, δ (0 < γ, δ < 1), the more curved the function shape is, and the more decision-makers tend to overestimate small-probability events and underestimate large-probability events.
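The CPT value and weight functions of Eqs. (2) to (4) can be sketched as follows. The parameter defaults are the commonly cited Tversky-Kahneman estimates, used here only as illustrative values, not the paper's fitted parameters:

```python
def value(t, t0, alpha=0.88, beta=0.88, lam=2.25):
    """CPT value of arrival time t relative to reference point t0 (gain if t <= t0)."""
    if t <= t0:
        return (t0 - t) ** alpha          # gain region
    return -lam * (t - t0) ** beta        # loss region, scaled by loss aversion

def weight(p, g):
    """Inverted-S decision weight w(p) = p^g / (p^g + (1-p)^g)^(1/g)."""
    return p ** g / (p ** g + (1 - p) ** g) ** (1 / g)
```

With g = 0.61, `weight(0.01, 0.61)` exceeds 0.01 and `weight(0.99, 0.61)` falls below 0.99, reproducing the overweighting of rare events and underweighting of likely ones described above.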

According to the CPT, the prospect of a person approaching the target area at the current speed, i.e., the model of human perception of efficiency, can be expressed through the value and decision weight functions above. The cobot is a fully rational intelligent unit, so the membership and non-membership functions of the IF set of cobot for efficiency, $\tilde E_R = \langle \mu_R(t), \nu_R(t) \rangle$, are defined accordingly. The IF sets of human and cobot for efficiency are shown in Fig. 5 and Fig. 6, respectively.

The human perception of collision risk differs from the cobot's. It depends not only on the relative distance and speed of the human and the cobot but also on a variety of subjective factors such as the human's perception of the cobot's appearance, experience, and trust. Relative distance and speed alone therefore cannot accurately measure the human perception of collision risk. Our previous study established a psychological safety field ($SE_P$) model to calculate the psychological impact of a cobot approaching different human body parts with a certain speed, minimum separation distance, and direction. When the psychological safety field strength is large enough, a person perceives danger and will choose to avoid the cobot.

The literature [22] and our previous research [23] found that the acceptable cobot motion speed range is between 0.3 m/s and 1 m/s, and the range of speed considered comfortable is between 0.5 m/s and 0.8 m/s. At this time, the psychological safety field strength $SE_{Ph}$ can be calculated.

VOLUME 10, 2022

Humans cannot precisely quantify comfort, and they usually can only give fuzzy conclusions intuitively. Therefore, the IF sets of the various factors are given different weights, and the IF sets of the various factors are then aggregated by the weighted aggregation method.
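The speed ranges above lend themselves to a trapezoidal comfort membership. The function below is an illustrative sketch built only from those ranges (acceptable 0.3-1.0 m/s, comfortable 0.5-0.8 m/s); the trapezoidal shape is our assumption, not the paper's fitted $SE_P$ model:

```python
def comfort_mu(v: float) -> float:
    """Trapezoid over cobot speed v (m/s): 0 outside [0.3, 1.0],
    1 on the comfortable band [0.5, 0.8], linear in between."""
    if v < 0.3 or v > 1.0:
        return 0.0
    if v < 0.5:
        return (v - 0.3) / 0.2   # rising edge of acceptability
    if v <= 0.8:
        return 1.0               # fully comfortable band
    return (1.0 - v) / 0.2       # falling edge above 0.8 m/s
```

A speed of 0.65 m/s maps to full comfort, while 0.4 m/s and 0.9 m/s each map to partial comfort of about 0.5.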

Since the human perception of the relative ranking of efficiency and safety is vague, it is impossible to accurately determine the importance ratio of efficiency and safety. Therefore, to be closer to the real situation, we randomly interviewed five team members and collected their perceived importance weights for efficiency and comfort. The IF sets of human for efficiency and comfort are weighted together to obtain the IF set $\tilde K = \langle \mu_K(x), \nu_K(x) \rangle$ of human action intention on whether to continue moving towards the target through the potential collision area in the current situation.
From the above analysis, it is concluded that the cobot needs to consider three factors, safety, human action intention, and work efficiency, to decide whether to keep the original action of continuing to move towards the target or choose the strategy of temporarily yielding. For this reason, to ensure safety, we first set the weights of the three factors as $w_S = \langle 0.95, 0.05 \rangle$, $w_K = \langle 0.8, 0.1 \rangle$, and $w_{E_R} = \langle 0.6, 0.2 \rangle$, respectively. The IF weights $w_E$, $w_{SE}$, $w_S$, $w_K$, $w_{E_R}$ in the paper can be obtained by sensitivity analysis, which gives the range over which the attribute weights can change while the ranking of the decision options' advantages and disadvantages is kept constant. Using the sum and product of IF sets, the action decisions made by the cobot considering the three factors together are weighted and aggregated to obtain the IF set of collision avoidance $\tilde C = \langle \mu_C(x), \nu_C(x) \rangle$.
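The weighted aggregation step can be sketched as follows: each factor's IF value is scaled by its IF weight via the IF product, and the results are combined with the IF sum. The numeric weights are the ones quoted above; the factor values themselves are hypothetical placeholders:

```python
def if_product(a, b):
    """IF product: <mu_a*mu_b, nu_a + nu_b - nu_a*nu_b>."""
    return (a[0] * b[0], a[1] + b[1] - a[1] * b[1])

def if_sum(a, b):
    """IF sum: <mu_a + mu_b - mu_a*mu_b, nu_a*nu_b>."""
    return (a[0] + b[0] - a[0] * b[0], a[1] * b[1])

def aggregate(factors, weights):
    """Weighted IF aggregation: IF-sum over (weight x factor)."""
    out = (0.0, 1.0)  # identity element of the IF sum
    for f, w in zip(factors, weights):
        out = if_sum(out, if_product(w, f))
    return out

# Safety, human intention, and cobot efficiency with the weights from the text;
# the factor IF values here are made-up examples.
weights = [(0.95, 0.05), (0.8, 0.1), (0.6, 0.2)]
factors = [(0.7, 0.2), (0.5, 0.3), (0.9, 0.05)]
mu_c, nu_c = aggregate(factors, weights)
```

The result (mu_c, nu_c) is a single collision-avoidance IF value that still satisfies mu + nu <= 1.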

When the person passes through the potential collision area and the cobot temporarily yields, we record the IF set of collision avoidance as $\tilde C_C = \langle \mu_{C_C}(x), \nu_{C_C}(x) \rangle$; the opposite strategy is recorded as $\tilde C_K = \langle \mu_{C_K}(x), \nu_{C_K}(x) \rangle$. A ranking method for intuitionistic fuzzy sets is needed to compare the two.

For the decision-making algorithm proposed in this paper, we set $w_E = \langle 0.6, 0.3 \rangle$ and $w_{SE} = \langle 0.5, 0.4 \rangle$. We changed the discount factor $\gamma_d$ from a fixed value in the MDP to a dynamic value, assigning a weight of 0.6 to safety $S_M$ and a weight of 0.4 to efficiency $E_M$. Since no bounded rationality factor is involved in the MDP, safety is used as the probability in the state transition matrix. The MDP decision is obtained by the value iteration method, and the POMDP decision is obtained by the Q-Learning algorithm.
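The comparison of the two Nash-equilibrium IF sets $\tilde C_C$ and $\tilde C_K$ can be sketched with a common IF ranking rule: rank by the score s = mu - nu and break ties with the accuracy h = mu + nu. This is the standard score/accuracy rule; the paper's exact ranking may differ in detail, and the IF values below are hypothetical:

```python
def score(a):
    """Score s = mu - nu of an IF value (mu, nu)."""
    return a[0] - a[1]

def accuracy(a):
    """Accuracy h = mu + nu of an IF value (mu, nu)."""
    return a[0] + a[1]

def better(a, b):
    """True if IF value a ranks above b: higher score, then higher accuracy."""
    if score(a) != score(b):
        return score(a) > score(b)
    return accuracy(a) > accuracy(b)

# Hypothetical equilibrium IF values: cobot yields vs. cobot passes.
c_c = (0.6, 0.3)   # cobot temporarily yields
c_k = (0.5, 0.3)   # cobot passes, human yields
decision = "yield" if better(c_c, c_k) else "pass"
```

Here the yielding strategy has the higher score, so the cobot would yield.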
We are particularly concerned with the following two scenarios. 1. The cobot can reach the potential collision region at the original speed, and the human arrives after the cobot reaches it.

The MDP method and our method are conservative and aggressive, respectively, and the POMDP method is in between. To verify whether the action decisions made by our proposed method can improve efficiency and safety, we compared the efficiency and safety of the three methods over 10,000 experiments. To show efficiency and safety more clearly in the figure, we averaged every 100 sets of data and plotted the averages to reflect the efficiency and safety comparison of the three methods. The experimental results are shown in Fig. 9. The results show that the MDP method is less efficient than our method and the POMDP method, while the efficiency of our method is slightly higher than that of the POMDP method. The three methods do not differ much in terms of safety, and the MDP method has the highest safety: in many scenarios, the cobot action decisions made by MDP yield, while the safety obtained by our method is similar to that of the POMDP method.

The effect of the weighting factors on our method can be determined by sensitivity analysis: calculating the range within which the weights can change without changing the ranking of the IF sets.

The results of the 10,000 sets of experiments generated by the Monte Carlo method show that, compared to the MDP-series methods, our method can improve the smoothness and productivity of HRC as well as its safety over the whole experimental process, which verifies the effectiveness of the algorithm. We designed simulation experiments to verify the accuracy of the human action intention model and human satisfaction with cobot decisions in real scenarios. We present some of the experimental results of the Monte Carlo simulation experiments in the article's attachment.
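The shape of the Monte Carlo validation loop can be sketched as follows. The sampling ranges and the `decide()` stub are placeholders of our own, not the paper's actual scenario generator or decision model:

```python
import random

def decide(human_time, robot_time):
    """Toy stand-in for a decision method: yield when the human
    would reach the potential collision region first."""
    return "yield" if human_time < robot_time else "pass"

random.seed(0)
N = 10_000
yield_count = 0
for _ in range(N):
    human_time = random.uniform(0.5, 3.0)   # assumed arrival-time range (s)
    robot_time = random.uniform(0.5, 3.0)
    if decide(human_time, robot_time) == "yield":
        yield_count += 1
```

Each sampled scenario would, in the full method, be fed to all three decision methods so their efficiency and safety can be tallied and averaged in blocks of 100, as in Fig. 9.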

To verify the accuracy of the human intention model and human satisfaction with cobot decisions in real scenarios, we designed a simulated experiment and a Turing test. We use the simulated experiment to verify the prediction accuracy, and the Turing test to measure satisfaction with the decision method and whether humans can distinguish cobot decisions from human ones. Questions 3 and 4 of the questionnaire were:

3. How much do you trust a cobot like group B when encountering a similar situation? From a score of 5 (very trusting) to 1 (very distrustful).

4. After the test was completed, aspects not considered by the robot were suggested for the robot's improvement by reviewing the test process.

A total of 42 volunteers participated in our experiment, of whom 32 took part in the simulation experiment (29 males and 3 females). The other ten volunteers, all males, participated in the Turing test. The participants were between 22 and 28 years old and were asked about their age, gender, and experience with cobots and computer games. Most of the participants had some experience with computer games, and 35% had some knowledge of cobots.

When the scenario presented in the paper arises, we hope the cobot can accurately predict the human's intention of passing through each time and make accurate decisions by integrating the current situation, so as to ensure safety, improve efficiency, and maximize human satisfaction. In the simulated experiments, participants did not make unified decisions about the actions in each experiment, so we take the conclusion of the majority of participants in each experiment as the standard.

For the simulation experiment, the number $n_s$ of decisions matching the standard decision action is divided by the total number of participants $n_t$ to obtain the decision unity $d_u$ of participants in each experiment, as shown in formula (23). Then the mean and standard deviation of the decision unity $d_u$ over the 20 experiments are obtained.

For the standard human decision in each experiment, the cobot's action decision should be opposite to the human action decision so that the collaborative task is smooth, efficient, and safe at the highest level. The number of times the cobot's action decision is opposite to the human action decision over all 20 experiments gives the correctness of the cobot's action choice S. We counted the number $n_o$ of cobot decisions opposite to the standard human decision in each experiment and calculated the percentage of $n_o$ over the 20 experiments to obtain the degree of correctness S of the cobot action choice, as shown in equation (24).

As for the prediction of human intention, only our method among the three can obtain the human action intention. We compare the IF set $\tilde K = \langle \mu_K(x), \nu_K(x) \rangle$ of the passing-through and yielding intentions, get the human action intention in each experiment by the ranking method, and compare the calculated action intentions with those collected from the simulated experiments to obtain the prediction accuracy. We counted all participants' mean and standard deviation for the unity of participant decision-making and the accuracy of the cobot. We believe that the higher satisfaction for both our proposed method and the POMDP method is due to the fact that both incorporate the human factor of limited rationality rather than simply calculating the gains in efficiency and safety.
Such methods can more closely match the human cognitive process for such scenarios while producing decisions more consistent with human bounded rationality characteristics.
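The evaluation metrics of formulas (23) and (24) can be sketched as follows; the per-experiment counts are hypothetical placeholders, chosen only to show the calculation:

```python
from statistics import mean, stdev

def decision_unity(n_s: int, n_t: int) -> float:
    """Formula (23): fraction of participants matching the standard decision."""
    return n_s / n_t

# Hypothetical per-experiment match counts for 20 experiments, 32 participants each.
n_s_list = [30, 28, 31, 27, 29, 32, 26, 30, 28, 31,
            29, 27, 30, 32, 28, 29, 31, 26, 30, 28]
d_u = [decision_unity(n, 32) for n in n_s_list]
d_u_mean, d_u_std = mean(d_u), stdev(d_u)

# Formula (24): correctness S as the fraction of the 20 experiments where the
# cobot's decision is opposite to the standard human decision.
opposite = [True] * 17 + [False] * 3   # hypothetical outcomes
S = sum(opposite) / len(opposite)
```

With these placeholder counts, S would be 17/20 = 0.85, and d_u_mean summarizes how unified the participants' decisions were across experiments.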

Statistically, the results for questions 1, 2, 3, and 4 in the Turing test are as follows. For question 1, group A was most often considered a human decision, accounting for 50%. It is closely followed by our proposed collaborative cobot action decision-making method: group B was regarded as a decision made by humans in 40% of cases. In contrast, group C accounted for only 10%. The results show that our proposed method passed the Turing test, with a percentage of decisions considered to be made by humans greater than 30%. For questions 2 and 3, the mean scores are 4.4 and 4.3, respectively, demonstrating that the decisions made by our proposed method are human-like and receive a high level of confidence. For question 4, some participants suggested that the robot did not fully understand human intentions. There were not only passing, yielding, and hesitation in the human-robot collaboration process but also the intention of not actively participating, the neutral intention of only observing the cobot's actions, and even the intention of refusing to cooperate with the cobot. We plan to make this small set of other intentions one of our future research directions.

From the experimental results, our proposed method not only improves the robot's efficiency and human satisfaction with the robot but also ensures the safety of HRC. We believe the reason for this result is that the IF set gives not only the possibility of the human taking the intention to pass but also the possibilities of yielding and hesitation. The IF set can better describe the human cognitive process and provide a more accurate human action intention for cobot decision-making.

The paper focuses on proximity collaboration scenarios 1 and 2, providing an in-depth study of situations that similar studies in the literature [14] do not accurately predict. In such scenarios, once both parties decide to pass, there is not enough time to take avoiding action, and it is easy to collide and cause danger. The action choices of participants in this scenario often rely on bounded rational factors such as intuition and personality, and the action decisions of different participants diverge widely, so how to better model human cognitive styles in this scenario is the key to solving this problem. However, previous studies lacked research on such situations and could not make predictions and decisions based on human cognitive styles. With the increasing popularity of HRC in both industrial and everyday scenarios, the frequency of such situations will increase, so it is necessary to model the human cognitive style in this situation.

We propose the cobot action decision-making method based on the IF set and game theory for this problem. The method considers efficiency and safety combined with the bounded rationality factor of humans and an integrated simulation of a human cognitive process. We try to get more accurate human action intention prediction results through the human action intention IF set, which is the first innovation point of our research on such situations. We use the established cobot action decision-making method to decide on cobot actions.

The fuzzy set of human action intention proposed in this paper better reflects the human decision-making method in such scenarios, and the prediction of human intention, whether passing through or not, is more accurate. The safety of cobot action is ensured at the intention perception level.
A disadvantage of this paper is that our proposed method does not perform to a high degree in terms of safety, efficiency, and human satisfaction, with a prediction accuracy of only 86% for human intentions. We believe this is related to the large number of parameters in the model, which need to be adjusted to more closely match the action intentions of most people in different scenarios. The prediction accuracy of our proposed method does not reach 90%, and the decision actions need to be improved in terms of safety, efficiency, and satisfaction.

4. The IF set can only represent membership, non-membership, and hesitation and cannot represent neutral and rejecting cognitions, so the model does not achieve more than 90% accuracy in predicting human action intentions. In our future work, we will optimize the model using picture fuzzy sets [25], [26] and similar improvement methods to improve the prediction model's performance.

5. The simulation experiment was done in a laboratory environment using a computer game and cannot represent human responses in a natural setting. In a real industrial scenario, workers will be influenced by the work environment, task efficiency requirements, cobot actions, and other factors, which may produce different requirements for the cobot's action decisions; this is a future research direction of HRC.

We designed a cobot action decision-making method based on the IF set and game theory. The decision-making method integrates the three factors of human action intention, safety, and efficiency and provides an effective strategy for the cobot action decision.
This method models human action intentions through CPT and the IF set to reason about the human action intention in situations where there is no optimal trajectory for proximity HRC to achieve collision avoidance. The optimal action of the cobot is also calculated based on the IF sets of the two Nash equilibrium solutions for the human-robot scenario, which resembles the static chicken game. Through experiments, it was found that the present method can achieve an accuracy of 85.22% in predicting human action intention. At the same time, the participants' satisfaction with the cobot's decisions reached a satisfactory level. Unlike the study in the literature [14], our study focuses more on situations where the human-cobot distance and speed are very close, with a high risk of collision. Also, different from the study in the literature [12], we consider the human as a boundedly rational agent and simultaneously consider the three action intentions of passing through, yielding, and hesitating. This makes it possible to better predict human action intentions and provide a reference for the cobot's action decision-making.

VII. CONCLUSION
We proposed a decision-making method for cobot actions when both humans and cobots compete for shared space. We analyzed the bounded rational way of human cognition of efficiency and comfort and established the IF sets of humans for efficiency and comfort, respectively. The cobot's action decision is made by comprehensively analyzing three factors: human action intention, safety, and efficiency.