Multiattribute Evaluation Model Based on the KSP Algorithm for Edge Computing

To solve the problems of single evaluation attributes and highly overlapping trust paths in current trust models, a multiattribute trust evaluation model based on the K shortest paths (KSP) algorithm is proposed. The model refines the evaluation attributes among nodes and uses the analytic hierarchy process (AHP) to allocate the weights based on users' preferences to meet the special needs of individual users. Also, the model introduces the penalty factor idea of the KSP algorithm and proposes a trust path optimization algorithm, RKSP, based on the A* algorithm. It can filter highly overlapping trust paths during the formation of recommended trust paths so that the searched trust paths differ from one another. Comparative experiments show that the model can reduce the resource overhead of edge devices, improve the accuracy of evaluation, ensure load balancing within the domain, and better align the results of the model recommendation with user needs.

and improve system reliability [11]. Compared with the cloud computing environment, the traditional centralized security mechanism is no longer suitable for the fully distributed edge computing architecture. The edge layer contains a large number of high-frequency interactive devices, and the number of nodes and the volume of trust information are growing exponentially. Since edge-layer devices are mostly resource-constrained, massive trust information easily leads to information overload, causing node overload and premature failure [12], [13]. A trusted and lightweight distributed trust evaluation mechanism is urgently needed. Therefore, the trust model in the edge computing environment has gradually become a research hotspot.
In the face of limited resources and open edge computing, many experts and scholars have adopted different methods for building trust evaluation models for edge computing environments [14]-[17]. In addition to evaluation models based on subjective logic, Dempster-Shafer evidence theory, Bayesian networks and other trust evaluation models, there are also models based on recommendation node similarity, mixed scoring-deviation methods, multiattribute evaluation, and so on. Next, we discuss the following three aspects. In terms of improving effectiveness, Deng et al. [18] proposed a multiobjective optimization and collaboration scheme based on comprehensive trust, which optimized the edge computing resource management and collaboration system by using the trust evaluation system, thereby improving the accuracy of the model. Huang et al. [19] weighted different trust dimensions according to familiarity, similarity and timeliness and then maintained and updated the trust information of local vehicles by using a vector machine and multiweight subjective logic. He et al. [20] combined the Bayesian reasoning method and D-S evidence theory and optimized the uncertainty in the trust evaluation of mobile social networks by using a deep learning algorithm, which could significantly reduce the adverse impact of biased opinions. However, the above models did not consider the unreliability of trust information and ignored the possibility of malicious recommendations. To this end, Ruan et al. [21] proposed a trust management framework based on measurement theory, which regards the measurement error of trust evaluation as confidence, measures the reliability of equipment trust evaluation, and improves the accuracy of the trust value. Ren et al. [22] introduced the blockchain consensus mechanism to prevent trust data from being forged and tampered with.
This model provided differentiated trust management options for devices with different computing and storage capabilities and did not rely on trusted third parties or interdomain trust assumptions.
In terms of reducing resource overhead, Jie et al. [23] proposed a trust evaluation model based on multisource feedback, which added feedback trust from the base station to the traditional trust relationship. Additionally, the entropy weight method was used to aggregate the multisource feedback trust to enhance the adaptability of the trust model, but malicious feedback information was not filtered. Gao et al. [24] proposed a dual-filtering K-means clustering algorithm, which effectively filtered the feedback of low-similarity and malicious devices in the current task context and improved computing efficiency while resisting malicious attacks. Kammoun et al. [25] regarded the base station at the edge of the network as a trusted third party and proposed a single-hop clustering mechanism based on node density, trust and node energy level. In this scheme, the energy consumption of nodes is fully considered, and the resource consumption of trust computing is reduced.
In the multiattribute evaluation category, Ma et al. [26] proposed a trust evaluation model based on multiple service attributes for the cloud service environment. The service requester determines the transaction object by integrating multiple service attributes; then, according to the actual service quality, a corresponding reward or punishment is given. However, the model is only a simple framework that specifies neither concrete attributes nor attribute weights. For the threats in mobile ad hoc networks (MANETs), Khan et al. [27] set the generation rate and packet loss rate of control packets as node attributes and compared the trust value with a threshold to determine the credibility of nodes. However, the attribute weights are given directly by the model based on the importance of the attribute at the current time, which is not sufficiently rigorous. Ma et al. [28] proposed a model based on integrated trust in which the attributes are divided into public attributes and trust attributes. This model pays more attention to the identity trust of nodes and does not adequately capture the behavior attributes of nodes.
These research works have effectively promoted the development of trust evaluation models in edge computing and enhanced system reliability. However, most current trust models still have the following shortcomings: (1) Existing multiattribute trust evaluation models do not consider the multiattribute nature of behavior trust and cannot reflect the subjectivity and complexity of the trust relationships between devices.
(2) Existing trust evaluation models based on graph theory ignore the deviation of evaluation results caused by the high overlap of trust paths. Such models have difficulty resisting collusion attacks between devices and also incur higher calculation costs.
To solve the above problems, this paper introduces the K shortest paths (KSP) algorithm and proposes a multiattribute evaluation model based on the KSP optimization algorithm. First, the multiattribute trust evaluation model is constructed to refine the evaluation attributes, and the attribute weights are determined by using the analytic hierarchy process (AHP). Second, to reduce the repeated recommendation of the same nodes during the formation of recommended trust paths, the trust path optimization algorithm RKSP is proposed to solve the trust dependence problem; it filters highly overlapping trust paths based on the A* algorithm and the penalty factor idea. Finally, experiments verify that the model can overcome the load balancing problem of current trust models and improve the efficiency of trust calculations.
The remainder of the paper is organized as follows. In Section 2, we introduce the architecture and working principle of the trust evaluation model for edge computing. We detail the multiattribute trust evaluation model and define the trust relationships between devices in Section 3. In Section 4, we present our RKSP algorithm based on the penalty factor. We describe the experimental settings and analyze our experimental results in Section 5. The conclusions are in Section 6.

II. SYSTEM ARCHITECTURE OF THE TRUST EVALUATION MODEL FOR EDGE COMPUTING
Edge computing is a distributed, decentralized computing architecture. It moves applications, user data and various services from the original network center node to edge nodes at the logical edge of the network for processing and provides nearby storage, computing and other functions [9].
The trust model architecture based on edge computing cuts the edge layer into smaller, more manageable areas, each called a management domain (MD), and transforms the original cloud computing center processing into edge server (ES) processing. Each MD consists of an ES and its subordinate edge devices (EDs). The ES is responsible for creating, evaluating and updating the trust relationships in the region, ensuring the operation of the trust model and the accuracy of the trust evaluation results. The EDs are assigned to different ESs according to their location and characteristics and are managed by those servers. To realize the sharing of trust information, the devices can dynamically adjust and update their trust relationships with other devices.

III. MULTIATTRIBUTE TRUST EVALUATION MODEL
In practical applications, the expected service attributes of different EDs differ; these different needs and preferences ultimately lead to the same object device receiving different evaluation results from different requesting devices. For example, in a video caching task, devices focus more on the speed of video caching than on download quality, download cost and other factors. Therefore, interactive devices with fast response times obtain higher trust values. However, this does not mean that such a device will perform well in completing the service tasks required by other devices. Therefore, in the process of evaluation, the equipment needs to consider the completion of the task from many aspects, such as response time, execution cost, reliability, and availability, to provide a more appropriate evaluation.
Accordingly, this section constructs a trust evaluation model based on multiple attributes, and gives the description and calculation equation of the trust relationship between devices.
Assume that the set of all devices in the domain is $ED = \{ed_1, ed_2, ed_3, \cdots, ed_n\}$ and that the multiattribute evaluation set is $ATTR = \{attr_1, attr_2, attr_3, \cdots, attr_m\}$. Device $ed_i$ is measured according to attribute $attr_j$, and the attribute value $o_{ij}$ of $ed_i$ with respect to $attr_j$ is obtained. The following decision matrix is then formed:
$$O = (o_{ij})_{n \times m} \tag{1}$$
Because different attributes have different physical meanings and value ranges, it is difficult to compare them directly; therefore, standardization is necessary. In this paper, the range transformation method is used to handle benefit-type and cost-type attributes by equations (2) and (3), and the final decision matrix $R$ is obtained. The matrix element $r_{ij}$ is computed as:
$$r_{ij} = \frac{o_{ij} - \min_i o_{ij}}{\max_i o_{ij} - \min_i o_{ij}} \quad \text{(benefit-type attributes)} \tag{2}$$
$$r_{ij} = \frac{\max_i o_{ij} - o_{ij}}{\max_i o_{ij} - \min_i o_{ij}} \quad \text{(cost-type attributes)} \tag{3}$$
where $\max_i o_{ij}$ denotes the maximum value of attribute $attr_j$ over all devices and $\min_i o_{ij}$ the minimum, $i$ represents the equipment number, and $j$ represents the attribute number.
The final standardized multiattribute matrix is:
$$R = (r_{ij})_{n \times m}$$
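As an illustration, the range transformation of equations (2) and (3) can be sketched as follows (a minimal NumPy sketch; the function and variable names are illustrative and not part of the original model):

```python
import numpy as np

def standardize(O, is_benefit):
    """Range-transform a decision matrix O (n devices x m attributes).

    is_benefit[j] is True for benefit-type attributes (larger is better)
    and False for cost-type attributes (smaller is better).
    """
    O = np.asarray(O, dtype=float)
    lo, hi = O.min(axis=0), O.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)  # avoid division by zero
    R = np.empty_like(O)
    for j in range(O.shape[1]):
        if is_benefit[j]:
            R[:, j] = (O[:, j] - lo[j]) / span[j]
        else:
            R[:, j] = (hi[j] - O[:, j]) / span[j]
    return R
```

After this step, every attribute lies in [0, 1], so attributes with different physical units can be aggregated directly.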

A. DIRECT TRUST
Direct trust is the trust evaluation obtained by a device based on its own historical interaction information and the result of aggregating multiple attributes. To weigh the influence of different attributes on the device trust value, it is necessary to determine the weight of each attribute, and different devices have different weight vectors $W_i = [w_1, w_2, w_3, \cdots, w_m]^T$. In multiattribute decision-making problems, weight determination methods include the entropy method [29], fuzzy clustering method [30], principal component analysis method [31], and analytic hierarchy process (AHP) [32]. In this paper, AHP is used to determine the weights, and the specific procedure is as follows: (1) According to the nine-scale method (Table 1), the attributes are compared pairwise to obtain the weight judgment matrix $A = (a_{ij})_{m \times m}$, where element $a_{ij}$ indicates the importance of attribute $attr_i$ over attribute $attr_j$ under the subjective judgment of the equipment, with $a_{ii} = 1$ and $a_{ij} = 1/a_{ji}$. (2) According to the weight judgment matrix, the weight of attribute $attr_i$ is:
$$w_i = \frac{1}{m} \sum_{j=1}^{m} \frac{a_{ij}}{\sum_{k=1}^{m} a_{kj}}$$
where $w_i$ is the weight of attribute $attr_i$, $a_{ij}$ and $a_{kj}$ are elements of the weight judgment matrix $A$, and $m$ is the number of attributes.
(3) Because the importance of attributes is based on the subjective judgment of users, it may not meet the consistency principle required by AHP, so the consistency ratio of the matrix needs to be tested.
The consistency ratio $CR$ is calculated as follows:
$$CR = \frac{CI}{RI}, \qquad CI = \frac{\lambda_{\max} - m}{m - 1}$$
where $CR$ is the consistency ratio, $CI$ is the consistency index, and $RI$ is the average random consistency index, whose standard values are given in Table 2. $\lambda_{\max}$ is the maximum eigenvalue of matrix $A$, which can be estimated as $\lambda_{\max} = \frac{1}{m}\sum_{i=1}^{m} \frac{(AW)_i}{w_i}$, where $m$ is the number of attributes and $w_i$ is the weight of attribute $attr_i$. By convention, when the consistency ratio $CR$ is less than 0.1, the matrix is considered to have passed the consistency test, and the attribute weights can be calculated accordingly. If not, the weight judgment matrix should be modified and then retested.
In summary, let $W_i = [w_1, w_2, w_3, \cdots, w_m]^T$ be the weight vector of the current node, where $w_j$ is the weight of attribute $attr_j$ and satisfies $\sum_{j=1}^{m} w_j = 1$ $(0 \le w_j \le 1)$. The comprehensive (direct trust) evaluation of device $ed_i$ is then:
$$D_i = \sum_{j=1}^{m} w_j r_{ij}$$
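The AHP steps above — pairwise comparison, column-normalized weight averaging, and the consistency check — can be sketched as follows. This is an illustrative NumPy sketch; the eigenvalue-based CI computation and Saaty's standard random-index table are standard AHP practice and assumptions here, not details taken from this paper:

```python
import numpy as np

# Standard average random consistency index RI for m = 1..9 (Saaty).
RI = [0.0, 0.0, 0.58, 0.90, 1.12, 1.24, 1.32, 1.41, 1.45]

def ahp_weights(A):
    """Weights from a pairwise judgment matrix A (m x m) by column
    normalization followed by row averaging, plus the consistency ratio CR."""
    A = np.asarray(A, dtype=float)
    m = A.shape[0]
    W = (A / A.sum(axis=0)).mean(axis=1)     # w_i from column-normalized A
    lam = np.max(np.linalg.eigvals(A).real)  # maximum eigenvalue of A
    CI = (lam - m) / (m - 1)                 # consistency index
    CR = CI / RI[m - 1] if RI[m - 1] > 0 else 0.0
    return W, CR
```

A judgment matrix is accepted only if the returned CR is below 0.1; otherwise the pairwise comparisons should be revised.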

B. RECOMMENDED TRUST
Recommendation trust is a trust evaluation based on the aggregation of the trust opinions of other trusted neighbors. As shown in Figure 2, the recommended trust is passed through the nodes that interact with $V_i$ or $V_j$, that is, along the paths from $V_i$ to $V_j$. Each trust path corresponds to a recommended trust value, reflecting one recommended trust relationship between the nodes. It takes into account the feedback of multiple recommended trust paths and can describe the objective credibility.
According to the trust transfer and aggregation rules, the recommended trust is calculated as follows:
$$F_{ij} = \frac{1}{k} \sum_{x=1}^{k} P^x_{i \to j}$$
$$P^x_{i \to j} = e_{i \to L1} \cdot e_{L1 \to L2} \cdots e_{Ln \to j}$$
where $F_{ij}$ refers to the aggregate value of the recommended trust over all trust paths between the two nodes, that is, the global recommended trust of node $V_i$ toward node $V_j$, and $k$ represents the total number of trust paths between the nodes. $P^x_{i \to j}$ refers to the recommended trust value calculated from the $x$-th trust path from source node $V_i$ to destination node $V_j$, and $e_{i \to L1}$ refers to the recommendation value of node $V_i$ toward node $L1$; $L1, L2, \ldots, Ln$ represent the nodes passed through by the $x$-th trust path.
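A minimal sketch of the transfer-and-aggregation idea, assuming multiplicative transfer along a path and simple averaging over paths (both are common choices assumed here for illustration):

```python
def path_trust(edge_trust, path):
    """Trust transferred along one path: the product of the edge
    recommendation values (multiplicative transfer rule)."""
    t = 1.0
    for u, v in zip(path, path[1:]):
        t *= edge_trust[(u, v)]
    return t

def recommended_trust(edge_trust, paths):
    """Aggregate the k path values by averaging (aggregation rule)."""
    return sum(path_trust(edge_trust, p) for p in paths) / len(paths)
```

Longer paths thus attenuate trust multiplicatively, while averaging over several differing paths keeps a single recommender from dominating the result.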

C. COMPREHENSIVE TRUST
Comprehensive trust is the global trust of a device to another device, which is the final trust value obtained by aggregating the direct trust and recommended trust in some way. When there is no direct interaction record between nodes, the recommended trust is regarded as the comprehensive trust to build the trust between unfamiliar nodes. To improve the reliability of trust, and overcome the limitation of subjective assignment, an adaptive aggregation method based on information entropy theory is used.
The entropy weight method uses differences between pieces of information to assign weights and effectively corrects the degree of difference between trust values, which is more objective. However, this method requires a sufficient sample size to determine the weights. Based on information entropy theory, the comprehensive trust $T_{ij}$ is calculated as follows:
$$T_{ij} = \omega_1 D_{ij} + \omega_2 F_{ij} \tag{11}$$
where $\omega_1$ and $\omega_2$ are the adaptive weights of direct trust and feedback trust, respectively, calculated as:
$$\omega_1 = \frac{1 - H(D_{ij})}{\left(1 - H(D_{ij})\right) + \left(1 - H(F_{ij})\right)}, \qquad \omega_2 = 1 - \omega_1$$
where $H(D_{ij})$ and $H(F_{ij})$ are the information entropies of direct trust and feedback trust, respectively:
$$H(x) = -x \log_2 x - (1 - x) \log_2 (1 - x)$$
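One plausible realization of this entropy-based adaptive aggregation, assuming binary information entropy of a trust value in [0, 1] and weights proportional to each component's certainty (both forms are assumptions made for illustration, not taken verbatim from the paper):

```python
import math

def binary_entropy(t):
    """Information entropy of a trust value t in [0, 1]."""
    if t <= 0.0 or t >= 1.0:
        return 0.0
    return -t * math.log2(t) - (1 - t) * math.log2(1 - t)

def comprehensive_trust(direct, feedback):
    """Aggregate direct and feedback trust with adaptive weights that
    favor the less uncertain (lower-entropy) component."""
    c1 = 1.0 - binary_entropy(direct)    # certainty of direct trust
    c2 = 1.0 - binary_entropy(feedback)  # certainty of feedback trust
    if c1 + c2 == 0.0:                   # both maximally uncertain
        return 0.5 * (direct + feedback)
    w1 = c1 / (c1 + c2)
    return w1 * direct + (1 - w1) * feedback
```

A trust value near 0.5 carries maximum entropy (uncertainty), so this scheme automatically shifts weight toward whichever of the two opinions is more decisive.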

IV. RKSP OPTIMIZATION ALGORITHM BASED ON A PENALTY FACTOR
In a large-scale trust graph, especially in the case of a large number of nodes and frequent interaction in the edge computing environment, the number of trust paths between two nodes is usually very large. The search and aggregation of trust paths require considerable time and space resources. If all trust paths between two nodes are exhausted, the calculation cost will increase. In addition, there may be a large number of shared nodes or shared edges in multiple trust paths between two nodes that will lead to an overreliance of the obtained trust information on the recommendation value of some nodes. Furthermore, collusion attacks are easily caused between malicious nodes and make the nodes misjudge recommendations. Therefore, this paper optimizes the trust path by limiting the number of trust paths and the repetition between paths. Assume that Figure 3 is a trust subgraph composed of source node S and destination node E in a large-scale trust graph. Due to the limited space, some nodes and edges are omitted.
The trust paths from source node S to destination node E are shown in Table 3. From Table 3, it can be seen that the first three of the five trust paths from node S to node E pass through the trust edge S→1, indicating that these paths all depend on the recommendation information of node 1. If node 1 is a malicious node participating in a collusion attack [33], then it can cheat the source node S by providing recommendation information that denigrates honest nodes and exaggerates similar nodes. Therefore, the recommendation information obtained by node S is no longer reliable.
When the first k paths selected by the model derive from just a few nodes, the final trust value will depend on the trust recommendations of those nodes, and the whole trust network is effectively controlled by them. Highly overlapping trust paths, whether sharing nodes or trust edges, therefore greatly affect the multisource trust recommendation, not only making it difficult to resist collusion attacks between nodes but also leading to an unreliable trust evaluation model.
Although the KSP algorithm can reduce the search time of the trust path, it will also inadvertently increase the risk of the trust model if it does not restrict the path search method. To overcome the limitations of the traditional KSP algorithm, the paper adds constraints in the search process of the trust path and finally obtains multiple trust paths with differences.

A. KSP ALGORITHM PROBLEM DESCRIPTION
Given a trust graph G, there are multiple different trust paths from one node $v_i$ to another node $v_j$, and the resulting set $pathset(G, v_i, v_j) = \{path_1, path_2, path_3, \cdots, path_k\}$ $(k \in N^*)$ is called the path set between $v_i$ and $v_j$ on G. Finding the top k paths between $v_i$ and $v_j$ on the trust graph G such that the repetition between any two returned paths meets the requirement is called the problem of the top k shortest paths under a repetition constraint.

B. RKSP ALGORITHM
Because the lossy variant of the KSP algorithm has the advantage of computational efficiency, it can meet the needs of a large-scale trust graph in the edge environment. Therefore, based on the penalty factor idea of the lossy algorithm, and by introducing a repeatability limit factor and the A* algorithm, this paper proposes a trust subgraph optimization algorithm (RKSP) that satisfies a repeatability constraint.
The penalty factor idea works as follows. First, the Dijkstra algorithm is used to obtain the shortest path between two nodes, and the edges of the path are penalized, i.e., the weight of each edge is multiplied by the penalty factor. Then, the Dijkstra algorithm is run repeatedly to find the remaining paths. After the penalty, the weights of the affected edges change, which discourages those edges from being traversed in the next search and thus reduces path repetition.
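The penalty-factor loop described above can be sketched as follows (a self-contained Python sketch with a textbook Dijkstra implementation; edge weights here are path costs, and multiplying traversed edges by the penalty factor discourages their reuse — this illustrates only the lossy-KSP idea, not the full RKSP algorithm):

```python
import heapq

def dijkstra(adj, s, e):
    """Shortest path from s to e; adj[u] = {v: weight}. Returns the
    node list of the path, or None if e is unreachable."""
    dist, prev, seen = {s: 0.0}, {}, set()
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        if u == e:
            break
        for v, w in adj[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if e not in prev and e != s:
        return None
    path, v = [e], e
    while v != s:
        v = prev[v]
        path.append(v)
    return path[::-1]

def penalty_ksp(adj, s, e, k, alpha=1.2):
    """Lossy K-shortest-paths: after each found path, multiply the
    weights of its edges by the penalty factor alpha so that the next
    Dijkstra run tends to avoid them."""
    adj = {u: dict(nbrs) for u, nbrs in adj.items()}  # work on a copy
    paths = []
    for _ in range(k):
        p = dijkstra(adj, s, e)
        if p is None:
            break
        paths.append(p)
        for u, v in zip(p, p[1:]):
            adj[u][v] *= alpha  # penalize traversed edges
    return paths
```

On a small graph where S→1→E is initially cheapest, the penalty pushes the second search onto the alternative route S→2→E, yielding two edge-disjoint paths.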
According to the characteristics of the EDs and trust recommendations, this paper improves the KSP algorithm as follows: (1) the scope of the penalty is extended to all outgoing edges of the nodes contained in the current path; (2) the Dijkstra algorithm is used only when searching for the first path, and the remaining paths are searched by the A* algorithm.
The above improvements can effectively reduce the proportion of any single node's recommendation value in the overall recommended trust and improve the reliability of the evaluation model. Moreover, the A* algorithm can locate the next-hop node quickly, reducing the number of visited nodes and the search cost of the algorithm.

1) A* ALGORITHM
The A* algorithm is a directed search algorithm among the shortest path algorithms that is suitable for complex large-scale graphs. By setting a heuristic function, the path search becomes more directional and faster. The A* algorithm uses equation (17) to search the path:
$$f(v) = g(v) + h(v) \tag{17}$$
where $f(v)$ represents the total cost when the path passes through node $v$, $g(v)$ represents the actual cost from the source node to the current node $v$, and $h(v)$ represents the estimated cost from the current node $v$ to the destination node. The more accurate the heuristic function $h(v)$ is, the more efficient the algorithm.
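A minimal sketch of an A* search driven by $f(v) = g(v) + h(v)$, where the heuristic h is supplied by the caller (in RKSP it would come from the Dijkstra distance set D; here it is an assumed parameter):

```python
import heapq

def astar(adj, s, e, h):
    """A* search on adj[u] = {v: weight}; h maps a node to its estimated
    cost to the destination e (an admissible heuristic)."""
    g = {s: 0.0}
    prev, closed = {}, set()
    pq = [(h(s), s)]  # priority is f(v) = g(v) + h(v)
    while pq:
        f, u = heapq.heappop(pq)
        if u == e:  # destination reached: rebuild the path
            path = [e]
            while path[-1] != s:
                path.append(prev[path[-1]])
            return path[::-1]
        if u in closed:
            continue
        closed.add(u)
        for v, w in adj[u].items():
            ng = g[u] + w
            if ng < g.get(v, float("inf")):
                g[v], prev[v] = ng, u
                heapq.heappush(pq, (ng + h(v), v))
    return None
```

With h ≡ 0, the search degenerates to Dijkstra; a tighter h prunes more of the frontier and visits fewer nodes, which is exactly the efficiency benefit the text describes.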

2) HEURISTIC FUNCTION
The heuristic function $h(v)$ is taken from the distance set $D$, which records the shortest distance from each node to the destination node. Set $D$ can be obtained by the Dijkstra algorithm and changes as trust paths are penalized. The penalty equation is as follows:
$$e'_{uv} = \frac{e_{uv}}{\alpha}$$
where $e_{uv}$ is the trust value of the edge from node $u$ to node $v$, and $\alpha$ $(\alpha > 1)$ is the penalty factor, whose specific value depends on the node tolerance.

3) REPEATABILITY FUNCTION
The repeatability function is used to calculate the repetition between paths and to add trust paths that meet the repeatability requirement to the path set. Assume there are two trust paths $path_i$ and $path_j$; the repetition of $path_i$ with respect to $path_j$ equals the ratio of the number of trust edges shared by the two paths to the number of all trust edges in $path_i$, as shown in equation (18).
Set the threshold value of repeatability to θ, and the value of θ can be adjusted according to the user's tolerance.
$$repetition(path_i, path_j) = \frac{\left|SameEdge(path_i, path_j)\right|}{\left|path_i\right|} \tag{18}$$
The pseudocode of the RKSP algorithm is shown in Algorithm 1.

Algorithm 1 RKSP
Input: trust graph G(V, E), source node s, destination node e
Output: path set Pset
1. Dijkstra(G, s, e)
2. get a path p and a distance set D
3. add p to Pset; penalize the outgoing edges of the nodes in p and update D
4. while |Pset| < Kth do
5.   openlist ← {s}; closed ← ∅
6.   while openlist ≠ ∅ do
7.     u ← the node in openlist with minimum f(u) = g(u) + D(u)
8.     for each v ∈ Dst(u) do
9.       if v == e then
10.        set father(e) ← u and go to line 15
11.      if v ∈ closed then
12.        continue
13.      update g(v) and father(v); add v to openlist
14.    add u to closed; remove u from openlist
15.  v ← e; p ← {e}
16.  while v ≠ s do
17.    v ← father(v); add v to p
18.  if repetition(p, q) ≤ θ for every q ∈ Pset then add p to Pset
19.  for each v ∈ p do
20.    penalize the outgoing edges of v and update D
21. return Pset

where openlist represents the set of nodes waiting to be visited, closed represents the set of visited nodes, p represents the current trust path, Pset represents the set of trust paths, Kth represents the number of paths to be found, father(u) represents the parent of node u, and Dst(u) represents the adjacent nodes of node u. First, the Dijkstra algorithm is used to preprocess the trust graph, obtaining the shortest path p from the source node to the destination node and the shortest-distance set D from each node to the destination node. Then, distance set D is used as the heuristic information of the A* algorithm, which is called to search the remaining paths. Additionally, the repeatability function is used to determine whether each path meets the repeatability requirement until Kth trust paths are finally obtained. When the number of paths found under the current threshold θ is less than Kth, the repeatability limit can be relaxed by setting an upper limit on the number of cycles and a repeatability increasing factor ∆θ. This prevents the algorithm from falling into an infinite loop and terminates it once the number of paths reaches Kth or the repeatability threshold reaches its upper limit.
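The repeatability check of equation (18) can be sketched as follows (assuming the denominator counts the trust edges of the first path, which makes the measure asymmetric):

```python
def repetition(path_i, path_j):
    """Ratio of trust edges shared with path_j to the edge count of path_i."""
    edges_i = set(zip(path_i, path_i[1:]))
    edges_j = set(zip(path_j, path_j[1:]))
    return len(edges_i & edges_j) / len(edges_i)
```

A candidate path is admitted to Pset only if this ratio stays at or below the threshold θ against every path already in the set.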
The penalty factor ensures that the trust path found in each cycle differs from the previous traversal result. In addition, the search of the A* algorithm is directional, which can greatly reduce the number of visited nodes and improve the efficiency of the trust path search. Therefore, the recommendation information becomes more multisourced and closer to the actual situation.

V. SIMULATION EXPERIMENT
To verify the reliability and resource cost of the model, MATLAB is used in this paper to carry out simulation experiments that are compared with the RLTS (reliable and lightweight trust scheme) model [23] and the RFSN model [34]. The simulation detection area was 200 m×200 m square, and 200 nodes were randomly placed to simulate the ED with limited resources.
To make the experiment closer to a real edge computing environment, the nodes were divided into the following categories: (1) Honest nodes provided high-quality services with a probability of 90%, simulating node anomalies caused by nonintrusive factors; (2) Honest nodes had three states: idle, normal and busy, in which they refused service requests with probabilities of 10%, 20%, and 40%, respectively, simulating EDs with different degrees of busyness. For a given node, the probabilities of the above states were 50%, 30% and 20%.
(3) Malicious nodes were divided into two categories: one provided malicious services and dishonest recommendation information to other nodes, and the other provided honest services but recommendation information that denigrated honest nodes and exaggerated similar nodes. Each category accounted for half of the malicious nodes; (4) All nodes were divided into three categories by response speed: fast, general and delayed. Different types of nodes had different response times for the same task, simulating EDs with different collaboration speeds. The proportions of these node types were 50%, 30% and 20%.

A. PROPERTY SETTINGS
According to the characteristics of the EDs, the following attributes were selected to evaluate a device in the experiment: reliability ($R_1$), availability ($R_2$), response time ($R_3$), and node residual energy ($R_4$).
(1) Reliability ($R_1$): the ability of the device to complete interactive tasks, described by the success rate of the service. The node reliability is calculated as:
$$R_1 = \frac{c_s}{c_{accept}}$$
where $c_s$ is the number of successful interactions, and $c_{accept}$ is the number of accepted interactive tasks.
(2) Availability ($R_2$): the ability of a device to respond to task requests. When the device is suspended due to failure, being busy or other reasons, it cannot respond to the service requests of other devices. The node availability is calculated as:
$$R_2 = \frac{c_{accept}}{c_{apply}}$$
where $c_{accept}$ is the number of accepted interactive tasks, and $c_{apply}$ is the total number of task requests received.
(3) Response time (R 3 ): the time from the service request to task completion that is provided by the device to the edge server.
(4) Node residual energy (R 4 ): the current energy value of the node that is provided to the ES by the node to be evaluated. It reflects certain subjectivity.
Since attributes $R_1$ and $R_2$ take values in [0, 1], only attributes $R_3$ and $R_4$ needed to be normalized by equations (2) and (3). In addition, attributes $R_2$, $R_3$ and $R_4$ are not transferable and cannot be passed along the trust path. Therefore, only attribute $R_1$ (node reliability) was transferred along the trust path, and the other attributes were calculated at the end of the recommendation.

B. PARAMETER SETTING 1) K-VALUE
The source node and the destination node were randomly selected from the 200 nodes to obtain the trust subgraph between them. The change in the recommended trust value between the nodes was observed as the k-value gradually increased, as shown in Figure 4. It can be seen from Figure 4 that when k reaches 20, the fluctuation of the curve is greatly reduced, and the difference in trust value is within 0.02; as the k-value increases further, the trust value gradually stabilizes. When the proportion of malicious nodes was 10%, the trust value stabilized once the number of paths reached 20. Moreover, as the number of malicious nodes increased, the k-value at which the trust value stabilized also increased.
Considering the different malicious ratios, we set the k-value to 40 for the subsequent experiments. In fact, different trust subgraphs may lead to a change in the k-value, but the fluctuation is not too large. This experiment was performed solely to establish a reference.

2) PENALTY FACTOR α
The setting of the penalty factor should consider not only the degree of difference of the trust paths obtained in the next cycle but also that the trust value must not fall below that of an untrusted node after the penalty. As an experimental example, the initial trust value of the trust edge was set to 1. The results are shown in Figure 5. As seen from Figure 5, the larger the value of α, the stricter the limit on the number of repetitions of a trust edge. When α was 1.1, the influence on the trust value was too small to constrain repetition. When α was greater than 1.4, the trust value fell below the middle value of 0.5 after two penalties. For the same α, the lower the initial trust value, the faster the trust value decreased, and the fewer repetitions could be tolerated. Therefore, a value with a higher tolerance was selected in this paper, and α was set to 1.2. In practice, the value of the penalty factor can be adjusted according to the tolerance of the nodes.
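As a quick illustration of this tolerance trade-off, assuming each penalty divides an edge's trust value by α (an assumed form of the penalty rule), the value after n penalties is $t_0 / \alpha^n$:

```python
def trust_after_penalties(t0, alpha, n):
    """Trust value of an edge after n penalties, assuming each penalty
    divides the current value by the factor alpha."""
    return t0 / alpha ** n
```

Under this rule, α = 1.2 keeps an edge that started at 1 above 0.5 after two penalties, whereas any α above √2 ≈ 1.414 pushes it below 0.5, matching the experimental observation.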

3) REPETITION LIMIT FACTOR AND REPETITION INCREASING FACTOR
For the repeatability limit factor θ, the smaller the initial value, the more stringent the repeatability limit on trust edges. However, an overly strict limit easily causes the number of paths to be less than k, forcing the limit factor to increase repeatedly and raising the complexity of the algorithm. Similarly, if the repetition increasing factor ∆θ is too small, the tolerance cannot be relaxed effectively; if it is too large, the tolerance is relaxed too much. The relationship between repeatability and path length is shown in Table 4. In practice, because the average path length of a sparse matrix differs from that of a dense matrix, path repeatability cannot be measured with a fixed value. Starting from a stricter repeatability threshold and gradually relaxing it with the increasing function can be considered. In this paper, the repeatability limit factor θ and the repetition increasing factor ∆θ were both set to 0.2.

C. RELIABILITY EVALUATION 1) DIFFERENTIATION OF PERSONALITY PREFERENCE
The weight was calculated by AHP. It was assumed that the preference degree of device A for each attribute was reliability > availability > response time > node residual energy.
According to the nine-scale method, the corresponding weight judgment matrix can be obtained. This experiment mainly verified whether the trust model can better reflect the influence of subjective preference on the trust value after introducing the multiattribute evaluation mechanism. The model was compared with a trust model relying only on reliability evaluation and a trust model using the average weighting method, and their performance in distinguishing personality preferences was compared. The experimental results are shown in Figure 6.
It can be seen from Figure 6 that, due to its coarse granularity, the single-attribute evaluation model showed little further change once the trust value stabilized. After the introduction of the multiattribute evaluation model, the trust value changed significantly as the number of interactions increased, owing to the influence of each attribute. The two weighting methods reflected the influence of different weight vectors on the trust values of nodes. The fluctuation of the trust value obtained by AHP was more pronounced, showing that the method can better distinguish the subjective bias of a node without affecting the overall trend of the trust.

2) ANALYSIS OF THE INTERACTION SUCCESS RATE
The successful interaction rate refers to the ratio of the number of successful interactions between devices to the total number of interactions, which is used to measure whether the trust model can effectively resist the fraud of malicious nodes.
To verify the reliability of the scheme, 100 interaction cycles of simulation experiments were carried out to investigate the change in the interaction success rate of the trust model with the increase in malicious node proportion. Each node initiated a service request in each cycle, and the node could be both a service requester and a service provider. The change in the interaction success rate with the increase in malicious nodes is shown in Figure 7.
As the proportion of malicious nodes increased, the interaction success rate of every trust model declined to varying degrees. However, the success rate of the proposed model changed only slightly, and its curve remained relatively flat. This is because, when obtaining recommended trust, the RKSP algorithm penalizes the generation value of repeated nodes to filter highly overlapping trust paths. The resulting trust paths are more diverse, which effectively filters out the recommendation values of malicious nodes and better reflects node behavior.
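The penalty idea behind RKSP can be sketched as follows. This is a simplified illustration, not the paper's implementation: plain Dijkstra stands in for the A* search, edge weights stand in for trust-path generation values, and the penalty coefficient is an assumption. After each shortest path is extracted, the cost of its edges is inflated so that subsequent searches avoid highly overlapping paths.

```python
# Minimal sketch of penalty-based k-path search (Dijkstra stands in for A*;
# the penalty factor 2.0 is an assumption for this example).
import heapq

def dijkstra(adj, src, dst):
    """Shortest path from src to dst in a weighted digraph, or None."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float('inf')):
            continue
        for v, w in adj.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if dst not in dist:
        return None
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

def penalized_k_paths(adj, src, dst, k, penalty=2.0):
    """Repeatedly extract a shortest path, then inflate the cost of its
    edges so later iterations prefer non-overlapping alternatives."""
    adj = {u: dict(nbrs) for u, nbrs in adj.items()}  # work on a copy
    paths = []
    for _ in range(k):
        p = dijkstra(adj, src, dst)
        if p is None:
            break
        paths.append(p)
        for u, v in zip(p, p[1:]):  # penalize every edge just used
            adj[u][v] *= penalty
    return paths
```

In a small diamond graph, the second extracted path switches to the alternative branch instead of reusing the cheapest edges.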

3) ACCURACY EVALUATION OF TRUST MODEL
Due to the influence of many uncertain factors, measurement error will inevitably exist in the process of trust evaluation. Model accuracy refers to the degree of agreement between the measured value and the real value, which is usually measured by error. The smaller the error, the higher the accuracy.
To measure the accuracy of the trust model, this paper uses the mean absolute deviation (MAD) to calculate the error:

MAD = (1 / NP) Σ_{j ∈ N1} Σ_{i ∈ N2} |T_ji − A_i|

where T_ji is the measured trust value of node j for node i, A_i is the actual trust value of node i, NP is the total number of entity pairs with trust relationships, and N1 and N2 denote the sets of service requesters and service providers, respectively.
In the simulation environment, 30% of the nodes were randomly selected as malicious to compare the accuracy with the other models. The experimental results are shown in Figure 8. As can be seen from Figure 8, the MAD values of all models were relatively high at the beginning, decreased as the number of cycles increased, and finally reached a stable state. The main reason was that the behavior of collusive malicious nodes is relatively hidden, and it was difficult for honest nodes to identify them in a short time. However, the MAD value of the proposed model reached the stable state quickly and was always the smallest. This is because the RKSP algorithm can filter repeated trust paths quickly, while the punishment of malicious nodes and the information-entropy-based aggregation method make the trust evaluation more accurate.

D. TIME COST
The time cost of the trust model mainly comes from the calculation of the comprehensive trust value, so the total time of comprehensive trust aggregation is used to evaluate the computational efficiency of the whole model. In the simulation environment, 30% of the nodes were randomly selected as malicious, and each group of experiments was conducted five times to obtain the average value. Figure 9 shows how the time overhead changes as the network grows from 100 to 1,000 nodes. When the number of nodes was small, the time costs of the models were very close, but as the network scale increased, the differences between models grew. The time cost of the proposed model grew relatively slowly and gradually became lower than that of the other two models. This is because, when obtaining recommended trust, the RKSP algorithm uses the KSP algorithm to reduce the number of search paths; highly overlapping trust paths are then eliminated through the repeatability limit, which reduces the number of paths participating in the calculation. In addition, the A* algorithm improves search efficiency and greatly reduces the workload of trust computing.

E. ENERGY COST
Regarding energy cost, the model was verified through the change in network energy within the domain and node survival. Assume that the initial energy of each node is 0.5 J and that the energy consumed by each transmission and reception of a data packet is 50 nJ/bit, where J denotes joules and packet size is measured in bits. The packet size was randomly selected from the range [3,000, 4,000] bits, and the packet size of the three models was identical within the same interaction cycle. The experiment ran for 1,000 simulation cycles. The change in network energy within the region is shown in Figure 10, and the number of surviving network nodes is shown in Figure 11.

As the number of interactions between nodes increased, the network energy of each model showed a downward trend. The energy consumption curve of the proposed model was relatively flat, and its overall network energy remained higher than that of the other two models. Because the recommended trust is stored only in the ES, the trust recommendation process is completed without the participation of resource-constrained nodes; a node needs only to send its trust information and query request to the server. Therefore, the model can effectively reduce the energy consumed in transmitting trust values and extend the network life cycle.
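The experiment's energy model follows directly from the stated parameters. The sketch below uses only the figures given in the text (0.5 J initial energy, 50 nJ/bit per transmission or reception, packets of 3,000 to 4,000 bits); the one-packet-per-round survival estimate is an assumption for illustration.

```python
# Energy bookkeeping under the stated simulation parameters.
INIT_ENERGY_J = 0.5        # initial energy per node, in joules
COST_PER_BIT_J = 50e-9     # 50 nJ/bit per transmission or reception

def transmit(energy_j, packet_bits):
    """Sender's remaining energy after sending one packet
    (the receiver is charged the same per-bit cost)."""
    return energy_j - packet_bits * COST_PER_BIT_J

def rounds_until_death(packet_bits=3500):
    """Rounds a node survives if it sends one average-sized packet
    per round (assumption made for this estimate)."""
    return int(INIT_ENERGY_J // (packet_bits * COST_PER_BIT_J))
```

At the mid-range packet size of 3,500 bits, a node spends 1.75e-4 J per send, so a sender-only node lasts roughly 2,857 rounds; charging both transmission and reception halves that, which is consistent with the node-death onset observed in Figure 11.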
As shown in Figure 11, as the number of interactions increased, nodes in each model began to die off to different degrees. The RLTS and RFSN models began to exhibit node death after 500 rounds, and all of their nodes died after 2,000 and 2,500 rounds, respectively. In the proposed model, node death began only after 1,500 rounds. In this paper, node energy is added to the multiattribute set: when the values of the other attributes are similar, nodes with higher energy are preferred for interaction, which avoids the premature failure of nodes with high trust values. This shows that the multiattribute model can guarantee load balancing within the domain and prolong the lifetime of the whole network to a certain extent.

VI. CONCLUSION
With the rise of edge computing, the relationships between edge devices are becoming more complex, and the quantity of data in the network is increasing daily. It is difficult for edge devices to undertake complex storage and trust aggregation tasks. Moreover, the quantity of data is not equal to the amount of information, which can bias the trust value and increase the computing load of nodes. Therefore, based on the traditional trust evaluation model, a multiattribute evaluation model based on the KSP optimization algorithm was proposed. First, based on the multiple attributes of a service, users' preferences were fully considered. Second, the KSP algorithm and the A* algorithm were introduced into the field of trust evaluation, which not only ensured computational efficiency but also improved the reliability of the model. The experimental results showed that this method can improve the interaction success rate, effectively suppress fraud by malicious entities, and ensure the honesty of edge device interactions.
In future research, we will consider evaluation methods for unknown weights and evaluate more attributes to enhance the objectivity of the trust model.

KUNQI XU was born in 1995. Her main research interests include trust evaluation and trusted computing.
XIAOYAN LIANG received the master's degree in computer application and technology from North China Electric Power University, in 2007, and the Ph.D. degree in computer application and technology from Beihang University, in 2016. She is currently a Teacher with Hebei University. Her interests are in network security and semantic analysis.

VOLUME 8, 2020