Multi-dimensional Trust Quantification by Artificial Agents through Evidential Fuzzy Multi-Criteria Decision Making

Interest in man-machine trust has burgeoned during the last few decades, and this growing interest in trust-building has led to the study of the non-dichotomous nature of trust. Trust as a social behavior is an integral part of effective team building. The major focus has been on studying how humans build trust towards machines, whereas few attempts have been made to study the reverse. Studies have shown that trustworthiness perceptions initialize trust behavior, whereas trust behavior influences subsequent trustworthiness perceptions. This paper presents the design and comparative analysis of evidential fuzzy multi-criteria decision-making (EFMCDM) based multi-dimensional trust quantification schemes to quantify an artificial agent's trust level towards a human agent in a collaborative environment.


A. Trust
Trust is defined as the willingness of a party to become vulnerable to the actions of another party, with the expectation that the other party will perform an action important to the first party, without monitoring or controlling the other party [1]. A lack of trust exists when one party does not have faith in the competencies of another, or questions the other's motivation to take the promised action seriously [2]. Trust can be seen as a relationship between two or more individuals in which one perceives that the others are involved, are competent, will complete their fair share of the work, and will make an honest effort to meet commitments. Trust is important in teams because it lowers transaction costs [3]. Individuals who do not trust fellow team members are more likely to monitor or double-check each other's work to ensure the quality of the team's output. This self-protective activity increases the amount of time and resources needed to complete a project. While trust is important in all teams, it is crucial in virtual teams, where members generally do not meet face-to-face. In virtual teams, trust becomes an important component in preventing psychological distance, and it increases confidence in relationships by promoting open information exchange. Trust is often referred to as the glue that holds the virtual team together, and it has been considered a determinant of effectiveness in collaborative team tasks [4]. Output produced by a well-functioning team should be superior to the output of any individual, and individuals who trust each other are usually more contented with the team experience; better team performance and satisfaction follow from a mutual trust relationship. In collaborative teams, any agent, whether human or artificial, is likely to be self-interested and may be unreliable. Such properties may come from the fact that the agent needs to cooperate to achieve its personal goals more than a common goal.
In relevant situations, despite the uncertainty of system interaction, an artificial agent cannot afford to be non-interactive, since its goals are unachievable without external help. The agent therefore needs to rely on the other agent (the human) to cope with the difficulties of goal achievement. In this context, several formalisms can be used to describe man-machine collaborative teams; among them, probability theory has been widely adopted to model trust, because probability represents systems with high uncertainty and risk. The probabilistic approach may refer to three different sets of tools [5]. Statistical inference is the process of modeling and estimating a probability function for a random process: statistical modeling defines a function that represents the system, while inference estimates that function. Probability theory is the mathematical domain that combines the tools used to study probabilities as mathematical objects, together with the relationships and properties of these objects; its central concepts are random variables and stochastic processes. Decision theory adds applied decision making on top of probability theory: the definition of the decision problem clarifies which tools are to be used in statistical inference when estimating the probability function of the system.
Associating decision theory with the field of application of trust leads to defining trust models representing the system. In most cases, decision theory provides a way to estimate the probability of the system based on previous interactions. In this work, we associate probability theory with the field of application of trust: defining a trust model, in our view, amounts to identifying a probability model representing the system and the set of properties of the system as mathematical objects of trust, together with a statistical inference process that, in most cases, provides a way to estimate the trust probability of the system with random variables.
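As a concrete illustration of this statistical-inference view (a minimal sketch of our own, not a model proposed in this paper), past interactions can be treated as Bernoulli outcomes and trust estimated as the posterior mean of a Beta distribution:

```python
def beta_trust(successes: int, failures: int,
               prior_a: float = 1.0, prior_b: float = 1.0) -> float:
    """Posterior mean of a Beta-Bernoulli model of interaction outcomes.

    Each cooperative interaction counts as a 'success'; the returned value
    is a point estimate of the probability that the partner cooperates in
    the next interaction.
    """
    return (prior_a + successes) / (prior_a + successes + prior_b + failures)

# 8 cooperative interactions out of 10, with a uniform Beta(1, 1) prior:
print(beta_trust(8, 2))  # prints 0.75
```

With no observations the estimate falls back to the prior mean (0.5 for a uniform prior), which mirrors the idea that trust before interaction rests on dispositional factors rather than behavioral evidence.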

B. Antecedents of Trust
Understanding trust actions is delineated through a critical understanding of their antecedents [1]. The model of Mayer et al. depicts trustworthiness as one of the predictors of trust intentions and trust actions. Trustworthiness is an information-oriented perception of a teammate. In either case, the possibility of inaccuracies exists; however, the perceptions may impact behaviors irrespective of their accuracy. The trustor is especially influenced to develop a desire to trust in the early stages, whereas trust beliefs and trust actions impact the trust process in the later stages of interaction [6,7]. This work adopts an approach that considers familiarity of the trustee as the trustor's effect, trustworthiness as trust beliefs, and trust actions over time, which is consistent with Jones and Shah [7].

Trustworthiness:
Trustworthiness is the trustor's perception of the trustee and an important antecedent of trust [8]. Perceived trustworthiness has been theorized as the trustor's perception of the trustee's competence, benevolence, and integrity; these perceptions ascribe motives to the trustee's actions [9]. The trustworthiness perception of the trustor is therefore a function of the interaction between trustor and trustee, as the trustor processes the trustee's information. Trustworthiness perceptions are beliefs credited by the trustor and are not necessarily factual, since the perceptions may or may not be accurate. Trustworthiness beliefs become more accessible as the relationship develops and more information becomes available. Through mature interactions, the trustor is more likely to depend on the behavior of the trustee rather than on dispositional factors [7,10]. Research has revealed that trust behaviors from one individual cause trust behaviors from the other, which in turn highlight the trustworthiness of others [11]. The initial trustworthiness perception has a significant influence on later trust behaviors in dyads.

Familiarity:
Trust is an essential aspect of multi-agent collaborative environments [12]; therefore, knowing the trust antecedents is crucial to obtainers, benefactors, and intermediaries. Research has shown that, in parallel to trustworthiness perception, familiarity also has a distinctive influence on trust-building mechanisms [13]. The general premise is that familiarity with the trustee is based on preceding interactions and experiences [14]. Familiarity serves as a precondition for trust that allows individuals to develop confidence in each other's trustworthiness [15]; it permits relatively safe expectations about future behavior and absorbs the residual risk [15]. Consequently, the trustee's familiarity is a trust antecedent that helps provide the context to clarify future expectations based on previous interactions [13]. Several empirical studies have revealed that the trustor's satisfaction during previous interactions determines his trust in the trustee [16,17]. Satisfaction during the previous course of interactions not only affects the trust level but also induces better usage and familiarity [18]. During the cultivation of trust, familiarity is imperative, since trust is only possible within the familiar world [19]. The relationship between familiarity and trust is strongest when trustees behave in accordance with positive expectations about them [20]. Experimental surveys also show that familiarity with the trustee significantly affects online trust, as it determines the behavioral intentions of the client to enquire about and buy a product online [20].
This paper focuses on a brief review of current trust estimation techniques in human-agent societies and on the development of trust quantification mechanisms using multi-criteria decision-making. The rest of the paper is organized as follows. Section II provides a brief background on trust theory, focusing on previous attempts to develop a trustworthy human-agent relationship and on applications of MCDM to problems relating specifically to cognitive phenomena. Section III introduces the proposed fuzzy MCDM based model of trust; the fuzzy inference approach, as a structural mechanism for trust decision-making, is also employed there. To demonstrate the process of choosing a trust level for the collaborator, the proposed approaches are empirically evaluated and compared in Section IV. Section V discusses the results of the proposed trust quantification system, and Section VI concludes the current work.

II. Literature Review
Enabling an agent to establish interactions with a human at a similar level of complexity and multidimensionality has been one of the challenges of contemporary human-agent interaction. The objective has been pursued through an interdisciplinary approach to develop robotic agents capable of establishing a trustworthy relationship with their teammates [21]. The authors of [22] simulated human decision-making in robotic agents using developmental theories and, from this perspective, tried to highlight the processes involved in establishing the relationship between human and agent, in order to understand agent responses to human behavior in a relational context [23,24]. Trust develops dynamically based on the nature of the interaction and is subject to variations; as operationalized in [25], the study of trust is conducted in three phases: trust acquisition, trust loss, and trust restoration. In psychology, trust is described as "a psychological attitude that is multidimensional in nature and involves belief and expectation about the trustee's reliability, resulting from social experiences including uncertainty and risk" [26,27]. Trust in unknown people can be formed by passively witnessing their behaviors, with consequences for our own decisions [28]. Trust has a multidimensional nature and can be built on either objective factors or emotional, non-rational attitudes towards the partner [29]. Emotional trust is considered independent of objective information in situations where the trusted partner is not evaluated on objective elements; in such situations, the trustee is assumed accurate until proven otherwise. Emotional trust is built successively through the constant endorsement of the trustee's reliability via expected responses [30].
According to this perspective, confirmation of the trustor's choices reflects the level of trust acquisition and acceptance as trustworthy [31,32], highlighting the importance of the construction of interpersonal trust while developing new relationships. Previous relational histories also shape human trust relationships, originating with primary caregivers and proceeding to significant affective relationships [33]. Under uncertain situations, the trustor's decision to place trust in an unfamiliar person sometimes depends on the trustee's general attachment [34-36]. Similarly, an individual's cognitive capability is important to develop, especially for assessing the trustee's epistemic reliability: one can reason about the perspective of others through cognitive skills. In this regard, theory-of-mind development enables an individual to conceptualize the mental state of another [37].
Relative to human-agent interactions, different investigations have studied trust in an agent or system with adult participants; these studies were based on either explicit (self-reporting) or implicit trust measurement [38]. Explicit measurements of trust are subject to the idiosyncratic attitudes of humans, which are usually based on beliefs rather than on actual interaction experience, whereas implicit measurements of trust generally rely on hypotheses postulated for specific environmental and theoretical conditions [39].

Evidence Theory-Based Trust Model
Trust is considered a concept describing the dependability and reliability of agents in collaborative environments, and it improves the quality of collaborative interactions [40]. Trust assessment models have been categorized and studied in four major domains [41,42]:
1. Logical models, where an agent develops a trust relationship based on mathematical logic.
2. Social cognitive models, taking inspiration from human psychology to develop and foster trust relationships by assessing the trustworthiness of the trustee.
3. Organizational models, which apprehend trust through personal relationships in a system.
4. Numerical models, which develop trust from mathematical probabilities.
The work in this paper implements trust assessment based on social cognitive and numerical models, where the trustworthiness of the human is assessed through numerical modeling by collecting the human's personality traits as potential information about the trustee. This sort of trust assessment falls under direct trust [43]. Various methodologies have been employed to collect information under numerical models; one effective methodology is the theory of evidence, grounded in belief functions or Dempster-Shafer theory (DST) [43,44], where collaborating agents construct basic probability assignments (BPAs) representing the source of information from another agent. Numerous approaches to trust assessment among collaborative agents employ DST. In [45], DST is utilized to implement distributed trust management in electronic commerce; the method may be based on both direct and indirect reputation, where the need for indirect trust fades out once direct trust is obtained, and Dempster's combination rule is applied directly to integrate the evidence. A virtual temporary system implementing swift trust based on DST has also been reported in the literature [46]. Evidence-based methods are particularly suited to trust transitivity, describing relationships under uncertainty by developing a transition model that considers trust features and relationship types [47]. The authors of [48] used DST to handle network security problems in wireless sensor networks. Trust modeling based on evidential theory has both advantages and disadvantages: when generating BPAs, the vanishing reliability of evidence is not well emphasized, and evidence-based theories are not directly applicable to conflicting evidence. In recent attempts, entropy-based models have been proposed to handle conflicting evidence in multi-agent collaborative systems [49,50].
It has been observed in data fusion models that assigned weights are directly proportional to entropy of evidence [49,51,52].
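As a concrete reference for the evidence-combination step discussed above, the following is a minimal two-source Dempster's rule over a small frame of discernment (a sketch of our own using dict-based BPAs, not the cited implementations):

```python
from itertools import product

def dempster_combine(m1: dict, m2: dict) -> dict:
    """Combine two BPAs (keys: frozenset focal elements, values: masses
    summing to 1) with Dempster's rule of combination.

    Masses of intersecting focal elements are multiplied and accumulated;
    mass assigned to empty intersections is the conflict K, and the result
    is renormalized by 1 - K. Raises ValueError on total conflict.
    """
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

T, D = frozenset({"trust"}), frozenset({"distrust"})
theta = T | D                       # frame of discernment
m1 = {T: 0.6, theta: 0.4}           # evidence from criterion 1
m2 = {T: 0.5, D: 0.2, theta: 0.3}   # evidence from criterion 2
fused = dempster_combine(m1, m2)
```

Here the conflict K = 0.6 × 0.2 = 0.12, so the fused mass on `trust` is 0.68 / 0.88 ≈ 0.77; the entropy-weighted variants cited above differ mainly in how the input BPAs are discounted before this combination.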

Motivation of the research
Previous works have identified the influencing factors inspiring the trust process, herein termed antecedents of trust. To the best of our knowledge, no attempt has been made to consider perceived trustworthiness and familiarity of the trustee together as antecedents for trust decisions. Therefore, the current research considers two very important trust antecedents, each supported by previous research: personality-trait oriented trustworthiness [47] and familiarity of the trustee [53], to quantify the trust level of a human collaborator.

EFMCDM based trust assessment model
The agent has been designed and developed to make a trust decision according to two parameters: the trustworthiness of the human collaborator and the level of familiarity the agent has developed towards him. The criteria for making a final decision regarding trust are implemented with the help of evidential fuzzy multi-criteria decision-making (EFMCDM) [54]. Multi-criteria decision making is one of the most widely used decision-making methodologies. The proposed method for final trust estimation uses a novel approach to MCDM with a flavor of evidential fuzziness: EFMCDM integrates multi-criteria decision making with Dempster-Shafer theory and belief entropy. Figure 1 gives the details of the EFMCDM technique. Each criterion is modeled as evidence, and the alternatives construct the frame of discernment. EFMCDM generates suitable basic probability assignments (BPAs) for the criteria by considering both subjective and objective weight assignments, and the alternatives are then ranked to determine the optimal one. EFMCDM is capable of modeling uncertainty, which helps decrease the uncertainty resulting from subjective human cognition, thereby improving decision making.
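One common way to fold a criterion's weight into its BPA, as in the weighting step just described, is Shafer discounting (a sketch under our own assumptions; the paper's exact BPA construction follows [54]):

```python
def discount(bpa: dict, w: float, theta: frozenset) -> dict:
    """Shafer discounting: keep fraction w of each focal mass and transfer
    the remaining 1 - w to total ignorance θ (the whole frame)."""
    out = {focal: w * mass for focal, mass in bpa.items() if focal != theta}
    out[theta] = 1.0 - w + w * bpa.get(theta, 0.0)
    return out

theta = frozenset(range(1, 8))            # the seven trust levels T1..T7
bpa = {frozenset({7}): 0.7, theta: 0.3}   # evidence favouring level T7
weak = discount(bpa, 0.5, theta)          # criterion judged only 50% reliable
```

After discounting, `weak` assigns 0.35 to level 7 and 0.65 to ignorance: a low-weight criterion contributes correspondingly little committed belief before the criteria are combined.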

Figure 1: The EFMCDM procedure: construction of decision makers, evaluation of criteria importance, and formulation of alternatives.

Fuzzy Inference based trust assessment model
Zadeh [55] introduced fuzzy set theory in 1965; it transforms linguistic variables into discrete numerical variables during the decision-making process. The lack of precision in allocating importance weights to criteria, and in rating alternatives against the evaluation criteria, is overcome with fuzzy sets, developed here into EFMCDM. The EFMCDM formulation is adopted to measure the trust perception of the artificial agent towards the human and follows the procedure elaborated in [54]. Let $r_1, r_2, r_3, r_4 \in \mathbb{R}$ with $r_1 < r_2 \le r_3 < r_4$; a trapezoidal fuzzy number is defined as $\tilde{A} = (r_1, r_2, r_3, r_4)$ with membership function:

$$\mu_{\tilde{A}}(x) = \begin{cases} \dfrac{x - r_1}{r_2 - r_1} & x \in [r_1, r_2] \\[4pt] 1 & x \in [r_2, r_3] \\[4pt] \dfrac{r_4 - x}{r_4 - r_3} & x \in [r_3, r_4] \\[4pt] 0 & \text{otherwise} \end{cases}$$

Definition 3: Dempster-Shafer theory (DST) [57,58]. The Dempster-Shafer theory of evidence and belief estimation is used extensively in a variety of applications due to its flexibility and efficiency in uncertainty modeling. DST uses mass functions, called basic belief assignments, which carry the ability to express uncertain information.
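The trapezoidal membership function above translates directly into code (an illustrative helper; the function name and the example parameters are our own):

```python
def trapezoidal_mu(x: float, r1: float, r2: float, r3: float, r4: float) -> float:
    """Membership degree of x in the trapezoidal fuzzy number (r1, r2, r3, r4):
    linear rise on [r1, r2], plateau of 1 on [r2, r3], linear fall on [r3, r4]."""
    if r1 <= x < r2:
        return (x - r1) / (r2 - r1)
    if r2 <= x <= r3:
        return 1.0
    if r3 < x <= r4:
        return (r4 - x) / (r4 - r3)
    return 0.0

# A linguistic term such as "Trustworthy" might be modeled as (0.5, 0.6, 0.8, 0.9)
# on a [0, 1] scale:
print(trapezoidal_mu(0.55, 0.5, 0.6, 0.8, 0.9))  # ≈ 0.5, on the rising edge
```

Each linguistic term for trustworthiness and familiarity gets its own quadruple, and the decision makers' fuzzy importance weights are expressed with the same representation.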

Problem Statement:
An agent's trust towards the human is classified into seven levels, where each level (ranging from the lowest trust, "1", to the very high trust level, "7") describes the extent to which the agent trusts the human.

The decision matrix of fuzzy importance weights for one possible combination of criteria values given by the decision makers D1-D3 is provided in Table 5. The decision matrices of initial trust-level ratings for one possible combination, with respect to each decision maker, are shown in Table 6 and are processed with the fuzzy weights during the successive steps. The uncertainty degrees computed for the criteria (τ and f), their normalized values, and the resulting BPAs of the trust levels T_i (i = 1, 2, ..., 7) and θ with respect to the criteria C_j (j = 1, 2) are shown in the corresponding tables.
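The uncertainty-degree step can be sketched with Deng (belief) entropy, which the entropy-based models cited earlier use to weight evidence, with weights proportional to the entropy of each criterion's evidence; the focal sets and masses below are illustrative, and the paper's exact normalization follows [54]:

```python
from math import log2

def deng_entropy(bpa: dict) -> float:
    """Deng (belief) entropy of a BPA with frozenset focal elements:
    E_d(m) = -Σ_A m(A) · log2( m(A) / (2^|A| - 1) )."""
    return -sum(m * log2(m / (2 ** len(A) - 1)) for A, m in bpa.items() if m > 0)

def normalized_weights(bpas: list) -> list:
    """Weight each criterion's evidence in proportion to its belief entropy."""
    entropies = [deng_entropy(m) for m in bpas]
    total = sum(entropies)
    return [e / total for e in entropies]

# Hypothetical evidence over trust levels from the two criteria:
tau = {frozenset({"T7"}): 0.8, frozenset({"T6", "T7"}): 0.2}  # trustworthiness
fam = {frozenset({"T7"}): 0.5, frozenset({"T6", "T7"}): 0.5}  # familiarity
w_tau, w_fam = normalized_weights([tau, fam])
```

In this toy example the familiarity evidence is more uncertain, so it receives the larger entropy-based weight; a fully certain singleton BPA has Deng entropy 0.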
The final ranking of trust levels based on the beliefs over the criteria (τ, f) is shown in Table 16. The optimal decision choice is T_7, which depicts the strongest belief of the agent towards trusting the human; EFMCDM generates the ranking order of the alternatives accordingly and selects the optimal alternative T_7. Here, a composite fuzzy proposition is constructed from atomic fuzzy propositions using the connective "and". For example, the following fuzzy proposition holds for "τ" and "f": FP1 = (τ is "Highly Deceptive" and f is "Familiar"). The t-norm function t for layer 1 transforms the membership functions of the fuzzy sets "τ" and "f" into the membership function of the intersection of "τ" and "f", that is:

μ_{τ∩f}(τ, f) = t[μ_τ(τ), μ_f(f)] ... (1)

A few rules for the fuzzy inference system are provided below.

IF (Trustworthiness is "Very Trustworthy" and Familiarity is "Highly Familiar") THEN Trust_Level is "Very High"

Such fuzzy IF-THEN rules are interpreted as a fuzzy relation Q_32 whose membership function is written as:

μ_{Q32}(τ, f, T) = min[μ_{τ∩f}(τ, f), μ_T(T)]

Fuzzy IF-THEN rules are the constituents of the fuzzy rule base. The fuzzy rule base is the major component of the fuzzy system, because all other components are used to implement these rules realistically and proficiently. The fuzzy rule base comprises the following fuzzy IF-THEN rules, where the rules of layer 1 are denoted by r_e with 1 ≤ e ≤ 32:

r_1 = IF (τ is "Highly Deceptive" and f is "Familiar") THEN Trust_Level is "Very Low"
r_2 = IF (τ is "Trustworthy" and f is "Partially Familiar") THEN Trust_Level is "High"
...
r_32 = IF (τ is "Very Trustworthy" and f is "Highly Familiar") THEN Trust_Level is "Very High"

If r_e denotes any such fuzzy IF-THEN rule, then r_e = τ_e × f_e → T_e, with μ_{τ∩f}(τ, f) = μ_τ(τ) ∩ μ_f(f). Accepting the first view of a set of rules, the rules are interpreted as a single fuzzy relation Q_32; this combination is called the Mamdani combination. Let "in" and "out" be arbitrary fuzzy sets serving as the input and output of the fuzzy inference engine, respectively. Then, viewing Q_32 as a single fuzzy IF-THEN rule and using the generalized modus ponens [67], the output of the fuzzy inference engine is obtained as:

μ_out(T) = sup_{(τ,f)} t[μ_in(τ, f), μ_{Q32}(τ, f, T)]

Mamdani composition based inference is used here to obtain the inference engine output μ_out(Trust_Level). The center-of-gravity defuzzifier specifies the crisp output o* as the center of the area covered by the membership function μ_out, that is:

o* = ∫ out · μ_out(out) d(out) / ∫ μ_out(out) d(out)

The crisp output values for the trust level are calculated with this defuzzifier, given the fuzzy sets of familiarity and trustworthiness of the human collaborator.
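The Mamdani pipeline described above can be sketched end to end as follows; the membership shapes, the single-rule rule set, and the discretization of the output universe are our own illustrative assumptions, not the paper's 32-rule base:

```python
def trapezoid(x, r1, r2, r3, r4):
    """Trapezoidal membership function (rise, plateau, fall)."""
    if r1 <= x < r2:
        return (x - r1) / (r2 - r1)
    if r2 <= x <= r3:
        return 1.0
    if r3 < x <= r4:
        return (r4 - x) / (r4 - r3)
    return 0.0

# Hypothetical membership functions on [0, 1] scales (shapes our own):
very_trustworthy = lambda t: trapezoid(t, 0.7, 0.8, 1.0, 1.0)
highly_familiar  = lambda f: trapezoid(f, 0.7, 0.8, 1.0, 1.0)
trust_very_high  = lambda o: trapezoid(o, 0.8, 0.9, 1.0, 1.0)

def mamdani(tau, f, rules, n=1000):
    """Min-max Mamdani inference with centre-of-gravity defuzzification.

    Each rule is (mu_tau, mu_f, mu_out): the firing strength is the min of
    the two antecedent memberships; each consequent is clipped by its
    strength; clipped consequents are aggregated by max; the crisp output
    is the centroid of the aggregated set over a discretized [0, 1]."""
    xs = [i / n for i in range(n + 1)]
    agg = [max(min(min(mu_t(tau), mu_f(f)), mu_o(x)) for mu_t, mu_f, mu_o in rules)
           for x in xs]
    num = sum(x * m for x, m in zip(xs, agg))
    den = sum(agg)
    return num / den if den else 0.0

rules = [(very_trustworthy, highly_familiar, trust_very_high)]
level = mamdani(0.9, 0.85, rules)   # crisp trust score near the top of [0, 1]
```

With τ = 0.9 and f = 0.85 the single rule fires at full strength, so the defuzzified score is simply the centroid of the "Very High" set (≈ 0.92); with a full rule base, several clipped consequents would be aggregated before defuzzification.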

IV. Result and Discussion
The ranking order of the trust-level alternatives obtained by the EFMCDM method chooses the optimal alternative T_7, which depicts the "Very High" trust level quantified by the agent towards the human. The ranking can be further elaborated with respect to the decision makers. According to the subjective beliefs for the trustworthiness and familiarity criteria given in the table, the initial weight assignment by decision maker D_1 is "Highly Trustworthy" and "Highly Familiar", and for D_2 it is "Trustworthy" and "Highly Familiar", generating a belief in a "Very High" trust level. Similarly, D_3's allocations for trustworthiness and familiarity produce a belief of "High" towards the human. The general, or objective, belief of all three decision makers towards the particular human has been quantified as "Very High", which coincides with the subjective beliefs of the decision makers.

V. Conclusion
Human-agent collaborative environments are getting more complex and demanding. Both humans and agents are often oriented towards subjective goals and may act maliciously. Agents are required to quantify trust towards humans in the same way humans do; hence they need sophisticated decision-making capabilities to develop trust assessments towards human teammates. Trust quantification is increasingly important to address the issue of mutual understanding of intentions between humans and agents pursuing a common goal. The current work proposed a new formulation of trust based on the principles of the evidential fuzzy multi-criteria decision-making (EFMCDM) approach and introduced a fuzzy inference method in order to evaluate and score a human's trust levels. EFMCDM has an advantage in trust quantification because it integrates Dempster-Shafer theory with belief entropy: the method considers not only subjective weights but also objective weights measured by belief entropy, which are utilized to obtain the BPAs of the criteria. The results are compared with, and found to be consistent with, those of the fuzzy inference system. In the future we plan to incorporate more trust antecedents and factors influencing the trust mechanism, and to implement them through more robust deep learning techniques.