An approach to comprehensive trust management in multi-agent systems with credibility

Authors (4): Babak Khosravifar (Concordia Institute for Information Systems Engineering, Concordia University, Canada); Jamal Bentahar; Maziar Gomrokchi; Rafy Alam

Security is a substantial concern in multi-agent systems where agents dynamically enter and leave the system. Different trust models have been proposed to help agents decide whether to interact with requesters that are unknown (or not well known) to the service provider. To this end, this paper advances our work on security for agent-based systems, which is embedded in the service provider's trust evaluation of its counterpart. Agents are autonomous software entities equipped with advanced communication capabilities (using public dialogue game-based protocols and private strategies for how to use these protocols) and reasoning capabilities. The service provider agent obtains reports from trustworthy agents (regarding direct interaction histories) and from referee agents (in the form of recommendations), and combines a number of measurements, such as the number of interactions and their timely relevance, to produce an overall estimate of a particular agent's likely behavior. By requesting this agent, called the target agent, to report the number of interactions it has had with each agent, the service provider penalizes agents that lied about having information for the trust evaluation process. In addition, after a period of time, the actual behavior of the target agent is compared against the information provided by others. This comparison both adjusts the credibility of the agents contributing to trust evaluation and improves the system's trust evaluation by minimizing the estimation error. Overall, the proposed framework is shown to help agents effectively estimate the trustworthiness of interacting agents.
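The aggregation idea in the abstract — weighting each consulted agent's report by its credibility, its number of interactions with the target, and a recency ("timely relevance") factor, then adjusting credibilities once the target's actual behavior is observed — can be sketched roughly as follows. This is a minimal illustration under assumed names and an assumed weighting formula, not the paper's actual equations:

```python
# Hypothetical sketch of credibility-weighted trust aggregation.
# All identifiers, the exponential decay, and the update rule are
# illustrative assumptions, not the model defined in the paper.
import math
from dataclasses import dataclass

@dataclass
class Report:
    rating: float        # reported trust in the target, in [0, 1]
    interactions: int    # reporter's number of interactions with the target
    age: float           # time elapsed since those interactions
    credibility: float   # provider's current credibility of the reporter

def estimate_trust(reports, decay=0.1):
    """Credibility-, volume-, and recency-weighted average of reports."""
    num = den = 0.0
    for r in reports:
        w = r.credibility * r.interactions * math.exp(-decay * r.age)
        num += w * r.rating
        den += w
    return num / den if den else 0.5  # neutral prior with no evidence

def update_credibility(cred, reported, observed, rate=0.2):
    """After observing the target's actual behavior, raise the reporter's
    credibility when its report was accurate and lower it otherwise."""
    error = abs(reported - observed)
    return min(1.0, max(0.0, cred + rate * (0.5 - error)))
```

A recent, high-credibility report with many interactions behind it thus dominates the estimate, while the periodic comparison against observed behavior feeds back into each reporter's credibility, mirroring the error-minimizing adjustment described above.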

Published in: 2008 Second International Conference on Research Challenges in Information Science

Date of Conference: 3-6 June 2008