Document Type
Conference Proceeding
Publication Date
8-2009
Abstract
Cooperative agent systems often do not account for sneaky agents who are willing to cooperate when the stakes are low but take selfish, greedy actions when the rewards rise. Trust modeling often focuses on identifying the appropriate trust level for the other agents in the environment and then using these levels to determine how to interact with each agent. Adding trust to an interactive partially observable Markov decision process (I-POMDP) allows trust levels to be continuously monitored and corrected, enabling agents to make better decisions. The addition of trust modeling increases the decision process calculations but makes it possible to solve more complex trust problems that are representative of the human world. The modified I-POMDP reward function and belief models can be used to accurately track the trust levels of agents with hidden agendas. Testing demonstrates that agents quickly identify the hidden trust levels and mitigate the impact of a deceitful agent.
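To make the belief-tracking idea concrete, the following minimal Python sketch (not the authors' implementation, which the paper itself details) performs a Bayesian update over a partner agent's hidden trust type, the mechanism the abstract describes at a high level. The type names, stakes levels, and likelihood values are all hypothetical.

# Minimal sketch of Bayesian belief tracking over a hidden trust type.
# All probability values below are illustrative assumptions only.

TYPES = ["trustworthy", "sneaky"]

# P(observed action | hidden type, stakes): a sneaky agent cooperates
# when the stakes are low but defects when the reward is high.
LIKELIHOOD = {
    ("cooperate", "low"):  {"trustworthy": 0.95, "sneaky": 0.90},
    ("cooperate", "high"): {"trustworthy": 0.95, "sneaky": 0.20},
    ("defect",    "low"):  {"trustworthy": 0.05, "sneaky": 0.10},
    ("defect",    "high"): {"trustworthy": 0.05, "sneaky": 0.80},
}

def update_belief(belief, action, stakes):
    """One Bayes step: b'(type) is proportional to P(action | type, stakes) * b(type)."""
    posterior = {t: LIKELIHOOD[(action, stakes)][t] * belief[t] for t in TYPES}
    total = sum(posterior.values())
    return {t: p / total for t, p in posterior.items()}

# Start uncertain, then observe cooperation at low stakes followed by a
# single high-stakes defection; belief shifts sharply toward "sneaky".
belief = {"trustworthy": 0.5, "sneaky": 0.5}
for action, stakes in [("cooperate", "low"), ("cooperate", "low"), ("defect", "high")]:
    belief = update_belief(belief, action, stakes)
    print(action, stakes, belief)

In this toy run, low-stakes cooperation barely changes the belief (both types cooperate there), while one high-stakes defection is strongly diagnostic, which mirrors the abstract's claim that agents quickly identify hidden trust levels.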
Source Publication
2009 IEEE International Conference on Privacy, Security, Risk and Trust (PASSAT-09), pp. 109-116.
Recommended Citation
R. Seymour and G. L. Peterson, "A Trust-Based Multiagent System," 2009 International Conference on Computational Science and Engineering, Vancouver, BC, Canada, 2009, pp. 109-116, doi: 10.1109/CSE.2009.297.
Comments
© 2009 IEEE. All rights reserved. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
AFIT Scholar furnishes the accepted version of this conference paper. The published version of record is available from IEEE via subscription at the DOI link on this page.