Document Type

Conference Proceeding

Publication Date

9-2005

Abstract

Gradient descent learning algorithms have proven effective in solving mixed strategy games. The policy hill climbing (PHC) variants of WoLF (Win or Learn Fast) and PDWoLF (Policy Dynamics based WoLF) both show rapid convergence to equilibrium solutions by increasing the accuracy of their gradient parameters over standard Q-learning. Likewise, cooperative learning techniques using weighted strategy sharing (WSS) and expertness measurements improve agent performance when multiple agents are solving a common goal. By combining these cooperative techniques with fast gradient descent learning, an agent converges to a solution at an even faster rate. This claim is verified in a stochastic grid-world environment using a limited-visibility hunter-prey model with both random and intelligent prey. Across five different expertness measurements, cooperative learning with each PHC algorithm converges faster than independent learning when agents learn strictly from better-performing agents.
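
The WoLF-PHC policy update and the weighted strategy sharing (WSS) step mentioned in the abstract can be illustrated with a short sketch. The Python snippet below is a minimal, assumption-laden illustration of these general techniques, not the paper's implementation: the state/action sizes, learning rates, delta step values, and the simple expertness weighting are placeholders chosen for readability.

```python
# Illustrative sketch of a tabular WoLF-PHC update plus a simple WSS-style
# Q-table blend. All constants below are hypothetical, not the paper's settings.
import numpy as np

n_states, n_actions = 25, 4          # hypothetical small grid world
alpha, gamma = 0.1, 0.9              # Q-learning rate and discount factor
delta_win, delta_lose = 0.01, 0.04   # WoLF step sizes: learn fast when losing

Q = np.zeros((n_states, n_actions))
pi = np.full((n_states, n_actions), 1.0 / n_actions)      # current policy
pi_avg = np.full((n_states, n_actions), 1.0 / n_actions)  # average policy
counts = np.zeros(n_states)                               # state visit counts

def wolf_phc_update(s, a, r, s_next):
    """One WoLF-PHC step: a Q-learning backup followed by a hill-climbing
    policy step whose size depends on whether the agent is winning or losing."""
    # Standard Q-learning backup.
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

    # Update the running average policy for this state.
    counts[s] += 1
    pi_avg[s] += (pi[s] - pi_avg[s]) / counts[s]

    # "Winning" if the current policy's expected value beats the average policy's.
    winning = pi[s] @ Q[s] > pi_avg[s] @ Q[s]
    delta = delta_win if winning else delta_lose

    # Shift probability mass toward the greedy action by at most delta.
    greedy = Q[s].argmax()
    for b in range(n_actions):
        if b != greedy:
            step = min(delta / (n_actions - 1), pi[s, b])
            pi[s, b] -= step
            pi[s, greedy] += step

def wss_blend(q_tables, expertness):
    """Simplified weighted strategy sharing: blend several agents' Q-tables
    using normalized expertness scores as weights."""
    w = np.asarray(expertness, dtype=float)
    w = w / w.sum()
    return sum(wi * qi for wi, qi in zip(w, q_tables))
```

In the cooperative setting the abstract describes, each hunter would periodically replace (or mix into) its own Q-table a blend like `wss_blend`, weighting more heavily the Q-tables of better-performing agents as measured by an expertness metric.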

Comments

AFIT Scholar provides a draft of this conference paper. The final published version of record is available from ACTA Press in the proceedings of ASC 2005, or as an individual paper.

Source Publication

9th IASTED International Conference on Artificial Intelligence and Soft Computing (ASC 2005)
