DOI

10.1109/CEC.2011.5949726

Document Type

Conference Proceeding

Publication Date

6-2011

Abstract

Ant colony optimization (ACO) algorithms can generate high-quality solutions to combinatorial optimization problems. However, as with many stochastic algorithms, solution quality worsens as problem sizes grow. In an effort to improve performance, we added the variable-step-size off-policy hill-climbing algorithm called PDWoLF (Policy Dynamics Win or Learn Fast) to several ant colony algorithms: Ant System, Ant Colony System, Elitist Ant System, Rank-based Ant System, and Max-Min Ant System. Easily integrated into each ACO algorithm, the PDWoLF component maintains a set of policies separate from the ant colony's pheromone. Similar to pheromone but governed by different update rules, the PDWoLF policies provide a second estimate of solution quality and guide the construction of solutions. Experiments on large traveling salesman problems (TSPs) show that incorporating PDWoLF into the aforementioned ACO algorithms, when no local optimizations are used, produces shorter tours than the ACO algorithms alone.
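
To make the integration concrete, below is a minimal Python sketch of the idea the abstract describes: each ant weighs the pheromone table, heuristic distance information, and a separate PDWoLF-style policy table when choosing its next city, and the policy is updated with a variable step size (small when "winning," larger when "losing"). The multiplicative combination rule, the weighting parameters ALPHA/BETA/GAMMA, and the simplified win/lose test are illustrative assumptions only; the paper defines the actual transition and update rules.

    import random

    ALPHA, BETA, GAMMA = 1.0, 2.0, 1.0  # assumed weights for pheromone, heuristic, policy


    def choose_next_city(current, unvisited, pheromone, dist, policy):
        """Roulette-wheel selection: the probability of moving to city j is
        proportional to pheromone^ALPHA * (1/dist)^BETA * policy^GAMMA,
        so the policy table acts as a second guide alongside pheromone."""
        weights = [
            (pheromone[current][j] ** ALPHA)
            * ((1.0 / dist[current][j]) ** BETA)
            * (policy[current][j] ** GAMMA)
            for j in unvisited
        ]
        r = random.uniform(0.0, sum(weights))
        acc = 0.0
        for j, w in zip(unvisited, weights):
            acc += w
            if r <= acc:
                return j
        return unvisited[-1]  # guard against floating-point rounding


    def update_policy(policy, tour, tour_len, best_len,
                      step_win=0.05, step_lose=0.2):
        """Simplified WoLF-style variable step: a small step when the ant is
        'winning' (its tour beats the best known length), a larger 'learn
        fast' step otherwise. The real PDWoLF win/lose test is based on
        policy dynamics; this comparison stands in for it for illustration."""
        step = step_win if tour_len < best_len else step_lose
        for i, j in zip(tour, tour[1:] + tour[:1]):  # edges of the closed tour
            policy[i][j] += step * (best_len / tour_len)


    # Tiny demo on a 4-city symmetric instance.
    dist = [[0, 2, 9, 10],
            [2, 0, 6, 4],
            [9, 6, 0, 3],
            [10, 4, 3, 0]]
    n = len(dist)
    pheromone = [[1.0] * n for _ in range(n)]
    policy = [[1.0] * n for _ in range(n)]

    tour, current, unvisited = [0], 0, [1, 2, 3]
    while unvisited:
        nxt = choose_next_city(current, unvisited, pheromone, dist, policy)
        unvisited.remove(nxt)
        tour.append(nxt)
        current = nxt
    length = sum(dist[i][j] for i, j in zip(tour, tour[1:] + tour[:1]))
    update_policy(policy, tour, length, best_len=18)  # 18 = optimum here
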

Comments

© 2011 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

AFIT Scholar furnishes the accepted version of this conference paper. The published version of record is available from IEEE via subscription at the DOI link in the citation below.

Funding: This work was supported by the Air Force Office of Scientific Research, project number 2311/FX.

Source Publication

IEEE Congress on Evolutionary Computation 2011
