The pheromone-based parameterized probabilistic model of the ACO algorithm is presented as a construction graph onto which a combinatorial optimization problem can be mapped. Based on the construction graph, the solution construction procedure and the pheromone update rule of the ACO algorithm are described. The solution construction procedure corresponds to a finite deterministic Markov decision process, which is expressed in the terminology of reinforcement learning (RL) theory. ACO algorithms are then fitted into the framework of generalized policy iteration (GPI) in RL, based on incomplete information about the Markov state. Furthermore, we show that the pheromone update in the ACS and Ant-Q algorithms is based on Monte Carlo (MC) methods or on a formal combination of MC and temporal-difference (TD) methods. TD methods have usually been found to converge faster than MC methods in many applications, but they perform worse than MC methods in non-Markov environments. We propose a novel ACO algorithm, the Ant(λ) algorithm, which introduces the eligibility trace mechanism into the local pheromone update procedure. The algorithm unifies the TD and MC methods mathematically, and it allows delayed reinforcement to be propagated backward in time.
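The eligibility-trace idea behind Ant(λ) can be sketched as a TD(λ)-style local pheromone update during one ant's tour: the traversed edge's trace is incremented, and every edge's pheromone is then adjusted in proportion to its decaying trace. The function name, reward scheme, and data layout below are illustrative assumptions for a minimal sketch, not the paper's exact formulation:

```python
import random

def ant_lambda_episode(pheromone, graph, alpha=0.1, gamma=0.9, lam=0.8):
    """One ant builds a tour; pheromone is updated with eligibility traces.

    pheromone : dict mapping edge (i, j) -> pheromone level tau(i, j)
    graph     : dict mapping node -> list of neighbor nodes
    alpha     : learning rate; gamma : discount; lam : trace decay (lambda).
    lam = 0 recovers a one-step TD update; lam -> 1 approaches an MC update.
    All names and the bootstrapped "reward" here are illustrative assumptions.
    """
    trace = {edge: 0.0 for edge in pheromone}  # eligibility trace per edge
    current = random.choice(list(graph))
    visited = {current}
    while len(visited) < len(graph):
        choices = [n for n in graph[current] if n not in visited]
        if not choices:
            break
        # Probabilistic choice proportional to pheromone (random-proportional rule);
        # weights are clamped to stay positive for random.choices.
        weights = [max(pheromone[(current, n)], 1e-9) for n in choices]
        nxt = random.choices(choices, weights=weights)[0]
        # TD error: no immediate reward during construction; bootstrap on the
        # best pheromone value available from the next node.
        future = [pheromone[(nxt, m)] for m in graph[nxt]
                  if m not in visited and m != nxt]
        delta = gamma * (max(future) if future else 0.0) - pheromone[(current, nxt)]
        trace[(current, nxt)] += 1.0  # mark the traversed edge as eligible
        for edge in pheromone:
            pheromone[edge] += alpha * delta * trace[edge]
            trace[edge] *= gamma * lam  # decay all traces each step
        visited.add(nxt)
        current = nxt
    return pheromone
```

With λ between 0 and 1, the delayed reinforcement received at later steps is propagated backward to earlier edges through their still-nonzero traces, which is the unification of TD and MC updates the abstract refers to.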