In this paper we address a dynamic distributed patrolling problem in which a team of autonomous unmanned aerial vehicles (UAVs) must coordinate to patrol moving targets over a large area. We propose a hybrid approach combining multi-agent geosimulation and reinforcement learning that enables a group of agents to find near-optimal solutions in realistic geo-referenced virtual environments. We present the COLMAS system, which implements the proposed approach, and show how a set of UAVs can automatically find patrolling patterns in a dynamic environment characterized by unknown obstacles and moving targets. We also comment on the value of the approach based on limited computational results.