Earlier work gave an autonomous agent a ranged view in an absolute coordinate system: the agent receives accurate information within a fixed range and nothing outside it. This is a rather artificial setting. In this paper, we propose a view that is staged in both distance and direction in a relative coordinate system: an agent receives accurate information in its immediate neighborhood but only rough information in short- and middle-distance areas. This reflects human vision, in which an object is easy to see when it is nearby but harder to see as the distance grows, and easy to see in the central direction but harder to see toward the left and right periphery. A numerical experiment on the pursuit problem, a standard multi-agent benchmark, shows that an agent with the staged view learns effectively using Q-learning.
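The staged view described above can be sketched as an observation function that maps a target's relative offset to progressively coarser information with distance. This is a minimal illustrative sketch, not the paper's actual encoding: the thresholds `near`, `short`, and `middle` and the 8-way/4-way direction sectors are assumptions chosen for clarity.

```python
import math

def staged_view(dx, dy, near=1, short=4, middle=8):
    """Map a target's relative offset (dx, dy) to a staged observation.

    Within `near` cells the exact offset is visible; at short distance
    only a coarse 8-way direction remains; at middle distance only a
    coarser 4-way direction remains; beyond `middle` nothing is seen.
    All thresholds and sector counts are illustrative assumptions.
    """
    dist = max(abs(dx), abs(dy))  # Chebyshev distance on the grid
    if dist <= near:
        return ("exact", dx, dy)          # accurate neighborhood view
    angle = math.atan2(dy, dx)            # direction of the offset
    if dist <= short:
        sector8 = int(round(angle / (math.pi / 4))) % 8
        return ("short", sector8)         # rough: 8-way direction only
    if dist <= middle:
        sector4 = int(round(angle / (math.pi / 2))) % 4
        return ("middle", sector4)        # rougher: 4-way direction only
    return ("none",)                       # outside the sensing range
```

Because distant offsets collapse into a handful of coarse labels, the number of distinct observations (and hence the Q-table size) stays small, which is one plausible reason such a view can speed up Q-learning.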