Reinforcement learning is a popular branch of machine learning that aims to improve the behavior of autonomous agents that learn from interaction with their environment. However, this learning process is often costly, time consuming, and even dangerous. To address these problems, reward shaping has been used as a powerful method to accelerate the agent's learning. The principal idea is to provide the learning agent with additional numerical feedback beyond the environment reward. However, finding an efficient potential function with which to shape the reward remains an open area of research. In this paper, a new algorithm is proposed that receives the environment graph, performs several new analyses on it, and supplies the extracted information to the learning agent to accelerate learning. This information includes subgoals, bad states, and sub-environments with different exploration or reward values. To evaluate this algorithm, an experimental study was conducted on two benchmark environments, Six Rooms and Maze. The obtained results demonstrate the effectiveness of the proposed algorithm.
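As context for the shaping idea summarized above, the standard potential-based formulation adds a term F(s, s') = γΦ(s') − Φ(s) to the environment reward. The sketch below is a minimal, hypothetical illustration (not the paper's algorithm): it assumes a grid-world state given as (row, col) coordinates and uses negative Manhattan distance to a goal as the potential function Φ.

```python
GAMMA = 0.99  # discount factor (assumed value for illustration)

def potential(state, goal):
    """Hypothetical potential Phi(s): negative Manhattan distance to the goal."""
    return -(abs(state[0] - goal[0]) + abs(state[1] - goal[1]))

def shaped_reward(env_reward, state, next_state, goal, gamma=GAMMA):
    """Potential-based reward shaping: r' = r + gamma * Phi(s') - Phi(s)."""
    return env_reward + gamma * potential(next_state, goal) - potential(state, goal)

# A transition that moves the agent closer to the goal receives a positive bonus,
# even when the environment reward itself is zero.
bonus = shaped_reward(0.0, state=(0, 0), next_state=(0, 1), goal=(0, 2))
```

With this form of shaping, the optimal policy of the original task is preserved; the paper's contribution lies in how the potential-like information (subgoals, bad states, sub-environments) is extracted automatically from the environment graph.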