Recently there has been much research focus on the use of Reinforcement Learning (RL) algorithms for game agent control. However, although it has been shown that such agents are capable of learning in real time, the high dimensionality of agent sensor state spaces still proves to be a significant barrier to progress. This paper outlines an approach to dealing with this issue by using a modular RL architecture with a fine granularity of modules. The modular approach enables a reduction of dimensionality in complex game-like environments by dividing the state space into smaller, more manageable sub-tasks. While this approach is successful in reducing dimensionality, challenges with action selection, exploration, and reward allocation arise. This paper discusses approaches to overcoming these issues.
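The core idea of a modular RL architecture can be illustrated with a minimal sketch: each module learns Q-values over its own small slice of the sensor state, and an arbitration step combines the modules' preferences into a single action. The class and function names below are illustrative, and the greatest-mass arbitration scheme (summing per-module Q-values) is one common approach from the modular RL literature, not necessarily the one used in this paper.

```python
import random
from collections import defaultdict

class QModule:
    """One RL module: learns Q-values over its own small state slice."""
    def __init__(self, actions, alpha=0.1, gamma=0.9):
        self.q = defaultdict(float)   # (state, action) -> estimated value
        self.actions = actions
        self.alpha = alpha            # learning rate
        self.gamma = gamma            # discount factor

    def values(self, state):
        """Q-values this module assigns to each action in its local state."""
        return {a: self.q[(state, a)] for a in self.actions}

    def update(self, state, action, reward, next_state):
        """Standard one-step Q-learning update on the module's local state."""
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

def select_action(modules, states, epsilon=0.1):
    """Greatest-mass arbitration: sum Q-values across modules, pick the best.

    `states` holds each module's local view of the current sensor state;
    epsilon-greedy exploration is used as a simple placeholder policy.
    """
    actions = modules[0].actions
    if random.random() < epsilon:
        return random.choice(actions)
    totals = {a: sum(m.values(s)[a] for m, s in zip(modules, states))
              for a in actions}
    return max(totals, key=totals.get)
```

Because each module only indexes its own local state, the number of table entries grows with the sum of the modules' state-space sizes rather than their product, which is the dimensionality reduction the modular approach targets; the open questions the paper raises (arbitration, exploration, and how to split the global reward among modules) all live in or around the `select_action` and `update` steps above.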