Reinforcement learning has proved its value in solving complex optimization tasks. However, learning times are typically long, even for simple problems. Efficient exploration of the state-action space is therefore crucial for effective learning. This paper introduces a new type of exploration, called dynamic exploration. It differs from existing exploration methods (both directed and undirected) in that it makes exploration a function of the action selected in the previous time step. In our approach, states are classified either as long-path states, in which the optimal action is the same as in the previous state, or as switch states, in which the optimal action differs. In realistic learning problems, long-path states outnumber switch states. Exploiting this property, the exploration method can traverse the state space more efficiently. Experiments on different gridworld optimization tasks demonstrate that dynamic exploration reduces learning time.
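To make the idea concrete, here is a minimal Python sketch of one way such an action-selection rule might look: a standard epsilon-greedy policy whose exploration step repeats the previous action with some probability, reflecting the prevalence of long-path states. This is an illustration under our own assumptions, not the authors' exact algorithm; the function name, the repeat_prob parameter, and the Q-table layout are all hypothetical.

```python
import random
from collections import defaultdict

def dynamic_explore_action(Q, state, prev_action, actions,
                           epsilon=0.2, repeat_prob=0.8):
    """Sketch of action selection with exploration biased by the
    previous action (illustrative, not the paper's exact rule).

    With probability 1 - epsilon, exploit the greedy action.
    When exploring, repeat the previous action with probability
    repeat_prob (long-path bias); otherwise pick uniformly at
    random (covering possible switch states).
    """
    if random.random() > epsilon:
        # Exploit: greedy action under the current Q estimates.
        return max(actions, key=lambda a: Q[(state, a)])
    if prev_action is not None and random.random() < repeat_prob:
        # Long-path bias: keep doing what we did in the last step.
        return prev_action
    # Possible switch state: fall back to a uniformly random action.
    return random.choice(actions)

# Tiny usage example with a hypothetical 4-action gridworld.
Q = defaultdict(float)   # tabular Q-values, keyed by (state, action)
actions = ["up", "down", "left", "right"]
prev = None
for state in range(5):
    prev = dynamic_explore_action(Q, state, prev, actions)
    print(state, prev)
```

Because exploratory moves tend to continue in the same direction, random walks under this rule cover long corridors of a gridworld faster than uniform exploration, which is the intuition behind the reported reduction in learning time.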