Abstract:
Model-based next-state prediction and state-value prediction are slow to converge. To address these challenges, we do the following: i) Instead of a neural network, we perform model-based planning using a parallel memory retrieval system (which we term the slow mechanism); ii) Instead of learning state values, we guide the agent's actions using goal-directed exploration, with a neural network choosing the next action given the current state and the goal state (which we term the fast mechanism). The goal-directed exploration module is trained online via self-supervised learning, by predicting the action to select given any start and goal state experienced in trajectories obtained during hippocampal replay. Empirical studies show that our proposed method achieves a 91.9% solve rate across 100 episodes in a dynamically changing grid world, significantly outperforming state-of-the-art actor-critic methods such as PPO (61.2%), TRPO (26.1%) and A2C (23.9%), as well as replay-buffer methods such as DQN (4.9%). Ablation studies demonstrate that both the fast and slow mechanisms are crucial, and that increasing both the depth and breadth of memory retrieval improves performance. We posit that the future of Reinforcement Learning (RL) will be to model goals and sub-goals for various tasks, and to plan them out using a goal-directed, memory-based approach.
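To make the fast mechanism concrete, the sketch below shows one plausible reading of the abstract's description: a goal-conditioned policy network that maps a (current state, goal state) pair to an action, trained self-supervised from a replayed trajectory by treating any later state as the goal for an earlier step. All names, dimensions, the one-hot state encoding, and the single-goal sampling strategy are illustrative assumptions, not the authors' released code.

```python
import random
import torch
import torch.nn as nn

# Hypothetical sketch of the "fast mechanism": a goal-conditioned policy
# network trained from replayed trajectories. Sizes and encodings below are
# assumptions for a small grid world, not values from the paper.

STATE_DIM = 64   # e.g. flattened one-hot grid position (assumption)
N_ACTIONS = 4    # up / down / left / right (assumption)

class GoalConditionedPolicy(nn.Module):
    def __init__(self, state_dim=STATE_DIM, n_actions=N_ACTIONS, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim, hidden),  # concatenated (state, goal)
            nn.ReLU(),
            nn.Linear(hidden, n_actions),      # logits over actions
        )

    def forward(self, state, goal):
        return self.net(torch.cat([state, goal], dim=-1))

def replay_update(policy, optimizer, trajectory):
    """One self-supervised update from a replayed trajectory.

    trajectory: list of (state_tensor, action_index) pairs. For each step t,
    a later state t' > t is sampled as the goal, and the action taken at t
    is the supervised target.
    """
    states, goals, actions = [], [], []
    for t, (s_t, a_t) in enumerate(trajectory[:-1]):
        t_goal = random.randrange(t + 1, len(trajectory))  # pick a future state as goal
        states.append(s_t)
        goals.append(trajectory[t_goal][0])
        actions.append(a_t)

    logits = policy(torch.stack(states), torch.stack(goals))
    loss = nn.functional.cross_entropy(logits, torch.tensor(actions))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under this reading, the slow mechanism (parallel memory retrieval over stored transitions) would supply the planned goal states, while the network above provides fast, reactive action selection toward them.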
Date of Conference: 09-11 November 2023
Date Added to IEEE Xplore: 25 December 2023