Abstract:
Avoiding collision between two robot arms in a common workspace is non-trivial, since each arm acts as a dynamic obstacle for the other. In this context, Motion Path Planning (MPP) is the process of finding an optimal, collision-free path that a robot or robot arm can follow to reach a target position from any point in its workspace. We propose a reinforcement learning approach to MPP for two manipulators, the first of which tries to avoid collision with the second. Initially, the first manipulator has no knowledge of the environment, but it successfully learns optimal collision-free paths through a Team Q-learning algorithm. We present experiments using two different methods for state discretization, namely General State (GS) Discretization and Tile Coding (TC) Discretization, as well as two different Q-learning methods, namely single-agent (SA) and multi-agent (MA) approaches.
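The paper's full method is not reproduced here, but the two named building blocks, the tabular Q-learning update and tile-coding state discretization, can be sketched as follows. All function names, parameter values, and the 2-D state assumption are illustrative, not taken from the paper:

```python
import numpy as np

def tile_code(x, y, n_tilings=4, n_tiles=8, low=0.0, high=1.0):
    """Map a continuous 2-D state to one active tile index per tiling.

    Each tiling is a uniform grid over [low, high)^2, offset slightly from
    the others so that nearby states share some but not all active tiles.
    (Hypothetical parameters; the paper's TC configuration is not given here.)
    """
    active = []
    width = (high - low) / n_tiles
    for t in range(n_tilings):
        offset = t * width / n_tilings  # shift each tiling by a fraction of a tile
        ix = min(int((x - low + offset) / width), n_tiles - 1)
        iy = min(int((y - low + offset) / width), n_tiles - 1)
        active.append(t * n_tiles * n_tiles + ix * n_tiles + iy)
    return active

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """One tabular Q-learning step:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    """
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q
```

In the single-agent (SA) setting, one such table is updated for the learning arm; in the multi-agent / Team Q-learning setting described in the abstract, the action space and update would instead range over the joint actions of both manipulators.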
Date of Conference: 11-14 October 2020
Date Added to IEEE Xplore: 14 December 2020