RSAC: A Robust Deep Reinforcement Learning Strategy for Dimensionality Perturbation | IEEE Journals & Magazine | IEEE Xplore


Abstract:

Artificial agents are used in autonomous systems such as autonomous vehicles, robots, and drones to make predictions based on data generated by fusing values from many sources, such as different sensors. Sensor malfunction has been observed in the robotics domain. A correct sensor observation corresponds to the true estimate of a dimension of the state vector in deep reinforcement learning (DRL); hence, noisy sensor estimates lead to dimensionality impairment in the state. DRL policies have been shown to falter, choosing the wrong action, under adversarial attack or modeling error. It is therefore necessary to examine the effect of dimensionality perturbation on neural policies. In this regard, we analyze whether subtle dimensionality perturbation, arising from noise in the input source at test time, distracts the agent's decisions. We also propose RSAC (robust soft actor-critic), an approach that predicts from a noisy state while estimating the target from the nominal observation. We find that injecting such noisy input during training does not hamper learning. We ran our simulations in the OpenAI Gym MuJoCo (Walker2d-v2) environment, and our empirical results demonstrate that the proposed approach matches SAC's performance while remaining robust to test-time dimensionality perturbation.
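The core idea in the abstract, predicting from a perturbed state while computing the learning target from the nominal observation, can be illustrated with a minimal TD-style sketch in NumPy. This is not the paper's implementation: the linear value function, the single-dimension Gaussian perturbation, and all names below are illustrative assumptions standing in for the SAC networks and MuJoCo observations.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_dimension(state, dim, sigma=0.1):
    """Simulate a faulty sensor: add Gaussian noise to one state dimension."""
    noisy = state.copy()
    noisy[dim] += rng.normal(0.0, sigma)
    return noisy

def value(state, w):
    """Linear value function as a stand-in for the critic network."""
    return float(w @ state)

w = rng.normal(size=4)                       # critic parameters (illustrative)
state = np.array([0.5, -0.2, 1.0, 0.3])      # nominal observation
next_state = np.array([0.6, -0.1, 0.9, 0.4]) # nominal next observation
reward, gamma, lr = 1.0, 0.99, 0.01

# RSAC-style update (sketch): the prediction uses the noisy state,
# but the TD target is computed from the nominal observation.
noisy_state = perturb_dimension(state, dim=2)
target = reward + gamma * value(next_state, w)  # target from nominal input
pred = value(noisy_state, w)                    # prediction from noisy input
td_error = target - pred
w = w + lr * td_error * noisy_state             # gradient step on the noisy input
```

Training on perturbed inputs this way exposes the agent to dimensionality noise while keeping the bootstrapped target anchored to clean observations, which is the intuition behind the robustness claim.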
Page(s): 1157 - 1166
Date of Publication: 05 April 2022
Electronic ISSN: 2471-285X

