
Ordinal Position Based Nonlinear Normalization Method in Temporal-Difference Reinforced Learning


Abstract:

In an exo-atmospheric pursuit-evasion game, a pursued vehicle must execute avoidance maneuvers to evade a pursuing vehicle, so it is important to intelligently recognize the guidance behavior of the pursuer. Reinforcement learning is capable of producing such intelligent behavior. Among reinforcement learning approaches, the Temporal-Difference (TD) method combines value estimates from different temporal steps to determine the value function of an output policy, and therefore statistically requires less training time than the Monte Carlo method. Applying the TD method to the evader-pursuer problem requires mapping the continuous state space onto a limited number of discrete states. To that end, an ordinal-position-based nonlinear normalization method is proposed to convert the continuous state vector and control vector into discrete form, yielding a new method called the augmented Temporal-Difference reinforcement learning method. Simulation results demonstrate the effectiveness of this augmented method.
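
The abstract names two ingredients, a rank-based discretization of the continuous state and a TD value update over the resulting discrete states, but does not give their construction. The sketch below is a minimal illustration only: it assumes the "ordinal position" mapping is an empirical-quantile binning (cut points placed at equally spaced ranks of sampled data, which is nonlinear in the raw values) paired with a standard tabular TD(0) update. The function names, bin count, and sample data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fit_ordinal_bins(samples, n_bins):
    """Empirical-quantile (ordinal-position) bin edges for one state variable.

    Cutting the sorted samples at equally spaced ordinal positions yields a
    nonlinear mapping: densely sampled regions of the state space get finer
    bins than sparsely sampled regions.
    """
    interior_ranks = np.linspace(0, 100, n_bins + 1)[1:-1]  # interior percentiles
    return np.percentile(samples, interior_ranks)

def discretize(x, edges):
    """Map a continuous value to the index of its ordinal bin (0..n_bins-1)."""
    return int(np.searchsorted(edges, x))

def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.99):
    """One-step TD update: V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s))."""
    V[s] += alpha * (r + gamma * V[s_next] - V[s])
    return V

# Hypothetical usage: one state variable, 8 ordinal bins, stand-in trajectory data.
rng = np.random.default_rng(0)
samples = rng.exponential(scale=2.0, size=1000)
edges = fit_ordinal_bins(samples, n_bins=8)
V = np.zeros(8)
s, s_next = discretize(1.5, edges), discretize(2.3, edges)
V = td0_update(V, s, r=-0.1, s_next=s_next)
```

A rank-based cut keeps the bins roughly equally populated, so frequently visited regions of the state space receive finer resolution than a uniform grid would give them.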
Date of Conference: 16-19 July 2021
Date Added to IEEE Xplore: 30 August 2021
Conference Location: Athens, Greece
