
Sustainable DDPG-Based Path Tracking for Connected Autonomous Electric Vehicles in Extra-Urban Scenarios


Abstract:

This paper addresses the path-tracking control problem for Connected Autonomous Electric Vehicles (CAEVs) moving in a smart Cooperative Connected Automated Mobility (CCAM) environment, where a smart infrastructure suggests the reference behaviour to achieve. To solve this problem, a novel energy-oriented Deep Deterministic Policy Gradient (DDPG) control strategy is proposed, able to guarantee optimal tracking of the suggested path while minimizing the CAEV's energy consumption. To this aim, the power autonomy, the battery state of charge (SOC), and the overall powertrain model (comprising the electric motor equations, the inverter dynamics, and the battery pack model) are embedded within the training process of the DDPG agent, hence letting the CAEV travel according to the most sustainable driving policy. The training procedure and the validation phase of the proposed control method are performed via a purpose-built advanced simulation platform which, by combining Matlab & Simulink with a Python environment, allows the virtualization of real driving scenarios. Specifically, the training process confirms the capability of the DDPG agent to learn a safe eco-driving policy, while the numerical validation, tailored to a realistic extra-urban scenario located in Naples, Italy, discloses the capability of the DDPG-based eco-driving controller to solve the appraised CCAM control problem despite the presence of external disturbances. Finally, a robustness analysis of the proposed strategy in solving the ecological path-tracking control problem for different CAEV models and driving-path scenarios, along with a comparative analysis with respect to model-based controllers, is provided to better highlight the benefits and advantages of the proposed Deep Reinforcement Learning (DRL) control.
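The abstract outlines the key ingredients of the approach: a DDPG agent trained with the powertrain and battery dynamics in the loop, so that the reward couples path-tracking accuracy with energy consumption. As an illustration only, the following minimal PyTorch sketch shows what such an energy-oriented reward and the DDPG actor/critic networks might look like; the state/action layout, the network sizes, the weights, and all helper names are assumptions for this sketch and are not taken from the paper.

import torch
import torch.nn as nn


def eco_tracking_reward(lateral_err, heading_err, battery_power_kw, soc_drop,
                        w_track=1.0, w_head=0.5, w_energy=0.05, w_soc=10.0):
    # Penalise deviation from the infrastructure-suggested path together with
    # instantaneous battery power draw and SOC depletion. All weights are
    # hypothetical tuning parameters, not values from the paper.
    return -(w_track * lateral_err ** 2
             + w_head * heading_err ** 2
             + w_energy * abs(battery_power_kw)
             + w_soc * max(soc_drop, 0.0))


class Actor(nn.Module):
    # Deterministic policy: vehicle/battery state -> normalized control action
    # (e.g. steering and traction-torque commands).
    def __init__(self, state_dim, action_dim, max_action=1.0):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, action_dim), nn.Tanh())
        self.max_action = max_action

    def forward(self, state):
        return self.max_action * self.net(state)


class Critic(nn.Module):
    # Q(s, a) estimate; its gradient with respect to the action is what DDPG
    # uses to update the actor.
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1))

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

In the co-simulation platform described above, the actor's action would presumably drive the vehicle and powertrain models running in Matlab & Simulink, with the resulting tracking errors, power draw, and SOC returned to the Python side to evaluate the reward during training.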
Published in: IEEE Transactions on Industry Applications ( Volume: 60, Issue: 6, Nov.-Dec. 2024)
Page(s): 9237 - 9250
Date of Publication: 16 August 2024


I. Introduction

Thanks to their ability to enhance road safety while decreasing road congestion, autonomous vehicles can bring huge benefits to the automotive industry [1], [2], as well as improve energy savings in the case of electric vehicles (EVs) [3]. According to the CCAM paradigm, energy-saving performance could be further improved thanks to vehicle-to-everything (V2X) communication technology, which allows information sharing with the smart infrastructure and other vehicles, thereby enabling access to previously unavailable information about the surrounding road traffic environment [4]. This brings traffic management to an entirely new level and contributes to sustainable mobility, i.e. the shared information allows the optimization of vehicle motion control in a completely sustainable manner [5]. Within this framework, research interest in designing eco-driving control strategies for Connected Autonomous Vehicles (CAVs), as well as Connected Autonomous Electric Vehicles (CAEVs), has increased [6]. A first attempt is presented in [7] where, without taking into account the powertrain dynamics, a Model Predictive Controller (MPC) is adopted for the design of an Adaptive Cruise Control (ACC) able to ensure fuel economy and robustness to external disturbances in urban scenarios.
