Abstract:
The rapid development of electric freight vehicles (EFVs) is driving the need for advanced management strategies, particularly given the dual demands of work scheduling and charging requirements. Against this backdrop, intelligent scheduling algorithms have become increasingly important, especially in the era of autonomous driving. In this study, we introduce a novel reinforcement learning (RL) strategy, the multi-actor-critic proximal policy optimization (MAC-PPO), for the management of three categories of EFVs. Our approach assigns a distinct actor-critic network to each category of EFVs, creating a comprehensive and structured RL framework that tailors task scheduling and charging strategies to each vehicle type. Real-world conditions are emulated by incorporating a time-varying electricity price into our experiments. Results indicate that our methodology effectively balances freight tasks against charging demands. As training episodes increase, we observe reductions of about 54%, 58%, and 60% in average customer employment expenditure, average customer waiting time, and average charging expenditure, respectively. These findings underscore the efficiency and practicality of our proposed strategy for EFV management, reinforcing the pivotal role of intelligent scheduling in the age of autonomous driving.
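To make the "distinct actor-critic network per EFV category" structure more concrete, the following is a minimal, hypothetical sketch in PyTorch: three independent actor-critic networks, one per vehicle category, each trained with the standard PPO clipped surrogate objective. The network sizes, state/action dimensions, category names, and hyperparameters are illustrative assumptions and are not taken from the paper.

```python
# Hypothetical sketch of a multi-actor-critic PPO setup: one independent
# actor-critic pair per EFV category, each updated with the standard PPO
# clipped surrogate objective. Dimensions and hyperparameters are assumptions.
import torch
import torch.nn as nn


class ActorCritic(nn.Module):
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.actor = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, action_dim),
        )
        self.critic = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state: torch.Tensor):
        logits = self.actor(state)   # preferences over discrete actions (e.g., serve task / go charge)
        value = self.critic(state)   # state-value estimate for advantage computation
        return torch.distributions.Categorical(logits=logits), value


def ppo_update(net, optimizer, states, actions, old_log_probs, returns, advantages,
               clip_eps: float = 0.2, value_coef: float = 0.5):
    """One PPO clipped-objective update for a single EFV category's network."""
    dist, values = net(states)
    log_probs = dist.log_prob(actions)
    ratio = torch.exp(log_probs - old_log_probs)  # importance-sampling ratio pi_new / pi_old
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
    policy_loss = -torch.min(ratio * advantages, clipped * advantages).mean()
    value_loss = (returns - values.squeeze(-1)).pow(2).mean()
    loss = policy_loss + value_coef * value_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()


# One actor-critic pair per EFV category (three categories, names assumed here).
categories = ("type_A", "type_B", "type_C")
nets = {c: ActorCritic(state_dim=16, action_dim=8) for c in categories}
optims = {c: torch.optim.Adam(nets[c].parameters(), lr=3e-4) for c in categories}
```

In this kind of layout, each category's rollouts (scheduling and charging decisions under the time-varying electricity price) would be collected separately and used only to update that category's own network, which is what keeps the learned policies specialized per vehicle type.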
Date of Conference: 24-28 September 2023
Date Added to IEEE Xplore: 13 February 2024