
An Adaptive Federated Reinforcement Learning Framework with Proximal Policy Optimization for Autonomous Driving


Abstract:

Reinforcement learning (RL) is widely used in autonomous driving tasks. The approach is well suited to autonomous driving because the environment is constantly changing and unpredictable, and autonomous vehicles must adapt to these changes and make safe, efficient decisions in real time. There has been growing interest in using RL to develop autonomous vehicles that operate in complex and challenging environments such as city streets and highways. To overcome the limitations of a centralized model, which incurs additional data transmission delay and computational complexity, we incorporated the proximal policy optimization (PPO) algorithm into every simulated agent and integrated federated learning (FL) so that multiple clients train with shared model weights. The framework reduced the amount of data each vehicle needed for training by sharing model weights learned from multiple vehicles across different driving scenarios. In this study, we combined different adaptive optimizers with FL aggregation techniques for autonomous driving and compared their performance. The experimental results showed faster convergence than local training, and FedYogi significantly outperformed the other baselines.
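The abstract does not include implementation details, so the following is only a minimal illustrative sketch of how a FedYogi-style server round can aggregate client policy weights after local PPO training. The function name, hyperparameter defaults, and NumPy weight representation are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def fedyogi_aggregate(global_weights, client_weights, state,
                      eta=1e-2, beta1=0.9, beta2=0.99, tau=1e-3):
    """One server round of FedYogi-style adaptive aggregation.

    global_weights : list of np.ndarray, current global (server) model
    client_weights : list of per-client weight lists after local PPO updates
    state          : dict holding the server's first/second moments (m, v)
    Hyperparameter values here are illustrative defaults, not the paper's.
    """
    # Average client update (delta) relative to the current global model.
    deltas = [
        np.mean([cw[i] - gw for cw in client_weights], axis=0)
        for i, gw in enumerate(global_weights)
    ]

    m = state.setdefault("m", [np.zeros_like(w) for w in global_weights])
    v = state.setdefault("v", [np.full_like(w, tau ** 2) for w in global_weights])

    new_weights = []
    for i, (w, d) in enumerate(zip(global_weights, deltas)):
        # First moment: exponential moving average of the aggregated delta.
        m[i] = beta1 * m[i] + (1 - beta1) * d
        # Yogi second moment: additive, sign-controlled update (unlike Adam).
        v[i] = v[i] - (1 - beta2) * (d ** 2) * np.sign(v[i] - d ** 2)
        # Adaptive server step applied to the global weights.
        new_weights.append(w + eta * m[i] / (np.sqrt(v[i]) + tau))
    return new_weights
```

In a federated RL setup of this kind, each client would run several epochs of local PPO on its own driving scenario, send its updated weights to the server, and receive the aggregated global weights returned by a routine like the one above before the next round.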
Date of Conference: 27-29 October 2023
Date Added to IEEE Xplore: 12 January 2024
Conference Location: Yunlin, Taiwan
