
Efficient Task Scheduling and Load Balancing in Fog Computing for Crucial Healthcare Through Deep Reinforcement Learning


Figure: Proposed task scheduler architecture

Abstract:

In healthcare, real-time decision making is crucial for ensuring timely and accurate patient care. However, traditional computing infrastructures, despite their wide-ranging capabilities, suffer from inherent latency, which compromises the efficiency of time-sensitive medical applications. This paper explores the potential of fog computing to address this challenge, proposing a new framework that uses deep reinforcement learning (DRL) to advance task scheduling in crucial healthcare. The paper addresses the limitations of cloud computing systems and proposes a fog computing architecture in their place to support low-latency healthcare applications. This architecture reduces transmission latency by placing processing nodes close to the source of data generation, namely IoT-enabled healthcare devices. The foundation of this approach is the DRL model, which dynamically optimizes the partitioning of computational tasks across fog nodes to improve both data throughput and operational response times. The effectiveness of the proposed DRL-based fog computing model is validated through a series of simulations performed in the SimPy simulation environment. These simulations recreate diverse healthcare scenarios, ranging from continuous patient monitoring systems to critical emergency response applications, providing a rich framework for testing the real-time processing capabilities of the model. The DRL algorithm has been fine-tuned and extensively evaluated in these scenarios to show how it schedules tasks according to their urgency and to resource demand. By dynamically learning from real-time system states and optimizing task allocation to minimize delays, the DRL model reduces the makespan by up to 30% compared to traditional scheduling approaches.
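The abstract does not specify the DRL formulation (states, rewards, or network architecture). As a rough illustration of the general idea only, the following sketch assigns incoming tasks to fog nodes with tabular Q-learning; the state discretization and the negative-completion-time reward are hypothetical choices, not the paper's method:

```python
import random

# Illustrative sketch only: the paper's actual DRL model is not given in
# the abstract. Here the "state" is a coarse load level per fog node and
# the "action" is the node assigned the incoming task.
NUM_NODES = 3
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1      # learning rate, discount, exploration

q_table = {}                           # (state, action) -> estimated value

def get_q(state, action):
    return q_table.get((state, action), 0.0)

def choose_node(state):
    if random.random() < EPS:          # explore occasionally
        return random.randrange(NUM_NODES)
    return max(range(NUM_NODES), key=lambda a: get_q(state, a))

def update(state, action, reward, next_state):
    best_next = max(get_q(next_state, a) for a in range(NUM_NODES))
    old = get_q(state, action)
    q_table[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

# Toy training loop: reward is the negative completion time, so the
# agent gradually learns to route tasks toward less loaded nodes.
random.seed(0)
loads = [0.0] * NUM_NODES
for _ in range(2000):
    state = tuple(int(l > 1.0) for l in loads)     # busy/idle per node
    node = choose_node(state)
    completion = loads[node] + 1.0                 # queue delay + service
    loads[node] += 1.0
    loads = [max(0.0, l - 0.5) for l in loads]     # nodes drain over time
    next_state = tuple(int(l > 1.0) for l in loads)
    update(state, node, -completion, next_state)
```

A deep variant would replace the table with a neural network over a richer (continuous) state, but the update rule and the load-aware routing behavior it learns are the same in spirit.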
Comparative performance analysis indicated a 30% reduction in task completion times, a 40% reduction in operational latency, and a 25% improvement in fault tolerance r...
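The comparative figures above come from the paper's SimPy experiments, which are not reproduced here. As a toy illustration of how such makespan comparisons work, the following sketch (with a hypothetical fixed batch of task durations) contrasts a load-blind round-robin scheduler with a load-aware greedy baseline, the kind of behavior a learned policy should at least match:

```python
import itertools

# Toy illustration only: compares the makespan (time the busiest node
# finishes) of two simple allocation policies on a fixed batch of tasks.

def simulate(assign, sizes, num_nodes=2):
    """Assign each task in order via assign(free_at); return the makespan."""
    free_at = [0.0] * num_nodes            # accumulated work per node
    for size in sizes:
        free_at[assign(free_at)] += size
    return max(free_at)

SIZES = [4, 3, 3, 2, 2, 2]                 # hypothetical task durations

def round_robin():
    counter = itertools.count()            # cycles nodes, ignoring load
    return lambda free_at: next(counter) % len(free_at)

def least_loaded(free_at):
    return free_at.index(min(free_at))     # always pick the idlest node

print(simulate(round_robin(), SIZES))      # 9: one node gets 4 + 3 + 2
print(simulate(least_loaded, SIZES))       # 8: work split evenly, 8 vs 8
```

On this batch the load-aware policy shaves the makespan from 9 to 8; the paper's reported gains come from a DRL policy that additionally accounts for task urgency and real-time system state.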
Published in: IEEE Access ( Volume: 13)
Page(s): 26542 - 26563
Date of Publication: 05 February 2025
Electronic ISSN: 2169-3536
