Distributed Resource Autoscaling in Kubernetes Edge Clusters

Abstract:

Maximizing the performance of modern applications requires timely management of virtualized resources. However, proactively deploying resources to meet specific application requirements under a dynamic profile of incoming requests is extremely challenging. To this end, the fundamental problems of task scheduling and resource autoscaling must be addressed jointly. This paper presents a scalable architecture, compatible with the decentralized nature of Kubernetes [1], that solves both. Exploiting the stability guarantees of a novel AIMD-like task scheduling solution, we dynamically redirect incoming requests toward the containerized application. To cope with dynamic workloads, a prediction mechanism estimates the number of incoming requests. Additionally, a Machine Learning (ML)-based Application Profiling Model is introduced to address scaling, combining the theoretically computed service rates obtained from the AIMD algorithm with current performance metrics. The proposed solution is compared with state-of-the-art autoscaling techniques on a realistic dataset in a small edge infrastructure, and the trade-off between resource utilization and QoS violations is analyzed. Our solution provides better resource utilization, reducing CPU core usage by 8% with only an acceptable increase in QoS violations.
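
The sketch below is a minimal, illustrative reading of the two ideas the abstract combines, not the authors' actual algorithm: an AIMD-style (Additive Increase / Multiplicative Decrease) update of a per-replica service rate driven by QoS feedback, and a scaling decision that sizes the deployment from a predicted request rate. The parameters (alpha, beta) and all function names are assumptions made for illustration only.

# Illustrative sketch only; parameters and names are assumptions, not the paper's method.
import math


def aimd_update(service_rate: float, qos_violated: bool,
                alpha: float = 1.0, beta: float = 0.5,
                min_rate: float = 1.0) -> float:
    """Additively grow the per-replica service rate while QoS holds;
    multiplicatively back off when a QoS violation is observed."""
    if qos_violated:
        return max(min_rate, service_rate * beta)
    return service_rate + alpha


def replicas_needed(predicted_requests_per_s: float,
                    per_replica_rate: float) -> int:
    """Size the deployment so the aggregate service rate covers the
    predicted incoming load (always keep at least one replica)."""
    return max(1, math.ceil(predicted_requests_per_s / per_replica_rate))


if __name__ == "__main__":
    rate = 10.0  # req/s one replica is currently assumed to sustain
    # Synthetic QoS feedback over a few control intervals.
    for violated in (False, False, True, False):
        rate = aimd_update(rate, violated)
    print(f"per-replica service rate: {rate:.1f} req/s")
    print(f"replicas for a predicted 55 req/s: {replicas_needed(55.0, rate)}")
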
Date of Conference: 31 October 2022 - 04 November 2022
Date Added to IEEE Xplore: 02 December 2022
Conference Location: Thessaloniki, Greece
