Abstract:
Workload prediction is key to improving the quality of service (QoS) of user applications running in autonomous large-scale networks. Machine learning (ML) technologies contribute to workload prediction because they provide fully automated data analytics. However, no single ML algorithm can simultaneously deliver accurate, timely, and generalized solutions. In this paper, we propose an adaptive optimized ML algorithm selection and installation (OMASI) method for workload prediction and predictive autoscaling to maintain the QoS of user applications. With OMASI, a central repository server collects information about the analytical objectives and optimization preferences of multiple distributed ML engines and provides each with the ML algorithm best suited to its data analytics. We implemented and demonstrated OMASI on an experimental in-network computing testbed and evaluated the workload prediction accuracy of several available ML algorithms in terms of content downloading latency and throughput. The results show that OMASI can maintain the QoS of user applications, such as latency and throughput, by reducing the CPU usage prediction error by up to 86%.
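The selection mechanism the abstract describes (a central repository matching each ML engine's optimization preferences to the best-suited algorithm) can be illustrated with a minimal sketch. All algorithm profiles, scores, and weights below are illustrative assumptions, not the paper's actual repository contents or scoring function.

```python
# Hedged sketch of an OMASI-style selection step: the central repository
# scores candidate ML algorithms against an engine's optimization
# preferences (e.g. prediction accuracy vs. training speed) and returns
# the best-suited one. Profiles and numbers are hypothetical.
from dataclasses import dataclass


@dataclass
class AlgorithmProfile:
    name: str
    accuracy: float        # illustrative: 1 - normalized RMSE on a benchmark workload
    training_speed: float  # illustrative: higher means faster to (re)train


# Hypothetical repository entries for algorithms mentioned in the paper's keywords.
REPOSITORY = [
    AlgorithmProfile("least_squares", accuracy=0.70, training_speed=0.95),
    AlgorithmProfile("svr",           accuracy=0.85, training_speed=0.60),
    AlgorithmProfile("lstm",          accuracy=0.92, training_speed=0.30),
]


def select_algorithm(w_accuracy: float, w_speed: float) -> AlgorithmProfile:
    """Return the profile maximizing the engine's weighted preference score."""
    return max(
        REPOSITORY,
        key=lambda a: w_accuracy * a.accuracy + w_speed * a.training_speed,
    )


# An engine that strongly prioritizes accuracy over retraining speed:
best = select_algorithm(w_accuracy=0.9, w_speed=0.1)
print(best.name)  # prints "lstm"
```

Each deployed ML engine would report its own preference weights to the repository server, which then installs the selected algorithm on that engine; the weighted-sum objective here is only one plausible way to encode such preferences.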
Date of Conference: 06-10 May 2024
Date Added to IEEE Xplore: 02 July 2024
Index Terms:
- Workload Prediction
- In-network Computing
- Prediction Accuracy
- Learning Algorithms
- Service Quality
- Prediction Error
- Objective Analysis
- Selection Algorithm
- Large-scale Networks
- Central Server
- Machine Learning Technology
- CPU Usage
- Root Mean Square Error
- Least-squares
- Objective Function
- Training Dataset
- Functional Networks
- Machine Learning Models
- Training Time
- Long Short-term Memory
- Support Vector Regression
- Machine Learning Repository
- Prediction Algorithms
- CPU Resources
- Baseline Scenario
- Microservices
- Local Database
- Registry Database
- Time Slot
- Dynamic Allocation