Abstract:
Distributed in-memory computing frameworks typically expose a large number of parameters (e.g., the shuffle buffer size) that together form a configuration for each execution. A well-tuned configuration can bring large performance improvements. However, to improve resource utilization, jobs often share the same cluster, which causes dynamic cluster load conditions. According to our observation, the variation of cluster load reduces the effectiveness of configuration tuning. In addition, resource overestimation, a common problem for cluster computing jobs, also occurs during configuration tuning. It is challenging to efficiently find the optimal configuration in a shared cluster while keeping memory usage low. In this article, we introduce MespaConfig, a job-level configuration optimizer for distributed in-memory computing jobs. MespaConfig advances previous work by being both memory-sparing and load-sensitive. We evaluate MespaConfig with six typical Spark programs under different load conditions. The evaluation results show that MespaConfig improves the performance of these programs by up to 12× compared with default configurations. MespaConfig also reduces the memory usage of the resulting configurations by up to 41 percent and lowers the optimization time overhead by 10.8× compared with the state-of-the-art approach.
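For readers unfamiliar with what a "job-level configuration" looks like in practice, the sketch below shows a handful of standard Spark properties being set for a single job. This is only an illustrative example of the kind of parameters such frameworks expose; the abstract does not list the exact parameter set that MespaConfig tunes, and the values shown are placeholders, not recommendations.

```scala
import org.apache.spark.sql.SparkSession

// Illustrative job-level configuration for one Spark execution.
// Parameter names are standard Spark properties; the values are
// arbitrary placeholders, not those chosen by MespaConfig.
object ConfiguredJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ExampleJob")
      .config("spark.shuffle.file.buffer", "64k")   // shuffle buffer size
      .config("spark.executor.memory", "4g")        // per-executor heap
      .config("spark.executor.cores", "2")          // cores per executor
      .config("spark.memory.fraction", "0.6")       // execution/storage share
      .config("spark.default.parallelism", "200")   // default task count
      .getOrCreate()

    // ... job logic ...
    spark.stop()
  }
}
```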
Published in: IEEE Transactions on Services Computing (Volume 15, Issue 5, Sept.-Oct. 2022)