KeSCo: Compiler-based Kernel Scheduling for Multi-task GPU Applications


Abstract:

Nowadays, Graphics Processing Units (GPUs) dominate a wide spectrum of computing realms, and multi-tasking is increasingly applied in complex applications. To gain higher performance, multi-task programs require cumbersome programming effort to exploit inter-kernel concurrency at the source-code level. Although existing works automatically schedule kernels to enable inter-kernel concurrency, they all introduce new programming frameworks, and some even incur significant performance degradation compared with expert hand-tuned optimizations. To address this issue, we propose KeSCo, a compiler-based scheduler that exposes kernel-level concurrency in multi-task programs with only trivial code modification. At compile time, KeSCo schedules kernels into task queues using a strategy that accounts for both load balance and synchronization cost, and it removes redundant synchronizations with an algorithm customized for the computation flow. The design is further extended to the multi-process scenario, in which multiple GPU processes share a single context. Evaluations on representative benchmarks show that the proposed approach achieves an average speedup of 1.28× in the multi-task scenario (1.22× in the multi-process scenario). Even with reduced programming effort, the proposed design outperforms two state-of-the-art approaches, GrSched and Taskflow, by 1.31× and 1.16× on average, respectively.
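
For context, the source-level effort that KeSCo aims to eliminate typically amounts to hand-managing CUDA streams and synchronization points. The sketch below is illustrative only and not taken from the paper; the kernel names and launch configurations are hypothetical placeholders for two independent tasks whose kernels are issued on separate streams so they can overlap, with a single synchronization point at the end instead of per-kernel device-wide synchronization.

#include <cuda_runtime.h>

// Hypothetical kernels belonging to two independent tasks.
__global__ void taskA_stage1(float *a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) a[i] += 1.0f;
}
__global__ void taskA_stage2(float *a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) a[i] *= 2.0f;
}
__global__ void taskB_stage1(float *b, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) b[i] -= 1.0f;
}

int main() {
    const int n = 1 << 20;
    const int threads = 256, blocks = (n + threads - 1) / threads;
    float *a, *b;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));

    // Hand-written scheduling: one stream per task queue so kernels
    // from different tasks may run concurrently on the GPU.
    cudaStream_t sA, sB;
    cudaStreamCreate(&sA);
    cudaStreamCreate(&sB);

    // Intra-task ordering (stage1 before stage2) is preserved by
    // issuing both kernels to the same stream; no event is needed.
    taskA_stage1<<<blocks, threads, 0, sA>>>(a, n);
    taskA_stage2<<<blocks, threads, 0, sA>>>(a, n);

    // Task B is independent and overlaps with task A on its own stream.
    taskB_stage1<<<blocks, threads, 0, sB>>>(b, n);

    // One synchronization point at the end replaces redundant per-kernel
    // cudaDeviceSynchronize() calls that would serialize the tasks.
    cudaStreamSynchronize(sA);
    cudaStreamSynchronize(sB);

    cudaStreamDestroy(sA);
    cudaStreamDestroy(sB);
    cudaFree(a);
    cudaFree(b);
    return 0;
}

Per the abstract, KeSCo derives this kind of stream assignment and minimal synchronization automatically at compile time, so the programmer can write kernel launches as if they were issued sequentially.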
Date of Conference: 06-08 November 2023
Date Added to IEEE Xplore: 22 December 2023
Conference Location: Washington, DC, USA

I. Introduction

In the last decade, Graphics Processing Units (GPUs) have been widely applied in a myriad of domains, owing to their massive computational capability and high memory throughput. Advanced GPUs provide more resources than a typical monolithic GPU task or kernel requires and are thus frequently underutilized, especially when executing single-task programs, which launch just one kernel at a time. To alleviate this under-utilization, a plethora of approaches have been proposed, such as concurrently executing sliced kernels [1] and resource virtualization [2].
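
As a rough illustration of the kernel-slicing idea mentioned above (not the specific mechanism of [1]), a monolithic elementwise kernel can be launched as several smaller grids on distinct streams, so that slices from other kernels can interleave and occupy SMs that a single large launch would otherwise leave idle. The kernel, data size, and slice count below are hypothetical.

#include <cuda_runtime.h>

// Hypothetical elementwise kernel operating on a sub-range [lo, hi).
__global__ void scale_slice(float *x, int lo, int hi, float s) {
    int i = lo + blockIdx.x * blockDim.x + threadIdx.x;
    if (i < hi) x[i] *= s;
}

int main() {
    const int n = 1 << 22;
    const int slices = 4;          // illustrative slice count
    float *x;
    cudaMalloc(&x, n * sizeof(float));

    cudaStream_t st[slices];
    for (int k = 0; k < slices; ++k) cudaStreamCreate(&st[k]);

    // Launch the logical kernel as several smaller slices on distinct
    // streams; kernels from other tasks could interleave between them.
    const int chunk = n / slices;
    for (int k = 0; k < slices; ++k) {
        int lo = k * chunk;
        int hi = (k == slices - 1) ? n : lo + chunk;
        int blocks = (hi - lo + 255) / 256;
        scale_slice<<<blocks, 256, 0, st[k]>>>(x, lo, hi, 2.0f);
    }

    for (int k = 0; k < slices; ++k) {
        cudaStreamSynchronize(st[k]);
        cudaStreamDestroy(st[k]);
    }
    cudaFree(x);
    return 0;
}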
