With general-purpose programmable GPUs becoming increasingly popular, automated tools are needed to bridge the gap between the performance achievable on highly parallel architectures and the performance required by applications. In this paper, we concentrate on improving GPU memory management for applications whose large and intermediate data sets do not fit entirely in GPU memory. For such applications, the movement of excess data to CPU memory must be carefully managed. In particular, we focus on solving the joint task scheduling and data transfer scheduling problem posed in (N. Sundaram et al., May 2009), and propose an algorithm whose results are close to optimal (as measured by running simulated annealing overnight) in terms of the amount of data transferred, for image processing benchmarks such as edge detection and convolutional neural networks. Our results enable a reduction of up to 30× in the amount of data transferred compared to an unoptimized implementation; they are up to 2× better than the methods previously proposed in (N. Sundaram et al., May 2009) and less than 16% away from the optimal solution.
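To illustrate why transfer scheduling matters when data does not fit in GPU memory, the following toy model (our own sketch, not the paper's algorithm) counts host-to-device transfers for a fixed sequence of data-block accesses under two eviction policies: simple FIFO eviction versus Belady's offline rule, which evicts the resident block whose next use is furthest in the future. The access sequence, capacity, and block names are all hypothetical.

```python
def count_transfers(access_seq, capacity, policy):
    """Count host->device transfers for a sequence of block accesses.

    policy: 'fifo'   -> evict the oldest-loaded block
            'belady' -> evict the block whose next use is furthest away
                        (the offline-optimal eviction rule)
    """
    resident = []   # blocks currently in device memory, in load order
    transfers = 0
    for i, block in enumerate(access_seq):
        if block in resident:
            continue                       # hit: no transfer needed
        transfers += 1                     # miss: copy block to the GPU
        if len(resident) == capacity:      # memory full: must evict
            if policy == 'fifo':
                victim = resident[0]
            else:
                def next_use(b):
                    try:
                        return access_seq.index(b, i + 1)
                    except ValueError:
                        return len(access_seq)  # never used again
                victim = max(resident, key=next_use)
            resident.remove(victim)
        resident.append(block)
    return transfers

seq = ['A', 'B', 'C', 'A', 'B', 'D', 'A', 'C']
print(count_transfers(seq, capacity=2, policy='fifo'))    # 8 transfers
print(count_transfers(seq, capacity=2, policy='belady'))  # 6 transfers
```

Even on this tiny example, scheduling evictions with knowledge of future accesses saves transfers; the paper's joint task/transfer scheduling problem additionally allows reordering the tasks themselves, enlarging the space of savings.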