The widening speed gap between processor and memory is often the critical bottleneck to achieving high performance. Hardware caches, programming models, algorithms, and data structures have been proposed to exploit locality and reduce memory overhead. Some of these designs share a common load-and-compute style, in which the algorithm first moves all needed data into the cache and then performs operations only on that staged data. In this paper, we introduce a locality function to model the data-reuse ability of an algorithm and propose a corresponding performance model. We then analyze, under this model, how to use and design caches: (1) We present theorems giving the optimal cache-partition scheme for the software-buffering technique, which aims to hide memory overhead. (2) We provide methods to determine the optimal multicore design that maximally leverages the benefits of both shared and private caches. (3) We incorporate memory overhead into Amdahl's Law to study how memory bandwidth limits speedup.
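As a hedged illustration of how a memory term can enter Amdahl's Law (our own notation, not necessarily the paper's formulation): suppose a serial run splits into compute time $t_c$ and memory-transfer time $t_m$, and only the compute part scales with core count $n$ while $t_m$ is pinned by memory bandwidth. Then

\[
S(n) \;=\; \frac{t_c + t_m}{t_c/n + t_m}
\;\xrightarrow{\;n \to \infty\;}\; 1 + \frac{t_c}{t_m},
\]

so the speedup is capped by the compute-to-memory-time ratio no matter how many cores are added, which is the kind of bandwidth-imposed limit the abstract refers to.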
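The load-and-compute style mentioned above can be sketched in a few lines of C. This is a minimal illustration, not the paper's implementation: a small stack buffer stands in for the cache, each tile of the input is first staged into it (the load phase), and the arithmetic then touches only the staged copy (the compute phase). The function name, tile size, and reduction are all assumptions made for the example.

```c
#include <string.h>

#define TILE 4  /* assumed tile size; in practice chosen to fit the cache */

/* Hypothetical load-and-compute reduction: stage each tile into a
 * software buffer, then operate only on the buffered (cache-resident) data. */
static long sum_tiled(const long *data, int n) {
    long buf[TILE];   /* software buffer standing in for the cache */
    long total = 0;
    for (int i = 0; i < n; i += TILE) {
        int len = (n - i < TILE) ? n - i : TILE;
        memcpy(buf, data + i, (size_t)len * sizeof(long));  /* load phase */
        for (int j = 0; j < len; j++)                       /* compute phase */
            total += buf[j];
    }
    return total;
}
```

The point of the separation is that the load phase issues contiguous memory traffic that can be overlapped or prefetched, while the compute phase hits only data already resident in cache.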