In a chip multiprocessor (CMP) with shared caches, the last-level cache (LLC) is distributed across all the cores. This increases on-chip communication delay and thus influences the processor's performance. The LLC is also quite inefficient because of the many dead blocks it holds. Replication can be provided in shared caches by replicating blocks evicted from cores into the local LLC slices, minimizing access latency by utilizing the space of dead blocks that will not be referenced again before they are evicted. However, naively allowing all evicted blocks to be replicated yields limited performance benefit, since such replication does not take the reuse probability of replicated blocks into account. This paper proposes Adaptive Probability Replication (APR), a mechanism that counts each block's accesses in the L2 cache slices and monitors the number of evicted blocks with different access counts, in order to estimate the re-reference probability of blocks over their lifetime at runtime. Using the predicted re-reference probability, APR applies a probability replication policy and a probability insertion policy: blocks are replicated with probabilities corresponding to their re-reference probability and inserted at a position determined by it. We evaluate APR on a 16-core tiled CMP using the SPLASH-2 and PARSEC benchmarks. APR improves performance by 21% on average compared to a conventional shared cache design, by 17% over Victim Replication (VR), by 10% over Adaptive Selective Replication (ASR), and by 15% over Reactive NUCA (R-NUCA). The additional hardware cost of APR is well under 1% of an L2 cache slice.
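The core idea of the mechanism can be illustrated with a minimal sketch. The function names, counter layout, and interfaces below are assumptions for illustration only, not the paper's actual hardware design: eviction events are bucketed by access count, the fraction of blocks in each bucket that were later re-referenced serves as the estimated re-reference probability, and an evicted block is replicated into the local slice with that probability.

```python
import random

# Hypothetical sketch of probability-based replication (not the paper's
# actual implementation). `evicted[c]` counts blocks evicted after exactly
# `c` accesses; `reused[c]` counts how many of those were re-referenced.

def rereference_probability(access_count, reused, evicted):
    """Estimate P(re-reference) for a block evicted after `access_count` accesses."""
    total = evicted.get(access_count, 0)
    if total == 0:
        return 0.0  # no history for this bucket yet
    return reused.get(access_count, 0) / total

def should_replicate(access_count, reused, evicted, rng=random.random):
    """Replicate an evicted block into the local L2 slice with
    probability equal to its estimated re-reference probability."""
    return rng() < rereference_probability(access_count, reused, evicted)
```

For example, if 8 of 10 blocks evicted after two accesses were later re-referenced, a newly evicted two-access block would be replicated with probability 0.8; the same estimate could steer the insertion position of the replica.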
Date of Conference: 18-21 Dec. 2011