It is almost evident that SRAM-based cache memories will suffer a significant degree of parametric random defects if technology scaling is to be leveraged to its full extent. Although strong multibit error-correcting codes (ECC) appear to be a natural choice for handling a large number of random defects, investigation of their application in caches remains largely missing, arguably because it is commonly believed that multibit ECC incurs prohibitive performance degradation and silicon/energy cost. By developing a cost-effective L2 cache architecture using multibit ECC, this paper attempts to show that, with appropriate cache architecture design, this common belief does not necessarily hold true for L2 caches. The basic idea is to supplement a conventional L2 cache core with several special-purpose small caches/buffers, which greatly reduce the silicon cost and minimize the probability of explicitly executing multibit ECC decoding on the cache read critical path, while maintaining soft error tolerance. Experiments show that, at a random defect density of 0.5 percent, this design approach maintains almost the same instructions-per-cycle (IPC) performance as an ideal defect-free L2 cache over a wide spectrum of benchmarks, while incurring less than 3 percent silicon area overhead and 36 percent power consumption overhead.
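The read-path idea sketched above can be illustrated with a toy model. The sketch below is purely an assumption-laden illustration, not the paper's actual design: it assumes a small side structure records which cache lines contain defective cells, so that the common case (a read from a defect-free line) skips the expensive multibit ECC decode entirely, and only reads to flagged lines take the slow decode path. All class and method names here are hypothetical.

```python
class ECCCacheSketch:
    """Toy model of an L2 read path where multibit ECC decoding is kept
    off the common-case critical path (illustrative assumption only)."""

    def __init__(self):
        self.data = {}                # main L2 store: address -> word
        self.defective_lines = set()  # small side buffer flagging lines
                                      # with known parametric defects

    def write(self, addr, word, defective=False):
        self.data[addr] = word
        if defective:
            self.defective_lines.add(addr)

    def read(self, addr):
        word = self.data[addr]
        if addr in self.defective_lines:
            # Slow path: explicit multibit ECC decode, paid only by the
            # small fraction of reads that touch defective lines.
            word = self.multibit_ecc_decode(word)
        # Fast path: defect-free reads return without invoking the decoder.
        return word

    def multibit_ecc_decode(self, word):
        # Placeholder for an expensive multibit ECC decode operation.
        return word
```

At a 0.5 percent defect density, the vast majority of reads would take the fast path in such a scheme, which is consistent with the abstract's claim of near-ideal IPC.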