The buffer cache plays an essential role in smoothing the gap between upper-level computational components and lower-level storage devices. A good buffer cache management scheme should benefit not only the computational components but also the storage components by reducing disk I/Os. Existing cache replacement algorithms are well optimized for disks operating in normal mode, but are inefficient in failure scenarios, such as a parity-based disk array with one or more faulty disks. To address this issue, we propose a novel penalty-aware buffer cache replacement strategy, named Victim Disk(s) First (VDF) cache, to improve the reliability and performance of a storage system consisting of a buffer cache and disk arrays. When the disk array fails, VDF cache gives higher caching priority to blocks on the faulty disks, thus reducing the I/Os addressed directly to those disks. To verify the effectiveness of VDF cache, we have integrated VDF into two popular cache algorithms, least frequently used (LFU) and least recently used (LRU), yielding VDF-LFU and VDF-LRU, respectively. We have conducted extensive simulations as well as a prototype implementation for disk arrays tolerating one disk failure (RAID-5) and two disk failures (RAID-6). The simulation results show that VDF-LFU can reduce disk I/Os to surviving disks by up to 42.3 percent in RAID-5 and 50.7 percent in RAID-6, and VDF-LRU can reduce them by up to 36.2 percent in RAID-5 and 48.9 percent in RAID-6. Our measurement results also show that VDF-LFU can speed up online recovery by up to 46.3 percent in RAID-5 and 47.2 percent in RAID-6 under spare-rebuilding mode, or improve the maximum system service rate by up to 47.7 percent in RAID-5 under degraded mode without a reconstruction workload. Similarly, VDF-LRU can speed up online recovery by up to 34.6 percent in RAID-5 and 38.2 percent in RAID-6, or improve the system service rate by up to 28.4 percent in RAID-5.
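The core idea behind VDF-LRU can be illustrated with a minimal sketch: eviction scans the LRU order but skips blocks that live on faulty disks, so a miss on a faulty disk (which would require expensive degraded-mode reconstruction from the surviving disks) is less likely. The class name, the `(disk, block)` keying, and the all-or-nothing priority rule below are illustrative assumptions for this sketch, not the paper's actual implementation, which may weight penalties more finely.

```python
from collections import OrderedDict

class VDFLRUCache:
    """Sketch of an LRU cache with Victim Disk(s) First eviction:
    blocks on faulty disks are evicted last, keeping in cache the
    data whose re-read would trigger degraded-mode reconstruction."""

    def __init__(self, capacity, faulty_disks=()):
        self.capacity = capacity
        self.faulty_disks = set(faulty_disks)   # e.g. {2} after disk 2 fails
        self.cache = OrderedDict()              # (disk, block) -> data, LRU->MRU order

    def access(self, disk, block, data=None):
        key = (disk, block)
        if key in self.cache:
            self.cache.move_to_end(key)         # hit: mark most recently used
            return self.cache[key]
        if len(self.cache) >= self.capacity:    # miss on a full cache: evict
            self._evict()
        self.cache[key] = data
        return data

    def _evict(self):
        # Victim Disk(s) First: evict the least-recently-used block that
        # resides on a surviving disk, if any exists.
        for key in self.cache:                  # iterates in LRU order
            if key[0] not in self.faulty_disks:
                del self.cache[key]
                return
        # Every cached block belongs to a faulty disk: plain LRU fallback.
        self.cache.popitem(last=False)
```

With `faulty_disks=set()` this degenerates to ordinary LRU; after a failure, updating `faulty_disks` shifts cache capacity toward the failed disk's blocks, which is exactly the penalty-aware bias the abstract describes.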