RAID5 performance with distributed sparing

Authors: Thomasian, A. (IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA); Menon, J.

Distributed sparing is a method to improve the performance of RAID5 disk arrays with respect to a dedicated sparing system with N+2 disks (including the spare disk), since it utilizes the bandwidth of all N+2 disks rather than leaving the spare idle. We analyze the performance of RAID5 with distributed sparing in normal mode, degraded mode, and rebuild mode in an OLTP environment, which implies small reads and writes. The analysis in normal mode uses an M/G/1 queuing model, which takes into account the components of disk service time. In degraded mode, a low-cost approximate method is developed to estimate the mean response time of fork-join requests resulting from accesses to recreate lost data on the failed disk. Rebuild mode performance is analyzed by considering an M/G/1 vacationing server model with multiple vacations of different types to take into account differences in processing requirements for reading the first and subsequent tracks. An iterative solution method is used to estimate the mean response time of disk requests, as well as the time to read each disk, and is shown to be quite accurate through validation against simulation results. We next compare RAID5 performance in a system (1) without a cache; (2) with a cache; and (3) with a nonvolatile storage (NVS) cache. The last configuration, in addition to improving read response time through cache hits, provides a fast-write capability, such that dirty blocks can be destaged asynchronously and at a lower priority than read requests, further improving read response time. The small-write penalty is also reduced by the possibility of repeated writes to dirty blocks in the cache and by taking advantage of disk geometry to efficiently destage multiple blocks at a time.
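For context, the normal-mode analysis rests on an M/G/1 queue fed by the small reads and writes of an OLTP workload. The sketch below is not the paper's model (which accounts for the individual components of disk service time, caching, and rebuild activity); it only illustrates two textbook ingredients the abstract alludes to: the RAID5 small-write penalty of four disk accesses per logical write and the Pollaczek-Khinchine formula for M/G/1 mean response time. All disk counts, request rates, and service-time moments below are hypothetical.

    # Illustrative sketch only: per-disk load of a RAID5 array with distributed
    # sparing in normal mode, combined with the textbook M/G/1 mean response time.

    def raid5_per_disk_rate(read_rate, write_rate, n_disks):
        """Per-disk arrival rate of small requests, assuming accesses spread
        evenly over all disks; each logical small write costs 4 disk accesses
        (read old data, read old parity, write new data, write new parity)."""
        return (read_rate + 4.0 * write_rate) / n_disks

    def mg1_mean_response(lam, s_mean, s_second_moment):
        """Mean response time of an M/G/1 queue (Pollaczek-Khinchine):
        R = E[S] + lam * E[S^2] / (2 * (1 - rho)), valid for rho = lam*E[S] < 1."""
        rho = lam * s_mean
        if rho >= 1.0:
            raise ValueError("queue is unstable (utilization >= 1)")
        return s_mean + lam * s_second_moment / (2.0 * (1.0 - rho))

    # Hypothetical numbers: N+2 = 16 disks, 200 logical reads/s, 50 logical
    # writes/s, mean disk service time 12 ms with E[S^2] = 2e-4 s^2.
    lam_disk = raid5_per_disk_rate(read_rate=200.0, write_rate=50.0, n_disks=16)
    print("per-disk rate (req/s):", lam_disk)
    print("mean response time (s):", mg1_mean_response(lam_disk, 0.012, 2.0e-4))

With these hypothetical inputs, each disk sees (200 + 4*50)/16 = 25 requests/s; at a 12 ms mean service time the utilization is 0.3 and the mean response time is roughly 15.6 ms. The paper's contribution lies in extending this kind of analysis to degraded and rebuild modes and to cached configurations, where the fast-write capability of an NVS cache reduces the small-write penalty shown above.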

Published in:

IEEE Transactions on Parallel and Distributed Systems (Volume 8, Issue 6)