Creating a RAID-5 with SoftRAID 5

RAID-5 is usually achieved with hardware (via an enclosure or expansion card), but RAID-5 software solutions will emerge in 2014.

RAID-6 is similar to RAID-5, but uses two or more parity drives for fault tolerance. For example, eight 4TB drives in a RAID-6 might be configured with two parity drives for a total usable capacity of 24TB.

RAID-5 requires a minimum of three drives, since two drives could be either a RAID-0 stripe, a RAID-1 mirror, or separate drives; that 3rd drive is needed for the parity information. With N drives in a RAID-5, the capacity achieved is that of N-1 drives, since one drive's worth of capacity is used for parity. Using four 4TB drives, a 4-drive RAID-5 would use 4 X 4TB = 16TB of raw capacity to deliver a usable capacity of 3 X 4TB = 12TB.

TIP: the entire capacity of a RAID-5 might be awkward to back up. Hence partitioning a RAID-5 into volumes of 4TB each (max) allows simple and fast clone backups to single external 4TB backup drives.

When a drive fails in a RAID-5, operation continues with no data loss; the array effectively becomes a RAID-0 stripe. But a 2nd failure will fail the entire RAID. Hence it is critical to have at least one “cold spare” on hand, preferably pre-tested. When (not if) a drive fails, it can be swapped out for the cold spare, which allows the RAID-5 to rebuild the parity information and restore its fault tolerance against a subsequent drive failure. Rebuilding can take 12-36 hours depending on drive capacity and usage.

Write and read performance in MB/sec is shown across a 2TB partition.

- The top pair of lines is a 4-drive RAID-0 stripe, showing ~640 MB/sec.
- The middle pair is a 3X stripe, showing ~470 MB/sec.
- The green and blue heavier lines are a RAID-5 stripe using four drives, showing ~450 MB/sec reads and ~370 MB/sec writes.

This particular RAID-5 can be thought of as a 3-drive RAID-0 stripe with parity. Hence its performance for reads is very close to a 3-drive RAID-0 stripe. Write performance is still excellent, but there is overhead in calculating the parity information and writing it to that 4th drive. Faster drives might perform better, and this graph is from a beta version of SoftRAID 5.

Appendix: misleading claims about RAID 5

Jan 2014: archival information which time has borne out.

In 2009, an article by Robin Harris appeared on ZDNet (Why RAID 5 Stops Working in 2009), rebutted by various articles, including this one. The article claims that RAID-5 is fatally flawed because a rebuild after a drive failure would be all but guaranteed to fail from a read error (in the context of 2TB drives). The article lacks any real-world substantiation, and uses statistics incorrectly to come to a flawed conclusion. It makes other tenuous assumptions about failure rates that don’t reflect my experience, or even how most people use their computers.

The article also misses a key point: a RAID-5 remains perfectly usable after a drive has failed. In fact, the unit could then continue to work for months or years, though it could not tolerate another drive failure.

I’ve been running a 4-way RAID-0 stripe for four years now, and the only failure I had was a Maxtor 500GB back in 2006, and that was a quick failure, a bad drive to start with. My Mac Pro runs 10-14 hours a day, so that’s a darn good track record. But it’s always good to test, so I set about determining if I could produce even a single error from my Hitachi 2TB 7K2000 hard drives, reviewed here.
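To make the RAID arithmetic above concrete, a few small Python sketches follow. All function names are illustrative (not part of SoftRAID or any real API), and drive counts and sizes match the examples in the text. First, the N-1 capacity rule:

```python
def raid5_usable_tb(num_drives: int, drive_tb: float) -> float:
    """RAID-5 usable capacity: one drive's worth of space holds parity."""
    if num_drives < 3:
        raise ValueError("RAID-5 requires at least three drives")
    return (num_drives - 1) * drive_tb


def raid6_usable_tb(num_drives: int, drive_tb: float, parity_drives: int = 2) -> float:
    """RAID-6-style usable capacity with two or more parity drives."""
    if parity_drives < 2 or num_drives <= parity_drives:
        raise ValueError("need at least two parity drives plus one data drive")
    return (num_drives - parity_drives) * drive_tb


print(raid5_usable_tb(4, 4.0))  # 12.0 TB usable from 16 TB raw, as above
print(raid6_usable_tb(8, 4.0))  # 24.0 TB usable from eight 4TB drives
```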
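The fault tolerance discussed above rests on parity: each parity block is the XOR of the corresponding data blocks, so any one lost block can be recomputed from the survivors. A minimal single-stripe sketch follows; a real RAID-5 rotates parity across all drives rather than dedicating one drive to it.

```python
from functools import reduce


def xor_blocks(blocks: list[bytes]) -> bytes:
    """XOR equal-length blocks; used both to create parity and to rebuild."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)


# One stripe across a 4-drive RAID-5: three data blocks plus one parity block.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Simulate losing the second drive: XOR the survivors (data + parity) to rebuild it.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]  # the lost block is recovered exactly
```

This is also why a second failure is fatal: with two blocks missing from a stripe, the single parity block no longer determines either one.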
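The graph numbers are consistent with a simple model in which a stripe streams from all of its data drives in parallel. The ~160 MB/sec per-drive rate below is an assumption inferred from the 4-drive figure, not a measured value:

```python
def stripe_read_mb_s(data_drives: int, per_drive_mb_s: float = 160.0) -> float:
    """Idealized streaming read rate: data drives deliver in parallel."""
    return data_drives * per_drive_mb_s


print(stripe_read_mb_s(4))  # 640 MB/sec -- the 4-drive RAID-0 figure
print(stripe_read_mb_s(3))  # 480 MB/sec -- close to both the 3-drive stripe and
                            # the 4-drive RAID-5, which reads like a 3-drive
                            # stripe because one block per stripe is parity
```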
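Finally, the 12-36 hour rebuild window can be sanity-checked: a rebuild must read the surviving drives and write one full drive end to end. The sustained rates here are assumptions for illustration:

```python
def rebuild_hours(drive_tb: float, sustained_mb_s: float) -> float:
    """Hours to process one drive's capacity at a sustained throughput."""
    total_mb = drive_tb * 1_000_000  # decimal TB, as drives are marketed
    return total_mb / sustained_mb_s / 3600


print(round(rebuild_hours(4.0, 100), 1))  # ~11.1 h on an otherwise idle array
print(round(rebuild_hours(4.0, 40), 1))   # ~27.8 h if competing with other I/O
```

Those two cases roughly bracket the 12-36 hour figure cited above.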