[ale] Raid perf?

Pat Regan thehead at patshead.com
Sat Oct 25 05:44:50 EDT 2008


Greg Freemyer wrote:
> Per it, Raid 6 is in theory 3 times less efficient than Raid 10 on writes.
> 

I can't imagine where they could have come up with a number like that.
Raid 5 and 6 are very similar beasts.  They both have a huge performance
issue with writes, especially small random ones.

For raid 5 or 6 to make a small random write, it first has to read the
old data and parity (or the rest of the stripe), recompute the parity,
and then write the new data and parity back out.  Those extra reads and
writes can become a huge bottleneck.  It has
been years since I've benchmarked a raid 5, but I recall heavy random
write performance being close to the speed of a single disk no matter
how many drives were in the array.
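
Just to put rough numbers on the penalty, here is a little Python sketch
of the usual back-of-the-envelope arithmetic.  The per-disk IOPS figure
and the drive counts are made up for illustration, not measurements from
any of my arrays:

# Crude small-random-write arithmetic (illustrative numbers only).
# raid 10: each write costs 2 disk writes (one per mirror half).
# raid 5:  read old data + old parity, write new data + new parity = 4 I/Os.
# raid 6:  same idea with two parity blocks                        = 6 I/Os.

def random_write_iops(drives, per_disk_iops, penalty):
    # Very rough: total spindle IOPS divided by the write penalty.
    return drives * per_disk_iops / penalty

per_disk = 100  # a guess for a 7200 RPM drive

for name, drives, penalty in (("raid 10", 4, 2),
                              ("raid 5 ", 4, 4),
                              ("raid 6 ", 4, 6)):
    print(name, int(random_write_iops(drives, per_disk, penalty)), "IOPS-ish")

Write-back caches and full-stripe writes dodge some of that in practice,
which is why it only really hurts once the cache fills up (see below).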

I found some old bonnie++ numbers that may be of interest.  I know they
are from the same machine, and I am reasonably sure the machine was
using software raid with 4 ide drives at the time.  They pasted in
horribly, so here is a link:

http://patshead.com/bonnie

bonnie++ uses a dataset that should definitely overrun your cache.  Even
the sequential output on the 4-drive raid 0 was quite a bit more than 3
times better than the 4-drive raid 5.  I remember running a similar test
with 15 SCSI drives many, many years ago.  I seem to recall the raid 0
being nearly 15 times faster.  I wouldn't trust my memory 100% on that
one, though; test it for yourself before relying on it :).

You won't really notice performance problems with raid 5 until your
cache fills up with writes.  Then you'll REALLY notice them.

Any performance problems that apply to raid 5 most definitely apply to
raid 6.

> So to maintain the same write performance as my 4-disk Raid 10 I would
> need a 12-disk Raid 6.
> 

I wouldn't trust that without testing it thoroughly first.  If you want
to keep more redundancy and use fewer drives, you could run a raid 1 over
a pair of 4-disk raid 10 arrays and only use 8 drives.  You might even
pick up some extra read performance.

With software raid you could even just use 3-way mirrors in your raid
10.  I don't know that that has a name...  It would be a raid 0 over the
top of a pair of 3-way raid 1 arrays.  That would give you better write
performance and more redundancy than your proposed raid 6 but only
require 2 extra drives.  You could lose any 2 drives, and then possibly
2 more if the right drives failed.  I don't want to work out the
percentages for anything more than two drives :).
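
That said, it is easy enough to let Python brute-force those percentages
instead of doing them by hand.  A quick sketch, with drives 0-2 and 3-5
standing in for the two 3-way mirrors (the numbering is just for
illustration):

# The striped pair of 3-way mirrors dies only if some mirror loses all
# three of its drives.  Count how many random failure combinations survive.
from itertools import combinations

mirrors = [{0, 1, 2}, {3, 4, 5}]

def survives(dead):
    # Each mirror must still have at least one working drive.
    return all(m - set(dead) for m in mirrors)

for k in range(1, 5):
    combos = list(combinations(range(6), k))
    ok = sum(survives(c) for c in combos)
    print("%d failed drives: survives %d of %d combinations" % (k, ok, len(combos)))

If I ran that right, it survives every 1- and 2-drive failure, 18 of the
20 possible 3-drive failures, and 9 of the 15 possible 4-drive failures.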

> That is worse than I realized, but feasible for my needs.  (I think
> 250GB drives are down to $60 or so.  Thus 12 is only $700.  Obviously
> I need the controller card too.  Haven't priced that yet.  I do have
> big chassis that can hold the drives.)

What is your goal?  Do you want two drives' worth of redundancy, or do
you just want to reduce the amount of time you have zero redundancy in
the case of a failure?

Raid 10 should rebuild faster than raid 5 or 6 (I hear raid 6 rebuilds
pretty slowly?).  One of the advantages of Linux software raid is that
you can reduce your rebuild times simply by not partitioning the whole
disk.  It is much faster to rebuild 1/4 of your disk than 4/4, and that
goes for any raid level you end up choosing.
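
To put trivial numbers on that, here is a sketch; the rebuild rate is a
guess on my part, not a measurement, and your drives and bus will set
the real figure:

# Rebuild time is roughly proportional to how much of each disk the array uses.
disk_gb = 250          # whole drive
used_fraction = 0.25   # only a quarter of it partitioned for the array
rebuild_mb_per_s = 60  # guessed sustained rebuild rate

def rebuild_hours(gb):
    return gb * 1024.0 / rebuild_mb_per_s / 3600

print("whole disk:", round(rebuild_hours(disk_gb), 1), "hours")
print("1/4 disk  :", round(rebuild_hours(disk_gb * used_fraction), 1), "hours")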

Another interesting tidbit that may or may not be of use to you is that
the Linux raid 10 driver can stripe across an odd number of drives:

http://en.wikipedia.org/wiki/Non-standard_RAID_levels#Linux_MD_RAID_10
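
If you are curious how two copies land on three drives, here is a toy
Python sketch of my understanding of the "near" layout described on that
page; treat it as an illustration of the idea, not the actual on-disk
format:

# Toy model: 2 copies of each chunk placed on consecutive drives, wrapping
# around, which is why an odd drive count still works out.
DRIVES = 3
COPIES = 2

def placement(chunk):
    first = (chunk * COPIES) % DRIVES
    return [(first + c) % DRIVES for c in range(COPIES)]

for chunk in range(6):
    print("chunk %d -> drives %s" % (chunk, placement(chunk)))

Every drive ends up holding pieces of different chunks, so you still get
some striping benefit even though no two drives form a plain mirror pair.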

Pat
