[ale] Hot Point RAID and Linux

Mike Panetta ahuitzot at mindspring.com
Wed Aug 14 10:04:03 EDT 2002


On Wed, 2002-08-14 at 07:55, Stuffed Crust wrote:
> On Wed, Aug 14, 2002 at 06:23:11AM -0700, Mike Panetta wrote:
> > In my experience Linux software RAID is faster than the hardware
> > controller cards.  It's not necessarily as robust though.  The company I
> > used to work for benchmarked several different hardware cards (the 3ware
> > Escalade with 7200RPM ATA-100 drives on it, a Mylex DAC1100, and a
> > couple of AMI MegaRAID cards); none of them could outperform the Linux
> > software RAID solution.  We used the exact same drives to compare, so
> > it's not like we were using 7200RPM drives with the Mylex ctrlr and
> > 15000RPM drives with Linux RAID. ;)
> 
> Now did you test them with RAID5 or just RAID0/RAID1?

We did tests with varying RAID levels, but we concentrated on RAID 5.
The market we were targeting wanted large amounts of storage rather than
security of data (for the most part), so RAID 1 was not looked at that
much.  RAID 0 is not really RAID at all (there is no redundancy), so we
did not benchmark it; we only made sure we could configure it.  Most of
our benchmarks were in RAID 5 mode.

> 
> Having a hardware XOR engine (ie the host CPU doesn't do the work) in a
> RAID5 setup makes a rather substantial difference in overall system
> throughput.

Yeah, but it's still a CPU that's doing the XOR work in the case of the
(relatively cheap) hardware RAID cards.  Usually they have an i960 or
equivalent that does all the work, and it's not that good a data pump,
really.  I have not seen a RAID card with an actual dedicated piece of
hardware that does the XOR.  That does not mean one doesn't exist, though.
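For anyone unfamiliar with what that XOR work actually is: RAID 5 parity is just the byte-wise XOR of the data blocks in a stripe, so any one lost block can be rebuilt by XOR-ing the survivors.  A minimal sketch (block contents and stripe width here are made up for illustration; real implementations also rotate which disk holds parity):

```python
def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# A stripe of three data blocks; parity is their XOR.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Simulate losing the middle block and rebuilding it
# from the surviving blocks plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```

This is the loop the i960 (or the host CPU, with Linux software RAID) has to run for every stripe written, which is why raw integer throughput matters so much here.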

> 
> Remember, it doesn't matter if the host CPU can crunch the numbers faster
> if it's supposed to be running server code. 
 
These tests were done over a Samba link using NetBench; does that count?
:)  We never quoted any results we got with Bonnie.  Most if not all of
our testing was done with NetBench.  Oh, and we were a disk server only,
so we did not have to worry about server-side processes sucking up any
CPU time (other than Samba or NFS or whatever, of course).

> 
> Factor in the bus I/O overhead of having to perform the piles of
> read/writes necessary to update the RAID5 parity stuff, and hardware is
> the only way to go.

I can see where it would be in a larger environment, for sure.  But in a
system that has (usually) no more than 4-8 disks, it's less of an issue.

> 
>  - Pizza
> -- 
> Solomon Peachy                                   pizza at f*cktheusers.org
> I'm not broke, but I'm badly bent.                         ICQ #1318344
> Patience comes to those who wait.                         Melbourne, FL
>                Quidquid latine dictum sit, altum viditur

Mike


---
This message has been sent through the ALE general discussion list.
See http://www.ale.org/mailing-lists.shtml for more info. Problems should be 
sent to listmaster at ale dot org.



