[ale] Onboard RAID

Michael B. Trausch mike at trausch.us
Wed Nov 16 14:28:22 EST 2011


On 11/16/2011 02:12 PM, Greg Clifton wrote:
> More details: this is a new server (Single Proc Xeon X3440) with only 10
> users, so it won't be heavily taxed. Moving the storage to a different
> Linux box really isn't an option either. We're replacing an OLD server
> running NT with the 2008 server. 

Depending on why it "isn't an option", it might be worth pushing
back on.  The whole point of separating the storage out is that
Windows Server sucks, even with only 10 users on it.  The way it
operates sucks, the way it treats things on the disks sucks, the
overall speed of data access sucks.  Keep a single disk in the
Windows server (maybe mirrored) as the system disk, and put
everything else somewhere else.  If you don't want a Linux box, then
get a RAID array box that hooks up to the Windows box with a single
eSATA connection and call it a day.  That is better than having
Windows sort it out.

> What you are saying is that SOFTWARE is "more better" in all cases than
> the BIOS based RAID configuration. OK, but does Server 2008 support RAID
> 10? If not, we must rely on the BIOS RAID.

And you do NOT want to rely on BIOS RAID.  At all, period, never.
Bad idea, bad call.  I have seen *many* BIOS RAID setups fail for a
wide variety of reasons, but most of the time it seems to be because
some component of their implementation is buggy.  It happens
frequently enough that I wouldn't trust hourly-snapshotted data to
such a storage mechanism, I'll say that much.

> If we must do that then the question falls back to which is the better
> RAID option [under Windows].
> I saw something on some RAID forum that said the Adaptec was for Linux
> OS and the Intel for MS OS. Since Adaptec drivers are built into Linux,
> that at least makes some sense.

Adaptec has drivers for Windows as well.

The thing is that with hardware RAID it doesn't matter: you cannot
upgrade, and your array is not portable.  It is a dangerous option.

Consider this:  what happens if your disk controller fails?  If that
disk controller does RAID, and it has been discontinued, you may be
looking at a whole RAID rebuild instead of just a hardware swap-out.  In
other words, with hardware RAID an outage is far more likely to drag
on, because you'll have to start over: rebuild the array from
scratch and then restore the data to it.

If the thing that fails is a box running Linux with four disks in it,
you replace the box and move the disks over and you're done.  If you
have a spare box on hand, you can be up in ten minutes.
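
To make that concrete: if those disks hold a Linux md (software
RAID) array, reassembling it on the replacement box is roughly this
much work.  The device names and mount point here are just an
example, not anything from your setup:

  # Scan the moved disks for md superblocks and bring the array up
  mdadm --assemble --scan
  # ...or name the member disks explicitly if you prefer
  mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
  # Then mount it wherever it lived before
  mount /dev/md0 /srv/data

The array metadata lives on the disks themselves, which is exactly
why they don't care what box or controller they're plugged into.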

If you *are* going to go the hardware RAID route, make sure you have
a spare, identical controller in stock in case of failure.  I've
seen cases where RAID controllers were incompatible after seemingly
minor changes to the model number (device F00e vs. F00f might be two
completely different things, same for F00e and F00e+).

And just don't use fakeraid (that is, BIOS-provided RAID).  It is
simply not a viable option if you like uptime and robustness.
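
If it helps, this is also where Linux software RAID answers the
RAID 10 question from above.  A minimal sketch with mdadm, assuming
four data disks and made-up device names:

  # Build a four-disk RAID 10 array from whole disks (example names)
  mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde
  # Put a filesystem on it and keep an eye on the initial sync
  mkfs.ext4 /dev/md0
  cat /proc/mdstat

No controller firmware to worry about, no vendor lock-in, and the
same tools work on whatever hardware you move the disks to.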

	--- Mike
