[ale] WAS RE: suddenly finding computer 'seized-up', NOW RAID with SATA Drives

Pat Regan thehead at patshead.com
Fri Mar 24 14:19:15 EST 2006


Dan Lambert wrote:
> I've had similar experiences with several manufacturers' drives in the past,
> Having to return drives to Seagate, Maxtor, and WD for RMA during warranty,
> and having to jump through all kinds of hoops to get them replaced. Dealing
> with WD was the least painful, but I still had to be without the drives
> during that process.
> 

Drives are so large and so cheap today that I like to make sure I have a
spare available.

With Western Digital I have always used their advance RMA process.  You
give them your credit card number and they immediately ship you a drive.
I usually get the drive within a few days, then I pack the old drive
right back up in the packaging they sent me.

I usually have good backups, so I don't worry too much about running my
RAID in a degraded state.  ymmv, of course :).

> The only ones that I can honestly say that I have NEVER had a problem with
> were Hitachi drives I've owned. I have had (so far) very similar luck with
> some Samsung drives that I purchased about 2 years ago. So far, they are
> 100% reliable.
> 

Everyone on this list with an opinion will be able to tell us which
drives they prefer, and everyone will give a different answer.  :)

> This brings up something that I've been thinking about, and that is that due
> to the densities (as you mentioned), we seem to be getting less reliable
> drives in the long term, which makes me have to consider going to a higher
> level RAID just for my own desktop and personal use. I'm no super geek, but
> I have a fair amount of experience in building, repairing, and managing PCs.
> I guess that's what really got me interested in LINUX to begin with. I
> couldn't get a high reliability OS from MickeyStinks.
> 

I imagine the quality problems are a combination of platter density and
profit margin.

> At any rate, I have been experimenting with several flavors of Linux over
> the last couple of years, and have gotten fairly comfortable with Debian
> based offerings for desktop use. I have had good experiences with using the
> Ubuntu distro, and currently have one desktop running Ubuntu, and one
> running Kubuntu. I'm still using Centos 4.X on my server, and will most
> likely stay with that unless something comes along that just blows my socks
> off.
> 
> In current Linux distros, particularly the Debian based ones, has anyone
> done any experimenting with implementing a RAID 5 using SATA drives and an
> onboard RAID controller from NVIDIA? All of my current desktop boxes have
> AMD 64 processors, and each of the motherboards has an on board NVIDIA RAID
> controller which can be setup as a RAID 0 or 1. What I'm wondering is if
> there is any implementation that could be used to create a software RAID 5,
> or would one have to purchase a SATA RAID controller card to do this? If
> this is the case, can anyone recommend a RAID controller that would be a)
> reliable, and b) inexpensive enough to use in a desktop computer. 
> 

About 5 years ago I bought a 3ware 7800 (8 port) IDE RAID controller.
It was expensive, but well worth it.  It is absolutely awful at RAID 5
because it has virtually no cache, but it is absolutely lightning fast
at RAID 1 and 0 (and, I assume, 10).

I have been on pretty tight budget at home for the past 2 years, so I
have had to get creative.  A few months ago I decided to experiment a
little on my Ubuntu Breezy desktop machine (with the 3ware card).

I have 3 160 GB drives hooked up to the 3ware card right now.  I have
most of my OS (root and home dirs) on a 30 GB RAID 0.  The rest of the
drives are RAID 5.  I run LVM on top of my RAID sets, btw.

I have 30 GB of the RAID 5 set aside as a mirror of the RAID 0 set.  I
use rdiff-backup to mirror the RAID 0 data onto the equivalently sized
RAID 5 LV.  This happens 4 times per day.  In the (likely! :p) event
that I lose a drive and the RAID 0 gets toasted, I can easily boot off
of the RAID 5 and only lose a few hours' data.
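The mirroring above can be driven by cron; something like this crontab fragment would do it four times a day (a hypothetical sketch: the paths and the RAID 5 mount point are placeholders, not my actual setup):

```
# Hypothetical /etc/crontab entries: rdiff-backup the RAID 0 filesystems
# onto the matching LVs on the RAID 5, every six hours.
0 */6 * * *  root  rdiff-backup / /mnt/raid5/rootmirror
5 */6 * * *  root  rdiff-backup /home /mnt/raid5/homemirror
```

rdiff-backup keeps reverse increments as well as the mirror, so you can also pull back files from before the most recent run.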

When I was setting this up, I did some quick tests using dd with a large
block size.  I just wanted to see what the max throughput would be.
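The test amounted to something like this (a sketch: DEV defaults to /dev/zero here so the command is harmless to run as-is, but you would point it at a real drive to measure anything meaningful):

```shell
# Crude sequential-read throughput check: read 1 GiB with a large block
# size and let dd report MB/sec on its summary line.
# DEV is a placeholder; set it to a real drive (e.g. DEV=/dev/sda) to
# test actual hardware.  Reading is non-destructive.
DEV=${DEV:-/dev/zero}
dd if="$DEV" of=/dev/null bs=1M count=1024 2>&1 | tail -n 1
```

Reads straight off the device bypass the filesystem, so this measures raw streaming speed, not real-world workload performance.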

The individual drives had a throughput (on the 3ware controller) of
about 60 or so MB/sec.  The 3 drive RAID 0 was pulling just over 130
MB/sec (it is a 64-bit PCI card in a 64-bit slot).  I thought I might be
hitting a bottleneck, so I tried moving a single drive to the on board
IDE.  This lowered the RAID 0 throughput to under 100 MB/sec.

This surprised me; I expected it to either stay the same or go up
slightly.  I do not recall the individual drive speed on the IDE channel.

Anywho, I guess what I am saying is that hardware RAID is very nice (if
you have a good controller).  It is much easier to replace drives and
rebuild your array with a hardware controller.

Software RAID is more flexible.  You can't run multiple RAID levels on
the same set of drives with hardware RAID.  Some would wonder why you
would want to, though :p.
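To answer the quoted RAID 5 question: on a Debian-based distro you would skip the NVIDIA fakeraid entirely and build the array with Linux software RAID (md).  Here is a hypothetical sketch of my mixed-level layout done purely in software; sda/sdb/sdc are placeholder SATA drives, each assumed to be pre-partitioned into a small sdX1 (~10 GB) and a large sdX2, and the whole thing bails out harmlessly unless run as root with mdadm installed:

```shell
# Hypothetical md version of the layout described above: a small fast
# RAID 0 stripe plus a RAID 5 across the rest of each drive, with LVM
# layered on top.  Device names are placeholders.
ran=0
if command -v mdadm >/dev/null 2>&1 && [ "$(id -u)" -eq 0 ]; then
    # Small, fast RAID 0 stripe for the OS:
    mdadm --create /dev/md0 --level=0 --raid-devices=3 \
        /dev/sda1 /dev/sdb1 /dev/sdc1

    # RAID 5 across the remainder of each drive:
    mdadm --create /dev/md1 --level=5 --raid-devices=3 \
        /dev/sda2 /dev/sdb2 /dev/sdc2

    # LVM on top of the md sets, as described above:
    pvcreate /dev/md0 /dev/md1
    vgcreate fast /dev/md0
    vgcreate bulk /dev/md1
    lvcreate -L 30G -n rootmirror bulk   # LV that mirrors the RAID 0 data
    ran=1
else
    echo "skipping: needs root and mdadm installed"
fi
```

The md driver doesn't care what controller the drives hang off of, so the onboard NVIDIA ports work fine in plain AHCI/IDE mode with no extra RAID card.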

I once again have some disposable income, so I am thinking about buying
a set of 10k RPM Raptors and doing the exact same thing.  I have been
very happy with the results so far.

Running RAID 5 across 3 drives would give me 320 GB of (slower) usable
space.  With my current setup, I get 30 GB of fast space and about 270
GB of RAID 5.  I am only losing about 20 GB, which isn't even worth
counting.
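The space arithmetic checks out; here is a quick shell sanity check, assuming 10 GB of each drive is carved off for the stripe (round hypothetical numbers):

```shell
# Capacity check for 3 x 160 GB drives with a 10 GB slice per drive
# given to the RAID 0 stripe and a 30 GB backup LV on the RAID 5.
n=3; per_drive=160; slice=10; backup_lv=30

raid0=$((n * slice))                      # GB in the fast stripe
raid5=$(((n - 1) * (per_drive - slice)))  # usable GB of RAID 5
all_raid5=$(((n - 1) * per_drive))        # usable GB if everything were RAID 5

echo "fast: ${raid0} GB, RAID 5 after backup LV: $((raid5 - backup_lv)) GB"
echo "lost vs all-RAID 5: $((all_raid5 - raid0 - (raid5 - backup_lv))) GB"
```

This prints 30 GB fast, 270 GB of remaining RAID 5, and a 20 GB loss versus the all-RAID 5 alternative.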

I just noticed that this is getting to be a very long email, so I am
going to stop now :).

Pat