[ale] Giant storage system suggestions

Ted W ted at techmachine.net
Thu Jul 12 08:16:27 EDT 2012


On Jul 12, 2012, at 3:20 AM, gcs8 wrote:
> I built a large ZFS pool for my personal use here at the house ( gcs8.org/san ). iSCSI has good throughput and you can have a small server take care of sharing it out from there. I use FreeNAS and it has served me pretty well; I have ~35.6 TB after RAID-Z2. The theory for my design was that if I lose any hardware I can replace it with whatever, since FreeNAS is taking care of my disks, not the hardware. If I could change two things about my setup I would try to get InfiniBand, or at least 10GbE, and use an SSD for caching.
> 
> Now I can't afford to keep a second one to rsync to, but I do use CrashPlan to back it up; it works fine and I have 8.7 TB backed up with them right now. Just my .02 cents.
> 
> from gcs8's mobile device.
> 
> On Jul 12, 2012 1:23 AM, "Matthew" <simontek at gmail.com> wrote:
> Look into Areca cards, good bang for the buck. Also, email contact at marymconley.com; my wife worked with petabyte systems for Hollywood.
> 
> On Wednesday, July 11, 2012, Jeff Layton <laytonjb at att.net> wrote:
> > Alex,
> >
> > I work for a major vendor and we have solutions that scale larger
> > than this but I'm not going to give you a commercial or anything,
> > just some advice.
> >
> > I have friends and customers who have tried to go the homemade
> > route a la Backblaze (sorry for those who love BB, but I can tell
> > you true horror stories about it) and have lived to regret it. Just
> > grabbing a few RAID cards and some drives and slapping them
> > together doesn't really work (believe me - I've tried it myself as
> > have others). I recommend buying enterprise grade hardware, but
> > that doesn't mean it has to be expensive. You can get well under
> > $0.50/GB with 3 years of full support all the way to the file system.
> > Not sure if this meets your budget or not - it may be a bit higher
> > than you want.
> >
> > I can also point you to documentation we publish that explains
> > in gory detail how we build our solutions. All the commands and
> > configurations are published including the tuning we do. But
> > as part of this, I highly recommend XFS. We scale it to 250 TB
> > with no issue and we have a customer who's gone to 576 TB
> > for a lower-performance file system.
> >
> > I also recommend getting a server with a reasonable amount
> > of memory in case you need to do an fsck. Memory always
> > helps. I would also think about getting a couple of small 15K
> > drives and running them as RAID-0 for a swap space. If the
> > file system starts an fsck and swaps (which can easily happen
> > for larger file systems) you will be grateful - fsck performance
> > is much, much better and takes less time.
> >
> > If you want to go a bit cheaper, then I recommend going the
> > Gluster route. You can get it for free and it only takes a bunch
> > of servers. However, if the data is important, then build two
> > copies of the hardware and rsync between them - at least you
> > have a backup copy at some point.
> >
> > Good luck!
> >
> > Jeff
> >
> >
> > ________________________________
> > From: Alex Carver <agcarver+ale at acarver.net>
> > To: Atlanta Linux Enthusiasts <ale at ale.org>
> > Sent: Wed, July 11, 2012 5:21:08 PM
> > Subject: Re: [ale] Giant storage system suggestions
> >
> > No, performance is not the issue, cost and scalability are the main
> > drivers.  There will be very few users of the storage (at home it would
> > just be me and a handful of computers) and at work it would be maybe
> > five to ten people at most that just want to archive large data files to
> > be recalled as needed.
> >
> > Safety is certainly important but I don't want to burn too many disks to
> > redundancy and lose storage space in the array.  I didn't plan to have
> > one monolithic RAID5 array either since that would get really slow which
> > is why I first thought of small arrays (4-8 disks per array) merged with
> > each other into a single logical volume.
> >
> > On 7/11/2012 14:12, Lightner, Jeff wrote:
> >> If you're looking at stuff on that scale, is performance not an issue?  There are disk arrays that attach over fibre, and if it were me I'd probably be looking at those, especially if performance was a concern.
> >>
> >> RAID5 is begging for trouble - losing 2 disks in a RAID5 means the whole RAID set is kaput.  I'd recommend at least RAID6 and even better (for performance) RAID10.
> >>
> >>
> >>
> >>
> >>
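A quick aside on Jeff's swap tip above: on Linux, striping a couple of small 15K drives into RAID-0 swap ahead of a big repair run might look roughly like this (device names are hypothetical and this is a sketch, not a tested recipe):

```shell
# Hypothetical devices: two small 15K drives at /dev/sdb and /dev/sdc
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
mkswap /dev/md0
swapon -p 10 /dev/md0    # higher priority than any existing swap

# A repair on a big XFS volume can then page to fast striped swap
# instead of stalling (XFS has no traditional fsck; xfs_repair is the tool)
xfs_repair /dev/mapper/bigvol
```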


I'll second doing the RAID in software with ZFS. We have a few OpenIndiana and FreeBSD systems in the office that are using RAID-Z2 and it has worked very well.
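For anyone who hasn't set one up, a RAID-Z2 pool like the ones we run takes only a couple of commands on FreeBSD (pool name and disk names here are hypothetical):

```shell
# Hypothetical six-disk pool; raidz2 survives any two drive failures
zpool create tank raidz2 da0 da1 da2 da3 da4 da5

# Optional: add an SSD as an L2ARC read cache, as gcs8 suggested
zpool add tank cache ada0

zpool status tank    # verify layout and health
zfs list             # see usable space after parity
```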

As an enterprise solution, we have used two different products in our data center. We started with NetApp: we have a 24 TB RAID-DP SAS array with them for hosting our high-value, high-performance stuff (databases, etc). For Windows file storage we're using a Win2k8 server with an HP MSA60 strapped on underneath. That system uses RAID6 and replicates off-site to our CrashPlan server in our off-site data center; this is done over 10G fiber and has worked very well for us.
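If CrashPlan isn't an option, Jeff's build-two-copies-and-rsync advice is cheap to script; something like this (host, paths, and snapshot names are all hypothetical):

```shell
# Hypothetical paths and host; -aHAX preserves hard links, ACLs, xattrs
rsync -aHAX --delete --partial /tank/archive/ backup-host:/tank/archive/

# Or, on ZFS, send incremental snapshots instead of walking the whole tree
zfs snapshot tank/archive@today
zfs send -i tank/archive@yesterday tank/archive@today | \
    ssh backup-host zfs receive tank/archive
```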
-- 
Ted W. < Ted at Techmachine.net >
Registered GNU/Linux user #413569



