[ale] Giant storage system suggestions

gcs8 gcsviii at gmail.com
Thu Jul 12 03:20:03 EDT 2012


I built a large ZFS pool for my personal use here at the house (
gcs8.org/san ). iSCSI has good throughput, and you can have a small server
take care of sharing it out from there. I use FreeNAS and it has served me
pretty well; I have ~35.6 TB after RAID-Z2. The theory behind my design was
that if I lose any hardware I can replace it with whatever, since FreeNAS
is taking care of my disks, not the hardware. If I could change two things
about my setup, I would try to get InfiniBand or at least 10 GbE, and use
an SSD for caching.
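
For anyone curious, a minimal sketch of what that layout looks like from the
command line (drive names are made up, and FreeNAS normally does all of this
through its web UI):

    # 8-disk RAID-Z2 pool: any two drives can fail without data loss
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7

    # add an SSD as an L2ARC read cache (the change I wish I had made)
    zpool add tank cache ada0

    # check layout and health
    zpool status tank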

Now I can't afford to keep a second one to rsync to, but I do use CrashPlan
to back it up. It works fine; I have 8.7 TB backed up with them right now.
Just my $0.02.

from gcs8's mobile device.
On Jul 12, 2012 1:23 AM, "Matthew" <simontek at gmail.com> wrote:

> Look into Areca cards, good bang for the buck. Also email
> contact at marymconley.com; wife worked with petabyte systems for Hollywood
> studios.
>
> On Wednesday, July 11, 2012, Jeff Layton <laytonjb at att.net> wrote:
> > Alex,
> >
> > I work for a major vendor and we have solutions that scale larger
> > than this but I'm not going to give you a commercial or anything,
> > just some advice.
> >
> > I have friends and customers who have tried to go the homemade
> > route a la Backblaze (sorry for those who love BB, but I can tell
> > you true horror stories about it) and have lived to regret it. Just
> > grabbing a few RAID cards and some drives and slapping them
> > together doesn't really work (believe me - I've tried it myself as
> > have others). I recommend buying enterprise grade hardware, but
> > that doesn't mean it has to be expensive. You can get well under
> > $0.50/GB with 3 years of full support all the way to the file system.
> > Not sure if this meets your budget or not - it may be a bit higher
> > than you want.
> >
> > I can also point you to documentation we publish that explains
> > in gory detail how we build our solutions. All the commands and
> > configurations are published including the tuning we do. But
> > as part of this, I highly recommend XFS. We scale it to 250 TB
> > with no issue, and we have a customer who's gone to 576 TB
> > for a lower-performance file system.
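> >
> > As a rough sketch of the file system side (the device name and mount
> > point are placeholders; the real stripe tuning is in the docs I mentioned):
> >
> >     # mkfs.xfs picks sane defaults; su/sw should match the RAID geometry
> >     mkfs.xfs -L archive /dev/sdX
> >
> >     # inode64 lets inodes live anywhere on a multi-TB file system
> >     mount -o inode64,noatime /dev/sdX /archive
> >
> >     xfs_info /archive    # verify the geometry it chose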
> >
> > I also recommend getting a server with a reasonable amount
> > of memory in case you need to do an fsck. Memory always
> > helps. I would also think about getting a couple of small 15K
> > drives and running them as RAID-0 for a swap space. If the
> > file system starts an fsck and swaps (which can easily happen
> > for larger file systems) you will be grateful - fsck performance
> > is much, much better and it takes less time.
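> >
> > Roughly, with Linux md (drive letters are placeholders):
> >
> >     # stripe two small 15K drives into a fast dedicated swap device
> >     mdadm --create /dev/md9 --level=0 --raid-devices=2 /dev/sdy /dev/sdz
> >     mkswap /dev/md9
> >     swapon -p 10 /dev/md9    # higher priority than any existing swap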
> >
> > If you want to go a bit cheaper, then I recommend going the
> > Gluster route. You can get it for free and it only takes a bunch
> > of servers. However, if the data is important, then build two
> > copies of the hardware and rsync between them - at least you
> > have a backup copy at some point.
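> >
> > A rough sketch of that route (hostnames and brick paths are made up):
> >
> >     # pool a couple of servers into one Gluster namespace
> >     gluster peer probe server2
> >     gluster volume create archive server1:/bricks/b1 server2:/bricks/b1
> >     gluster volume start archive
> >     mount -t glusterfs server1:/archive /mnt/archive
> >
> >     # then rsync to the second, identical build for the backup copy
> >     rsync -aH --delete /mnt/archive/ backup-host:/mnt/archive/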
> >
> > Good luck!
> >
> > Jeff
> >
> >
> > ________________________________
> > From: Alex Carver <agcarver+ale at acarver.net>
> > To: Atlanta Linux Enthusiasts <ale at ale.org>
> > Sent: Wed, July 11, 2012 5:21:08 PM
> > Subject: Re: [ale] Giant storage system suggestions
> >
> > No, performance is not the issue; cost and scalability are the main
> > drivers.  There will be very few users of the storage (at home it would
> > just be me and a handful of computers) and at work it would be maybe
> > five to ten people at most that just want to archive large data files to
> > be recalled as needed.
> >
> > Safety is certainly important but I don't want to burn too many disks to
> > redundancy and lose storage space in the array.  I didn't plan to have
> > one monolithic RAID5 array either, since that would get really slow,
> > which is why I first thought of small arrays (4-8 disks per array) merged
> > with each other into a single logical volume.
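> >
> > (Roughly what I had in mind, with hypothetical names: several small md
> > arrays joined into one volume group, and a single logical volume on top.)
> >
> >     pvcreate /dev/md0 /dev/md1
> >     vgcreate vg_archive /dev/md0 /dev/md1
> >     lvcreate -l 100%FREE -n archive vg_archive
> >     # then one file system (XFS or similar) on /dev/vg_archive/archive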
> >
> > On 7/11/2012 14:12, Lightner, Jeff wrote:
> >> If you're looking at stuff on that scale, is performance not an issue?
> >> There are disk arrays that can go over fibre, and if it were me I'd
> >> probably be looking at those, especially if performance was a concern.
> >>
> >> RAID5 is begging for trouble - losing 2 disks in a RAID5 means the
> >> whole RAID set is kaput.  I'd recommend at least RAID6 and, even better
> >> (for performance), RAID10.
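> >>
> >> For example, with plain md (drive names are placeholders):
> >>
> >>     # 8-drive RAID6: survives any two drive failures
> >>     mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]
> >>
> >>     # or RAID10 for better performance at 50% usable capacity
> >>     mdadm --create /dev/md0 --level=10 --raid-devices=8 /dev/sd[b-i]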
> >>
> >>
> >>
> >>
> >>
> >> -----Original Message-----
> >> From: ale-bounces at ale.org [mailto:ale-bounces at ale.org] On Behalf Of Alex Carver
> >> Sent: Wednesday, July 11, 2012 5:04 PM
> >> To: Atlanta Linux Enthusiasts
> >> Subject: [ale] Giant storage system suggestions
> >>
> >> I'm trying to design a storage system for some of my data in a way that
> >> will be useful to duplicate the design for a project at work.
> >>
> >> Digging around online, it seems that a common suggestion has been a good
> >> motherboard, a SATA/SAS card, a SATA/SAS expander, and then a huge chassis
> >> to support all of the SATA drives.
> >>
> >> It looks like one of the recommended SATA/SAS cards is an LSI 9200
> >> series card connected to an Intel RES2SV240 expander.
> >>
> >> What I'm trying to achieve is continually expandable storage space.  As
> >> more storage is required, I just keep slipping drives into the system.
> >> If I max out a case, I just add a SATA/SAS card, use external SATA/SAS
> >> cables (do those exist to go from SFF-8087 to SFF-8088?), another expander,
> >> and then stretch into a new case.
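> >>
> >> (The growth path I have in mind, with made-up names: new drives become a
> >> new array, the array extends the volume group, and the file system grows
> >> online.)
> >>
> >>     pvcreate /dev/md2
> >>     vgextend vg_archive /dev/md2
> >>     lvextend -l +100%FREE /dev/vg_archive/archive
> >>     xfs_growfs /archive    # if the file system ends up being XFS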
> >>
> >> It's obviously going to run Linux or I wouldn't be asking here. :)  The
> >> entire storage system will probably start somewhere around 10-16 TB and
> >> grow from there.  The first question would be suggestions for an optimal
> >> configuration of the disks.  For example, should the drives be grouped
> >> into, say, RAID-5 arrays with four devices per array and then logically
> >> combined in software into a single storage volume?  If so, what file
> >> system will support something that could potentially reach beyond 100 TB
> >> (not that I'd reach 100 TB anytime soon, but it can happen)?
> >>
> >> Thanks,
> >
> >
>
> --
> SimonTek
> 912-398-6704
>
>
>
>