[ale] They say drives fail in pairs...

gcs8 gcsviii at gmail.com
Tue Jan 3 18:57:49 EST 2012


This is what I built for the house; it does pretty well: gcs8.org/san

On Tue, Jan 3, 2012 at 6:44 PM, Jim Kinney <jim.kinney at gmail.com> wrote:

>
>
> On Tue, Jan 3, 2012 at 6:29 PM, Michael Trausch <mike at trausch.us> wrote:
>
>> On 01/03/2012 04:52 PM, Lightner, Jeff wrote:
>> > That confuses me.  Does ZFS have built-in redundancy of some sort
>> > that would obviate the need for the underlying storage to be hardware
>> > RAID?  Or are you saying you'd use ZFS rather than software RAID?
>>
>> Both ZFS and btrfs have redundancy capabilities built in that
>> (allegedly!) play nicely with each filesystem's own dynamic volume
>> management and resizing.  Neither is "just" a filesystem; each aims to
>> be a whole volume-management stack.  There's no more need for things
>> like LVM when all you have to do is create a filesystem on a whole
>> drive (no partition table) and hot-add or hot-remove it from the
>> storage pool.
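For reference, here is roughly what that workflow looks like with ZFS on
Linux (the pool and device names below are just placeholders):

  # build a pool out of whole disks, mirrored in pairs
  zpool create tank mirror /dev/sdb /dev/sdc

  # grow it later by hot-adding another mirrored pair
  zpool add tank mirror /dev/sdd /dev/sde

  # detach one side of a mirror, e.g. to swap a drive
  zpool detach tank /dev/sdc

  # filesystems carved out of the pool all share that space
  zfs create tank/home

btrfs does the equivalent with "btrfs device add" and "btrfs device
delete" on a mounted filesystem.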
>>
>> The other nifty thing is that, as I understand it, they can store data
>> redundantly even on a single device, keeping copies of the same data in
>> multiple locations on one drive, which helps if one area of the drive
>> goes bad.
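In ZFS that single-device redundancy is the "copies" property, and btrfs
has a DUP profile; a quick sketch (the dataset name is made up):

  # keep two copies of every block, even in a one-disk pool
  zfs set copies=2 tank/important

  # btrfs duplicates metadata on a single drive by default;
  # newer btrfs-progs also accept -d dup for data
  mkfs.btrfs -m dup /dev/sdb

Note that copies=2 protects against bad sectors, not against losing the
whole drive.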
>>
>> I don't use hardware RAID for anything (and I'm not likely ever to do
>> so).  If I ever needed storage beyond what a few hard disks could
>> provide, or larger than what I would trust ZFS or btrfs to handle on
>> their own, I would probably build a dedicated rack-mount box with tens
>> of drives in it and use something like RAID 10 with three stripes.
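One way to read "RAID 10 with three stripes" is three copies of every
chunk spread across the set; with Linux md that would be something like
this (device names invented):

  # 6 drives, RAID 10, 'near' layout with 3 copies of each chunk
  # usable capacity = total raw capacity / 3
  mdadm --create /dev/md0 --level=10 --layout=n3 \
        --raid-devices=6 /dev/sd[b-g]

If he instead means plain two-copy RAID 10 striped over three mirrored
pairs, drop the --layout option and use the same six drives.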
>>
>> There was a "DIY" guide to building such a box, along with lists of
>> the hardware and tools needed, claiming something like 100+ TB of
>> storage in a single chassis.  They're expensive in absolute dollars,
>> but relatively inexpensive compared to other solutions that scale that
>> far, and they're powered by Linux software RAID (AFAIK).  You would run
>> them such that individual failed drives get replaced off-line, and a
>> whole unit can be replaced in (ideally) only as long as it takes to
>> power one down and install a new one.
>>
>
>
> http://blog.backblaze.com/2009/09/01/petabytes-on-a-budget-how-to-build-cheap-cloud-storage/
>
> 45 drives in a 4U = 135TB (and a back brace!)
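(Assuming the current 3 TB drives, that is 45 x 3 TB = 135 TB raw; usable
space drops once RAID parity and filesystem overhead are taken out.)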
>
>>
>> I'm not anywhere near that, yet, though.  I can only really foresee
>> needing to grow to about 6 TB of reliable storage in the next two
>> years, but given the high rate of change in everything around me at the
>> moment, I can't really look much farther than that.
>>
>>        --- Mike
>>
>> --
>> A man who reasons deliberately, manages it better after studying Logic
>> than he could before, if he is sincere about it and has common sense.
>>                                    --- Carveth Read, “Logic”
>>
>>
>
>
> --
> James P. Kinney III
>
> As long as the general population is passive, apathetic, diverted to
> consumerism or hatred of the vulnerable, then the powerful can do as they
> please, and those who survive will be left to contemplate the outcome.
> - Noam Chomsky, 2011
>
> http://heretothereideas.blogspot.com/
>


-- 
Charles Selfridge

PBYC  IT director

(404) 910-3409