[ale] Needing to cut up server disk space

Jim Kinney jim.kinney at gmail.com
Fri Sep 25 13:09:45 EDT 2015


On Sep 25, 2015 9:05 AM, "Lightner, Jeff" <JLightner at dsservices.com> wrote:
>
> If you install Dell OpenManage, it:
>
> a) Lets you see details of the PERC and the drives via web (on port 1311)
>
> b) Monitors the status of the PERC and drives and updates /var/log/messages
> so you could use logwatch to get messages.
>
> c) Adds SNMP capability so you can monitor via SNMP using your favorite
> tool (e.g. Nagios).
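
Something like this (untested sketch) can poll omreport from cron and flag
anything that isn't reporting Ok. It assumes OMSA is installed with omreport
on the PATH and that controller 0 is the PERC in question; the SNMP/Nagios
route above works without any of this:

#!/usr/bin/env python3
# Untested sketch: poll Dell OMSA's omreport CLI and flag any PERC
# component whose Status line isn't "Ok".  Assumes OMSA is installed,
# omreport is on the PATH, and controller 0 is the PERC in question.
import subprocess
import sys

def omreport(*args):
    """Run an omreport subcommand and return its text output."""
    out = subprocess.run(["omreport", *args],
                         capture_output=True, text=True, check=True)
    return out.stdout

def bad_status_lines(report):
    """Pick out 'Status : ...' lines that do not say Ok."""
    return [line.strip() for line in report.splitlines()
            if line.strip().startswith("Status") and "Ok" not in line]

problems = []
problems += bad_status_lines(omreport("storage", "vdisk", "controller=0"))
problems += bad_status_lines(omreport("storage", "pdisk", "controller=0"))

if problems:
    print("PERC reports degraded components:")
    print("\n".join(problems))
    sys.exit(2)   # non-zero exit so cron/Nagios wrappers notice
print("PERC reports all virtual and physical disks Ok")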
>
>
>
> We use PERC in most of our PowerEdge systems.  If we have enough disks, the
> first RAID set we configure is reserved for OS components, and we'll usually
> have 2 partitions on that LUN: /dev/sda1 = /boot, /dev/sda2 = LVM PV for
> vg00.  We'll usually put the remaining disks in a secondary RAID so that LUN
> becomes /dev/sdb; we put that in a separate VG without partitioning it at
> all and use it for non-OS components (e.g. third-party apps and databases).
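
For the second LUN, the whole-disk PV layout described above boils down to a
handful of LVM calls. A rough sketch follows; the vg01/data names, the 80%
sizing, and the XFS choice are placeholders, not anything specified above:

#!/usr/bin/env python3
# Rough sketch of the layout above: use the secondary RAID LUN as an
# unpartitioned LVM PV in its own VG.  The VG/LV names, the 80% sizing,
# and the filesystem are placeholders for illustration only.
import subprocess

DEVICE = "/dev/sdb"   # the second PERC virtual disk
VG = "vg01"           # assumed name for the non-OS volume group
LV = "data"           # assumed name for the apps/database volume

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["pvcreate", DEVICE])            # whole LUN as a PV, no partition table
run(["vgcreate", VG, DEVICE])
# Leave some free extents in the VG so /var, /home, etc. can be grown
# later with lvextend instead of guessing sizes at install time.
run(["lvcreate", "-n", LV, "-l", "80%FREE", VG])
run(["mkfs.xfs", f"/dev/{VG}/{LV}"])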
>
>
>
> Note:
>
> PERC cards can fail, so you’d want to be sure you have a spare if this
> system isn’t under support.  The good news is that a replacement PERC can
> learn your RAID setup from the drives themselves.  Most PERCs (except the
> early PERC 2 and earlier) are OEMed from LSI.  I think in 10 years I’ve
> only seen about 1 PERC failure per year with an average of more than
> 100 servers in service at any given point.

Old PERCs used to have a mandatory battery exercise feature. The card was
almost guaranteed to hiccup at the point when the battery was fully drained,
causing a RAID error that required recovery efforts. That was the first sign
of imminent card failure.
>
> From: ale-bounces at ale.org [mailto:ale-bounces at ale.org] On Behalf Of Jeff Hubbs
> Sent: Thursday, September 24, 2015 11:54 PM
> To: ale at ale.org
> Subject: Re: [ale] Needing to cut up server disk space
>
> I appreciate all the responses.
>
> So I guess what I'm hearing is 1) get over my HW RAID hate, RAID5 the lot
> using the PERC, and slice and dice with LVM or 2) forgo RAID altogether,
> use the PERC to make some kind of "appended" 2TB volume, and slice and dice
> with LVM. I'm willing to give up some effective space to not have a dead
> box if a drive fails; just because it's a lab machine doesn't mean people
> won't be counting on it. I'm okay with that as long as I have a way to
> sense a drive failure flagged by the PERC.
>
> On 9/24/15 7:27 AM, Solomon Peachy wrote:
>>
>> On Wed, Sep 23, 2015 at 11:42:37PM -0400, Jeff Hubbs wrote:
>>>
>>>  * I really dislike hardware RAID cards like Dell PERC. If there has to
>>>    be one, I would much rather set it to JBOD mode and get my RAIDing
>>>    done some other way.
>>
>> There's a big difference between "hardware" RAID (aka fakeRAID) and real
>> hardware RAID boards.  The former are the worst of both worlds, but the
>> latter are the real deal.
>>
>> In particular, the various Dell PERC RAID adapters are excellent, fast,
>> and highly reliable, with full native Linux support for managing them.
>>
>> Strictly speaking you'll end up with more flexibility going the JBOD
>> route, but you're going to lose both performance and reliability versus
>> the PERC.
>>
>> (for example, what happens if the "boot" drive fails?  Guess what, your
>>  system is no longer bootable with the JBOD, but the PERC will work just
>>  fine)
>>
>>>
>>>  * I foresee I will have gnashing of teeth if I set in stone at install
>>>    time the sizes of the /var and /home volumes. There's no telling how
>>>    much or how little space PostgreSQL might need in the future and you
>>>    know how GRAs are - give them disk space and they'll take disk space. :)
>>
>> You're not talking about much space here; only 5*400GB == 2TB of raw
>> space, going down to 1.6TB by the time the RAID5 overhead is factored
>> in.  Just create a single 2TB filesystem and be done with it.
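
The arithmetic, spelled out (real usable space will be slightly less once
filesystem metadata is factored in):

# RAID5 spends one disk's worth of capacity on parity: usable = (n-1) * size.
disks, size_gb = 5, 400
raw_gb = disks * size_gb              # 2000 GB raw, i.e. ~2TB
usable_gb = (disks - 1) * size_gb     # 1600 GB, i.e. ~1.6TB
print(f"raw {raw_gb} GB, usable after RAID5 parity {usable_gb} GB")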
>>
>> FWIW, if you're after reliability I'd caution against btrfs, and instead
>> recommend XFS -- and make sure the system is plugged into a UPS.  No
>> matter what, be sure to align the partition and filesystem with the
>> block/stripe sizes of the RAID setup.
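
For the alignment bit, XFS takes the geometry directly on the mkfs line. A
sketch follows; the 64KiB stripe element is only an assumption -- use
whatever the PERC virtual disk was actually built with, and whatever device
node the RAID5 LUN shows up as:

# Sketch: build an mkfs.xfs invocation aligned to the RAID5 geometry.
# su = per-disk stripe element size, sw = number of data-bearing disks.
disks = 5                  # drives in the RAID5 set
stripe_element_kib = 64    # ASSUMED PERC stripe element size; check yours
data_disks = disks - 1     # RAID5 keeps one disk's worth of parity

device = "/dev/sdX"        # placeholder for the RAID5 LUN (or a partition on it)
print(f"mkfs.xfs -d su={stripe_element_kib}k,sw={data_disks} {device}")
# -> mkfs.xfs -d su=64k,sw=4 /dev/sdX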
>>
>> (The system I'm typing this on has ~10TB of XFS RAID5 filesystems
>>  hanging off a 3ware 9650 card, plus a 1TB RAID1 for the OS)
>>
>>  - Solomon

