[ale] EVMS + software RAID??

Jeffrey B. Layton laytonjb at bellsouth.net
Fri Dec 6 15:48:36 EST 2002


Trey Darley wrote:

> Well, thanks!
> I was mainly concerned with the question of whether there is any  
> greater virtue associated with doing evms (or lvm) before mirroring 
> or  vice-versa. 


To be honest, I don't know of any, short of planning
out well in advance what you want to do. I think I would
use LVM first and then software RAID. IMHO it's easier
to rebuild a mirrored RAID array than an LV. So make PVs
out of your partitions on the drive, then make LVs on
them as you want. Do the same thing on both drives, then
use software RAID to mirror them.
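
Something like this, just off the top of my head -- the device
names, volume group names, and sizes below are only placeholders,
and I'm assuming the LVM userland tools plus mdadm for the
software RAID piece:

  # First drive: make the big partition a PV, build a VG on it,
  # and carve out an LV.
  pvcreate /dev/hda3
  vgcreate vg_a /dev/hda3
  lvcreate -L 50G -n data vg_a

  # Same thing on the second drive.
  pvcreate /dev/hdb3
  vgcreate vg_b /dev/hdb3
  lvcreate -L 50G -n data vg_b

  # Mirror the two LVs with software RAID-1, then put a
  # filesystem on the mirror.
  mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      /dev/vg_a/data /dev/vg_b/data
  mke2fs -j /dev/md0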

>
> Thanks for the feedback on EVMS.
> As for the question of 2.5 direction, I guess I ought to get my act
> together and find an RSS feed for the changelogs. :-)
> Re: the virtue of mirroring swap. I have followed some discussions on  
> this topic. The idea was put forward that if a disk failed (with  
> striped swap) Mr. Kernel might get pretty annoyed. On the other hand,  
> by mirroring swap you ought to be protected from a kernel fault in
> the case of a disk failure. (Admittedly an inexcusably RAM-starved box
> to be swapping so.)


I don't think you have anything to worry about as far as
striped swap is concerned. If you are swapping that
heavily, you have other problems. Besides, if the kernel
gets bent out of shape, you can modify /etc/fstab so it
doesn't mount the other swap space. Here's a reference on
striping swap space (i.e. you don't have to use software
RAID to do this):

http://www-106.ibm.com/developerworks/library/swaptip2.html
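
If I remember right, all it takes is giving the two swap
partitions the same priority in /etc/fstab; the kernel then
stripes pages across them. The partition names below just
follow the layout discussed here:

  # /etc/fstab -- equal "pri=" values make the kernel
  # round-robin swap pages across both drives.
  /dev/hda2   none   swap   sw,pri=1   0 0
  /dev/hdb2   none   swap   sw,pri=1   0 0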


Hope this helps!

Jeff

>
>
> --Trey
>
> On Thursday, December 5, 2002, at 04:45 PM, Jeffrey B. Layton wrote:
>
>> trey wrote:
>>
>>> Has anyone had any experience with this combo? I know that I did a
>>> similar thing a year or two ago with a Solaris 8 system using their
>>> Disksuite. It was like, create stripes, then create metadevices, then
>>> create cats?? Something like that. I guess the long and short of it is
>>> that I want to mirror two 120gig drives. Partition table is like:
>>> root - 5000 meg
>>> swap - 2000 meg
>>>
>>> The remaining 113gig or so I want to be able to create metadevices and
>>> have some flexibility in resizing them dynamically as needed, without
>>> making my mirror desync.
>>> I guess the question is multi-part.
>>>
>>> 1 - Does anyone have any experience with EVMS? Any caveats?
>>> 2 - Do I want to create one big 113 gig filesystem, convert it to a
>>> metadevice, then mirror that metadevice, then subdivide that
>>> metadevice into "partitions" ? Or have I mucked up the order of
>>> operations here?
>>>
>>
>>   Let's see... I'm not sure I follow what you want to do. You've got
>> two 120 gig drives with 5 Gigs for root and 2 gigs for swap on each.
>> Is this correct so far? Do you want to mirror root? (It doesn't make
>> much sense to mirror swap, but you can stripe swap across the
>> two drives). So, do you want to mirror the first partition (/dev/hda1)
>> on each, which contains root (RAID-1)? Then /dev/hda2 is used for
>> swap on each drive (I'll have to look around at how you set up
>> striping for swap).
>>  Then, with the remaining 113 Gigs or so, you want to mirror them,
>> but have the ability to grow? If so, you can use software RAID-1
>> to mirror the partitions (for the sake of argument, /dev/hda3) and
>> then build a logical volume on top of this mirror. Or you can do the
>> opposite: build an LV on each drive, then use software RAID-1
>> to mirror them. I don't know which one would be better or why.
>>   Of course, you can carve up the 113 Gigs as you want and mix
>> and match software RAID and LVM.
>>   On the topic of EVMS, I have used it in the past. I really liked it.
>> However, if you have followed the 2.5 kernel development, EVMS
>> is toast except for the user portion of the code. EVMS will be built
>> on top of the Device Mapper (DM) and it appears that LVM2 will
>> be put into the 2.5 kernel RSN. I use LVM at work and it's just fine
>> (it's very similar to the HP LVM). I had only one problem when a
>> disk in the LVM died (one of the PVs). EVMS allowed me to
>> mount the LV as read-only and pull whatever data I could off of the
>> LV.
>>   If you need to read more about LVM go to www.sistina.com and
>> look for LVM. To learn more about software RAID, I recommend
>> the following articles:
>>
>> http://www-106.ibm.com/developerworks/linux/library/l-raid1/index.html
>> http://www-106.ibm.com/developerworks/linux/library/l-raid2/?dwzone=linux
>>
>> Good Luck!
>>
>> Jeff
>>
>>
>>
>>
>>
>>>
>>> -- Trey
>>> +++--------------------------------------------------------------+++
>>>
>>> Trey Darley - Chief Technical Monkey
>>> AIS Computers - www.aiscomputers.com
>>> 165 Carnegie Place
>>> Fayetteville, GA 30214
>>> Work: 770.461.2147, ext. 128
>>> Mobile: 404.455.1516
>>>
>>> [Please note that the opinions I express are not to be in any way
>>> construed as those of AIS, unless that is expressly stated.]
>>>
>>> +++--------------------------------------------------------------+++
>>>
>



_______________________________________________
Ale mailing list
Ale at ale.org
http://www.ale.org/mailman/listinfo/ale





