[ale] Suse migrations II

Jim Kinney jim.kinney at gmail.com
Tue Apr 19 22:47:11 EDT 2011


My bad. I thought they were joined at the wallet.

I will still avoid it like the plague, as that sounds like the easiest route to happy.

On Tue, Apr 19, 2011 at 7:40 PM, scott boss <scott at sboss.net> wrote:

> Wrong Hitachi.
>
> The Hitachi disk drive unit was sold.
> The Hitachi disk array unit was NOT sold.
>
>
> Sent from my iPhone
>
> On Apr 19, 2011, at 7:35 PM, Jim Kinney <jim.kinney at gmail.com> wrote:
>
> What a nightmare! Hitachi is being bought by Seagate, so maybe that
> crappy design will hit the bit bucket.
>
> On Tue, Apr 19, 2011 at 6:48 PM, Greg Freemyer <greg.freemyer at gmail.com> wrote:
>
>> Damon,
>>
>> Hitachi is weird mainframe storage.  It has workarounds that won't
>> apply to your new SAN.  Your new SAN should be able to export the
>> whole /dev/sddlmag1-sized logical volume as a single volume, so you
>> really don't care how it came to be in the Hitachi world.
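>>
>> To size the replacement volume, the stock LVM2 tools will report it
>> directly.  A minimal sketch (untested on your box, but these are
>> standard commands):
>>
>>   # size of the physical volume the new SAN must match or exceed
>>   pvdisplay /dev/sddlmag1 | grep 'PV Size'
>>   # or per volume group:
>>   vgdisplay FOO | grep 'VG Size'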
>>
>> == but maybe this will help you satisfy your curiosity
>>
>> You saw these links in your output, right?
>>
>> scsi-1HITACHI_770131020049 -> ../../sdb
>> scsi-1HITACHI_770131020050 -> ../../sda
>> scsi-1HITACHI_770131020051 -> ../../sdc
>> scsi-1HITACHI_770131020091 -> ../../sdd
>> scsi-1HITACHI_770131020126 -> ../../sde
>> scsi-1HITACHI_770131020127 -> ../../sdf
>> scsi-1HITACHI_770131020128 -> ../../sdg
>>
>> Those, I believe, are your 7 volumes as exported by the Hitachi.
>>
>> It has been a while, but if I recall correctly, the Hitachi hardware
>> can only export relatively small volumes.  So I suspect each of the
>> above is small.  Maybe 64GB each?
>>
>> (My memory actually says 4GB max per Hitachi volume, but I don't
>> trust that memory.)
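>>
>> You can check the actual sizes rather than trusting my memory.  A
>> sketch (untested; device names taken from your by-id listing):
>>
>>   # size of each exported LU, in bytes
>>   for d in /dev/sd[a-g]; do
>>       printf '%-10s ' "$d"
>>       blockdev --getsize64 "$d"
>>   done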
>>
>> Hitachi then provided their own proprietary LVM-like software to merge
>> all of the above together.  The reason being, they wanted the same CLI
>> tools to work in various UNIX/Linux environments.
>>
>> If one of your co-workers is a Hitachi storage person, they can
>> probably tell you which tools will show how the 7 volumes are
>> combined into one.
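>>
>> A guess, based on the sddlm* device names: the software may be
>> Hitachi Dynamic Link Manager (HDLM), whose CLI utility is dlnkmgr.
>> If so, something like this (untested) should show how the OS paths
>> map to logical units:
>>
>>   dlnkmgr view -path
>>   dlnkmgr view -lu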
>>
>> Then it looks like LVM is layered on top to break it back apart.
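>>
>> If you just want to pin down which sdX1 backs /dev/sddlmag1 without
>> the Hitachi tools, here is a crude sketch (untested; it assumes the
>> sddlm device is simply another route to the same blocks):
>>
>>   # identical hashes => same underlying LU
>>   for d in /dev/sd[a-g]1 /dev/sddlmag1; do
>>       printf '%-18s ' "$d"
>>       dd if="$d" bs=1M count=1 2>/dev/null | md5sum
>>   done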
>>
>> It may sound crazy, but if I recall right, a large Hitachi setup could
>> have literally hundreds of exported volumes, so trusting the native
>> volume management tools to work was not a good idea.
>>
>>
>> Greg
>>
>> On Tue, Apr 19, 2011 at 12:03 PM, Damon Chesser <dchesser at acsi2000.com> wrote:
>> > Here is my problem:  I have 7 mounts that are from FC on a Hitachi.
>> > SUSE 10.2 on an HP server (6 series?) with two QLogic cards.
>> >
>> > I have one physical drive presented on the server, /dev/cciss/c0d0.
>> >
>> > LVM is in use; everything is mounted as an LV except for /boot and
>> > / (root) on c0d0 (c0d0p1, c0d0p2).
>> >
>> > I can run vgdisplay -v FOO and find out what "drive" FOO is mounted
>> > on; in part:
>> >
>> > --- Logical volume ---
>> >   LV Name                /dev/FOO/FOO
>> >   VG Name                FOO
>> >   LV UUID                ZC0ZlW-UK4r-TC6j-Yvj2-qf7Y-34Dc-eNL5On
>> >   LV Write Access        read/write
>> >   LV Status              available
>> >   # open                 2
>> >   LV Size                248.99 GB
>> >   Current LE             63742
>> >   Segments               1
>> >   Allocation             inherit
>> >   Read ahead sectors     0
>> >   Block device           253:0
>> >
>> >   --- Physical volumes ---
>> >   PV Name               /dev/sddlmag1
>> >   PV UUID               kkzkQD-uxqX-Sgp1-bS0j-jWhq-AoI0-kASm6O
>> >   PV Status             allocatable
>> >   Total PE / Free PE    63742 / 0
>> >
>> > What the heck is /dev/sddlmag1 (that is SDDLmag1)?  I have all my FC
>> > mounted as /dev/sddlxxxx and I can't map that to any of the below:
>> >
>> > ls -la /dev/sddlmag1
>> > brw-r----- 1 root disk 251, 97 Mar 21 09:31 /dev/sddlmag1
>> >
>> > I know by the size of the various LVM volume groups, and by fdisk -l
>> > /dev/sdX, that there is a one-to-one correlation between sddlxxxx and
>> > /dev/sdX, but except for one drive (FC mount) there are multiple
>> > candidates based on size.
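>> >
>> > A quick way to line the sizes up side by side (a sketch, untested;
>> > the globs are guesses based on the device names above):
>> >
>> >   # size in bytes of each sddlm* device and each raw sdX device
>> >   for d in /dev/sddlm* /dev/sd[a-g]; do
>> >       printf '%-16s ' "$d"
>> >       blockdev --getsize64 "$d"
>> >   done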
>> >
>> > /dev/disk # ls -la by-id/
>> > total 0
>> > drwxr-xr-x 2 root root 400 Mar 21 09:31 .
>> > drwxr-xr-x 5 root root 100 Mar 21 09:31 ..
>> > lrwxrwxrwx 1 root root  16 Mar 21 09:31 cciss-3600508b10010343956584f3134380001 -> ../../cciss/c0d0
>> > lrwxrwxrwx 1 root root  18 Mar 21 09:31 cciss-3600508b10010343956584f3134380001-part1 -> ../../cciss/c0d0p1
>> > lrwxrwxrwx 1 root root  18 Mar 21 09:31 cciss-3600508b10010343956584f3134380001-part2 -> ../../cciss/c0d0p2
>> > lrwxrwxrwx 1 root root  18 Mar 21 09:31 cciss-3600508b10010343956584f3134380001-part3 -> ../../cciss/c0d0p3
>> > lrwxrwxrwx 1 root root   9 Mar 21 09:31 scsi-1HITACHI_770131020049 -> ../../sdb
>> > lrwxrwxrwx 1 root root  10 Mar 21 09:31 scsi-1HITACHI_770131020049-part1 -> ../../sdb1
>> > lrwxrwxrwx 1 root root   9 Mar 21 09:31 scsi-1HITACHI_770131020050 -> ../../sda
>> > lrwxrwxrwx 1 root root  10 Mar 21 09:31 scsi-1HITACHI_770131020050-part1 -> ../../sda1
>> > lrwxrwxrwx 1 root root   9 Mar 21 09:31 scsi-1HITACHI_770131020051 -> ../../sdc
>> > lrwxrwxrwx 1 root root  10 Mar 21 09:31 scsi-1HITACHI_770131020051-part1 -> ../../sdc1
>> > lrwxrwxrwx 1 root root   9 Mar 21 09:31 scsi-1HITACHI_770131020091 -> ../../sdd
>> > lrwxrwxrwx 1 root root  10 Mar 21 09:31 scsi-1HITACHI_770131020091-part1 -> ../../sdd1
>> > lrwxrwxrwx 1 root root   9 Mar 21 09:31 scsi-1HITACHI_770131020126 -> ../../sde
>> > lrwxrwxrwx 1 root root  10 Mar 21 09:31 scsi-1HITACHI_770131020126-part1 -> ../../sde1
>> > lrwxrwxrwx 1 root root   9 Mar 21 09:31 scsi-1HITACHI_770131020127 -> ../../sdf
>> > lrwxrwxrwx 1 root root  10 Mar 21 09:31 scsi-1HITACHI_770131020127-part1 -> ../../sdf1
>> > lrwxrwxrwx 1 root root   9 Mar 21 09:31 scsi-1HITACHI_770131020128 -> ../../sdg
>> > lrwxrwxrwx 1 root root  10 Mar 21 09:31 scsi-1HITACHI_770131020128-part1 -> ../../sdg1
>> >
>> > /dev/disk # ls -la by-path/
>> > total 0
>> > drwxr-xr-x 2 root root 400 Mar 21 09:31 .
>> > drwxr-xr-x 5 root root 100 Mar 21 09:31 ..
>> > lrwxrwxrwx 1 root root  16 Mar 21 09:31 pci-0000:0b:08.0-cciss-0:40000000 -> ../../cciss/c0d0
>> > lrwxrwxrwx 1 root root  18 Mar 21 09:31 pci-0000:0b:08.0-cciss-0:40000000-part1 -> ../../cciss/c0d0p1
>> > lrwxrwxrwx 1 root root  18 Mar 21 09:31 pci-0000:0b:08.0-cciss-0:40000000-part2 -> ../../cciss/c0d0p2
>> > lrwxrwxrwx 1 root root  18 Mar 21 09:31 pci-0000:0b:08.0-cciss-0:40000000-part3 -> ../../cciss/c0d0p3
>> > lrwxrwxrwx 1 root root   9 Mar 21 09:31 pci-0000:16:00.0-scsi-0:0:0:0 -> ../../sda
>> > lrwxrwxrwx 1 root root  10 Mar 21 09:31 pci-0000:16:00.0-scsi-0:0:0:0-part1 -> ../../sda1
>> > lrwxrwxrwx 1 root root   9 Mar 21 09:31 pci-0000:16:00.0-scsi-0:0:0:1 -> ../../sdb
>> > lrwxrwxrwx 1 root root  10 Mar 21 09:31 pci-0000:16:00.0-scsi-0:0:0:1-part1 -> ../../sdb1
>> > lrwxrwxrwx 1 root root   9 Mar 21 09:31 pci-0000:16:00.0-scsi-0:0:0:2 -> ../../sdc
>> > lrwxrwxrwx 1 root root  10 Mar 21 09:31 pci-0000:16:00.0-scsi-0:0:0:2-part1 -> ../../sdc1
>> > lrwxrwxrwx 1 root root   9 Mar 21 09:31 pci-0000:16:00.0-scsi-0:0:0:3 -> ../../sdd
>> > lrwxrwxrwx 1 root root  10 Mar 21 09:31 pci-0000:16:00.0-scsi-0:0:0:3-part1 -> ../../sdd1
>> > lrwxrwxrwx 1 root root   9 Mar 21 09:31 pci-0000:16:00.0-scsi-0:0:0:4 -> ../../sde
>> > lrwxrwxrwx 1 root root  10 Mar 21 09:31 pci-0000:16:00.0-scsi-0:0:0:4-part1 -> ../../sde1
>> > lrwxrwxrwx 1 root root   9 Mar 21 09:31 pci-0000:16:00.0-scsi-0:0:0:5 -> ../../sdf
>> > lrwxrwxrwx 1 root root  10 Mar 21 09:31 pci-0000:16:00.0-scsi-0:0:0:5-part1 -> ../../sdf1
>> > lrwxrwxrwx 1 root root   9 Mar 21 09:31 pci-0000:16:00.0-scsi-0:0:0:6 -> ../../sdg
>> > lrwxrwxrwx 1 root root  10 Mar 21 09:31 pci-0000:16:00.0-scsi-0:0:0:6-part1 -> ../../sdg1
>> >
>> > ls -la by-uuid/
>> > total 0
>> > drwxr-xr-x 2 root root 600 Mar 21 09:31 .
>> > drwxr-xr-x 5 root root 100 Mar 21 09:31 ..
>> > lrwxrwxrwx 1 root root  10 Mar 21 09:31 13026f96-7d44-4509-aa94-35395448e110 -> ../../dm-2
>> > (etc., listing all of the LVM logical volumes)
>> >
>> > So, while to migrate off of SUSE I only need the maximum amount of
>> > storage each box is using, it is driving me up the wall that I can't
>> > say /dev/sddlmag1 == /dev/sda, or that /dev/sdb is used by VG foo and
>> > has LV bar on it.
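>> >
>> > The stock LVM reporting options can answer the second half of that
>> > directly.  A sketch (untested, but these are standard LVM2 fields):
>> >
>> >   pvs -o pv_name,vg_name,pv_size   # which PV belongs to which VG
>> >   lvs -o lv_name,vg_name,devices   # which device(s) each LV sits on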
>> >
>> > You all gave me such good advice on how to start mapping the info so
>> > I could set up a target server to migrate to, so I figured I would
>> > bounce this off of the list.  Most of the employees here are Unix
>> > people, so they don't know Linux device mappings (which is why they
>> > hired me, but I have not worked with FC).
>> >
>> > Please include me in the reply so I can read them at work.
>> >
>> > Sincerely,
>> >
>> > Damon Chesser
>> > damon at damtek.com
>> >
>>
>> --
>> Greg Freemyer
>> Head of EDD Tape Extraction and Processing team
>> Litigation Triage Solutions Specialist
>> http://www.linkedin.com/in/gregfreemyer
>> CNN/TruTV Aired Forensic Imaging Demo -
>> http://insession.blogs.cnn.com/2010/03/23/how-computer-evidence-gets-retrieved/
>>
>> The Norcross Group
>> The Intersection of Evidence & Technology
>> http://www.norcrossgroup.com
>>
>
> --
> James P. Kinney III
> I would rather stumble along in freedom than walk effortlessly in chains.


-- 
James P. Kinney III
I would rather stumble along in freedom than walk effortlessly in chains.