

Adding and Removing SAN Disks from SUSE Device Mapper Multipath Systems



By: pcrooker

January 23, 2009 6:18 pm


Add a new disk

This assumes you have created an array on the SAN and allocated space to a logical volume on it, that you have mapped a LUN for that logical volume to the host, and that the host is correctly zoned to see the SAN in the Fibre Channel fabric.

  • Before anything else, run multipath -ll to record what is currently present.
  • See how many HBAs are connected (and zoned) to the SAN (ls /sys/class/fc_host lists them) – you need to repeat the commands below for each one. For example:
              echo 1 > /sys/class/fc_host/host0/issue_lip
              echo 1 > /sys/class/fc_host/host1/issue_lip
              echo "- - -" > /sys/class/scsi_host/host0/scan
              echo "- - -" > /sys/class/scsi_host/host0/scan
  • After running those commands, check that the new LUN was detected by looking at dmesg and /var/log/messages.
  • Run multipath -v2 to get multipath to pick up the new disk – you can then compare the output with the multipath -ll listing you ran earlier.
    Note the SCSI devices for the new disk; they will be something like sdX and sdY (one device per path).
  • Edit /etc/lvm/lvm.conf and make sure the underlying sdX devices are filtered out so LVM does not see duplicates – use vgdisplay -vv to show what LVM considers a duplicate. A filter sketch follows this list.
    FYI: device mapper / multipath creates multiple device handles for the same physical disk; if LVM scans them all, this can cause delays with LVM2 and severely impact throughput.
  • Now you can pvcreate /dev/dm-XX, vgextend VolGroup /dev/dm-XX, etc.
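
As a rough illustration of the lvm.conf filter mentioned above – the patterns here are assumptions, so adjust them to your own device naming, and take care not to reject a local /dev/sd disk that carries your root volume group:

    # /etc/lvm/lvm.conf – hedged example only, not a drop-in config.
    # Accept the local boot disk, reject the per-path SAN /dev/sd* handles,
    # accept everything else (the multipath dm devices included):
    filter = [ "a|/dev/sda.*|", "r|/dev/sd.*|", "a|.*|" ]

Once the filter is right, vgdisplay -vv should stop reporting duplicate physical volumes.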

Remove a disk

  • Run the multipath -ll command and note the WWID (the big hex number), the LUN and the sdX devices of the disk – in the example below the LUN is 2 and the devices are /dev/sdf and /dev/sde. You will need this info for the procedure. Just confirm this is in fact the one you want to remove – cross-check the LUN and size of the volume on the SAN before proceeding…
          3600a0b80000fb6e50000000e487b02f5 dm-10 IBM,1742
          [size=1.6T][features=1 queue_if_no_path][hwhandler=1 rdac]
          \_ round-robin 0 [prio=6][active]
          \_ 1:0:0:2 sdf 8:80 [active][ready]
          \_ round-robin 0 [prio=1][enabled]
          \_ 0:0:0:2 sde 8:64 [active][ghost]
    Note – the dm-XX is not permanent and may change when you’ve added or removed disks, so don’t rely on old info – check each time.
  • Also cat /proc/scsi/scsi to match up the kernel SCSI devices with the SCSI IDs and LUNs of the SAN disks.
  • First you need to remove the disk from the volume group (the full sequence is consolidated in the sketch after this list).
    • If the disk is in use, either delete what is on it (if there is a logical volume limited to that disk), or use pvmove. (This of course assumes you have sufficient space to move everything off the original disk.)
      NB – with pvmove on SLES 10 SP2, there is a bug where if you are moving from a bigger disk to smaller disk(s), it may complain there isn’t enough space. Just move as many extents as fit on the first smaller disk, then you can move the rest onto the second, eg: pvmove /dev/dm-1:0-20000 /dev/dm-2.
    • Once everything is deleted/moved: vgreduce VolGroup /dev/dm-XX and pvremove /dev/dm-XX.
  • Use the disk’s WWID from multipath -ll for the next command (the dm-XX name isn’t recognised by dmsetup here):
    dmsetup remove 3600a0b80000f7b270000000b47b15c26 (of course you need to use your own WWID)
  • Now, finally, you remove the SCSI devices from the kernel:
      echo 1 > /sys/block/sdX/device/delete
      echo 1 > /sys/block/sdY/device/delete
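
Putting the removal steps together, a hedged sketch – VolGroup, dm-XX and sdX/sdY are placeholders, the WWID is the example one from above, and the pvmove is only needed if the disk still holds extents; take all of these from your own multipath -ll and vgdisplay output:

    # 1. Move any remaining extents off the disk, then drop it from LVM:
    pvmove /dev/dm-XX
    vgreduce VolGroup /dev/dm-XX
    pvremove /dev/dm-XX
    # 2. Remove the multipath map, named by its WWID:
    dmsetup remove 3600a0b80000f7b270000000b47b15c26
    # 3. Delete each underlying SCSI path device from the kernel:
    echo 1 > /sys/block/sdX/device/delete
    echo 1 > /sys/block/sdY/device/delete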

All traces should now be removed; run multipath -ll and cat /proc/scsi/scsi to cross-check (a quick way to do that is shown below). You can now remove the mapping from the SAN and delete the logical volume if required.
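
For example, a minimal check – the WWID is the example one from above, so substitute your own:

    # Should print nothing once the multipath map is gone:
    multipath -ll | grep 3600a0b80000f7b270000000b47b15c26
    # One "Vendor:" line per SCSI device – the count should have dropped
    # by one for each path you deleted:
    grep -c Vendor /proc/scsi/scsi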


Categories: SUSE Linux Enterprise Desktop, SUSE Linux Enterprise Server, Technical Solutions

Disclaimer: As with everything else at SUSE Conversations, this content is definitely not supported by SUSE (so don't even think of calling Support if you try something and it blows up).  It was contributed by a community member and is published "as is." It seems to have worked for at least one person, and might work for you. But please be sure to test, test, test before you do anything drastic with it.

2 Comments

  1. By: iqbalkhi

    Dear pcrooker,
    Good day,
    The information here is very useful.
    I want to add a SAN disk, but my system is already using EVMS (someone else before me installed this server).
    - What other steps do I need to take to create a new EVMS disk?
    - This is a production system.
    - Do I need downtime on the server for all this activity or not?
    - How will it affect my running system?

    - Please also guide me on the commands to use for pv, vg and lv, through to finally mounting and using the volume.
    Thanks for your help in advance :).
    Iqbal

  2. By: pcrooker

    First, don’t mess about with a production system. Never. Install a second computer as a test system with a similar evms setup, zone it to the SAN, allocate a SAN LUN to it and experiment. You can then play around and screw up – it won’t matter to production.

    From the one system I set up using both evms and lvm, evms is transparent to lvm (I gather you are still using lvm, as you ask about the commands); in other words, you don’t have to mess with evms, just lvm. So try it on your test system without touching evms and see what happens.

    In case you do need to do something with evms – I had a look in /etc/evms.conf (this is SLES 10 SP3) and I see there are two areas of interest: a multipath section where you tell it to ignore the SAN devices (sde and sdi in the example below) so it only uses the dm device, and the “legacy devices” section where you also exclude the SAN devices. You’ll need to try various combinations to see what works; a rough sketch follows.
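
    Purely as an illustration of the sort of exclusion meant here – the section name and list syntax are assumptions and differ between EVMS releases, so check the comments in your own /etc/evms.conf before copying anything:

    # Hypothetical /etc/evms.conf excerpt – verify the syntax against your own file.
    legacy_devices {
        # Keep EVMS away from the raw per-path SAN devices; use the dm device instead.
        exclude = [ sde, sdi ]
    }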

    As to pv, vg and lv, the basic process is:
    – you must run pvcreate against the dm device
    – then you create the volume group with that device
    – then create a logical volume in that volume group
    – then make a file system on that logical volume.

    Here is a listing of a multipathed device:

    3600a0b80000fb6e5000000224b298b47 dm-10 IBM,1742
    [size=1.2T][features=1 queue_if_no_path][hwhandler=1 rdac][rw]
    \_ round-robin 0 [prio=6][active]
    \_ 5:0:0:1 sde 8:64 [active][ready]
    \_ round-robin 0 [prio=1][enabled]
    \_ 3:0:0:1 sdi 8:128 [active][ghost]

    The SCSI devices, sde and sdi, are what the SAN presents via two paths in this case; device mapper creates the logical device, dm-10, to allow it to switch between paths. The SCSI devices sde and sdi are locked by device mapper and cannot be used directly – you must use either the /dev/dm-10 device or the ID device:
    /dev/disk/by-id/scsi-3600a0b80000fb6e5000000224b298b47.
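
    For instance, both of these names address the same multipath map (use one or the other):

    pvcreate /dev/dm-10
    # ...or, equivalently:
    pvcreate /dev/disk/by-id/scsi-3600a0b80000fb6e5000000224b298b47

    The by-id name has the advantage of being stable, whereas the dm-XX numbering can change as disks are added or removed (as the article notes above).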

    Here are the steps:

    # pvcreate /dev/dm-10
    (you’ll see some message that it was created successfully)
    # vgcreate whatever_vg /dev/dm-10
    (similar message)
    # lvcreate -L 10G -n my_lv whatever_vg
    (similar message)
    # mkfs.ext3 /dev/whatever_vg/my_lv
    (creates file system)
    # mkdir /my_filesystem
    # mount /dev/whatever_vg/my_lv /my_filesystem

    When you finally get it working, please put your results here so others can benefit.
