SUSE Enterprise Storage 2.0 via iSCSI
With the announcement of SUSE Enterprise Storage (SES) 2.0 earlier in October came big changes and updates for our Ceph-based software-defined storage solution. One of those changes is support for iSCSI, which allows non-SUSE Linux Enterprise servers (Linux, UNIX, Windows) to access block storage made available by the SUSE Enterprise Storage cluster, providing a truly heterogeneous solution. David Byte of SUSE has written a more detailed post on the enhancements in SES 2.0 at https://www.suse.com/communities/blog/block-and-tackle-suse-enterprise-storage-2/. I had the chance to delve into some of the technical sessions from SUSECon 2015, and one of note was the Ceph iSCSI Gateway presentation by David Disseldorp and Lee Duncan, as I have been working on several projects to test interoperability and performance of SES 2.0 with various partner software.
The following is a guide to creating an iSCSI target using the bundled utilities included in SES 2.0, and connecting to it from a remote server (the iSCSI initiator).
Prerequisites
This guide assumes that an SES 2.0 cluster has already been set up (minimum 3 nodes). For more information on deploying an SES cluster, please check out the admin guide at https://www.suse.com/documentation/ses-1/singlehtml/book_storage_admin/book_storage_admin.html. Also ensure that the ceph-common software pattern (which is part of the SES 2.0 installation) is installed on the system, as it contains essential iSCSI configuration utilities.
Getting Started
A block device is needed in order to create an iSCSI target. To create this block device, the ceph cluster needs to have available storage. Let’s check the status of the ceph cluster first by running the command ceph -k <admin.keyring> status
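For reference, on the admin node the check might look like the following; the keyring path here is only an assumption based on the default location a Ceph/SES deployment uses, so adjust it for your setup.
# check cluster health and available capacity
ceph -k /etc/ceph/ceph.client.admin.keyring status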
The status above shows there is 1354 GB available on the cluster. In the event storage needs to be allocated to the ceph cluster, a block device can be added to ceph storage by using the command ceph-deploy osd create <ceph node>:<device name>. Again, these SES configuration steps are covered in the admin guide. With ceph storage at our disposal, let’s create a block device using the rados block device (rbd) utility. The rbd utility facilitates the storage of block-based data in a ceph cluster, with a Linux kernel client and a QEMU/KVM driver. Run the command rbd -k <admin.keyring> create --size 100000 demo to create a 100 GB rados block device, then run the command rbd -k <admin.keyring> ls -l to list details of the created block device.
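Putting those commands together, a minimal sketch of the block device creation might look like this; the keyring path and the <node>:<device> pair are illustrative placeholders, not values from this cluster.
# only needed if the cluster requires more capacity: turn a spare device on a cluster node into an OSD
ceph-deploy osd create node1:/dev/sdb
# create a 100 GB rados block device named "demo" (rbd sizes are given in MB)
rbd -k /etc/ceph/ceph.client.admin.keyring create --size 100000 demo
# list the images and confirm "demo" was created
rbd -k /etc/ceph/ceph.client.admin.keyring ls -l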
The next step is to map the rbd device. Root user privilege will be required. Run the command sudo rbd map demo.
At this point, we can choose to create a file system on the device. Run the command sudo mkfs.xfs /dev/<rbd device>. Note that this step can also be done at a later time.
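A quick sketch of the mapping and optional file system creation, assuming the image maps to the /dev/rbd/rbd/demo path used later in this guide:
# map the rbd image into the kernel; a /dev/rbd* device and a /dev/rbd/<pool>/<image> symlink appear
sudo rbd map demo
# optional: create an XFS file system on the mapped device now, or after the iSCSI target is configured
sudo mkfs.xfs /dev/rbd/rbd/demo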
The next step is to create an iSCSI target from this block device.
Using targetcli to create an iSCSI target
Note: This tutorial focuses on creating an iSCSI target using the targetcli utility. Information on using other iSCSI configuration utilities such as lrbd (a wrapper/enhancement for targetcli) is available at https://github.com/swiftgist/lrbd/wiki.
Run the command sudo targetcli and follow the general guidelines below for configuring the iSCSI target.
ceph@admin2:~> sudo targetcli
targetcli 2.1-suse (rtslib 2.2-sle12)
Copyright (c) 2011-2014 by Datera, Inc.
All rights reserved.
/> cd backstores/iblock
/backstores/iblock> create demo /dev/rbd/rbd/demo
Generating a wwn serial.
Created iblock storage object demo using /dev/rbd/rbd/demo.
/backstores/iblock> cd demo
/backstores/iblock/demo> status
Status for /backstores/iblock/demo: /dev/rbd/rbd/demo deactivated
/backstores/iblock/demo> cd /iscsi
// create an iSCSI target
/iscsi> create
Created target iqn.2003-01.org.linux-iscsi.admin2.x8664:sn.93d87c3aa548.
Selected TPG Tag 1.
Created TPG 1.
/iscsi> cd iqn.2003-01.org.linux-iscsi.admin2.x8664:sn.93d87c3aa548/
/iscsi/iqn.20….93d87c3aa548> cd tpg1
/iscsi/iqn.20…7c3aa548/tpg1> cd luns
/iscsi/iqn.20…548/tpg1/luns> status
Status for /iscsi/iqn.2003-01.org.linux-iscsi.admin2.x8664:sn.93d87c3aa548/tpg1/luns: 0 LUNs
/iscsi/iqn.20…548/tpg1/luns> create /backstores/iblock/demo
Selected LUN 0.
Created LUN 0.
/iscsi/iqn.20…548/tpg1/luns> cd ../portals
// create a network portal with the host/target IP (this is the IP the iSCSI initiator will connect to)
/iscsi/iqn.20…/tpg1/portals> create 151.155.16.131
Using default IP port 3260
Created network portal 151.155.16.131:3260.
/iscsi/iqn.20…/tpg1/portals> cd ..
// for demoing purposes, we will ease up on the security
/iscsi/iqn.20…7c3aa548/tpg1> set attribute authentication=0 demo_mode_write_protect=0 generate_node_acls=1 cache_dynamic_acls=1
Parameter demo_mode_write_protect is now '0'.
Parameter authentication is now '0'.
Parameter generate_node_acls is now '1'.
Parameter cache_dynamic_acls is now '1'.
/iscsi/iqn.20…7c3aa548/tpg1> exit
There are unsaved configuration changes.
If you exit now, configuration will not be updated and changes will be lost upon reboot.
Type 'exit' if you want to exit anyway: exit
ceph@admin2:~>
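Note that targetcli warns about unsaved configuration changes on exit. If you want the target definition to survive a reboot, you can save it from within the targetcli shell before exiting, for example with the saveconfig command (lrbd, mentioned earlier, manages persistence for you when you use the gateway tooling):
/> saveconfig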
Using iscsiadm to discover and connect to the iSCSI target
On the remote server (the iSCSI initiator), run the command iscsiadm -m discovery -p <iSCSI Target IP> -t sendtargets.
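For example, using the portal IP created in the targetcli session above:
# discover the targets exported through the portal at 151.155.16.131 (the IP used when creating the portal)
sudo iscsiadm -m discovery -p 151.155.16.131 -t sendtargets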
Connect to the target by running the command iscsiadm -m node -T <iSCSI target IQN> -p <iSCSI Target IP> --login
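Continuing the example, logging in to the target created earlier (the IQN shown is the one targetcli generated above; yours will differ and should come from the discovery output):
# log in to the discovered target
sudo iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.admin2.x8664:sn.93d87c3aa548 -p 151.155.16.131 --login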
Run the command dmesg | grep sd to confirm that the iSCSI initiator connected to the target and that the device has been attached to the system. Here, we can see the device has been attached as /dev/sdg.
With your iSCSI target and iSCSI initiator now set up, you can mount the iSCSI device as if it were a local disk attached to your system.
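As a minimal sketch, assuming the device appeared as /dev/sdg and the XFS file system was created earlier (the mount point is an arbitrary example):
# create a mount point and mount the iSCSI-attached device
sudo mkdir -p /mnt/iscsi-demo
sudo mount /dev/sdg /mnt/iscsi-demo
# confirm the capacity matches the 100 GB rbd image
df -h /mnt/iscsi-demo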
For any further inquiries, please reach out to me at tchong@suse.com. Thanks for reading!
Comments
Very informative article. However, if I understand correctly, it did not make use of the new iSCSI gateway mentioned at the top of the article (David Disseldorp’s presentation), which among other things bypasses the block devices and uses LIO’s rbd backend. It would be nice if there were follow-up articles that discuss this.
Thank you for the feedback.
There is a comprehensive guide for lrbd/iscsi gateway at https://github.com/swiftgist/lrbd/wiki
Will be posting a follow up soon with some interesting results.