SUSE Enterprise Storage 2.0 via iSCSI | SUSE Communities



With the announcement of SUSE Enterprise Storage (SES) 2.0 earlier in October came big changes and updates for our Ceph-based software-defined storage solution.  One of those changes includes support for iSCSI, which allows non-Enterprise-Linux servers (Linux, UNIX, Windows) to access block storage made available by the SUSE Enterprise Storage cluster, providing a true heterogeneous solution.  David Byte of SUSE has written a more detailed post on the enhancements SES 2.0 has received.  I also had the chance to delve into some of the technical sessions from SUSECon 2015; one of note was the Ceph iSCSI Gateway presentation by David Disseldorp and Lee Duncan, as I have been working on several projects to test interoperability and performance of SES 2.0 with various partner software.


The following is a guide for creating an iSCSI target using the bundled utilities included in SES 2.0, and connecting to it via a remote server (iSCSI initiator).


This guide assumes that a SES 2.0 cluster has already been set up (minimum of 3 nodes).  For more information on deploying a SES cluster, please refer to the admin guide.  Also ensure that the ceph-common software pattern (which is part of the SES 2.0 installation) is installed on the system, as it contains essential iSCSI configuration utilities.

Getting Started

A block device is needed in order to create an iSCSI target.  To create this block device, the Ceph cluster needs to have available storage.  Let's first check the status of the Ceph cluster by running the command ceph -k <admin.keyring> status.
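For reference, the status check looks like the following; the keyring path shown here is an example and will vary with your deployment:

```shell
# Check cluster health and available capacity.
# The keyring path is an example; substitute your admin keyring.
ceph -k /etc/ceph/ceph.client.admin.keyring status
```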

[Screenshot: ceph status output showing cluster health and available capacity]

The status above shows there is 1354 GB available on the cluster.  In the event storage needs to be allocated to the Ceph cluster, a block device can be allocated to Ceph storage by using the command ceph-deploy osd create <ceph node>:<device name>.  Again, these SES configuration steps are defined in the admin guide.  With Ceph storage at our disposal, let's create a block device using the RADOS block device (rbd) utility.  The rbd utility facilitates the storage of block-based data in a Ceph cluster, with a Linux kernel client and a QEMU/KVM driver.  Run the command rbd -k <admin.keyring> create --size 100000 demo to create a roughly 100 GB RADOS block device (the --size argument is in megabytes), then run the command rbd -k <admin.keyring> ls -l to list details of the created block device.
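Spelled out with conventional long-option syntax (the keyring path is an example), the two rbd commands are:

```shell
# Create a 100000 MB (~100 GB) image named "demo" in the default pool.
rbd -k /etc/ceph/ceph.client.admin.keyring create --size 100000 demo

# List images in the pool, with size and format details.
rbd -k /etc/ceph/ceph.client.admin.keyring ls -l
```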

[Screenshot: output of rbd create and rbd ls -l showing the new "demo" image]

The next step is to map the rbd device.  Root privileges are required.  Run the command sudo rbd map demo.
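A quick sketch of mapping and then verifying the result; rbd showmapped lists the kernel device node the image was mapped to:

```shell
# Map the "demo" image through the kernel rbd client (requires root).
sudo rbd map demo

# Verify the mapping and find the device node it was assigned.
sudo rbd showmapped
```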

[Screenshot: output of rbd map demo]

At this point, we can choose to create a file system on the device.  Run the command sudo mkfs.xfs /dev/<rbd device>.  Note that this step can also be done at a later time.

[Screenshot: output of mkfs.xfs on the rbd device]

The next step is to create an iSCSI target from this block device.

Using targetcli to create iSCSI target

Note:  This tutorial focuses on creating an iSCSI target using the targetcli utility.  Information on other iSCSI configuration utilities, such as lrbd (a wrapper/enhancement for targetcli), is also available.

Run the command sudo targetcli and follow the general guidelines below for configuring the iSCSI target.

ceph@admin2:~> sudo targetcli
targetcli 2.1-suse (rtslib 2.2-sle12)
Copyright (c) 2011-2014 by Datera, Inc.
All rights reserved.

/> cd backstores/iblock
/backstores/iblock> create demo /dev/rbd/rbd/demo
Generating a wwn serial.
Created iblock storage object demo using /dev/rbd/rbd/demo.
/backstores/iblock> cd demo
/backstores/iblock/demo> status
Status for /backstores/iblock/demo: /dev/rbd/rbd/demo deactivated
/backstores/iblock/demo> cd /iscsi
// create an iSCSI target
/iscsi> create
Created target
Selected TPG Tag 1.
Created TPG 1.
/iscsi> cd
/iscsi/iqn.20….93d87c3aa548> cd tpg1
/iscsi/iqn.20…7c3aa548/tpg1> cd luns
/iscsi/iqn.20…548/tpg1/luns> status
Status for /iscsi/ 0 LUNs

/iscsi/iqn.20…548/tpg1/luns> create /backstores/iblock/demo
Selected LUN 0.
Created LUN 0.
/iscsi/iqn.20…548/tpg1/luns> cd ../portals
// create a network portal with the host/target IP (this is the IP an iSCSI initiator will connect to)
/iscsi/iqn.20…/tpg1/portals> create
Using default IP port 3260
Created network portal
/iscsi/iqn.20…/tpg1/portals> cd ..
// for demoing purposes, we will ease up on the security
/iscsi/iqn.20…7c3aa548/tpg1> set attribute authentication=0 demo_mode_write_protect=0 generate_node_acls=1 cache_dynamic_acls=1
Parameter demo_mode_write_protect is now '0'.
Parameter authentication is now '0'.
Parameter generate_node_acls is now '1'.
Parameter cache_dynamic_acls is now '1'.
/iscsi/iqn.20…7c3aa548/tpg1> exit
There are unsaved configuration changes.
If you exit now, configuration will not be updated and changes will be lost upon reboot.
Type 'exit' if you want to exit anyway: exit
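Note that the session above exits without saving, so the target definition would not survive a reboot.  To persist it, run targetcli's saveconfig command before exiting:

```shell
# Inside the targetcli shell, from any path:
/> saveconfig
/> exit
```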

Using iscsiadm to discover and connect to the iSCSI target

On the remote server (the iSCSI initiator), run the command iscsiadm -m discovery -p <iSCSI Target IP> -t sendtargets.

[Screenshot: iscsiadm discovery output listing the target IQN]

Connect to the appropriate target by running the command iscsiadm -m node -T <iSCSI target IQN> -p <iSCSI Target IP> --login.
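Put together, the initiator-side steps look like the following sketch; the IP address and IQN are placeholders for the values returned by your own discovery:

```shell
# Discover targets exported by the gateway (IP is a placeholder).
sudo iscsiadm -m discovery -p 192.168.100.10 -t sendtargets

# Log in to the discovered target (IQN is a placeholder; use the one
# printed by the discovery step above).
sudo iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.target:sn.93d87c3aa548 \
    -p 192.168.100.10 --login
```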

[Screenshot: iscsiadm login output]

Run the command dmesg | grep sd to verify that the iSCSI initiator connected to the target and that the device has been attached to the system.  Here, we can see the device has been attached as /dev/sdg.

[Screenshot: dmesg output showing the device attached as /dev/sdg]

With your iSCSI target and iSCSI initiator now set up, you can mount the iSCSI device as if it were a locally attached hardware device.
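For example, assuming the device landed at /dev/sdg as in the dmesg output above and that a file system was created earlier, a mount might look like this (the mount point is arbitrary):

```shell
# Create a mount point and mount the iSCSI-backed device.
sudo mkdir -p /mnt/iscsi-demo
sudo mount /dev/sdg /mnt/iscsi-demo

# Confirm the mounted file system and its capacity.
df -h /mnt/iscsi-demo
```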

[Screenshot: mounting and accessing the iSCSI device]

For any further inquiries, please feel free to reach out to me.  Thanks for reading!



  • mmokhtar says:

    Very informative article. However, if I understand correctly, it did not make use of the new iSCSI gateway mentioned at the top of the article (David Disseldorp's presentation), which among other things bypasses the block devices and uses LIO's rbd backend. It would be nice if there were follow-up articles that discuss this.

  • tchong says:

    Thank you for the feedback.

    There is a comprehensive guide for the lrbd/iSCSI gateway.

    I will be posting a follow-up soon with some interesting results.
