Integration is the name of the game when it comes to software-defined infrastructure, and tools that take away much of the pain are ideal for any fast-moving IT shop. When it comes to connecting OpenStack with Ceph storage, SUSE provides integrated tools that make it a snap.
SUSE OpenStack Cloud Crowbar 9 offers users simple graphical or command-line options to make SUSE Enterprise Storage the target for Cinder, Cinder Backup, Glance and Nova using Ceph’s built-in gateways. That means ready, scalable access to block, file and object storage for all your OpenStack needs.
The process is dramatically simplified by taking advantage of Salt automation to create the Ceph pools, users and the keyrings needed to create resources OpenStack can use. The Salt runner also outputs a YAML file with everything OpenStack needs to connect and configure your SUSE Enterprise Storage resources.
Start with a SUSE OpenStack Cloud Crowbar 9 Admin node
One of the easiest ways to deploy OpenStack is using SUSE’s Crowbar process. You start with a plain SUSE Linux Enterprise Server 12 SP4 host with the Crowbar extension and the cloud_admin pattern. Crowbar will then automate the deployment of your cluster, including network configurations. It will also enable you to add compute, controller and other OpenStack worker nodes via PXE boot.
The key to getting SUSE OpenStack Cloud working with SUSE Enterprise Storage is making sure your admin node has a network interface that can access the different subnets and components that make up the OpenStack cluster and your Ceph cluster. Here are the defaults Crowbar wants to create:
Notice that Crowbar creates five different subnets on five VLANs (100, 200, 300, 400 and 500 in the default example). Presuming your Crowbar admin node has an eth0 interface, these networks will be created automatically. You can edit them via YaST or make more complex configurations in /etc/crowbar/network.json.
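For those more complex setups, you edit the network definitions in /etc/crowbar/network.json directly. Here is an illustrative fragment of a storage network entry; the addresses, VLAN number and range values are examples only, and you should verify the field names against the stock network.json shipped on your own admin node:

```json
"storage": {
  "conduit": "intf1",
  "vlan": 200,
  "use_vlan": true,
  "add_bridge": false,
  "subnet": "192.168.125.0",
  "netmask": "255.255.255.0",
  "broadcast": "192.168.125.255",
  "ranges": {
    "host": { "start": "192.168.125.10", "end": "192.168.125.239" }
  }
}
```

Remember that these edits must be made before Crowbar is initialized, as described below.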
This example uses the pseudo-graphical YaST interface because the SUSE OpenStack Cloud Admin node is running in text-only mode. In this example, the 192.168.126.0/24 subnet is the public network, but you can also add a bastion network in order to access Crowbar from elsewhere on your LAN.
If your SUSE Enterprise Storage cluster is on a different subnet, be sure to add it in the Crowbar network configuration, not in the admin node’s network configuration. Crowbar will overwrite those network settings with its own.
Review your network settings carefully before saving. An error here can break the communication you need for Ceph and the OpenStack integrations.
Prepare your SUSE Enterprise Storage cluster
When you deploy a Ceph cluster with SUSE Enterprise Storage, Salt automates the key configurations, ensuring you have a solid, reliable base. That same Salt capability creates the resources you need via a runner that sets up the Ceph pools, users and keyrings and outputs the YAML Crowbar needs. If you don’t have a cluster, check out this related SUSE Guide, “Deploy a fully functional SUSE Enterprise Storage cluster test environment in about 30 minutes”.
Make sure your SUSE Enterprise Storage cluster and your SUSE OpenStack Cloud Admin node can communicate. You can do this by pinging one from the other.
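If you want to script that check, here is a minimal sketch. The Salt master address is hypothetical; substitute the real one from your environment:

```shell
#!/bin/sh
# Hypothetical address for the SUSE Enterprise Storage Salt master;
# substitute the real one from your environment.
SES_MASTER=10.128.1.20

# check_reachable <host>: succeeds if the host answers a single ping
# within two seconds.
check_reachable() {
    ping -c 1 -W 2 "$1" > /dev/null 2>&1
}

if check_reachable "$SES_MASTER"; then
    echo "storage cluster reachable"
else
    echo "storage cluster NOT reachable; fix routing before deploying" >&2
fi
```

Run the same check in the other direction, from the Salt master toward the admin node, to confirm routing works both ways.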
If you have something else in mind, or your SUSE Enterprise Storage cluster already sits on a different subnet, just be sure to configure that storage subnet on the Crowbar admin server before deploying SUSE OpenStack Cloud. Once you run systemctl start crowbar-init, you won’t be able to go back and edit the Crowbar network settings.
Run the integration
With your SUSE Enterprise Storage cluster up and running, log in to the Salt master node as root and execute this Salt command. It will automatically create the OpenStack pools and users you need based on your storage cluster’s configuration.
root # salt-run --out=yaml openstack.integrate prefix=mycloud
This will create resources in your storage cluster with the “mycloud” prefix, which you can change to suit your needs. After a few moments, the command will output YAML that looks something like this:
ceph_conf:
  cluster_network: 10.128.1.0/24
  fsid: 1f7d1d8f-7dd4-4b13-ba2c-8af39ec358f6
  mon_host: 10.128.1.23, 10.128.1.21, 10.128.1.22
  mon_initial_members: mon1, mon2, mon3
  public_network: 10.128.1.0/24
cinder:
  key: AQAnOGtdAAAAABAAgV21v4dnchFxMnPYm6TA5Q==
  rbd_store_pool: mycloud-cloud-volumes
  rbd_store_user: mycloud-cinder
cinder-backup:
  key: AQAoOGtdAAAAABAAJjiRTEFC2ccgDugBdhoD/g==
  rbd_store_pool: mycloud-cloud-backups
  rbd_store_user: mycloud-cinder-backup
glance:
  key: AQAnOGtdAAAAABAARaQ5XH0O6R+7Vz92b4cgQA==
  rbd_store_pool: mycloud-cloud-images
  rbd_store_user: mycloud-glance
nova:
  rbd_store_pool: mycloud-cloud-vms
radosgw_urls:
- http://10.128.1.21:80/swift/v1
- http://10.128.1.22:80/swift/v1
- http://10.128.1.23:80/swift/v1
Notice that the IP addresses in the example are on a different subnet than the ones defined above. In this case, the 10.128.1.0/24 subnet is made routable through an external gateway so the OpenStack admin node and the Ceph cluster can communicate.
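One way to make such a route persistent on the admin node is a static route entry in the SLES sysconfig network files. This is a sketch assuming the wicked/sysconfig network stack, an eth0 public interface and a hypothetical gateway address; adjust all three to match your environment:

```
# /etc/sysconfig/network/ifroute-eth0
# destination    gateway          netmask   interface
10.128.1.0/24    192.168.126.1    -         eth0
```

After editing the file, reload the network configuration (for example with systemctl restart network) for the route to take effect.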
Save this information to a file and proceed to the next step.
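A convenient sketch for capturing the output, run on the Salt master: redirect the runner’s YAML to a file (the file name here is arbitrary) and sanity-check that every section Crowbar expects is present before you upload it:

```shell
#!/bin/sh
# Capture the integration YAML to a file for upload into Crowbar.
# The file name is our choice; any path works.
OUT=mycloud-integration.yml
salt-run --out=yaml openstack.integrate prefix=mycloud > "$OUT"

# Every top-level section Crowbar expects should be present.
for section in ceph_conf cinder cinder-backup glance nova radosgw_urls; do
    grep -q "^${section}:" "$OUT" || echo "warning: ${section} missing from $OUT" >&2
done
```

If any warning appears, re-run the runner and check your storage cluster’s health before proceeding.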
Import configuration into Crowbar
With the SUSE Enterprise Storage configuration file in hand, it’s now a simple matter of importing it into Crowbar for use with your OpenStack cluster. In Crowbar, navigate to Utilities → SUSE Enterprise Storage Configuration. Browse for the YAML file you created and upload it.
The updated page shows your configurations for Cinder, Cinder Backup, Glance and Nova. Now, when you run the Barclamps to install those services on your OpenStack cluster, your Ceph cluster resources are automatically made available. If you need to make changes, you can remove the configuration, make edits and just rerun the Barclamps that configure your services.
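As a final check back on the storage cluster, you can confirm that the runner’s pools and CephX users exist. This sketch assumes the “mycloud” prefix used earlier and uses the standard ceph osd pool ls and ceph auth ls commands:

```shell
#!/bin/sh
# Run on a node with Ceph admin credentials. Pool and user names assume
# the "mycloud" prefix passed to the openstack.integrate runner.
ceph osd pool ls | grep '^mycloud-' || echo "no mycloud-* pools found" >&2
ceph auth ls 2>/dev/null | grep '^client.mycloud-' || echo "no mycloud users found" >&2
```

You should see the cloud-volumes, cloud-backups, cloud-images and cloud-vms pools along with a client.mycloud-* user for each service.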