SUSE OpenStack Cloud 5 on Ravello Systems | SUSE Communities



SUSE Cloud and Ravello Systems, rapid and easy deployment of OpenStack

Earlier this year, our team released SUSE OpenStack Cloud version 4 for demonstration on Ravello Systems, a public cloud solution that implements some really cool technologies. We have recently updated to SUSE OpenStack Cloud version 5 (based on the OpenStack Juno release) on Ravello, and this update adds new technical features, such as:

  • SLES 12 controller and compute nodes
  • Ceph distributed object-, block- and file-level storage
  • Integration with SUSE Enterprise Storage (SES)
  • Additional networking functionality
  • Improved operational efficiency and management tooling
  • Simplified services deployment

Ravello Systems leverages their nested virtualization technology to run your existing virtual machine workloads completely unmodified in a public cloud environment, such as Amazon Web Services or Google Compute Engine, for development and test. This technology makes deploying an OpenStack environment easy, quick, reusable, and ideal as a demonstration sandbox. In addition, Ravello can also run hypervisors like KVM inside virtual machines on AWS or Google Cloud Platform.

Getting Started

Once logged into the Ravello GUI, visit the blueprint repository and search for ‘SUSE’. The SUSE OpenStack Cloud 5 blueprint will be shown. Click the ‘ADD TO LIBRARY’ button to add the blueprint to your account for use.


Log in to Ravello Systems and navigate to Library->Blueprints.


Select the blueprint, and the blueprint canvas will be shown. Click the orange ‘Create Application’ button. An application blueprint will then be shown.


These nodes are ready to be deployed and no changes are needed. The SUSE OpenStack Cloud 5 infrastructure is shown below just to give you an idea of what has been done.



Furthermore, SUSE OpenStack Cloud is deployed to four different types of machines:

  • One Administration Server for node deployment and management
  • One or more Control Nodes hosting the cloud management services
  • Several Compute Nodes on which the instances are started
  • Several Storage Nodes for block and object storage

This blueprint has one admin, one control, and one compute node available, but you can add as many compute nodes as necessary.

Click the orange ‘Publish’ button. Several options will be presented, such as choosing which public cloud to deploy the blueprint on, setting a time limit for how long the instances will run, and details on hourly pricing. Once you are satisfied with your choices, click the orange ‘Publish’ button, and your blueprint instances will begin to launch. It will take some time for the instances to finish booting; wait until the instance icon turns green. You can also monitor the boot progress of each instance via the console button at the bottom right.


Once the admin node has fully booted, connect to Crowbar at http://<admin-node-crowbar-IP>:3000.


A prompt asking for a user name and password will be shown; enter ‘crowbar’ for both. The Crowbar console will then be displayed. Three nodes are shown here, corresponding to the three instances running in the blueprint. Wait until all three nodes turn green.
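If you would rather script this readiness check than watch the browser, the Crowbar dashboard on the admin node can be polled over HTTP. The sketch below is illustrative and not part of the product: the address is a placeholder, and because Crowbar versions differ in how they challenge for credentials, the opener installs handlers for both Basic and Digest authentication.

```python
# Illustrative sketch: poll the Crowbar dashboard on the admin node.
# The address below is a placeholder -- substitute the Crowbar IP
# shown in the Ravello console.
import urllib.request

def make_opener(url, user, password):
    """Build an opener that can answer either a Basic or a Digest challenge."""
    mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    mgr.add_password(None, url, user, password)
    return urllib.request.build_opener(
        urllib.request.HTTPBasicAuthHandler(mgr),
        urllib.request.HTTPDigestAuthHandler(mgr),
    )

def main(crowbar_url="http://192.0.2.10:3000/"):  # placeholder address
    opener = make_opener(crowbar_url, "crowbar", "crowbar")
    with opener.open(crowbar_url, timeout=10) as resp:
        print("Crowbar responded with HTTP", resp.status)

# Run main() against a live deployment; it prints the HTTP status code.
```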


If the nodes are taking a long time, you may try rebooting any node other than the admin node, then wait for the node to turn green in the Crowbar console.


Click the Barclamps->All Barclamps link at the top right. Shown here are various barclamps (services) that have been deployed on your SUSE OpenStack Cloud. You may add additional services here, such as an object store via Swift or Ceph, but note that additional instances will need to be deployed to run those services.


For example, Ceph requires three nodes, so three new instances will need to be added, and the SUSE Cloud Admin node will then need to provide pxe-boot installation of SLES 12 on those instances. The three nodes will then be added to the cluster and allocated for Ceph storage. For more information, check out the SUSE OpenStack Cloud 5 deployment guide.

Let’s go back to the Ravello canvas. Highlight the Controller node and select the IP address corresponding to Horizon (the OpenStack dashboard service).


Direct your web browser to http://<controller-node-horizon-IP>. Log in with the user name ‘crowbar’ and password ‘crowbar’. There is also an ‘admin’ user with additional privileges (password ‘crowbar’), but for now, log in as crowbar. The compute resources overview page will be shown. This console allows you to create, remove, manage, and monitor cloud computing resources such as virtual machines, storage, networking, and security.
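Behind the scenes, Horizon authenticates against Keystone, OpenStack’s identity service. If you prefer to script against the cloud, the same credentials can be used to request a token directly from Keystone’s v2.0 token API (the identity API version current in the Juno era). The sketch below is illustrative, not part of the product; the controller address and the tenant name are assumptions, so substitute your own values.

```python
# Illustrative sketch: request a Keystone v2.0 token using the same
# credentials as the Horizon login. Endpoint address and tenant name
# are assumptions -- replace them with your deployment's values.
import json
import urllib.request

def token_request(keystone_url, username, password, tenant):
    """Build the POST request for Keystone's v2.0 tokens call."""
    body = {
        "auth": {
            "passwordCredentials": {"username": username, "password": password},
            "tenantName": tenant,
        }
    }
    return urllib.request.Request(
        keystone_url + "/v2.0/tokens",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )

def main():
    # Placeholder controller address and tenant name.
    req = token_request("http://192.0.2.20:5000", "crowbar", "crowbar", "openstack")
    with urllib.request.urlopen(req, timeout=10) as resp:
        token = json.load(resp)["access"]["token"]["id"]
        print("Token:", token[:8] + "...")

# Run main() against a live controller to obtain a token.
```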


Click the Compute->Instances link. We have already created an instance for you that is ready to run. Click the ‘Start Instance’ button on the right, and a VM based on SLES 11 SP3 will be launched.
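The same view is available programmatically through Nova’s compute API. As a hedged sketch (the endpoint, tenant ID, and token below are placeholders; in practice you obtain all three from Keystone’s service catalog and token response), listing the running instances looks like this:

```python
# Illustrative sketch: list Nova instances via the compute API.
# Endpoint, tenant id, and token are placeholders obtained from the
# Keystone authentication step in a real deployment.
import json
import urllib.request

def servers_request(nova_url, tenant_id, token):
    """Build a GET request for Nova's /v2/{tenant_id}/servers call."""
    return urllib.request.Request(
        "%s/v2/%s/servers" % (nova_url, tenant_id),
        headers={"X-Auth-Token": token, "Accept": "application/json"},
    )

def main():
    req = servers_request("http://192.0.2.20:8774", "tenant-id-here", "token-here")
    with urllib.request.urlopen(req, timeout=10) as resp:
        for server in json.load(resp)["servers"]:
            print(server["name"], server["status"])

# Run main() against a live controller to print instance names and states.
```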


Notice there are two IPs associated with the instance. The second is a floating IP that we have tied to a network interface via Ravello, which means you can access this instance externally once it has finished booting. Let’s access the VNC console first. Once the instance’s power state is ‘Running’, click the down arrow icon on the right, then the ‘Console’ link.


You will be directed to the Instance Console page. In a bare-metal environment, the instance’s hardware console would be shown here. However, the VNC window points to an internal IP that the web browser cannot reach. Open the ‘Click here to show only console’ link in a new tab.


Notice that the IP address in the link is an internal one. We have already routed this internal IP to an external network interface via Ravello’s networking infrastructure. Go back to the Ravello console and replace the address in the console browser link with the VNC IP address on the Controller node.



From this VNC instance console, you can manipulate and work with your VM directly. To log in, the root password is ‘SUS3*2015’. Optionally, you may also connect to the instance via SSH: go back to the Ravello console and connect to the SSH IP address on the controller node.



This lab demonstrated how easily and quickly an OpenStack cloud can be deployed by combining SUSE’s management tooling and automation with Ravello Systems’ public cloud implementation. By simply selecting the SUSE OpenStack Cloud blueprint and launching it, you can be up and running with a fully operational OpenStack cloud environment within minutes.

Adding Services and Scaling Out

SUSE Cloud also offers ephemeral storage for images attached to instances. These ephemeral images only exist during the life of an instance and are deleted when the guest is terminated. Block and object storage are persistent storage models where files or images are stored until they are explicitly deleted.

A blueprint can be modified, with cloud machines added and removed as needed. In this exercise, we will add another compute node to the blueprint to serve as a dedicated object store.

The Swift barclamp will need to be enabled, but before we do that, we will need to add an additional node. Go back to the Ravello console. Press the grey ‘+’ button in the top left corner, and a list of cloud machines will be shown. Select the ‘Empty Cloud Image’ and drag it onto the canvas.


A few things will need to be added to the image. For this example, we can disable Cloud Init under the ‘General’ Tab.


Under the Disks tab, change the existing hard drive’s name to ‘hd0’ and its controller to ‘IDE’. This bootable hard drive will hold the operating system. Another hard drive will need to be added for the object store: click the ‘+Add’ link at the top and select ‘Add Disk’ from the drop-down list. Name this hard drive ‘hd1’, with a size of 50 GB and ‘IDE’ as the controller. This node will need to PXE-boot from the admin node. To enable PXE boot, click the ‘+Add’ link again and click ‘Add CD-ROM’ from the drop-down list. Click the ‘Browse’ button and select the ipxe.iso image.


Save the changes and click the ‘Network’ tab. All of the SUSE Cloud nodes must access the admin server via the internal admin network, which sits on 192.168.124.*. Enable Auto MAC on the existing network card. Enable Static IP. Input ‘’ as the Static IP, ‘’ as the netmask, ‘’ as the gateway, and ‘’ as the DNS. Select ‘Public IP’. Save the configuration.
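If you want to sanity-check a candidate address before typing it into the dialog, a few lines of Python can verify that it falls on the 192.168.124.* admin network mentioned above (i.e., 192.168.124.0/24). This is just an illustrative sketch; the candidate addresses below are arbitrary examples.

```python
# Illustrative sketch: check that a static IP chosen for a new cloud
# node lies on the internal admin network (192.168.124.0/24) and is a
# usable host address.
import ipaddress

ADMIN_NET = ipaddress.ip_network("192.168.124.0/24")

def check_node_ip(addr):
    """Return True if addr is a usable host address on the admin network."""
    ip = ipaddress.ip_address(addr)
    return (ip in ADMIN_NET
            and ip != ADMIN_NET.network_address
            and ip != ADMIN_NET.broadcast_address)

print(check_node_ip("192.168.124.50"))  # on the admin network
print(check_node_ip("192.168.1.50"))    # wrong subnet
```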


The cloud node has been configured and is ready for deployment. Press the orange ‘Update’ button in the top left corner, and a configuration dialog will be shown, similar to the first time you launched the blueprint. Wait for the image to deploy, and monitor the progress via the console.




After the instance has been PXE booted, the console should show that the new node has been discovered by the admin node.


Go back into the Crowbar console and notice the additional new node in the cluster, denoted by the flashing yellow icon beside its hostname. Click the Nodes->Bulk Edit link at the top.


This ‘Bulk Edit’ configuration page allows us to assign newly discovered nodes to specific roles and choose which operating system to install. Set the new node’s alias to ‘storage’, its group to ‘storage’, its intended role to ‘Storage’, and its platform to ‘SLES 11 SP3’, then tick the checkbox under the ‘Allocate?’ column and click ‘Save’.



Installation of the new node will now proceed. You can monitor the installation by going back into the new node’s console, and check the status of the storage node via the Crowbar console. The node should turn green once it has been deployed properly.



Click the Barclamps->All Barclamps link. The barclamps services page will be shown. Scroll down to the Swift barclamp and click the ‘Create’ button.


The Swift barclamp will create a Swift object store. Since we only have one dedicated storage node available, decrease the Zones attribute to 1. In a more traditional OpenStack deployment, multiple storage nodes would be allocated to the Swift barclamp for added redundancy. Scroll down to the bottom of the page and make sure the storage node is allocated to the ‘swift-storage’ section and that the other Swift components are running on the controller node. Press the ‘Apply’ button.



The proposal should be applied successfully after a few minutes.


Log back into the OpenStack console. A new entry called ‘Object Store’ should appear in the left-side menu. Click the ‘Containers’ link, then click the ‘+ Create Container’ link.


You will then be prompted to create a container. Create a container called ‘test’ and set its access to Public. You will now be able to upload files to this object store.
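The container can also be driven through Swift’s REST API, which is handy for scripting uploads. The sketch below is illustrative: the storage URL, account, and token are placeholders that a real client obtains from the authentication step, and the port 8080 endpoint is an assumption about the deployment.

```python
# Illustrative sketch: store an object in the 'test' container through
# Swift's object API. Storage URL and token are placeholders obtained
# from authentication in a real deployment.
import urllib.request

def put_object(storage_url, token, container, name, data):
    """Build a PUT request that stores `data` as container/name."""
    return urllib.request.Request(
        "%s/%s/%s" % (storage_url, container, name),
        data=data,
        method="PUT",
        headers={"X-Auth-Token": token},
    )

def main():
    req = put_object("http://192.0.2.20:8080/v1/AUTH_demo", "token-here",
                     "test", "hello.txt", b"hello from SUSE Cloud\n")
    with urllib.request.urlopen(req, timeout=10) as resp:
        print("Upload status:", resp.status)

# Run main() against a live deployment; Swift answers 201 Created on success.
```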


Ravello Systems demonstrates the ease of scaling out an OpenStack environment, while SUSE OpenStack Cloud’s tooling demonstrates how quickly essential cloud services, such as persistent object storage, can be set up and deployed.

If you have any questions regarding this lab, Ravello Systems, or SUSE OpenStack Cloud, please contact me at

