Deploying SUSE OpenStack Cloud 7 in a lab environment
Interested in SUSE OpenStack Cloud but not quite sure how to get started? There is nothing better than setting up your own lab and poking around. This is a short guide on how to do exactly that.
Host Requirements
Before we start, make sure that your host:
- Is using wicked to handle the networking. This can be set up through the command "yast2 network": on the "Global Options" tab, under "Network Setup Method", select "Wicked Service".
- Has IP forwarding enabled. To make the change permanent, set "net.ipv4.ip_forward = 1" in /etc/sysctl.conf. You can also enable it on the fly with the command "echo 1 > /proc/sys/net/ipv4/ip_forward".
- Has the firewall disabled. You can turn it off with the commands "systemctl stop SuSEfirewall2" and "systemctl disable SuSEfirewall2".
- Has AppArmor disabled. This is not mandatory, but recommended. You can turn it off by following the documentation described here: https://www.suse.com/documentation/sled11/book_security/data/sec_aaintro_enable.html
- Has KVM and virt-manager fully installed and working. To set this up, run "sudo zypper in -t pattern kvm_server" and "sudo zypper in -t pattern kvm_tools" (these are the "KVM Host Server" and "KVM Virtualization Host and tools" patterns in YaST). It might also be possible to use different hypervisors, but they won't be covered here. A quick sanity check of these requirements is sketched right after this list.
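Assuming the requirements above are in place, a quick sanity check on the host could look like this (the exact output will vary with your setup):
sysctl net.ipv4.ip_forward          # should print "net.ipv4.ip_forward = 1"
systemctl is-active SuSEfirewall2   # should print "inactive" (or "unknown" if it was never installed)
systemctl is-active libvirtd        # should print "active" once the KVM patterns are installed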
Host Network Configuration
Create the necessary network with the following commands:
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
Explanation: This is a typical iptables rule used when you want to share a single Internet connection among the machines on a network. With this command, we add a rule to the NAT table ("-t nat") in the "POSTROUTING" chain ("-A POSTROUTING"), which is evaluated right before packets leave the host. The "-o eth0" match restricts the rule to traffic going out through the interface "eth0", and the "-j MASQUERADE" target rewrites the source address of that traffic to the interface's current address, including the extra logic needed to cope with the interface going offline and coming back up with a different address.
Note that "eth0" is the interface that provides connectivity to external networks (the Internet); if your interface has a different name, change the rule accordingly. You could also apply this rule to the bridge interface that virt-manager creates when it sets up NAT networking for your VMs so they can reach the Internet. That should work too, but it might cause trouble, because virt-manager already applies its own policies to that interface, and applying a new policy on top of them can lead to conflicts. By default this bridge interface is called "virbr0", but you can rename it in virt-manager. Normally the IP range on this bridge starts with 192.168, which is also the default created by virt-manager. Keep in mind that if you change the IP range to anything else, you will hit an error the first time you load crowbar: the web interface will complain that the file /etc/crowbar/network.json does not contain the expected IP range of 192.168.124.0/24, and you will have to edit that file to make it work.
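To confirm that the rule is in place, you can list the POSTROUTING chain of the NAT table and look for the MASQUERADE entry:
iptables -t nat -L POSTROUTING -n -v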
brctl addbr cloud1br
ip addr add 192.168.124.1/24 brd 192.168.124.255 dev cloud1br
Explanation: Here we simply create a new bridge interface called "cloud1br" and assign it the IP address 192.168.124.1/24 with the broadcast address 192.168.124.255. Generally speaking, a Linux bridge joins Ethernet segments together independently of the higher-level protocol: frames are forwarded based on their Ethernet (MAC) addresses rather than IP addresses.
ip link add link cloud1br name cloud1br.300 type vlan id 300
ip link add link cloud1br name cloud1br.500 type vlan id 500
Explanation: These commands create two VLAN interfaces (IDs 300 and 500) on top of "cloud1br". Frames sent out through these interfaces are tagged with the corresponding VLAN ID, and only frames carrying that tag are received on them. This network separation will be used to distinguish between admin, controller, and public network traffic.
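You can check that the VLAN tagging is configured as expected with the detailed link view, which should report "vlan protocol 802.1Q id 300" (and 500 for the other interface):
ip -d link show cloud1br.300
ip -d link show cloud1br.500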
ip addr add 192.168.124.1/24 brd 192.168.124.255 dev cloud1br.300
ip addr add 192.168.126.1/24 brd 192.168.126.255 dev cloud1br.500
Explanation: With these commands, we assign addresses to the VLAN interfaces we just created: 192.168.124.1/24 to "cloud1br.300" and 192.168.126.1/24 to "cloud1br.500".
ip link set dev cloud1br up
ip link set dev cloud1br.300 up
ip link set dev cloud1br.500 up
Explanation: Here we simply bring the interfaces up. Note that these commands do not make the configuration permanent, so you would have to type them again after every reboot. If you want to make the setup persistent, you can add the commands listed above to /etc/init.d/after.local so the system runs them automatically at every boot; a sketch of such a file is shown below. A colleague at work said that this method is "ugly"... well, if you don't mind ugly things, stick with it; otherwise, you can use YaST to configure your interfaces.
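If you go the "ugly" route, /etc/init.d/after.local can simply collect the commands from this section (a minimal sketch, using the interface names and addresses from above):
#!/bin/sh
# NAT so the cloud networks can reach external networks through eth0
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
# Bridge and VLAN interfaces for the lab networks
brctl addbr cloud1br
ip addr add 192.168.124.1/24 brd 192.168.124.255 dev cloud1br
ip link add link cloud1br name cloud1br.300 type vlan id 300
ip link add link cloud1br name cloud1br.500 type vlan id 500
ip addr add 192.168.124.1/24 brd 192.168.124.255 dev cloud1br.300
ip addr add 192.168.126.1/24 brd 192.168.126.255 dev cloud1br.500
ip link set dev cloud1br up
ip link set dev cloud1br.300 up
ip link set dev cloud1br.500 up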
Admin node VM deployment
Now we can create our first VM, which will be used as the admin node. The VM should have 20 GB of disk space and 4 GB of RAM. Boot it from the SLES 12 SP2 ISO image and configure the network during installation (a static address on the 192.168.124.0/24 admin network; this guide assumes the admin node ends up at 192.168.124.10, as referenced later on).
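If you prefer the command line over virt-manager for creating the VM itself, a rough virt-install equivalent could look like this (the VM name and ISO location are assumptions; the 20 GB disk is created in the default storage pool, and the VM is attached to the cloud1br bridge created earlier):
virt-install --name cloud-admin \
  --memory 4096 --vcpus 2 \
  --disk size=20 \
  --cdrom /path/to/SLE-12-SP2-Server-DVD-x86_64-GM-DVD1.iso \
  --network bridge=cloud1br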
After registering your system, make sure the SUSE OpenStack Cloud 7 extension is selected:
Now, configure your NTP server/daemon to start on boot. Select your region and click on “Other Setting” button.
Now select “Synchronize with NTP Server” and “Run NTP as daemon”
Next, turn off your firewall, click on "Software", and select the "Meta package for patterns cloud_admin".
After the system installation is finished, boot into it and download the SUSE OpenStack Cloud 7 ISO image and the SLES 12 SP2 ISO image.
Next, mount the SLES 12 SP2 image and copy its contents into the install directory with the commands below. This step creates a directory tree for the image that will later be used when spawning new nodes through the PXE boot process (discussed later on).
mount SLE-12-SP2-Server-DVD-x86_64-GM-DVD1.iso /mnt
rsync -avP /mnt/ /srv/tftpboot/suse-12.2/x86_64/install/
umount /mnt
Now, do the same procedure but this time with the Cloud image:
mount SUSE-OPENSTACK-CLOUD-7-x86_64-GM-DVD1.iso /mnt
rsync -avP /mnt/ /srv/tftpboot/suse-12.2/x86_64/repos/Cloud/
umount /mnt
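Before moving on, it is worth a quick check that both directory trees were actually populated:
ls /srv/tftpboot/suse-12.2/x86_64/install/
ls /srv/tftpboot/suse-12.2/x86_64/repos/Cloud/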
Now, start the crowbar-init service. This will automate the crowbar installation for us:
systemctl start crowbar-init
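Before continuing, you can confirm that the service started correctly:
systemctl status crowbar-init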
Let's then create the necessary database and configure the Apache service so that crowbar can be managed over the web interface:
crowbarctl database create
This might take some time. You can follow exactly what crowbar is doing in the background by tailing the log file:
tail -f /var/log/crowbar/crowbar_init.log
Once the process is finished, we can access the crowbar interface for the first time by opening a web browser at http://localhost. You should then see something like this:
Now, click on "Start Installation" so that crowbar deploys Chef, the automation tool used to install and modify OpenStack through the web interface. You can follow the progress of the installation with the command:
tail -f /var/log/crowbar/install.log
In the middle of the installation, you will be prompted for a username and password, as you can see below. The default username and password are both "crowbar" (without quotes). Hit the "ok" button to continue:
If the setup finishes with no errors, we have successfully set up the admin node. To make sure everything went well, you should see the following messages in /var/log/crowbar/install.log:
Admin node deployed.
You can now visit the Crowbar web UI at:
http://192.168.124.10/
You should also now be able to PXE-boot a client. Please refer
to the documentation for the next steps.
Note that to run the crowbar CLI tool, you will need to log out
and log back in again for the correct environment variables to
be set up.
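After logging out and back in as suggested above, a quick way to confirm that the crowbar CLI works is to list the known nodes; at this point only the admin node should show up:
crowbarctl node list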
Your web interface should now look like this:
Controller and Compute node VM deployment
Now we need to create the controller node. To do that, go to virt-manager and select the option to create a new virtual machine. In the menu, select the options as follows:
Now we should be able to start the new VM via PXE, using the boot environment that crowbar generated previously. The image it boots is called "sleshammer" and is stored in the /srv/tftpboot/sleshammer directory. The sleshammer image is driven by Chef and will install and configure a SLES 12 SP2 system so the controller can be managed through crowbar.
The sleshammer image will try to find crowbar at 192.168.124.10, as we set previously. If it finds crowbar successfully, it registers the node with crowbar, which assigns it an IP address; the installation will not proceed any further until you "allocate" the controller node through crowbar. To do that, go to your browser and open the crowbar interface again at http://localhost. You will see a new node with a blinking yellow sign on it.
Click on that node, then click on "Edit". Change the "Public Name" and "Alias" fields to whatever you like (don't forget to append ".<domainname>" to the Public Name, where <domainname> is the domain you set previously during the admin installation). Now set the "Intended Role" to "Controller", then click on "Save" and "Allocate". Once you hit the "Allocate" button, the "ALLOCATED" parameter shown on the sleshammer console will change to "true" and the installation will proceed by rebooting the VM. If you get stuck on a "booting from hard disk" message after the reboot, make sure the VM is configured to boot from the network first and reboot it once again. At this point, the sign in the crowbar web interface will be blinking green:
If you don’t get any errors, the installation will proceed and you will have the controller node successfully installed.
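Allocation can also be done from the admin node with the crowbarctl CLI; assuming your crowbarctl version provides the "node allocate" subcommand, it would look roughly like this, where <node-alias> is a placeholder for the alias you assigned:
crowbarctl node allocate <node-alias>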
Now you can repeat the same procedure to create the compute node: follow the steps described above, but this time set the "Intended Role" to "Compute". You should then see three nodes, as illustrated below:
OpenStack deployment
Now we can start installing OpenStack through barclamps. In the crowbar web interface, under the "Barclamps" menu, select "OpenStack".
For details about which OpenStack modules to install and how to configure them, please follow the instructions in the official documentation linked in the reference materials below.
You can see what is going on in the background during the installation by issuing:
tail -f /var/log/chef-client/*
Reference materials
I would like to thank Rado Varga and Lumir Sliva for making this guide possible. You guys ROCK!
Official documentation: https://www.suse.com/documentation/suse-openstack-cloud-7/singlehtml/book_cloud_deploy/book_cloud_deploy.html
Comments
I failed to run "crowbarctl database create"; if you know why, please tell me, thanks.
It says:
RestClient.head "http://127.0.0.1/api/database/new", "Accept"=>"*/*", "Accept-Encoding"=>"gzip, deflate", "User-Agent"=>"rest-client/2.0.0 (linux-gnu x86_64) ruby/2.1.9p490"
# => 404 NotFound | text/html 0 bytes
RestClient.post "http://127.0.0.1/api/database/new", "username&password", "Accept"=>"application/vnd.crowbar.v2.0+json", "Accept-Encoding"=>"gzip, deflate", "Content-Length"=>"17", "Content-Type"=>"application/x-www-form-urlencoded", "User-Agent"=>"rest-client/2.0.0 (linux-gnu x86_64) ruby/2.1.9p490"
# => 422 UnprocessableEntity | application/json 911 bytes