
At OPNFV we are building a modern CI/CD platform which would allow us to test whether a patch in any OpenStack or Kubernetes component breaks an NFV use case. To deploy the system under test (OpenStack or Kubernetes), we first need a number of provisioned nodes onto which the software of those popular platforms can be deployed. Most products meet this requirement with great tools (e.g. our SUSE Cloud), but the vanilla upstream versions lack them.

There are many tools which solve this requirement, for example Cobbler. However, as we were very familiar with OpenStack, we decided to bet on a known OpenStack project: Bifrost. Moreover, we already knew a bit about Ironic, it had been recommended to us, and some of the core committers of the project were our friends, so it was an easy decision. Bifrost, according to its own definition, is a set of Ansible playbooks that automate the task of deploying a base image onto a set of known hardware using ironic. If we had to define it mathematically: Bifrost = Ansible + Ironic. It must be noted that Bifrost is a complex and versatile tool with multiple use cases, and this blog post will only explain how we are using it in OPNFV to prepare our CI/CD. Therefore, it is not intended to give a full description of Bifrost's capabilities; if you want to learn more, please check their documentation, which is great!

In a nutshell, Bifrost's functionality is divided into different Ansible roles. Each role has one specific task to do and the roles must be executed in the correct order. To simplify the description, I have categorized them into two lists: roles to prepare the scene and roles that do the job. Let’s start with the first ones.

Roles to prepare the scene

Generating the required VMs

Why are we talking about VMs? This was supposed to be about baremetal, right? There are two use cases we cover in OPNFV: virtual and baremetal deployments. We normally use virtual deployments to reduce the amount of hardware needed to run our tests, and these deployments take care of the first-level gates. If the tests pass, we move to baremetal for further testing. In any case, to isolate the jumphost functionality from the physical host, both deployment types use a VM, named the opnfv VM, which steers the whole deployment, as we will see in a bit.

To generate the VMs, or a single opnfv VM in the case of baremetal, we use the bifrost-create-vm-nodes role. This role is included in openstack/bifrost; however, we created a specific role based on Bifrost's and stored it in OPNFV, because the one in OpenStack is only used for virtual deployments. Baremetal is very important for NFV as, for example, some use cases require high performance which is only achieved by hardware-specific additions that accelerate packet processing. In order to test that, we need baremetal in OPNFV, and thus we created this role, basically adding a way to read our OPNFV standardized pdf and idf files and act upon the configuration that those files provide. If you are curious about pdf and idf, they are yaml files that describe the hardware and how it should be configured, e.g. the mapping between interfaces and networks, ip ranges, and the capabilities and characteristics of the hardware. These links provide examples of pdf and idf.

If we go a bit deeper and try to understand what both roles do (the one in openstack/bifrost and the one in OPNFV):

  1. Install the required packages to create VMs, such as libvirt, qemu or virtualbmc
  2. Create the required libvirt networks
  3. Download the image for the VMs
  4. Prepare the storage volumes
  5. Register the VMs based on an XML template
  6. Start the opnfv VM
  7. Add the rest of the VMs to the virtualbmc tool
  8. Dump a JSON file with information about the nodes for the next roles to consume

For those who are not familiar with virtualbmc: ironic has several drivers to communicate with the nodes, like ipmi, redfish or ilo. In OPNFV we use ipmi, which is probably the most popular driver and also the one we need for all the hardware we have in our labs. As we would like to use the same code for both deployment types, we use virtualbmc to emulate an ipmi server for each VM, so that we can control them through ipmi commands.
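
To give an idea of what this looks like, here is a minimal sketch of registering a VM with virtualbmc and then controlling it through standard ipmi commands; the domain name, port and credentials are examples, not the exact values our role uses.

# Register a libvirt domain called "compute00" with virtualbmc and expose it on port 6230
vbmc add compute00 --port 6230 --username admin --password password
vbmc start compute00

# From now on the VM can be controlled like any baremetal server
ipmitool -I lanplus -H 127.0.0.1 -p 6230 -U admin -P password power status
ipmitool -I lanplus -H 127.0.0.1 -p 6230 -U admin -P password power on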

Regarding the libvirt networks, we use two: one for admin and one for management. Admin takes care of the pxe traffic, and management is used by ansible to configure the nodes. Afterwards, in virtual deployments, the management interface will also carry the rest of the traffic, segregated using vlans. When doing baremetal deployments, this depends on the pdf and idf configuration.
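
If you want to double-check what was created on the host, standard libvirt commands are enough; the network name below is just an example, the actual names come from the role defaults and the idf.

# List the libvirt networks created for the deployment
virsh net-list --all

# Inspect the definition of one of them (the name "admin" is illustrative)
virsh net-dumpxml admin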


Installing needed software

Most of the roles in this category are rather simple and basically install the required software in the opnfv VM. The first role is bifrost-prep-for-install, which simply downloads bifrost. Then bifrost-keystone-install installs the keystone components required to run ironic. Why do we need keystone? Well, note that we will not install ironic as part of an OpenStack deployment but stand-alone. Even so, ironic still requires some parts of keystone to be up and running, and that is what this role provides. Additionally, to interact with those keystone parts, the role bifrost-keystone-client-config is executed. Finally, bifrost-ironic-install, as the name says, installs all the ironic components. Before continuing, let’s briefly describe ironic.
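
For reference, outside of the OPNFV tooling a stand-alone Bifrost installation looks roughly like this; treat it as a sketch and check the upstream documentation, since paths and entry points change between releases.

# Clone Bifrost and prepare the environment (sketch; script and playbook names may vary per release)
git clone https://opendev.org/openstack/bifrost
cd bifrost
./scripts/env-setup.sh

# Run the installation playbook, which pulls in the keystone and ironic roles
cd playbooks
ansible-playbook -vvvv -i inventory/target install.yaml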

Ironic is an OpenStack project which provisions bare metal machines. Normally, it is used inside an OpenStack deployment, with nova as the main API, and makes it easy to deploy physical nodes instead of VMs. In our case, we use it stand-alone, and the first thing it requires is a file where the details of the physical hosts are defined (e.g. how to interact with them). When using Ironic through Bifrost, this file describes the hosts in json or yaml and becomes what Ansible knows as an inventory. The inventory contains critical information for Bifrost and will be passed to the next roles to make everything work. It is the json file that was dumped in step 8 above, and here is an example:

{
  "node1": {
    "uuid": "a8cb6624-0d9f-c882-affc-046ebb96ec01",
    "host_groups": [
      "nova",
      "neutron"
    ],
    "driver_info": {
      "power": {
        "ipmi_target_channel": "0",
        "ipmi_username": "ADMIN",
        "ipmi_address": "192.168.122.1",
        "ipmi_target_address": "0",
        "ipmi_password": "undefined",
        "ipmi_bridging": "single"
      }
    },
    "nics": [
      {
        "mac": "00:01:02:03:04:05"
      },
      {
        "mac": "00:01:02:03:04:06"
      }
    ],
    "driver": "ipmi",
    "ipv4_address": "192.168.122.2",
    "properties": {
      "cpu_arch": "x86_64",
      "ram": "3072",
      "disk_size": "10",
      "cpus": "1"
    },
    "name": "node1"
  }
}


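Bifrost ships a dynamic inventory script that turns this file into an Ansible inventory. As a rough sketch (paths are relative to Bifrost's playbooks directory and may vary per version), it is consumed like this:

# Point Bifrost's dynamic inventory at the json (or yaml) file describing the nodes
export BIFROST_INVENTORY_SOURCE=/tmp/baremetal.json

# Verify that Ansible can see the nodes defined in the inventory
ansible-inventory -i inventory/bifrost_inventory.py --list
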
Let’s move on to the next roles to understand how Ironic uses that inventory.

Roles to do the job

Ironic enrollment and inspection

Bifrost will pass this inventory to ironic, which will use it to enroll the physical or virtual nodes via the role ironic-enroll-dynamic. The enrollment process registers the nodes in ironic's database and then checks that ironic has connectivity to them. When the check is successful, it sets their provisioning state to “manageable”. Once they have reached that state, the next step is the inspection (or introspection), which is the process of gathering hardware parameters through the power management credentials (e.g. IPMI, redfish or ilo). This is handled by the role ironic-inspect-node, which does the following:

  1. Configures a PXE boot server which will provide a small image to the nodes. Note that we passed the mac addresses of the nodes in the inventory
  2. Through IPMI, it powers on the nodes in PXE boot mode
  3. The nodes send DHCP requests and receive the image through PXE
  4. The image runs in RAM with the IPA (Ironic Python Agent), which provides the hardware information to the ironic server
  5. The ironic server validates the received data and moves the state of the nodes to “available”

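For reference, when running Bifrost stand-alone this phase is typically triggered with the enroll-dynamic playbook against the inventory we exported earlier; treat the following as a sketch, as playbook names and flags can vary between Bifrost versions.

# Enroll the nodes described in BIFROST_INVENTORY_SOURCE into ironic
# (the playbook wraps the ironic-enroll-dynamic role)
ansible-playbook -vvvv -i inventory/bifrost_inventory.py enroll-dynamic.yaml
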
In my lab, I have three nodes being provisioned by Bifrost. To check the process, you should understand how to use the command:

openstack baremetal --help

For example, during the inspection step, this is what “openstack baremetal node list” shows:

+--------------------------------------+--------------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name         | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+--------------+---------------+-------------+--------------------+-------------+
| e1369efa-5391-5035-8533-3a065c44a584 | controller00 | None          | None        | inspect wait       | False       |
| c1a56cb3-fcef-59d5-8105-7e6815154f70 | compute00    | None          | None        | inspect wait       | False       |
| 65daa0c6-dd30-5ae6-a246-92a786d774f3 | compute01    | None          | None        | inspect wait       | False       |
+--------------------------------------+--------------+---------------+-------------+--------------------+-------------+

And when the inspection has finished correctly:

+--------------------------------------+--------------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name         | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+--------------+---------------+-------------+--------------------+-------------+
| e1369efa-5391-5035-8533-3a065c44a584 | controller00 | None          | power off   | available          | False       |
| c1a56cb3-fcef-59d5-8105-7e6815154f70 | compute00    | None          | power off   | available          | False       |
| 65daa0c6-dd30-5ae6-a246-92a786d774f3 | compute01    | None          | power off   | available          | False       |
+--------------------------------------+--------------+---------------+-------------+--------------------+-------------+
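
Once a node reaches “available”, the hardware parameters gathered during the inspection (or supplied in the inventory) end up in the node's properties, which you can check per node; controller00 is just the node name in my lab.

# Show the hardware properties and driver settings recorded for one node
openstack baremetal node show controller00 --fields properties driver_info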

Creating the dib image, config drive and deploying

Once the nodes are in the available state, we can start the deployment with the role bifrost-deploy-nodes-dynamic. There are several ways to provision the desired image onto a node. In OPNFV we use what is called the direct method. This is the process:

  1. The image to boot the nodes is downloaded or generated and placed on ironic's HTTP server
  2. We request the IPA to flash the image onto a target device; IPA can pick up the image from the HTTP server
  3. If needed, IPA can also write the config drive
  4. Finally, we request IPA to power off the node and, through IPMI, the boot device is modified

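Putting this together, here is a rough stand-alone sketch of how an image could be built with diskimage-builder and pushed to the nodes via Bifrost's deploy playbook; the element list, output name and playbook invocation are indicative, and the extra variables for the image location are omitted.

# Build a bootable image with diskimage-builder (element names are examples)
disk-image-create opensuse vm -o deployment_image

# Deploy it onto the enrolled nodes through Bifrost's deploy playbook
ansible-playbook -vvvv -i inventory/bifrost_inventory.py deploy-dynamic.yaml
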
To generate the image in step 1, we can use bifrost-create-dib-image. This uses diskimage-builder to create a bootable disk image, for example openSUSE Leap 15. For step 3, there is also a bifrost role: bifrost-configdrives-dynamic. In my lab, this is what I see when IPA is flashing the image:

+--------------------------------------+--------------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name         | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+--------------+---------------+-------------+--------------------+-------------+
| e1369efa-5391-5035-8533-3a065c44a584 | controller00 | None          | power on    | deploying          | False       |
| c1a56cb3-fcef-59d5-8105-7e6815154f70 | compute00    | None          | power on    | deploying          | False       |
| 65daa0c6-dd30-5ae6-a246-92a786d774f3 | compute01    | None          | power on    | deploying          | False       |
+--------------------------------------+--------------+---------------+-------------+--------------------+-------------+

And when the process is finished:

+--------------------------------------+--------------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name         | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+--------------+---------------+-------------+--------------------+-------------+
| e1369efa-5391-5035-8533-3a065c44a584 | controller00 | None          | power on    | active             | False       |
| c1a56cb3-fcef-59d5-8105-7e6815154f70 | compute00    | None          | power on    | active             | False       |
| 65daa0c6-dd30-5ae6-a246-92a786d774f3 | compute01    | None          | power on    | active             | False       |
+--------------------------------------+--------------+---------------+-------------+--------------------+-------------+

With Bifrost's job done, the nodes are active and ready for the OpenStack or Kubernetes deployers to start doing their work as part of our CI/CD platform.
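
As a final sanity check (purely illustrative: the address is the ipv4_address from the inventory example and the login user depends on the image), you can simply ssh into one of the freshly deployed nodes:

# The deployed node should now be reachable over the management network
ssh root@192.168.122.2 hostname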

