Integrating SDN controllers in Openstack-ansible | SUSE Communities

Integrating SDN controllers in Openstack-ansible


OpenStack-Ansible (OSA) is an awesome project in OpenStack which, as the name suggests, deploys OpenStack automatically using Ansible. OSA provides the roles and playbooks required to get a working OpenStack environment, and most of the OpenStack components are supported. There is a lot we could explain about OSA, but this post will just illustrate how SDN controllers can be integrated with it. To do so, it dives a bit into the code, taking the SDN controller OpenDaylight (ODL) as an example. Warning! This is based on OSA Rocky code integrating with ODL Oxygen; things might change when moving to different versions of OSA or ODL. Besides, this was tried using openSUSE Leap, which is one of the supported distros in OpenStack-Ansible.

SDN controllers integrate into OpenStack as backends of Neutron [1], the component that takes care of networking in OpenStack. Fortunately for us, Neutron has a pluggable architecture, so connecting an SDN controller to it is pretty easy. Basically, we just need to complete four tasks:

1 – Download the SDN controller and configure it correctly

2 – Provision the controller and compute nodes with the required packages

3 – Set up the virtual switches appropriately so that the SDN controller manages them. Be aware that some SDN controllers do not use open source virtual switches such as OVS and require modifications in OpenStack Nova in order to connect the VM interfaces to them. This is not the case for ODL.

4 – Configure neutron properly in order to operate the SDN controller as its backend

Before starting to explain how we solve each step, let’s briefly describe what an Ansible role is so that everyone can follow.

At a high level, Ansible roles are a way to keep related content together, which helps to structure the code and makes it more reusable and understandable. All Ansible roles follow the same file structure and, when executed, they load certain variables, tasks, etc. which carry out some particular configuration. For example, the variables for the role are defined with a default value in either the vars or the defaults directory. The playbooks (which define the actions Ansible performs) are stored in the tasks directory. When the role is called, execution always starts with the playbook tasks/main.yml.
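For orientation, a stripped-down role layout looks like this (simplified; a real role such as os_neutron contains more files and directories):

```
os_neutron/
├── defaults/main.yml    # default values for the role's variables
├── vars/                # additional (e.g. distro-specific) variables
├── tasks/main.yml       # entry point executed when the role is invoked
└── templates/           # jinja2 templates used to generate config files
```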

Why do you need to know about Ansible roles? Because OSA defines a role for neutron with the name os_neutron [2] and, as explained, when it is invoked, its tasks/main.yml is executed [3]. This post will explain parts of that main.yml playbook in order to describe how the integration is done. SDN controllers normally also define a role to deploy and configure them using Ansible. For ODL, that role exists and is stored in the ODL repo [4]. Therefore, the first thing to do is to make sure that OSA downloads the ODL role and that os_neutron invokes it.

Downloading SDN

The ODL role takes care of all the tasks related to downloading ODL and starting it. To connect the role to OSA, that role must first be downloaded into our environment. OSA has a config file where all the required roles are listed [5]. We can see that ODL is there:

- name: opendaylight
  scm: git
  src: https://git.opendaylight.org/gerrit/integration/packaging/ansible-opendaylight
  version: master

OSA git clones all projects listed there and places them in a particular directory that OSA knows contains Ansible roles.

The next step is to make sure that the os_neutron role calls the ODL role. If you go back to tasks/main.yml [3], you will see a line which triggers the following playbook:

- include_tasks: dependent_neutron_roles.yml

That playbook [6] contains several os_neutron related roles which get executed depending on the environment, i.e. the value of particular user variables which will be explained later. At this point, it is just important to know the ‘neutron_plugin_type’ user variable, which specifies the neutron backend we will use. Its value follows a convention: ml2.X, where X is the name of the backend. In our case: ‘ml2.opendaylight’.

It can be observed that the ODL role is included in ‘dependent_neutron_roles.yml’ [6]:

- name: Include ODL role
  include_role:
    name: opendaylight
  vars:
    install_method: "{{ opendaylight_install_method }}"
    extra_features: "{{ opendaylight_extra_features }}"
    nb_rest_port: "{{ opendaylight_port | default('8180') }}"
  when:
    - neutron_plugin_type == "ml2.opendaylight"
    - "'opendaylight' in group_names"

If the conditions after the “when” clause are fulfilled, the ODL role will be called and ODL will be downloaded, configured and started.

Provisioning controller and computes with the required packages

Now that our SDN controller is running, it is time to prepare everything to successfully configure the connection with neutron. The specific actions that must be carried out for the integration of each SDN controller are defined by playbooks stored in the tasks/providers directory under the os_neutron role [7]. The name of those playbooks is important: they must be named X_config.yml, where X is the name of the SDN controller. The reason for this strict naming convention is how those playbooks are called. If we go back to tasks/main.yml [3], the task which triggers those playbooks is:

- include_tasks: "{{ item }}"
  with_first_found:
    - files:
        - "{{ neutron_plugin_type.split('.')[-1] }}_config.yml"
      skip: true
      paths:
        - "providers/"

Remember that the value of neutron_plugin_type for ODL was ml2.opendaylight, which means that the expression “{{ neutron_plugin_type.split(‘.’)[-1] }}_config.yml” evaluates to “opendaylight_config.yml” and thus the correct playbook gets executed [8]. If we analyze that playbook, we can see that it first installs the pip packages required for the integration, which are listed in the variable neutron_optional_opendaylight_pip_packages. The most important one is networking-odl, which contains all the logic to integrate neutron and ODL.
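The naming convention is easy to verify: the whole trick is a string split on the plugin type (a standalone sketch, not code from the role):

```python
# 'ml2.opendaylight' -> 'opendaylight_config.yml': os_neutron derives the
# provider playbook name from the last component of neutron_plugin_type.
neutron_plugin_type = "ml2.opendaylight"
playbook = neutron_plugin_type.split('.')[-1] + "_config.yml"
assert playbook == "opendaylight_config.yml"
```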

Set up the virtual switches

Continuing with the playbook “opendaylight_config.yml” [8], we can see that after the pip package installation there is an execution of the playbook called setup_ovs_opendaylight.yml [9]. That playbook, which is also under the providers directory, configures the OVS switches correctly and connects them to ODL.

Configure neutron to use ODL as backend

At this point, we have the SDN controller running and the virtual switches ready and connected to it. Moreover, all the required packages that support the neutron-ODL integration have been installed. It is time to start neutron, but it must be done with the correct configuration. As you might know, when neutron starts, it loads config files that define its configuration. Therefore, to set up ODL as the neutron backend, we need to generate the appropriate config files. In our case, the ones that need to be adapted are ml2_conf.ini and neutron.conf.

Ansible handles all configuration file generation through templates which contain jinja2 code. In other words, there is one template for ml2_conf.ini [10] and one for neutron.conf [11], and using jinja2 code it is possible to modify the variables and values which appear in those config files. Note that the templates append the suffix .j2 to the name of the config file, e.g. neutron.conf.j2.

If we go back to the tasks/main playbook [3], after opendaylight_config.yml a playbook called neutron_post_install.yml [12] is triggered. That is the one which contains the template processing part, in the task named “Copy common neutron config“. That task executes a non-built-in Ansible module named config_template which is developed inside OpenStack [13]; apart from doing the typical template processing, it allows overriding variables of a jinja document or even adding new sections to the final document if the format of the config file is .ini. How config_template is called in neutron_post_install.yml [12] is a bit cryptic because it uses a loop where the task:

- name: Copy common neutron config
  config_template:
    src: "{{ item.src }}"
    dest: "{{ item.dest }}"
    owner: "root"
    group: "{{ item.group | default(neutron_system_group_name) }}"
    mode: "0640"
    config_overrides: "{{ item.config_overrides }}"
    config_type: "{{ item.config_type }}"

is called for each item listed under the “with_items” keyword. Note that each item in the list is a different config file.

For this explanation, the important ones are neutron.conf.j2 and {{ neutron_plugins[neutron_plugin_type].plugin_ini }}.j2, which looks like a confusing name for a file, right? In reality it is not, because when working with Ansible, everything inside “{{ }}” must be considered a variable. To find the value behind that variable, the ‘neutron_plugins’ dictionary should first be understood. That dictionary is defined in the os_neutron role [14] and contains important information for each possible neutron backend.
If you remember, in our case ‘neutron_plugin_type = ml2.opendaylight’, which means we are looking for the dictionary item ‘neutron_plugins[ml2.opendaylight]’ and, specifically, for an attribute with the name ‘plugin_ini’. Checking ‘neutron_plugins’ again [14], it is easy to find that the value of that attribute is ‘plugins/ml2/ml2_conf.ini’, and if we add the .j2 which is outside the {{ }} symbols, we get ‘plugins/ml2/ml2_conf.ini.j2’.
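To make the lookup concrete, here is a small Python sketch of the relevant slice of that dictionary. The attribute values are the ones quoted in this post; the real dictionary in [14] has one entry per supported backend and more attributes:

```python
# Hypothetical excerpt mirroring the 'neutron_plugins' dict in os_neutron;
# only the ml2.opendaylight entry and three of its attributes are shown.
neutron_plugins = {
    "ml2.opendaylight": {
        "plugin_ini": "plugins/ml2/ml2_conf.ini",
        "drivers_type": "local,flat,vlan,gre,vxlan",
        "mechanisms": "opendaylight_v2",
    }
}

neutron_plugin_type = "ml2.opendaylight"
# Resolve '{{ neutron_plugins[neutron_plugin_type].plugin_ini }}.j2'
template = neutron_plugins[neutron_plugin_type]["plugin_ini"] + ".j2"
assert template == "plugins/ml2/ml2_conf.ini.j2"
```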

Let’s start with neutron.conf.j2 [11]. Here the only thing we will modify is:

service_plugins = {{ neutron_plugin_loaded_base | join(',') }}

To understand it, check the beginning of the file, where the following logic exists:

{% for plugin in neutron_plugin_base %}
{% if plugin != 'dns' %}
{% set _ = neutron_plugin_loaded_base.append(plugin) %}
{% endif %}
{% endfor %}

Which means that the variable “neutron_plugin_loaded_base” incorporates the plugins defined in the variable “neutron_plugin_base”. Up to this point, only one user variable had appeared, ‘neutron_plugin_type’, and now a new one gets introduced: ‘neutron_plugin_base’. This one is a list and declares the neutron plugins which we will use in our deployment. These plugins are not mandatory, and the basic integration of an SDN controller and neutron can be done without them. If nothing is specified here, all the advanced neutron services (e.g. L3 or SFC) will be implemented using the default neutron mechanisms. The ODL community recommends using ODL for L3 instead of relying on the neutron capabilities, which is why for our example we will set:

neutron_plugin_base:
  - odl-router_v2

where odl-router_v2 is a plugin that forces neutron to delegate all L3 actions to ODL.

Therefore, the resulting neutron.conf will have ‘service_plugins = odl-router_v2’.
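The jinja2 loop above is equivalent to this little Python filter (a sketch for illustration, not code from the role):

```python
# Equivalent of the neutron.conf.j2 loop: copy every plugin except 'dns'
# into neutron_plugin_loaded_base, then join the list for service_plugins.
neutron_plugin_base = ["odl-router_v2"]  # our user variable
neutron_plugin_loaded_base = [p for p in neutron_plugin_base if p != "dns"]
line = "service_plugins = " + ",".join(neutron_plugin_loaded_base)
assert line == "service_plugins = odl-router_v2"
```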

ml2_conf.ini [10] has three variables which we must adjust:

type_drivers = {{ neutron_plugins[neutron_plugin_type].drivers_type }}

which will take the value local,flat,vlan,gre,vxlan for opendaylight (remember [14])

mechanism_drivers = {{ neutron_ml2_mechanism_drivers }}

where neutron_ml2_mechanism_drivers takes the value of ‘neutron_plugins[neutron_plugin_type].mechanisms’ and thus becomes ‘opendaylight_v2’ (check [15] to see why).

The last variable is not part of ml2_conf.ini.j2 and gets its way into the config file through the override variable ‘neutron_opendaylight_conf_ini_overrides’ (similar to what we saw in neutron_post_install.yml [12]). As explained in the config_template module [13], the nested keys in the override variable are used as section headers. That variable is the third user variable we will use, and for ODL it is defined as:

neutron_opendaylight_conf_ini_overrides:
  ml2_odl:
    username: "admin"
    password: "admin"
    port_binding_controller: "pseudo-agentdb-binding"
    url: "http://{{ internal_lb_vip_address }}:8180/controller/nb/v2/neutron"

and consequently the resulting ml2_conf.ini has (in my case):

[ml2_odl]
username = admin
password = admin
port_binding_controller = pseudo-agentdb-binding
url = http://<internal_lb_vip_address>:8180/controller/nb/v2/neutron

This defines details about the REST API connection of neutron towards ODL.

Providing user variables to OSA

As mentioned during the explanation, three user variables were defined: the string ‘neutron_plugin_type’, the list ‘neutron_plugin_base’ and the dictionary ‘neutron_opendaylight_conf_ini_overrides’. Those variables must be provided to OSA, and the way to do it is very easy: OSA loads all variables listed in any yaml file whose name starts with user_ and which is placed in the directory /etc/openstack_deploy. For example, here is a user_* file defined by the OPNFV project to deploy a scenario that contains neutron and ODL, where the three variables can be seen [16]. Other user variables are explained in this official link from OpenStack [17].
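Putting the three variables together, a minimal user_* file could look like the following sketch (the file name is hypothetical, and the values simply repeat the ones used throughout this post; adapt them to your deployment):

```
# /etc/openstack_deploy/user_sdn.yml (hypothetical name)
neutron_plugin_type: ml2.opendaylight

neutron_plugin_base:
  - odl-router_v2

neutron_opendaylight_conf_ini_overrides:
  ml2_odl:
    username: "admin"
    password: "admin"
    port_binding_controller: "pseudo-agentdb-binding"
    url: "http://{{ internal_lb_vip_address }}:8180/controller/nb/v2/neutron"
```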

Conclusion and further work

Understanding how the OSA + SDN integration is done is interesting and might be useful. However, users are now capable of deploying OpenStack with an SDN controller using OSA without having to understand it or fight with the different configurations required to integrate them successfully. This is great news because both OpenStack and any SDN controller (e.g. ODL) are complicated pieces of software. If you are a user, you just need to make sure that the appropriate user variables are in place before triggering the deployment. If you still want to understand how things were done, read the post again (and ask questions!) and remember the steps: download the SDN controller and its required packages, provision the controller and computes accordingly, set up the virtual switches to be managed by the SDN controller and finally configure neutron to use the SDN controller as its backend.

Currently, there are three supported SDN controllers in OSA: Nuage, Dragonflow and ODL, and a fourth is on its way: Tungsten Fabric. We at SUSE support that work because we love OSA, and helping integrate SDN controllers into it is something which can benefit the ecosystem greatly!

If you have questions, I am mbuil in the IRC channel #openstack-ansible.
