How to combine OpenStack with Software Defined Networking

Software Defined Networking, or SDN, has been around for a while, and it is a very important piece of the architecture if you have advanced networking use cases which require complex network automation without increasing the cost dramatically. As you might know, OpenStack today provides basic network services, such as L2 connectivity, through its networking component “neutron”. However, if you have sophisticated L3 tunneling requirements, Service Function Chaining (SFC) requirements, or any other advanced network service requirements, neutron alone might not be enough. OpenDaylight (ODL) is the most popular and reliable SDN controller in the open source arena, and this post explains how to integrate ODL with OpenStack to run L3 and Service Function Chaining use cases.

Fortunately, integrating SDN controllers into OpenStack is easy because of the way OpenStack architects its components. These components are basically abstraction layers capable of configuring different modules to implement the requested services. The modules are hooked into most of the OpenStack components through plug-ins. For example, ODL includes a neutron plug-in which contains the logic to translate OpenStack calls into ODL-specific calls, so that the OpenStack admin or user does not need to know the details of the ODL API. These users and admins just need to know how to configure neutron so that it uses that plug-in in the right way, and that is exactly what this post is about.

The following explanation was tested using OpenStack Ocata and ODL Nitrogen. However, I don’t expect many changes when moving to OpenStack Pike.

There is already a useful guide which describes how to download the latest ODL and integrate it with OpenStack in order to run the basic L2 services from ODL; you can find that guide at this link. Instead of repeating its steps, from now on I will assume that you have read it, and I will expand on it so that you are able to run the ODL L3 or SFC services while still using the OpenStack commands. Before that, let’s go through a few necessary things which extend the clean-up section of the guide.

Clean up OpenStack and ODL

When changing the configuration of neutron or ODL, it is very important that you clean everything up following the “ensuring OpenStack network state is clean” section of the linked guide before restarting. That guide is extremely helpful, but it is missing a few things which are useful for the L3 and SFC services. First of all, after stopping the neutron-server, I recommend stopping the DHCP agent too:

systemctl stop neutron-dhcp-agent.service

Start it again before bringing the neutron-server back up.
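Putting the ordering together, the sequence looks roughly like this (a sketch assuming a systemd-based deployment; service names may differ on your distribution):

systemctl stop neutron-server.service
systemctl stop neutron-dhcp-agent.service
# clean up ODL and OVS as described in the guide and below
systemctl start neutron-dhcp-agent.service
systemctl start neutron-server.service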

Assuming you already have ODL running, if you change something in the neutron or ODL config, you will need to reset ODL, and unfortunately it is not as easy as running karaf-0.7.0/bin/restart. You should do the following:

1 – Stop the process: karaf-0.7.0/bin/stop

2 – Remove the ODL directories: snapshots, journal and instances

3 – Start the process with the clean flag: karaf-0.7.0/bin/start clean
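Put together, the reset looks like this (a sketch; karaf-0.7.0 is assumed to be the root of the ODL installation, as in the guide):

karaf-0.7.0/bin/stop
# remove the state left over from the previous run
rm -rf karaf-0.7.0/snapshots karaf-0.7.0/journal karaf-0.7.0/instances
karaf-0.7.0/bin/start clean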

You should do those steps after stopping the neutron server and all OVS switches. Never do them before, or the new ODL process will be contaminated by previous configurations (coming from either the neutron-server or OVS). Besides, before stopping the OVS switches and removing their databases (/etc/openvswitch/conf.db), I recommend executing:

ovs-vsctl show; ovs-vsctl list Open_vSwitch .

which shows the OVS configuration details. That way, when restarting OVS, it will be easy to reconfigure it correctly again. For example, you should save the local_ip parameter because it is needed when executing the command:

ovs-vsctl set Open_vSwitch . other_config:local_ip=
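For example, if the tunnel endpoint IP of the node were 192.0.2.10 (a hypothetical address; use the value you saved before the clean-up), the command would be:

ovs-vsctl set Open_vSwitch . other_config:local_ip=192.0.2.10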

Configure L3 ODL Services

The two most basic use cases for L3 are:

1 – A VM wants to access services which are outside of the L2 network (e.g. any service on the internet)

2 – A VM wants to offer a service to the outside world

OpenStack can implement those services through an internally developed agent called neutron-l3-agent but, as explained, it can also rely on the L3 services of other back-ends like ODL. From the OpenStack perspective, two objects have to be created to successfully run those two use cases:

  • Public or external network
  • Virtual router

A public or external network provides external connectivity, normally using the provider network. Remember that the provider network normally maps to a physical network within the datacenter through which VMs can access the internet.

Virtual routers implement the connection between different broadcast domains or L2 networks, e.g. a public network and a private network. As you might know, VMs are normally connected to private networks. These virtual routers use network namespaces to carry out their service.
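For illustration, on a node running neutron-l3-agent each virtual router shows up as a qrouter network namespace (the UUID below is hypothetical):

ip netns list
qrouter-9f2b1c4e-0d6a-4f2e-8c3a-1b2d3e4f5a6b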

ODL offers L3 services, and we can use them instead of the OpenStack neutron-l3-agent. Note that the ODL L3 service has more features than neutron-l3-agent and is continuously adding more (e.g. BGPVPN). Therefore, if possible, the ODL L3 service should be chosen instead of neutron-l3-agent.

The first thing to do is to stop ODL, the neutron services, OVS, etc. (as explained in the guide), and before starting the services again, we should disable the neutron-l3-agent:

systemctl stop neutron-l3-agent; systemctl disable neutron-l3-agent

Then, edit /etc/neutron/neutron.conf so that we point to the ODL L3 service plugin. The service_plugins variable should have odl-router_v2 instead of router:

service_plugins = odl-router_v2,….

Edit /etc/neutron/plugins/ml2/ml2_conf.ini to use the v2 opendaylight mechanism driver, set the correct bridge_mappings, etc. Your ml2_conf.ini should be similar to this one (assuming you are using a flat provider network):

[ml2_type_vxlan]
vni_ranges = 1:1000
vxlan_group = 239.1.1.1

[ml2_type_vlan]
network_vlan_ranges = vlan:1:1

[ml2_type_flat]
flat_networks = {{ provider_network_name }}

[securitygroup]
enable_ipset = True
enable_security_group = True

[ml2]
type_drivers = local,flat,vlan,gre,vxlan
tenant_network_types = vxlan,flat,vlan
mechanism_drivers = opendaylight_v2
bridge_mappings = {{ provider_network:interface }}
extension_drivers = port_security

[ml2_odl]
username = admin
url = http://{{odl_ip}}:8080/controller/nb/v2/neutron
password = admin
port_binding_controller = pseudo-agentdb-binding

Note that you should add the name of your provider network to the flat_networks variable, and set bridge_mappings to the mapping between your provider network and the interface which connects to that network. In my case, my provider network is called flat and the interface is eth12, so I have:

bridge_mappings = flat:eth12

Finally, remember to set the IP of OpenDaylight in the url variable.

If you look at the config, there is a port_binding_controller called pseudo-agentdb-binding. The pseudo-agentdb-binding is a new mechanism for communication between OpenStack and ODL which is recommended over “network topology”. The reasons for this are out of the scope of this post, and I recommend reading the networking-odl documentation at this link to learn about it. As a summary, neutron requests information from ODL about the nodes running OVS. To achieve that, when restarting things, a command is executed on each of those OVS nodes to send the information to ODL, as will be explained in a moment.

Now that neutron is well configured, everything can be started: ODL, OVS (setting the manager and the local_ip), dhcp, and neutron. Don’t rush into creating floating IPs and so on, because there are a couple of things to do before being able to use L3. First, we must tell OVS which interface is used to reach the provider network. We already did this in ml2_conf.ini when specifying the value of bridge_mappings, and we must repeat it here. This command should be executed on all nodes with OVS:

sudo ovs-vsctl set Open_vSwitch . other_config:provider_mappings=provider_network:interface

In my deployment, as explained, the name of my provider network is ‘flat’ and the interface on my nodes is always eth12, so I will execute on all the nodes with OVS:

sudo ovs-vsctl set Open_vSwitch . other_config:provider_mappings=flat:eth12

Second, we must push the node configuration to ODL so that neutron can fetch it using the pseudo-agentdb. The command to accomplish this is neutron-odl-ovs-hostconfig. We must again provide the mapping between provider network and interface:

neutron-odl-ovs-hostconfig --datapath_type=system --bridge_mappings=provider_network:interface
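Following my example, with the flat provider network and the eth12 interface, the call becomes:

neutron-odl-ovs-hostconfig --datapath_type=system --bridge_mappings=flat:eth12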

Normally, that should be all. However, it is recommended to run an arping on the node whose interface holds the gateway IP. ODL monitors who replies to the gateway ARP packets in order to learn the gateway’s MAC address and locate it:

arping -U gateway_ip -I interface

Note that the interface in the previous command refers to the one that connects to the provider network, in my case eth12.
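For instance, assuming a hypothetical gateway IP of 192.0.2.1 reachable through my eth12 interface:

arping -U 192.0.2.1 -I eth12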

That’s it! Remember that you should now create an external network and a subnet specifying your gateway IP.
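A minimal sketch of those objects, plus the virtual router, using the standard OpenStack CLI (the names, subnet range and gateway IP are hypothetical; the physical network name matches the flat mapping from above):

openstack network create --external --provider-network-type flat --provider-physical-network flat public
openstack subnet create --network public --subnet-range 192.0.2.0/24 --gateway 192.0.2.1 --no-dhcp public-subnet
openstack router create router1
openstack router set --external-gateway public router1
openstack router add subnet router1 private-subnet

The last command assumes an existing tenant subnet called private-subnet, so that the VMs connected to it can reach the outside world and be given floating IPs.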

Configure SFC ODL Services

Service Function Chaining is a useful network mechanism which can be used to apply different services to different traffic flows. If you want to know more about it, I recommend reading the post I wrote some months ago: link. Neutron has a subproject called networking-sfc which provides a common API to configure SFC services in OpenStack. The following configuration uses networking-sfc together with ODL, so you must have networking-sfc installed on the same node where the neutron server is installed. Read the instructions of the networking-sfc project for more information:

https://github.com/openstack/networking-sfc
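If it is not packaged for your distribution, a sketch of a source install matching the Ocata release used in this post could be (remember to sync the database tables afterwards):

git clone https://github.com/openstack/networking-sfc -b stable/ocata
cd networking-sfc && pip install .
neutron-db-manage --subproject networking-sfc upgrade head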

As in the L3 configuration section, the neutron config files will be modified, and thus the first thing to do is to stop ODL, the neutron services, OVS, etc. (as explained in the guide). Let’s start with the config changes that need to happen in ODL.

As explained in the linked integration guide, the configuration file “karaf-0.7.0/etc/org.apache.karaf.features.cfg” has odl-netvirt-openstack. We should replace it with odl-netvirt-sfc. Don’t worry! The SFC feature depends on the OpenStack feature, so all the important functionality to run L2 or L3 is still installed.
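The relevant property in that file is featuresBoot; after the change it should look something like this (the other features listed are just whatever your installation already boots, shown here as a plausible default):

featuresBoot = config,standard,region,package,kar,ssh,management,odl-netvirt-sfc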

Then, modify the file “karaf-0.7.0/etc/opendaylight/datastore/initial/config/genius-itm-config.xml”, so that it looks like this:

<itm-config xmlns="urn:opendaylight:genius:itm:config">
<def-tz-enabled>false</def-tz-enabled>
<def-tz-tunnel-type>vxlan</def-tz-tunnel-type>
<tunnel-aggregation>
<tunnel-type>vxlan</tunnel-type>
<enabled>false</enabled>
</tunnel-aggregation>
<default-tunnel-tos>0</default-tunnel-tos>
<gpe-extension-enabled>true</gpe-extension-enabled>
</itm-config>

and the “karaf-0.7.0/etc/opendaylight/datastore/initial/config/netvirt-elanmanager-config.xml”:

<elanmanager-config xmlns="urn:opendaylight:netvirt:elan:config">
<auto-create-bridge>true</auto-create-bridge>
<int-bridge-gen-mac>true</int-bridge-gen-mac>
<temp-smac-learn-timeout>10</temp-smac-learn-timeout>
<punt-lldp-to-controller>false</punt-lldp-to-controller>
<!--
<controller-max-backoff>5000</controller-max-backoff>
<controller-inactivity-probe>5000</controller-inactivity-probe>
-->
<auto-config-transport-zones>true</auto-config-transport-zones>
<use-of-tunnels>true</use-of-tunnels>
<openstack-vni-semantics-enforced>true</openstack-vni-semantics-enforced>
</elanmanager-config>

Note that if this is the first time you are running ODL, these files will not exist, but you can create them and they will be read by ODL when it starts. ODL is now ready, but before running it, we must edit the neutron config files to hook networking-sfc and ODL together properly.

The networking-sfc plug-ins must be added to the service_plugins variable in /etc/neutron/neutron.conf, as was done for L3 with odl-router_v2:

service_plugins = networking_sfc.services.flowclassifier.plugin.FlowClassifierPlugin,networking_sfc.services.sfc.plugin.SfcPlugin,…

Moreover, at the bottom of neutron.conf, the following must be inserted:

# ODL-SFC
[sfc]
drivers = odl

[flowclassifier]
drivers = odl

Moving on to /etc/neutron/plugins/ml2/ml2_conf.ini, it must look exactly as described in the L3 configuration. If you are not running L3 and SFC together, the bridge_mappings variable is not needed.

That’s it, you have neutron correctly configured too!

The last thing which must be done is providing NSH support to OVS. OVS 2.6 does not have NSH support, so OVS has to be built with a patch in order to get it. Doing so is quite easy and very well explained here:

https://github.com/yyang13/ovs_nsh_patches

NSH comes natively in OVS 2.8, but that version does not yet work with ODL Nitrogen, so we will need to wait for ODL Oxygen, the next version, coming out in February 2018.

Once OVS is built, you must replace the old OVS instances that lack NSH support, and then the SFC config is ready. The last step is starting all the stopped services again.
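Once everything is up again, chains are created through the networking-sfc API. As a rough sketch of a one-function chain (every name below is hypothetical; p1in and p1out are the neutron ports of the service VM, and src-port is the port of the traffic source):

neutron port-pair-create --ingress p1in --egress p1out pp1
neutron port-pair-group-create --port-pair pp1 ppg1
neutron flow-classifier-create --protocol tcp --destination-port 80:80 --logical-source-port src-port fc1
neutron port-chain-create --port-pair-group ppg1 --flow-classifier fc1 pc1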

If you feel this is too complicated, you can stop by the OPNFV community and use one of our installers to run the SFC or L3 use cases, or any other! This configuration is already automated there and runs daily in the OPNFV CI. For example, have a look at the xci tool, which deploys it automatically: http://docs.opnfv.org/en/latest/submodules/releng-xci/docs/xci-user-guide.html#xci-user-guide
