OpenDaylight Carbon pipeline description

I am happy to announce that we have a new OpenDaylight release! To celebrate, and to help people understand it, I decided to create a series of two posts in which I will explain how the OpenFlow tables are organized and show examples of two services: L2 connectivity with a ping packet (ICMP) and Service Function Chaining. Let me start with a brief introduction to OpenDaylight.

OpenDaylight, widely known as ODL, is an open source SDN (Software Defined Networking) controller hosted by The Linux Foundation. The first release, Hydrogen, came out in February 2014, and the latest release, Carbon, has been out since May 2017. ODL supports a wide variety of southbound protocols which are capable of configuring network forwarders from a central point. The most popular and widely used of these is OpenFlow [1], which other communities such as OpenStack and OPNFV use as the default southbound protocol to configure switches like OVS. ODL is a very young solution which is constantly changing, and thus components might be very different from one release to the next; consequently, the way to configure it is very likely to change completely between releases. As a matter of fact, the previous release (Boron) and the new release (Carbon) have huge differences due to the introduction of a couple of new architectural components which take control of common services like interface handling, OpenFlow table management, etc. Because of those differences, I must warn the reader that this post describes the OpenFlow pipelines of ODL in the Carbon release, which might differ from other releases. By the way, for those who are not familiar with ODL jargon, a pipeline is the group of OpenFlow tables and entries that a network service (e.g. the L3 service, the SFC service…) specifies, and which are traversed by every packet that must be processed by that service.

Let’s start with a simple example: a ping between two VMs. First, the VMs will be connected to the same OVS switch. Then, we will change the environment a bit and connect the two VMs to two different OVS switches. Before going into the examples, though, it is worth showing some OVS commands which allow us to see the switch interfaces and dump the OpenFlow tables and flows that ODL programs.

The first important command I would like to introduce is:

ovs-vsctl show

The output will be something like this:

4a0aa2fb-08b5-4844-859e-4e8b47fe7c55
Manager "tcp:172.28.128.1:6640"
is_connected: true
  Bridge br-int
  Controller "tcp:172.28.128.1:6653"
  is_connected: true
  fail_mode: secure
    Port br-int
     Interface br-int
     type: internal
    Port "tund286da401a0"
     Interface "tund286da401a0"
     type: vxlan
     options: {key=flow, local_ip="172.19.0.2", remote_ip="172.19.0.4"}
    Port "tap711db4bc-56"
     Interface "tap711db4bc-56"
ovs_version: "2.6.1"

This command shows the different bridges and the ports attached to them. Note that OVS operations related to bridges, such as creating or deleting ports, are controlled by ODL via the OVSDB southbound protocol, not OpenFlow. The first line shows the OVSDB id of the switch, and the following two lines:

Manager "tcp:172.28.128.1:6640"
is_connected: true

specify that the OVS switch is connected to an OVSDB controller through tcp:172.28.128.1:6640.

After that, the bridge description starts. Similar to what was just explained, the bridge br-int is connected to an OpenFlow controller through tcp:172.28.128.1:6653. This bridge has three ports: br-int, tund286da401a0 and tap711db4bc-56. All OVS bridges create a port with the same name as the bridge (in this case br-int); tund286da401a0 is a vxlan port with VTEP 172.19.0.2 and tap711db4bc-56 is a regular tap port which connects to a VM.
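
By the way, if all you need are the manager and controller addresses, ovs-vsctl can also query them directly. A quick sketch, assuming the same br-int bridge as above:

ovs-vsctl get-manager
ovs-vsctl get-controller br-int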

To identify the interfaces in OpenFlow, we need to know the id that each of these ports has. To do so, we use another command:

ovs-ofctl -O OpenFlow13 show br-int

The output would be something like this:

OFPT_FEATURES_REPLY (OF1.3) (xid=0x2): dpid:000038a5d662d2be
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS GROUP_STATS QUEUE_STATS
OFPST_PORT_DESC reply (OF1.3) (xid=0x3):

1(tap711db4bc-56): addr:86:36:d6:2e:a3:ab
config: 0
state: 0
current: 10GB-FD COPPER
speed: 10000 Mbps now, 0 Mbps max

4(tund286da401a0): addr:52:bd:eb:80:93:78
config: 0
state: 0
speed: 0 Mbps now, 0 Mbps max

LOCAL(br-int): addr:38:a5:d6:62:d2:be
config: PORT_DOWN
state: LINK_DOWN
speed: 0 Mbps now, 0 Mbps max

OFPT_GET_CONFIG_REPLY (OF1.3) (xid=0x5): frags=normal miss_send_len=0

From the previous information, one important field is the dpid, which is the id that ODL will use to identify the bridge: 000038a5d662d2be. Then, we can observe that the port tap711db4bc-56 is identified in OpenFlow with number 1 and the vxlan port tund286da401a0 with number 4. The internal port is identified as LOCAL.
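
As a side note, both pieces of information can be obtained more directly. A small sketch: the first command asks OVSDB for the dpid of the bridge, and the second prints just the port descriptions without the feature summary:

ovs-vsctl get bridge br-int datapath-id
ovs-ofctl -O OpenFlow13 dump-ports-desc br-int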

Finally, the command to dump all the openflow tables is:

ovs-ofctl -O OpenFlow13 dump-flows br-int

The output of this command is normally very long, and we will see full dumps when studying the ping pipelines. For now, I will just show one line as an example:

cookie=0x8051388, duration=11721.238s, table=50, n_packets=3, n_bytes=230, priority=20,metadata=0x11388000000/0xfffffffff000000,dl_src=ca:13:6d:dc:cc:fc actions=goto_table:51
  • cookie: a parameter that identifies this specific flow
  • duration: the time that has passed since the flow was created
  • table: the table number where this flow is installed. There are normally several flows per table
  • n_packets, n_bytes: statistics about the packets which hit this flow
  • priority: the priority that this flow has in the table. Packets reaching this table are evaluated against its flows in order of priority, highest first

The rest of the parameters are either matches or actions; an OpenFlow flow is divided between those two parts. The match describes which fields a packet must carry in order to hit the flow. The actions express how the matching packets are processed. The example above matches on a metadata value and a source MAC address; the action is to move the packet to table 51. Note that matches normally specify fields in the network headers (e.g. TCP, IP…) but can also match on internal OpenFlow data structures which have no effect on the packet headers, like metadata or registers.
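
Since full dumps get very long, it is useful to know that dump-flows also accepts a match expression to narrow the output. For example, to show only table 50, or only the flow from the example above:

ovs-ofctl -O OpenFlow13 dump-flows br-int table=50
ovs-ofctl -O OpenFlow13 dump-flows br-int table=50,dl_src=ca:13:6d:dc:cc:fc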

Now that everyone is on the same page regarding the useful OVS commands, I can start describing the Carbon pipelines which are traversed by a ping packet. The following link shows a summary of the different pipelines in ODL Carbon and what tables each of them uses [2]. Be careful, because the diagram is a little bit outdated, but most of the information is correct. As you can observe, tables 17 and 220 are very important: the former decides which service the packet must consume, and the latter specifies what the packet should do after being processed by a service.

Let’s start with the pipeline description. Imagine having just one OVS switch controlled by ODL and three VMs connected to that OVS switch through tap interfaces (i.e. we have three tap interfaces connected to the bridge). The IPs of the VMs are 10.0.0.1, 10.0.0.2 and 10.0.0.3. These are the flows that appear when dumping the flows (I removed the cookie and duration parameters and all the flows which have nothing to do with what I want to explain):

table=0, n_packets=11, n_bytes=866, priority=4,in_port=1,vlan_tci=0x0000/0x1fff actions=write_metadata:0x10000000000/0xffffff0000000001,goto_table:17
table=0, n_packets=20, n_bytes=1608, priority=4,in_port=2,vlan_tci=0x0000/0x1fff actions=write_metadata:0x20000000000/0xffffff0000000001,goto_table:17
table=0, n_packets=12, n_bytes=964, priority=4,in_port=3,vlan_tci=0x0000/0x1fff actions=write_metadata:0x30000000000/0xffffff0000000001,goto_table:17
table=17, n_packets=11, n_bytes=866, priority=10,metadata=0x10000000000/0xffffff0000000000 actions=load:0x186a0->NXM_NX_REG3[0..24],write_metadata:0x9000010000030d40/0xfffffffffffffffe,goto_table:19
table=17, n_packets=9, n_bytes=706, priority=10,metadata=0x9000010000000000/0xffffff0000000000 actions=load:0x1->NXM_NX_REG1[0..19],load:0x1388->NXM_NX_REG7[0..15],write_metadata:0xa000011388000000/0xfffffffffffffffe,goto_table:48
table=17, n_packets=12, n_bytes=964, priority=10,metadata=0x9000030000000000/0xffffff0000000000 actions=load:0x3->NXM_NX_REG1[0..19],load:0x1388->NXM_NX_REG7[0..15],write_metadata:0xa000031388000000/0xfffffffffffffffe,goto_table:48
table=17, n_packets=12, n_bytes=964, priority=10,metadata=0x30000000000/0xffffff0000000000 actions=load:0x186a0->NXM_NX_REG3[0..24],write_metadata:0x9000030000030d40/0xfffffffffffffffe,goto_table:19
table=17, n_packets=19, n_bytes=1518, priority=10,metadata=0x20000000000/0xffffff0000000000 actions=load:0x186a0->NXM_NX_REG3[0..24],write_metadata:0x9000020000030d40/0xfffffffffffffffe,goto_table:19
table=17, n_packets=18, n_bytes=1440, priority=10,metadata=0x9000020000000000/0xffffff0000000000 actions=load:0x2->NXM_NX_REG1[0..19],load:0x1388->NXM_NX_REG7[0..15],write_metadata:0xa000021388000000/0xfffffffffffffffe,goto_table:48
table=19, n_packets=4, n_bytes=168, priority=100,arp,arp_op=1 actions=group:5001
table=19, n_packets=4, n_bytes=168, priority=100,arp,arp_op=2 actions=CONTROLLER:65535,resubmit(,17)
table=19, n_packets=34, n_bytes=3012, priority=0 actions=resubmit(,17)
table=48, n_packets=39, n_bytes=3110, priority=0 actions=resubmit(,49),resubmit(,50)
table=50, n_packets=9, n_bytes=706, priority=20,metadata=0x11388000000/0xfffffffff000000,dl_src=f2:f3:42:5b:23:d9 actions=goto_table:51
table=50, n_packets=18, n_bytes=1440, priority=20,metadata=0x21388000000/0xfffffffff000000,dl_src=4e:b1:2d:e6:a1:0e actions=goto_table:51
table=50, n_packets=12, n_bytes=964, priority=20,metadata=0x31388000000/0xfffffffff000000,dl_src=3a:41:ae:0e:c7:ea actions=goto_table:51
table=51, n_packets=5, n_bytes=434, priority=20,metadata=0x1388000000/0xffff000000,dl_dst=f2:f3:42:5b:23:d9 actions=load:0x100->NXM_NX_REG6[],resubmit(,220)
table=51, n_packets=13, n_bytes=1050, priority=20,metadata=0x1388000000/0xffff000000,dl_dst=4e:b1:2d:e6:a1:0e actions=load:0x200->NXM_NX_REG6[],resubmit(,220)
table=51, n_packets=6, n_bytes=532, priority=20,metadata=0x1388000000/0xffff000000,dl_dst=3a:41:ae:0e:c7:ea actions=load:0x300->NXM_NX_REG6[],resubmit(,220)
table=51, n_packets=15, n_bytes=1094, priority=0 actions=goto_table:52
table=52, n_packets=15, n_bytes=1094, priority=5,metadata=0x1388000000/0xffff000001 actions=write_actions(group:210000)
table=55, n_packets=3, n_bytes=230, priority=10,tun_id=0x1,metadata=0x10000000000/0xfffff0000000000 actions=drop
table=55, n_packets=7, n_bytes=474, priority=10,tun_id=0x2,metadata=0x20000000000/0xfffff0000000000 actions=drop
table=55, n_packets=5, n_bytes=390, priority=10,tun_id=0x3,metadata=0x30000000000/0xfffff0000000000 actions=drop
table=55, n_packets=12, n_bytes=864, priority=9,tun_id=0x1 actions=load:0x100->NXM_NX_REG6[],resubmit(,220)
table=55, n_packets=8, n_bytes=620, priority=9,tun_id=0x2 actions=load:0x200->NXM_NX_REG6[],resubmit(,220)
table=55, n_packets=10, n_bytes=704, priority=9,tun_id=0x3 actions=load:0x300->NXM_NX_REG6[],resubmit(,220)
table=81, n_packets=4, n_bytes=168, priority=0 actions=drop
table=220, n_packets=21, n_bytes=1670, priority=9,reg6=0x200 actions=output:2
table=220, n_packets=17, n_bytes=1298, priority=9,reg6=0x100 actions=output:1
table=220, n_packets=16, n_bytes=1236, priority=9,reg6=0x300 actions=output:3

Starting with table=0: this is always the first table evaluated. We see three flows which are very similar. The first difference is in the matching parameters, which have a different number after in_port=. This variable specifies the OpenFlow port through which the packet arrived at the bridge. Let’s imagine that the VM connected to in_port=2 pings the VM at in_port=3. That means the packet will hit the flow:

table=0, n_packets=20, n_bytes=1608, priority=4,in_port=2,vlan_tci=0x0000/0x1fff actions=write_metadata:0x20000000000/0xffffff0000000001,goto_table:17

Note that the actions set the metadata. The metadata is a 64-bit register which is used to identify the service registered on a specific interface, keep track of the ingress port, and carry other internal information for the service. The first 4 bits indicate the service id (e.g. ELAN, SFC, L3, ACL, etc.), the next 20 bits indicate the ingress logical port tag, and the remaining bits carry service-specific information. To know the mapping between values and services, the following file should be checked:

https://git.opendaylight.org/gerrit/gitweb?p=genius.git;a=blob;f=mdsalutil/mdsalutil-api/src/main/java/org/opendaylight/genius/mdsalutil/NwConstants.java;h=9b2a2a94283684c8e56a9a53df32b48d878dd83a;hb=refs/heads/stable/carbon

The metadata is written with value 0x20000000000 and mask 0xffffff0000000001. As you might have guessed, the mask specifies which bits of the metadata are written. The value does not have to spell out all 64 bits; missing digits are zeros on the left, i.e. 0x20000000000 ==> 0x0000020000000000. That means we are setting service_id=0 and the ingress logical port tag to 2. The action is to move to table 17, which is the dispatcher table.
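
To make the bit layout concrete, here is a minimal sketch which decodes the value above with shell arithmetic, assuming the layout just described (top 4 bits service id, next 20 bits ingress logical port tag):

metadata=0x0000020000000000
printf 'service_id=%x lport_tag=%x\n' $(( metadata >> 60 )) $(( (metadata >> 40) & 0xfffff ))
# prints: service_id=0 lport_tag=2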

The packets in table=17 are matched depending on the metadata. Our ping packet will hit the flow:

table=17, n_packets=19, n_bytes=1518, priority=10,metadata=0x20000000000/0xffffff0000000000 actions=load:0x186a0->NXM_NX_REG3[0..24],write_metadata:0x9000020000030d40/0xfffffffffffffffe,goto_table:19

Note that we are setting service_id=9, which maps to L3VPN (the L3 service). As there is no router defined in the deployment, table 19 is pretty empty, and thus the only check in table 19 is whether the packet is an ARP packet. As our packet is an ICMP packet, it matches the default flow in table 19 and comes back to table 17, now with a different metadata value:

table=19, n_packets=34, n_bytes=3012, priority=0 actions=resubmit(,17)

Based on the metadata, the flow hit in table=17 is this one:

table=17, n_packets=18, n_bytes=1440, priority=10,metadata=0x9000020000000000/0xffffff0000000000 actions=load:0x2->NXM_NX_REG1[0..19],load:0x1388->NXM_NX_REG7[0..15],write_metadata:0xa000021388000000/0xfffffffffffffffe,goto_table:48

You can observe that it loads values into register 1 and register 7. Moreover, it modifies the metadata, setting service_id=a, which maps to ELAN (the L2 service). Then it moves the packet to table=48, which is where the L2 pipeline starts.

Table=48 does not do much: it resubmits to tables 49 and 50. Table 49 provides an advanced way to learn source MACs which does not apply in this scenario, which is why it contains no flows here.

table=48, n_packets=39, n_bytes=3110, priority=0 actions=resubmit(,49),resubmit(,50)

Now we are in table=50, where we match on the metadata and the source MAC:

table=50, n_packets=18, n_bytes=1440, priority=20,metadata=0x21388000000/0xfffffffff000000,dl_src=4e:b1:2d:e6:a1:0e actions=goto_table:51

Then, in table 51 we match mainly on the destination MAC address, since the metadata match is exactly the same in every flow (its mask covers only the service-specific information):

table=51, n_packets=6, n_bytes=532, priority=20,metadata=0x1388000000/0xffff000000,dl_dst=3a:41:ae:0e:c7:ea actions=load:0x300->NXM_NX_REG6[],resubmit(,220)

REG6 is set here, and it is an interesting value because it points to the egress port which should be used when leaving the OVS. Then the packet is sent to table 220 which, as explained before, is the egress dispatcher table, i.e. the table which prepares the packet to egress the pipeline, which most of the time means egressing the switch too.

This example has a very simple table=220 and you can very quickly guess the flow it will hit. Exactly, it is this one:

table=220, n_packets=16, n_bytes=1236, priority=9,reg6=0x300 actions=output:3

Observe that it matches on the value of reg6 which was set in the previous table. Then it egresses on OpenFlow port 3, which is where our destination VM sits. Now that we have learned the basics, we can complicate this scenario a bit.
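
Before moving on, note that you do not have to send real traffic to verify this path: ovs-appctl ofproto/trace replays a synthetic packet through the tables and prints every flow it hits. A quick sketch, reusing the source and destination MACs from the flows above:

ovs-appctl ofproto/trace br-int in_port=2,icmp,dl_src=4e:b1:2d:e6:a1:0e,dl_dst=3a:41:ae:0e:c7:ea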

VMs on different computes

Imagine a topology with three VMs and three computes. Each VM sits on a different compute and all computes are connected through vxlan tunnels. When exploring the flows of this topology on your own, don’t get scared by the number of flows. In reality, the interesting flows are very similar to what we just saw, with the difference that the packet must be sent through the vxlan tunnel. That is why I will describe it in less detail than before. Let’s see the flows of the compute where the client sits. Note that I am only showing the interesting flows, again with the cookie and duration removed:

OFPST_FLOW reply (OF1.3) (xid=0x2):
table=0, n_packets=0, n_bytes=0, priority=5,tun_src=172.19.0.2,in_port=1 actions=write_metadata:0x20000000001/0xfffff0000000001,goto_table:36
table=0, n_packets=282, n_bytes=20551, priority=5,in_port=3 actions=write_metadata:0x40000000001/0xfffff0000000001,goto_table:36
table=0, n_packets=299, n_bytes=22345, priority=5,in_port=4 actions=write_metadata:0xc0000000001/0xfffff0000000001,goto_table:36
table=0, n_packets=29, n_bytes=2610, priority=4,in_port=2,vlan_tci=0x0000/0x1fff actions=write_metadata:0x30000000000/0xffffff0000000001,goto_table:17
table=17, n_packets=28, n_bytes=2532, priority=10,metadata=0x9000030000000000/0xffffff0000000000 actions=load:0x3->NXM_NX_REG1[0..19],load:0x1388->NXM_NX_REG7[0..15],write_metadata:0xa000031388000000/0xfffffffffffffffe,goto_table:48
table=17, n_packets=29, n_bytes=2610, priority=10,metadata=0x30000000000/0xffffff0000000000 actions=load:0x186a0->NXM_NX_REG3[0..24],write_metadata:0x9000030000030d40/0xfffffffffffffffe,goto_table:19
table=17, n_packets=0, n_bytes=0, priority=0,metadata=0x8000000000000000/0xf000000000000000 actions=write_metadata:0x9000000000000000/0xf000000000000000,goto_table:80
table=18, n_packets=0, n_bytes=0, priority=0 actions=goto_table:38
table=19, n_packets=1, n_bytes=42, priority=100,arp,arp_op=2 actions=CONTROLLER:65535,resubmit(,17)
table=19, n_packets=1, n_bytes=42, priority=100,arp,arp_op=1 actions=group:5001
table=19, n_packets=27, n_bytes=2526, priority=0 actions=resubmit(,17)
table=20, n_packets=0, n_bytes=0, priority=0 actions=goto_table:80
table=21, n_packets=0, n_bytes=0, priority=42,ip,metadata=0x30d40/0xfffffe,nw_dst=10.0.0.2 actions=group:150001
table=21, n_packets=0, n_bytes=0, priority=42,ip,metadata=0x30d40/0xfffffe,nw_dst=10.0.0.1 actions=set_field:0x1->tun_id,set_field:66:f2:84:79:86:c3->eth_dst,load:0x400->NXM_NX_REG6[],resubmit(,220)
table=21, n_packets=0, n_bytes=0, priority=42,ip,metadata=0x30d40/0xfffffe,nw_dst=10.0.0.3 actions=set_field:0x1->tun_id,set_field:9e:de:65:1a:19:f5->eth_dst,load:0xc00->NXM_NX_REG6[],resubmit(,220)
table=22, n_packets=0, n_bytes=0, priority=0 actions=CONTROLLER:65535
table=23, n_packets=0, n_bytes=0, priority=0 actions=resubmit(,17)
table=36, n_packets=29, n_bytes=2602, priority=5,tun_id=0x1 actions=write_metadata:0x1388000000/0xfffffffff000000,goto_table:51
table=38, n_packets=0, n_bytes=0, priority=5,tun_id=0x1 actions=write_metadata:0x1388000000/0xfffffffff000000,goto_table:51
table=45, n_packets=0, n_bytes=0, priority=0 actions=resubmit(,17)
table=48, n_packets=28, n_bytes=2532, priority=0 actions=resubmit(,49),resubmit(,50)
table=50, n_packets=28, n_bytes=2532, priority=20,metadata=0x31388000000/0xfffffffff000000,dl_src=52:ee:a6:d2:b1:a9 actions=goto_table:51
table=50, n_packets=0, n_bytes=0, priority=10,reg4=0x1 actions=goto_table:51
table=50, n_packets=0, n_bytes=0, priority=0 actions=CONTROLLER:65535,learn(table=49,hard_timeout=10,priority=0,cookie=0x8600000,NXM_OF_ETH_SRC[],NXM_NX_REG1[0..19],load:0x1->NXM_NX_REG4[0..7]),goto_table:51
table=51, n_packets=23, n_bytes=2142, priority=20,metadata=0x1388000000/0xffff000000,dl_dst=52:ee:a6:d2:b1:a9 actions=load:0x300->NXM_NX_REG6[],resubmit(,220)
table=51, n_packets=0, n_bytes=0, priority=20,metadata=0x1388000000/0xffff000000,dl_dst=66:f2:84:79:86:c3 actions=set_field:0x1->tun_id,load:0x400->NXM_NX_REG6[],resubmit(,220)
table=51, n_packets=22, n_bytes=2100, priority=20,metadata=0x1388000000/0xffff000000,dl_dst=9e:de:65:1a:19:f5 actions=set_field:0x1->tun_id,load:0xc00->NXM_NX_REG6[],resubmit(,220)
table=51, n_packets=0, n_bytes=0, priority=15,dl_dst=01:80:c2:00:00:00/ff:ff:ff:ff:ff:f0 actions=drop
table=51, n_packets=12, n_bytes=892, priority=0 actions=goto_table:52
table=52, n_packets=6, n_bytes=432, priority=5,metadata=0x1388000000/0xffff000001 actions=write_actions(group:210000)
table=52, n_packets=6, n_bytes=460, priority=5,metadata=0x1388000001/0xffff000001 actions=write_actions(group:209999)
table=55, n_packets=6, n_bytes=432, priority=10,tun_id=0x3,metadata=0x30000000000/0xfffff0000000000 actions=drop
table=55, n_packets=6, n_bytes=460, priority=9,tun_id=0x3 actions=load:0x300->NXM_NX_REG6[],resubmit(,220)
table=60, n_packets=0, n_bytes=0, priority=0 actions=resubmit(,17)
table=80, n_packets=0, n_bytes=0, priority=0 actions=resubmit(,17)
table=81, n_packets=0, n_bytes=0, priority=100,arp,metadata=0x9000030000030d40/0xffffff0000fffffe,arp_tpa=10.0.0.254,arp_op=1 actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],set_field:52:54:05:49:a8:05->eth_src,load:0x2->NXM_OF_ARP_OP[],move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],load:0x52540549a805->NXM_NX_ARP_SHA[],load:0xa0000fe->NXM_OF_ARP_SPA[],load:0->NXM_OF_IN_PORT[],load:0x300->NXM_NX_REG6[],resubmit(,220)
table=81, n_packets=1, n_bytes=42, priority=0 actions=drop
table=90, n_packets=0, n_bytes=0, priority=0 actions=resubmit(,17)
table=211, n_packets=0, n_bytes=0, priority=0 actions=drop
table=212, n_packets=0, n_bytes=0, priority=0 actions=goto_table:213
table=213, n_packets=0, n_bytes=0, priority=62020,ct_state=-new+est-rel-inv+trk actions=resubmit(,17)
table=213, n_packets=0, n_bytes=0, priority=62020,ct_state=-new-est+rel-inv+trk actions=resubmit(,17)
table=213, n_packets=0, n_bytes=0, priority=0 actions=drop
table=220, n_packets=29, n_bytes=2602, priority=9,reg6=0x300 actions=output:2
table=220, n_packets=0, n_bytes=0, priority=9,reg6=0x200 actions=load:0xac130002->NXM_NX_TUN_IPV4_DST[],output:1
table=220, n_packets=6, n_bytes=432, priority=9,reg6=0x90000400 actions=output:3
table=220, n_packets=6, n_bytes=432, priority=6,reg6=0x400 actions=load:0x90000400->NXM_NX_REG6[],write_metadata:0/0xfffffffffe,goto_table:230
table=220, n_packets=24, n_bytes=2212, priority=6,reg6=0xc00 actions=load:0x90000c00->NXM_NX_REG6[],write_metadata:0/0xfffffffffe,goto_table:230
table=220, n_packets=24, n_bytes=2212, priority=9,reg6=0x90000c00 actions=output:4
table=230, n_packets=30, n_bytes=2644, priority=0 actions=resubmit(,220)
table=231, n_packets=0, n_bytes=0, priority=0 actions=resubmit(,220)
table=241, n_packets=0, n_bytes=0, priority=0 actions=drop
table=242, n_packets=0, n_bytes=0, priority=0 actions=goto_table:243
table=243, n_packets=0, n_bytes=0, priority=62020,ct_state=-new+est-rel-inv+trk actions=resubmit(,220)
table=243, n_packets=0, n_bytes=0, priority=62020,ct_state=-new-est+rel-inv+trk actions=resubmit(,220)
table=243, n_packets=0, n_bytes=0, priority=0 actions=drop

The VM is connected through in_port=2, which is why the ping packets hit this flow on ingress:

table=0, n_packets=29, n_bytes=2610, priority=4,in_port=2,vlan_tci=0x0000/0x1fff actions=write_metadata:0x30000000000/0xffffff0000000001,goto_table:17

Notice that the logical port id in the metadata does not have to be exactly the in_port number. After table=0, as explained in the previous example, the packet hits table=17, then table=19, and comes back to table=17:

table=17, n_packets=29, n_bytes=2610, priority=10,metadata=0x30000000000/0xffffff0000000000 actions=load:0x186a0->NXM_NX_REG3[0..24],write_metadata:0x9000030000030d40/0xfffffffffffffffe,goto_table:19

table=19, n_packets=27, n_bytes=2526, priority=0 actions=resubmit(,17)

table=17, n_packets=28, n_bytes=2532, priority=10,metadata=0x9000030000000000/0xffffff0000000000 actions=load:0x3->NXM_NX_REG1[0..19],load:0x1388->NXM_NX_REG7[0..15],write_metadata:0xa000031388000000/0xfffffffffffffffe,goto_table:48

As before, it goes to table=48, table=50 and finally table=51 which sends it to table=220:

table=48, n_packets=28, n_bytes=2532, priority=0 actions=resubmit(,49),resubmit(,50)

table=50, n_packets=28, n_bytes=2532, priority=20,metadata=0x31388000000/0xfffffffff000000,dl_src=52:ee:a6:d2:b1:a9 actions=goto_table:51

table=51, n_packets=22, n_bytes=2100, priority=20,metadata=0x1388000000/0xffff000000,dl_dst=9e:de:65:1a:19:f5 actions=set_field:0x1->tun_id,load:0xc00->NXM_NX_REG6[],resubmit(,220)

Here comes the new part: the packet hits the following flow, matching on REG6:

table=220, n_packets=24, n_bytes=2212, priority=6,reg6=0xc00 actions=load:0x90000c00->NXM_NX_REG6[],write_metadata:0/0xfffffffffe,goto_table:230

Note that the value in REG6 has changed. This register stores the service id in its first 4 bits, and the next 20 bits contain the egress logical port tag. In this case the port tag is c and the service is 9, which refers to the L3 service. Table 230 is reserved for policies, but as we haven’t defined anything in that area, it just sends the packet back to table=220:

table=230, n_packets=30, n_bytes=2644, priority=0 actions=resubmit(,220)
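
Incidentally, the same decoding trick used for the metadata applies to REG6. A small sketch, assuming the layout just described (top 4 bits service id, next 20 bits egress logical port tag):

reg6=0x90000c00
printf 'service_id=%x lport_tag=%x\n' $(( reg6 >> 28 )) $(( (reg6 >> 8) & 0xfffff ))
# prints: service_id=9 lport_tag=c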

Finally, based on reg6, the packet matches the flow:

table=220, n_packets=24, n_bytes=2212, priority=9,reg6=0x90000c00 actions=output:4

Note that output 4 is the vxlan port that connects this compute with the one where the server is running. Let’s look at the flows on that server compute:

OFPST_FLOW reply (OF1.3) (xid=0x2):
table=0, n_packets=0, n_bytes=0, priority=5,tun_src=172.19.0.2,in_port=1 actions=write_metadata:0x90000000001/0xfffff0000000001,goto_table:36
table=0, n_packets=0, n_bytes=0, priority=5,tun_src=172.19.0.3,in_port=1 actions=write_metadata:0xa0000000001/0xfffff0000000001,goto_table:36
table=0, n_packets=1569, n_bytes=114129, priority=5,in_port=3 actions=write_metadata:0xe0000000001/0xfffff0000000001,goto_table:36
table=0, n_packets=1603, n_bytes=117471, priority=5,in_port=4 actions=write_metadata:0xf0000000001/0xfffff0000000001,goto_table:36
table=0, n_packets=30, n_bytes=2700, priority=4,in_port=2,vlan_tci=0x0000/0x1fff actions=write_metadata:0xb0000000000/0xffffff0000000001,goto_table:17
table=17, n_packets=28, n_bytes=2532, priority=10,metadata=0x90000b0000000000/0xffffff0000000000 actions=load:0xb->NXM_NX_REG1[0..19],load:0x1388->NXM_NX_REG7[0..15],write_metadata:0xa0000b1388000000/0xfffffffffffffffe,goto_table:48
table=17, n_packets=30, n_bytes=2700, priority=10,metadata=0xb0000000000/0xffffff0000000000 actions=load:0x186a0->NXM_NX_REG3[0..24],write_metadata:0x90000b0000030d40/0xfffffffffffffffe,goto_table:19
table=17, n_packets=0, n_bytes=0, priority=0,metadata=0x8000000000000000/0xf000000000000000 actions=write_metadata:0x9000000000000000/0xf000000000000000,goto_table:80
table=18, n_packets=0, n_bytes=0, priority=0 actions=goto_table:38
table=19, n_packets=1, n_bytes=42, priority=100,arp,arp_op=1 actions=group:5001
table=19, n_packets=1, n_bytes=42, priority=100,arp,arp_op=2 actions=CONTROLLER:65535,resubmit(,17)
table=19, n_packets=0, n_bytes=0, priority=20,metadata=0x30d40/0xfffffe,dl_dst=52:54:05:49:a8:05 actions=goto_table:21
table=19, n_packets=28, n_bytes=2616, priority=0 actions=resubmit(,17)
table=20, n_packets=0, n_bytes=0, priority=0 actions=goto_table:80
table=21, n_packets=0, n_bytes=0, priority=42,icmp,metadata=0x30d40/0xfffffe,nw_dst=10.0.0.254,icmp_type=8,icmp_code=0 actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],set_field:52:54:05:49:a8:05->eth_src,move:NXM_OF_IP_SRC[]->NXM_OF_IP_DST[],set_field:10.0.0.254->ip_src,set_field:0->icmp_type,load:0->NXM_OF_IN_PORT[],resubmit(,21)
table=21, n_packets=0, n_bytes=0, priority=42,ip,metadata=0x30d40/0xfffffe,nw_dst=10.0.0.3 actions=group:150002
table=21, n_packets=0, n_bytes=0, priority=42,ip,metadata=0x30d40/0xfffffe,nw_dst=10.0.0.2 actions=set_field:0x1->tun_id,set_field:52:ee:a6:d2:b1:a9->eth_dst,load:0xf00->NXM_NX_REG6[],resubmit(,220)
table=21, n_packets=0, n_bytes=0, priority=42,ip,metadata=0x30d40/0xfffffe,nw_dst=10.0.0.1 actions=set_field:0x1->tun_id,set_field:66:f2:84:79:86:c3->eth_dst,load:0xe00->NXM_NX_REG6[],resubmit(,220)
table=22, n_packets=0, n_bytes=0, priority=0 actions=CONTROLLER:65535
table=23, n_packets=0, n_bytes=0, priority=0 actions=resubmit(,17)
table=36, n_packets=24, n_bytes=2212, priority=5,tun_id=0x1 actions=write_metadata:0x1388000000/0xfffffffff000000,goto_table:51
table=38, n_packets=0, n_bytes=0, priority=5,tun_id=0x1 actions=write_metadata:0x1388000000/0xfffffffff000000,goto_table:51
table=45, n_packets=0, n_bytes=0, priority=0 actions=resubmit(,17)
table=48, n_packets=28, n_bytes=2532, priority=0 actions=resubmit(,49),resubmit(,50)
table=50, n_packets=28, n_bytes=2532, priority=20,metadata=0xb1388000000/0xfffffffff000000,dl_src=9e:de:65:1a:19:f5 actions=goto_table:51
table=50, n_packets=0, n_bytes=0, priority=10,reg4=0x1 actions=goto_table:51
table=50, n_packets=0, n_bytes=0, priority=0 actions=CONTROLLER:65535,learn(table=49,hard_timeout=10,priority=0,cookie=0x8600000,NXM_OF_ETH_SRC[],NXM_NX_REG1[0..19],load:0x1->NXM_NX_REG4[0..7]),goto_table:51
table=51, n_packets=22, n_bytes=2100, priority=20,metadata=0x1388000000/0xffff000000,dl_dst=9e:de:65:1a:19:f5 actions=load:0xb00->NXM_NX_REG6[],resubmit(,220)
table=51, n_packets=23, n_bytes=2142, priority=20,metadata=0x1388000000/0xffff000000,dl_dst=52:ee:a6:d2:b1:a9 actions=set_field:0x1->tun_id,load:0xf00->NXM_NX_REG6[],resubmit(,220)
table=51, n_packets=0, n_bytes=0, priority=20,metadata=0x1388000000/0xffff000000,dl_dst=66:f2:84:79:86:c3 actions=set_field:0x1->tun_id,load:0xe00->NXM_NX_REG6[],resubmit(,220)
table=51, n_packets=0, n_bytes=0, priority=15,dl_dst=01:80:c2:00:00:00/ff:ff:ff:ff:ff:f0 actions=drop
table=51, n_packets=7, n_bytes=502, priority=0 actions=goto_table:52
table=52, n_packets=5, n_bytes=390, priority=5,metadata=0x1388000000/0xffff000001 actions=write_actions(group:210000)
table=52, n_packets=2, n_bytes=112, priority=5,metadata=0x1388000001/0xffff000001 actions=write_actions(group:209999)
table=55, n_packets=5, n_bytes=390, priority=10,tun_id=0xb,metadata=0xb0000000000/0xfffff0000000000 actions=drop
table=55, n_packets=2, n_bytes=112, priority=9,tun_id=0xb actions=load:0xb00->NXM_NX_REG6[],resubmit(,220)
table=60, n_packets=0, n_bytes=0, priority=0 actions=resubmit(,17)
table=80, n_packets=0, n_bytes=0, priority=0 actions=resubmit(,17)
table=81, n_packets=0, n_bytes=0, priority=100,arp,metadata=0x90000b0000030d40/0xffffff0000fffffe,arp_tpa=10.0.0.254,arp_op=1 actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],set_field:52:54:05:49:a8:05->eth_src,load:0x2->NXM_OF_ARP_OP[],move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],load:0x52540549a805->NXM_NX_ARP_SHA[],load:0xa0000fe->NXM_OF_ARP_SPA[],load:0->NXM_OF_IN_PORT[],load:0xb00->NXM_NX_REG6[],resubmit(,220)
table=81, n_packets=1, n_bytes=42, priority=0 actions=drop
table=90, n_packets=0, n_bytes=0, priority=0 actions=resubmit(,17)
table=211, n_packets=0, n_bytes=0, priority=0 actions=drop
table=212, n_packets=0, n_bytes=0, priority=0 actions=goto_table:213
table=213, n_packets=0, n_bytes=0, priority=62020,ct_state=-new+est-rel-inv+trk actions=resubmit(,17)
table=213, n_packets=0, n_bytes=0, priority=62020,ct_state=-new-est+rel-inv+trk actions=resubmit(,17)
table=213, n_packets=0, n_bytes=0, priority=0 actions=drop
table=220, n_packets=24, n_bytes=2212, priority=9,reg6=0xb00 actions=output:2
table=220, n_packets=0, n_bytes=0, priority=9,reg6=0xa00 actions=load:0xac130003->NXM_NX_TUN_IPV4_DST[],output:1
table=220, n_packets=0, n_bytes=0, priority=9,reg6=0x900 actions=load:0xac130002->NXM_NX_TUN_IPV4_DST[],output:1
table=220, n_packets=28, n_bytes=2532, priority=9,reg6=0x90000f00 actions=output:4
table=220, n_packets=28, n_bytes=2532, priority=6,reg6=0xf00 actions=load:0x90000f00->NXM_NX_REG6[],write_metadata:0/0xfffffffffe,goto_table:230
table=220, n_packets=5, n_bytes=390, priority=9,reg6=0x90000e00 actions=output:3
table=220, n_packets=5, n_bytes=390, priority=6,reg6=0xe00 actions=load:0x90000e00->NXM_NX_REG6[],write_metadata:0/0xfffffffffe,goto_table:230
table=230, n_packets=33, n_bytes=2922, priority=0 actions=resubmit(,220)
table=231, n_packets=0, n_bytes=0, priority=0 actions=resubmit(,220)
table=241, n_packets=0, n_bytes=0, priority=0 actions=drop
table=242, n_packets=0, n_bytes=0, priority=0 actions=goto_table:243
table=243, n_packets=0, n_bytes=0, priority=62020,ct_state=-new+est-rel-inv+trk actions=resubmit(,220)
table=243, n_packets=0, n_bytes=0, priority=62020,ct_state=-new-est+rel-inv+trk actions=resubmit(,220)
table=243, n_packets=0, n_bytes=0, priority=0 actions=drop

The packet ingresses through in_port=4 and thus hits the flow:

table=0, n_packets=1603, n_bytes=117471, priority=5,in_port=4 actions=write_metadata:0xf0000000001/0xfffff0000000001,goto_table:36

From there it goes to table 36, where it checks that the VNI is 1, the default one. This is the same as saying that it should not have any specific VNI. If we wanted to use a VNI, it would have been set in table=220 of the client compute:

table=36, n_packets=24, n_bytes=2212, priority=5,tun_id=0x1 actions=write_metadata:0x1388000000/0xfffffffff000000,goto_table:51

From table=51 it goes to table 220 and egresses on port 2, which is the tap port where the server is connected. And with that, the example is finished!
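
As a final sanity check, the same ofproto/trace trick from the first example works on the server compute too; tun_id can be supplied in the flow so that the synthetic packet looks like it arrived through the vxlan port (MACs taken from the flows above):

ovs-appctl ofproto/trace br-int in_port=4,tun_id=0x1,icmp,dl_src=52:ee:a6:d2:b1:a9,dl_dst=9e:de:65:1a:19:f5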

I hope you learned something, and if you have any questions I will be happy to help. You can find me as mbuil in the #opnfv-sfc channel on Freenode.

———————————————————————————–

[1] – https://www.opennetworking.org/images//openflow-switch-v1.5.1.pdf

[2] – http://docs.opendaylight.org/en/latest/submodules/genius/docs/pipeline.html
