
OpenDaylight Carbon SFC pipeline description

By: Manuel Buil

July 5, 2017 6:26 am


This is the second entry in the ODL Carbon pipeline explanation. If you haven’t read ODL Carbon pipeline (basics) [1] yet, please do so, because it introduces important concepts which are not explained again in this one.

Apart from the L3 and L2 services, some of which were explored in the previous entry, ODL is capable of providing other network services like SFC or L3VPN. These services are a bit more advanced and their pipelines are more complicated, but not impossible to understand. The following description focuses on the SFC pipelines. SFC stands for Service Function Chaining, which is basically a way to provide a logical network that connects a list of services in a specific order. That logical network is called a chain, and there can be multiple chains in a deployment. To introduce packets into the chains, SFC uses a classifier, a mechanism that inspects the packet’s network fields (e.g. the TCP destination port) and, based on that, pushes the packet into one of the chains. For more information, read the IETF spec about SFC [2].
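To make the terminology a bit more concrete, here is a purely illustrative Python sketch (not ODL code, the names are invented) of the relationship between a classifier rule and a chain. The chain ID 0x3e and the match fields are the ones used in the example later in this post:

from dataclasses import dataclass

@dataclass
class Chain:
    spi: int       # Service Path Identifier, identifies the chain
    sfs: tuple     # ordered list of service functions, e.g. ("firewall", "dpi")

@dataclass
class ClassifierRule:
    match: dict    # packet fields to inspect, e.g. {"nw_src": "10.0.0.0/24", "tp_dst": 80}
    chain: Chain   # chain that matching packets are pushed into

rule = ClassifierRule({"nw_src": "10.0.0.0/24", "tp_dst": 80}, Chain(spi=0x3e, sfs=("sf1",)))
print(f"matching packets enter chain {rule.chain.spi:#x} and traverse {len(rule.chain.sfs)} service function(s)")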

There are many ways to implement the IETF architecture. ODL does it through the NSH protocol, which is not yet a standard but an IETF draft [3]. In a few words, NSH adds a new header which stores key information for SFC, such as the chain ID (SPI -> Service Path Identifier), the hop in the chain (SI -> Service Index) and metadata which allows the exchange of information among services. It is important to note that the NSH header is normally added after the vxlan protocol header, either with a dummy eth header in between or without that eth header. That is why, for SFC, we must use the vxlan-gpe protocol, which is an extension of vxlan. The current ODL Carbon implementation uses the following protocols:

                                   SF
                                   ^^
                                   ||
                               eth + nsh
                                   ||
SFF <== vxlan-gpe + eth + nsh ==> SFF <== vxlan-gpe + eth + nsh ==> SFF

As defined in the IETF spec for SFC, SFF stands for Service Function Forwarder and represents the switch. SF is the Service Function and represents the service attached to the chains.
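Before diving into the flows, it may help to see the NSH fields that the flows manipulate laid out in code. The following is a minimal Python sketch, not ODL code; the exact base-header bit layout changed between NSH draft revisions, so only the service path header (SPI + SI) and the four MD type 1 context headers (C1..C4) are packed here:

import struct

def pack_nsh_path_and_context(spi, si, c1=0, c2=0, c3=0, c4=0):
    # Service Path header: 24-bit Service Path Identifier + 8-bit Service Index
    sp_header = (spi << 8) | si
    return struct.pack("!IIIII", sp_header, c1, c2, c3, c4)

# Values used later in this post: chain 0x3e, initial index 255 and
# C1 temporarily set to 0xffffff as a "just classified" marker.
print(pack_nsh_path_and_context(spi=0x3e, si=255, c1=0xffffff).hex())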

After this introduction to SFC, it is time to check how the openflow rules apply. Imagine a topology with three VMs and three OVS switches, where each VM is connected to one of the switches. One VM will be our client, another VM will be our server and the third VM will be our SF. We will create one chain that includes that SF and one classification rule. The rule will classify all packets with source ip in the range of 10.0.0.0/24 and tcp destination port 80.

First, we will check the openflow tables in the OVS switch where the client is connected. Note that, as in the previous post, only the interesting flows are shown, with the cookie and duration fields stripped:

OFPST_FLOW reply (OF1.3) (xid=0x2):
table=0, n_packets=245, n_bytes=17909, priority=5,in_port=3 actions=write_metadata:0x40000000001/0xfffff0000000001,goto_table:36
table=0, n_packets=240, n_bytes=17475, priority=5,in_port=4 actions=write_metadata:0xd0000000001/0xfffff0000000001,goto_table:36
table=0, n_packets=15, n_bytes=858, priority=4,in_port=2,vlan_tci=0x0000/0x1fff actions=write_metadata:0x30000000000/0xffffff0000000001,goto_table:17
table=17, n_packets=12, n_bytes=608, priority=10,metadata=0x9000030000000000/0xffffff0000000000 actions=load:0x3->NXM_NX_REG1[0..19],load:0x1388->NXM_NX_REG7[0..15],write_metadata:0xa000031388000000/0xfffffffffffffffe,goto_table:48
table=17, n_packets=9, n_bytes=378, priority=10,metadata=0x8000030000000000/0xffffff0000000000 actions=load:0x186a0->NXM_NX_REG3[0..24],write_metadata:0x9000030000030d40/0xfffffffffffffffe,goto_table:19
table=17, n_packets=9, n_bytes=378, priority=10,metadata=0x30000000000/0xffffff0000000000 actions=write_metadata:0x8000030000000000/0xfffffffffffffffe,goto_table:100
table=19, n_packets=9, n_bytes=378, priority=100,arp,arp_op=1 actions=group:5001
table=19, n_packets=6, n_bytes=480, priority=0 actions=resubmit(,17)
table=36, n_packets=5, n_bytes=390, priority=5,tun_id=0x1 actions=write_metadata:0x1388000000/0xfffffffff000000,goto_table:51
table=48, n_packets=12, n_bytes=608, priority=0 actions=resubmit(,49),resubmit(,50)
table=50, n_packets=12, n_bytes=608, priority=20,metadata=0x31388000000/0xfffffffff000000,dl_src=02:08:1f:e8:58:5f actions=goto_table:51
table=51, n_packets=17, n_bytes=998, priority=0 actions=goto_table:52
table=52, n_packets=12, n_bytes=608, priority=5,metadata=0x1388000000/0xffff000001 actions=write_actions(group:210000)
table=52, n_packets=5, n_bytes=390, priority=5,metadata=0x1388000001/0xffff000001 actions=write_actions(group:209999)
table=55, n_packets=12, n_bytes=608, priority=10,tun_id=0x3,metadata=0x30000000000/0xfffff0000000000 actions=drop
table=55, n_packets=5, n_bytes=390, priority=9,tun_id=0x3 actions=load:0xe00->NXM_NX_REG6[],resubmit(,220)
table=81, n_packets=9, n_bytes=378, priority=0 actions=drop
table=100, n_packets=0, n_bytes=0, priority=510,tun_gpe_np=0x4 actions=resubmit(,17)
table=100, n_packets=0, n_bytes=0, priority=510,encap_eth_type=0x894f actions=resubmit(,17)
table=100, n_packets=9, n_bytes=378, priority=500 actions=goto_table:101
table=101, n_packets=0, n_bytes=0, priority=500,tcp,in_port=2,nw_src=10.0.0.0/24,tp_dst=80 actions=push_nsh,load:0x1->NXM_NX_NSH_MDTYPE[],load:0x3->NXM_NX_NSH_NP[],load:0x3e->NXM_NX_NSP[0..23],load:0xff->NXM_NX_NSI[],load:0xffffff->NXM_NX_NSH_C1[],load:0->NXM_NX_NSH_C2[],resubmit(,17)
table=101, n_packets=9, n_bytes=378, priority=10 actions=resubmit(,17)
table=220, n_packets=6, n_bytes=432, priority=9,reg6=0x90000600 actions=load:0xac130002->NXM_NX_TUN_IPV4_DST[],output:2
table=220, n_packets=5, n_bytes=406, priority=9,reg6=0x90000e00 actions=load:0xac130004->NXM_NX_TUN_IPV4_DST[],output:2
table=220, n_packets=0, n_bytes=0, priority=8,reg6=0x300 actions=load:0x90000300->NXM_NX_REG6[],load:0xac130002->NXM_NX_REG0[],write_metadata:0/0xfffffffffe,goto_table:221
table=220, n_packets=0, n_bytes=0, priority=9,reg6=0x90000300 actions=load:0xac130002->NXM_NX_TUN_IPV4_DST[],output:2
table=220, n_packets=1, n_bytes=42, priority=8,reg6=0x80000600 actions=load:0x90000600->NXM_NX_REG6[],load:0xac130002->NXM_NX_REG0[],write_metadata:0/0xfffffffffe,goto_table:221
table=220, n_packets=6, n_bytes=432, priority=6,reg6=0x600 actions=load:0x80000600->NXM_NX_REG6[],write_metadata:0/0xfffffffffe,goto_table:230
table=220, n_packets=0, n_bytes=0, priority=9,reg6=0x90000700 actions=load:0xac130004->NXM_NX_TUN_IPV4_DST[],output:2
table=220, n_packets=0, n_bytes=0, priority=8,reg6=0x700 actions=load:0x90000700->NXM_NX_REG6[],load:0xac130004->NXM_NX_REG0[],write_metadata:0/0xfffffffffe,goto_table:221
table=220, n_packets=35, n_bytes=3201, priority=6,reg6=0xe00 actions=load:0x80000e00->NXM_NX_REG6[],write_metadata:0/0xfffffffffe,goto_table:230
table=220, n_packets=34, n_bytes=3131, priority=8,reg6=0x80000e00 actions=load:0x90000e00->NXM_NX_REG6[],load:0xac130004->NXM_NX_REG0[],write_metadata:0/0xfffffffffe,goto_table:221
table=220, n_packets=35, n_bytes=3427, priority=8,reg6=0x200 actions=load:0x90000200->NXM_NX_REG6[],load:0xac130003->NXM_NX_REG0[],write_metadata:0/0xfffffffffe,goto_table:221
table=220, n_packets=35, n_bytes=3427, priority=9,reg6=0x90000200 actions=output:1
table=221, n_packets=30, n_bytes=2795, priority=260,nshc1=16777215 actions=load:0->NXM_NX_NSH_C1[],goto_table:222
table=221, n_packets=40, n_bytes=3805, priority=250 actions=resubmit(,220)
table=222, n_packets=30, n_bytes=2795, priority=260,nshc1=0,nshc2=0 actions=move:NXM_NX_REG0[]->NXM_NX_NSH_C1[],move:NXM_NX_TUN_ID[0..31]->NXM_NX_NSH_C2[],load:0->NXM_NX_TUN_ID[0..31],goto_table:223
table=222, n_packets=0, n_bytes=0, priority=250 actions=goto_table:223
table=223, n_packets=30, n_bytes=2795, priority=250,nsp=47 actions=load:0xac130002->NXM_NX_TUN_IPV4_DST[],output:2
table=230, n_packets=22, n_bytes=1056, priority=0 actions=resubmit(,220)

The client is connected through in_port=2 and thus the first flow it hits is:

table=0, n_packets=15, n_bytes=858, priority=4,in_port=2,vlan_tci=0x0000/0x1fff actions=write_metadata:0x30000000000/0xffffff0000000001,goto_table:17

Then, in table 17 (remember, this is the service dispatcher table), the metadata is checked and the packet is sent to table 100 which is where the ingress classifier pipeline starts:

table=17, n_packets=9, n_bytes=378, priority=10,metadata=0x30000000000/0xffffff0000000000 actions=write_metadata:0x8000030000000000/0xfffffffffffffffe,goto_table:100

Here comes the new stuff. The flows in table=100 make sure that the packets which are already classified don’t get classified again. If a packet is detected to be already classified, it is sent back to table=17. To do this detection, the flows check if the NSH header is present:

table=100, n_packets=0, n_bytes=0, priority=510,tun_gpe_np=0x4 actions=resubmit(,17)
table=100, n_packets=0, n_bytes=0, priority=510,encap_eth_type=0x894f actions=resubmit(,17)

This is done with two flows because the packet might be a vxlan-gpe+eth+nsh packet or an eth+nsh packet. In both cases, the flow checks the field announcing the next protocol, which points to NSH (tun_gpe_np=0x4 in the vxlan-gpe header, or ethertype 0x894f in the encap ethernet header). If the packet does not have an NSH header, it is considered unclassified and moves to the next table:

table=100, n_packets=9, n_bytes=378, priority=500 actions=goto_table:101
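In plain Python, the decision taken by table=100 looks roughly like this (a sketch with invented field names; 0x4 is the vxlan-gpe next-protocol value for NSH and 0x894f is the NSH ethertype):

def already_classified(tun_gpe_np=None, encap_eth_type=None):
    # either encapsulation announcing NSH means the packet was already classified
    return tun_gpe_np == 0x4 or encap_eth_type == 0x894f

print(already_classified(tun_gpe_np=0x4))          # vxlan-gpe+eth+nsh -> back to table 17
print(already_classified(encap_eth_type=0x894f))   # eth+nsh           -> back to table 17
print(already_classified())                        # no NSH            -> continue to table 101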

Table=101 is where the classification happens. The most important flow is the following one:

table=101, n_packets=0, n_bytes=0, priority=500,tcp,in_port=2,nw_src=10.0.0.0/24,tp_dst=80 actions=push_nsh,load:0x1->NXM_NX_NSH_MDTYPE[],load:0x3->NXM_NX_NSH_NP[],load:0x3e->NXM_NX_NSP[0..23],load:0xff->NXM_NX_NSI[],load:0xffffff->NXM_NX_NSH_C1[],load:0->NXM_NX_NSH_C2[],resubmit(,17)

In order to understand that flow, let me explain the different parameters of the classification flow (a small Python sketch of the same logic follows the list):

  • tcp,in_port=2,nw_src=10.0.0.0/24,tp_dst=80 : The classifier matches on four parameters. It will classify tcp packets coming from in_port=2 whose source ip is in the range 10.0.0.0/24 and whose destination tcp port is 80, i.e. HTTP.
  • push_nsh : If the packet matches the classification rule, an nsh header is inserted (actually it will be “eth + nsh” or “vxlan-gpe + eth + nsh”, depending on the egress port through which the packet leaves the switch)
  • load:0x1->NXM_NX_NSH_MDTYPE[] : Sets the NSH MDTYPE to 0x1 (this is a field in the NSH header)
  • load:0x3->NXM_NX_NSH_NP[] : Sets the NSH next protocol to 0x3 which refers to ethernet
  • load:0x3e->NXM_NX_NSP[0..23] : Sets the SPI to 0x3e. SPI is also called NSP and identifies the chain
  • load:0xff->NXM_NX_NSI[] : Sets the SI to 255. SI is also called NSI and specifies the hop in the chain
  • load:0xffffff->NXM_NX_NSH_C1[],load:0->NXM_NX_NSH_C2[] : Sets the first and second metadata fields to 0xffffff and 0. ODL Carbon uses these fields to store the needed information to forward the packet to the right place when it egresses the SFC pipelines. C1 stores the VTEP of the tunnel where the packet should go and C2 the VNI of that tunnel. At this point the 0xffffff value in C1 is only used to specify that the packet is classified. We will see it again in table=221.
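Putting the list together, the classification flow can be paraphrased as the following Python sketch (illustrative only, the function and field names are invented; the match values and NSH values are taken from the flow above):

from ipaddress import ip_address, ip_network

def classify(in_port, proto, src_ip, tcp_dst):
    if (in_port == 2 and proto == "tcp"
            and ip_address(src_ip) in ip_network("10.0.0.0/24")
            and tcp_dst == 80):
        # push_nsh plus the load actions of the flow
        return {
            "mdtype": 0x1,    # NSH MD type
            "np": 0x3,        # next protocol: ethernet
            "nsp": 0x3e,      # the chain (SPI)
            "nsi": 0xff,      # the hop in the chain (SI), starts at 255
            "c1": 0xffffff,   # "just classified" marker, rewritten later in table 222
            "c2": 0x0,
        }
    return None               # no match -> back to the dispatcher (table 17)

print(classify(2, "tcp", "10.0.0.7", 80))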

When the packet does not match the classification rule, it is just sent to the service dispatcher table to access the next service:

table=101, n_packets=9, n_bytes=378, priority=10 actions=resubmit(,17)

The packet will then follow the tables of the L2 pipeline, which were already explained in the ODL Carbon pipeline (basics) post: 17 –> 19 –> 17 –> 48 –> 51 –> 220

As explained in the previous post, table=220 is the egress dispatcher table. When the SFC service is added, it changes a bit, since it eventually moves the packet to table 221, which is the first table of the egress classifier pipeline.

Let’s analyze it. First, in table=220, the classified packet hits the flow:

table=220, n_packets=35, n_bytes=3201, priority=6,reg6=0xe00 actions=load:0x80000e00->NXM_NX_REG6[],write_metadata:0/0xfffffffffe,goto_table:230

That flow sends the packet to table 230, which returns it to 220:

table=230, n_packets=22, n_bytes=1056, priority=0 actions=resubmit(,220)

Then it hits the flow that brings the packet to the egress classifier pipeline (starting at table 221):

table=220, n_packets=34, n_bytes=3131, priority=8,reg6=0x80000e00 actions=load:0x90000e00->NXM_NX_REG6[],load:0xac130004->NXM_NX_REG0[],write_metadata:0/0xfffffffffe,goto_table:221

Keep in mind that we are storing an IP into REG0 (0xac130004) which points to the VTEP of the destination. This value will be copied into the NSH C1 field in table=222.
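The register value is just the IPv4 address written as a 32-bit integer; a quick Python check makes the addresses visible:

from ipaddress import ip_address

print(ip_address(0xac130004))   # 172.19.0.4 -> VTEP of the destination, stored in REG0
print(ip_address(0xac130002))   # 172.19.0.2 -> tunnel destination used later in table 223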

Table 221 is a very important table because it filters out the packets which have nothing to do with SFC. Due to the design of the ODL pipeline (in case you want to know more, search for the Netvirt or Genius projects in ODL), it is not possible to know at deployment time the egress port for SFC and thus, when programming the classification service, the egress part is bound to all the egress interfaces. That means that all packets which are about to leave OVS will traverse the egress classifier pipeline. Table=221 acts as a guard and only allows NSH packets with C1=0xffffff (remember from table=101?) to continue to table=222. In other words, only NSH packets which were just classified in this switch will continue.

table=221, n_packets=30, n_bytes=2795, priority=260,nshc1=16777215 actions=load:0->NXM_NX_NSH_C1[],goto_table:222
table=221, n_packets=40, n_bytes=3805, priority=250 actions=resubmit(,220)

Table=222 is the one responsible for setting the correct C1 and C2 values. As explained above, these values specify the VTEP and the VNI of the tunnel where the packet should go when it egresses the SFC chain. The reason for this is that, when classifying, SFC hijacks the packet and could return it to the L2 or L3 service pipeline in a different SFF. That different SFF might be instantiated in an OVS which does not have a route to reach the original destination. That is why we store the information about how to reach the server persistently in those fields.

table=222, n_packets=30, n_bytes=2795, priority=260,nshc1=0,nshc2=0 actions=move:NXM_NX_REG0[]->NXM_NX_NSH_C1[],move:NXM_NX_TUN_ID[0..31]->NXM_NX_NSH_C2[],load:0->NXM_NX_TUN_ID[0..31],goto_table:223
table=222, n_packets=0, n_bytes=0, priority=250 actions=goto_table:223

C1 gets the IP from REG0 and C2 gets the VNI of the tunnel. NXM_NX_TUN_ID is then set to the VNI of the vxlan-gpe header towards the next hop, which in this case is 0.
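In other words, table=222 performs the following shuffle (a Python paraphrase with invented names):

def table_222(reg0, tun_id):
    return {
        "nsh_c1": reg0,     # VTEP to use when the packet leaves the chain
        "nsh_c2": tun_id,   # VNI to use when the packet leaves the chain
        "tun_id": 0,        # VNI of the vxlan-gpe hop towards the next SFF
    }

print(table_222(reg0=0xac130004, tun_id=0x1388))   # example values only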

Finally, table=223 sets the VTEP for the next hop and egresses the packet on the vxlan-gpe port which in this case is number 2.

table=223, n_packets=30, n_bytes=2795, priority=250,nsp=47 actions=load:0xac130002->NXM_NX_TUN_IPV4_DST[],output:2

The packet will now enter the OVS where the SF is sitting. Here, we will access the SFF pipeline, which will send the packet to the SF. Then, as this SFF is the last one, it will decapsulate the packet and send it to the correct compute node using the values in C1 and C2. Let’s check the flows:

table=0, n_packets=0, n_bytes=0, priority=5,tun_src=172.19.0.3,in_port=2 actions=write_metadata:0x60000000001/0xfffff0000000001,goto_table:36
table=0, n_packets=12792, n_bytes=933933, priority=5,in_port=3 actions=write_metadata:0x50000000001/0xfffff0000000001,goto_table:36
table=0, n_packets=12801, n_bytes=930696, priority=5,in_port=4 actions=write_metadata:0xe0000000001/0xfffff0000000001,goto_table:36
table=0, n_packets=5, n_bytes=390, priority=4,in_port=1,vlan_tci=0x0000/0x1fff actions=write_metadata:0x10000000000/0xffffff0000000001,goto_table:17
table=17, n_packets=3, n_bytes=230, priority=10,metadata=0x9000010000000000/0xffffff0000000000 actions=load:0x1->NXM_NX_REG1[0..19],load:0x1388->NXM_NX_REG7[0..15],write_metadata:0xa000011388000000/0xfffffffffffffffe,goto_table:48
table=19, n_packets=3, n_bytes=230, priority=0 actions=resubmit(,17)
table=36, n_packets=8, n_bytes=620, priority=5,tun_id=0x1 actions=write_metadata:0x1388000000/0xfffffffff000000,goto_table:51
table=36, n_packets=3808, n_bytes=426496, priority=5,tun_id=0 actions=goto_table:83
table=48, n_packets=3, n_bytes=230, priority=0 actions=resubmit(,49),resubmit(,50)
table=50, n_packets=3, n_bytes=230, priority=20,metadata=0x11388000000/0xfffffffff000000,dl_src=a6:01:eb:90:02:d4 actions=goto_table:51
table=51, n_packets=11, n_bytes=850, priority=0 actions=goto_table:52
table=52, n_packets=3, n_bytes=230, priority=5,metadata=0x1388000000/0xffff000001 actions=write_actions(group:210000)
table=52, n_packets=8, n_bytes=620, priority=5,metadata=0x1388000001/0xffff000001 actions=write_actions(group:209999)
table=55, n_packets=3, n_bytes=230, priority=10,tun_id=0x1,metadata=0x10000000000/0xfffff0000000000 actions=drop
table=55, n_packets=8, n_bytes=620, priority=9,tun_id=0x1 actions=load:0x100->NXM_NX_REG6[],resubmit(,220)
table=83, n_packets=0, n_bytes=0, priority=250,nsp=62 actions=goto_table:86
table=83, n_packets=3802, n_bytes=425824, priority=5 actions=resubmit(,17)
table=86, n_packets=0, n_bytes=0, priority=550,nsi=255,nsp=62 actions=load:0xc68f3f28f0bb->NXM_NX_ENCAP_ETH_SRC[],load:0xa601eb9002d4->NXM_NX_ENCAP_ETH_DST[],goto_table:87
table=86, n_packets=0, n_bytes=0, priority=5 actions=goto_table:87
table=87, n_packets=0, n_bytes=0, priority=650,nsi=255,nsp=62 actions=load:0x100->NXM_NX_REG6[],resubmit(,220)
table=87, n_packets=7, n_bytes=871, priority=650,nsi=254,nsp=62 actions=move:NXM_NX_NSH_C1[]->NXM_NX_TUN_IPV4_DST[],move:NXM_NX_NSH_C2[]->NXM_NX_TUN_ID[0..31],pop_nsh,output:2
table=220, n_packets=0, n_bytes=0, priority=9,reg6=0x90000500 actions=output:3
table=220, n_packets=0, n_bytes=0, priority=9,reg6=0x90000e00 actions=output:4
table=220, n_packets=0, n_bytes=0, priority=9,reg6=0x90000100 actions=output:1
table=220, n_packets=0, n_bytes=0, priority=9,reg6=0x90000600 actions=load:0xac130003->NXM_NX_TUN_IPV4_DST[],output:2
table=220, n_packets=0, n_bytes=0, priority=6,reg6=0xe00 actions=load:0x80000e00->NXM_NX_REG6[],write_metadata:0/0xfffffffffe,goto_table:230
table=220, n_packets=0, n_bytes=0, priority=6,reg6=0x500 actions=load:0x80000500->NXM_NX_REG6[],write_metadata:0/0xfffffffffe,goto_table:230
table=221, n_packets=0, n_bytes=0, priority=260,nshc1=16777215 actions=load:0->NXM_NX_NSH_C1[],goto_table:222
table=221, n_packets=0, n_bytes=0, priority=250 actions=resubmit(,220)
table=222, n_packets=0, n_bytes=0, priority=260,nshc1=0,nshc2=0 actions=move:NXM_NX_REG0[]->NXM_NX_NSH_C1[],move:NXM_NX_TUN_ID[0..31]->NXM_NX_NSH_C2[],load:0->NXM_NX_TUN_ID[0..31],goto_table:223
table=222, n_packets=0, n_bytes=0, priority=250 actions=goto_table:223
table=223, n_packets=0, n_bytes=0, priority=260,nsp=62 actions=resubmit(,83)
table=230, n_packets=0, n_bytes=0, priority=0 actions=resubmit(,220)

The first flow hit is the one matching packets coming from in_port=2, which is the vxlan-gpe port.

table=0, n_packets=0, n_bytes=0, priority=5,tun_src=172.19.0.3,in_port=2 actions=write_metadata:0x60000000001/0xfffff0000000001,goto_table:36

Notice that the packet is not going to table=17 anymore. It goes to table=36, which takes care of internal tunnel processing. Table=36 differentiates among services using the vni. This will probably change in future releases because it is not very reliable. Just in case you want to learn more, the mechanism that makes the packet move from table=0 to table=36 and differentiates between services using the vni is called “terminating service action”. For this release, tun_id=0 represents SFC (remember it was set in table=222 of the previous OVS). Here are the flows:

table=36, n_packets=8, n_bytes=620, priority=5,tun_id=0x1 actions=write_metadata:0x1388000000/0xfffffffff000000,goto_table:51
table=36, n_packets=3802, n_bytes=425824, priority=5,tun_id=0 actions=goto_table:83
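Restated as a lookup (derived only from the two table=36 flows above), the terminating service action selects the next pipeline based on the VNI:

NEXT_TABLE_BY_VNI = {
    0x1: 51,   # regular L2 traffic for this network
    0x0: 83,   # SFC traffic, the VNI was zeroed in table=222 of the previous OVS
}

print(NEXT_TABLE_BY_VNI[0])   # -> 83, the start of the SFF pipeline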

The internal tunnel pipeline detects that the packet belongs to SFC and thus it sends it to table=83 which is the starting table for the SFF pipeline.

table=83, n_packets=0, n_bytes=0, priority=250,nsp=62 actions=goto_table:86
table=83, n_packets=3802, n_bytes=425824, priority=5 actions=resubmit(,17)

When the packets reach table=83, they should match on the nsp. However, an extra flow was added which sends the packet back to the dispatcher table in case a lost (non-NSH) packet arrives at this table for unknown reasons. Therefore, the interesting packets move to table=86, which contains the following flows:

table=86, n_packets=0, n_bytes=0, priority=550,nsi=255,nsp=62 actions=load:0xc68f3f28f0bb->NXM_NX_ENCAP_ETH_SRC[],load:0xa601eb9002d4->NXM_NX_ENCAP_ETH_DST[],goto_table:87
table=86, n_packets=0, n_bytes=0, priority=5 actions=goto_table:87

Table=86 is responsible for setting the destination to the next SF. Matching on nsi tells us at which hop of the chain we are. In this case there is only nsi=255, but if there were another SF connected to this switch, we would have another flow matching on nsi=254. If you haven’t read [3], note that the nsi is decremented by the SF. As we can observe, the flow sets the source mac address (which belongs to the OVS side) and the destination mac address (which is the SF mac address). WARNING! This modifies the ethernet header before NSH, not the ethernet header after NSH; the latter belongs to the original packet and we don’t want to mess with it.
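The two load values are just the MAC addresses written as 48-bit integers; formatting them makes the flow easier to read:

def as_mac(value):
    return ":".join(f"{(value >> shift) & 0xff:02x}" for shift in range(40, -8, -8))

print(as_mac(0xc68f3f28f0bb))   # c6:8f:3f:28:f0:bb -> new source, the OVS side
print(as_mac(0xa601eb9002d4))   # a6:01:eb:90:02:d4 -> destination, the SF port
                                # (the same MAC appears as dl_src in the table=50 flow above)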

Finally, table=87 is the one preparing the packet for egress:

table=87, n_packets=0, n_bytes=0, priority=650,nsi=255,nsp=62 actions=load:0x100->NXM_NX_REG6[],resubmit(,220)

Remember from the previous post that table=220 checks REG6 to decide the egress port through which the packet is sent; that is why load:0x100->NXM_NX_REG6[] is done. The packet is resubmitted to table 220, where the steps we already described are followed: 220 –> 230 –> 220 –> egress(port:1)
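For reference, the table=220 output flows of this dump reduce to the following REG6 to port mapping (only the flows shown above; the leading 8/9 nibble is added by the intermediate egress-dispatch steps, not all of which are listed):

EGRESS_PORT_BY_REG6 = {
    0x90000100: 1,   # towards the SF, the value used by the table=87 flow
    0x90000500: 3,
    0x90000e00: 4,
}

print(EGRESS_PORT_BY_REG6[0x90000100])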

The packet will come back from the SF with nsi=254 and will follow the tables: 0 –> 36 –> 83 –> 86 –> 87, which is exactly the same path it followed before. However, note that in table 86 it will now match the default flow because nsi=254:

table=86, n_packets=0, n_bytes=0, priority=5 actions=goto_table:87

And in table 87, it will match the nsi=254 flow:

table=87, n_packets=7, n_bytes=871, priority=650,nsi=254,nsp=62 actions=move:NXM_NX_NSH_C1[]->NXM_NX_TUN_IPV4_DST[],move:NXM_NX_NSH_C2[]->NXM_NX_TUN_ID[0..31],pop_nsh,output:2

C1 sets the destination IP of the tunnel and C2 its vni. Then the nsh header is popped and the packet egresses through the correct vxlan tunnel. Finally the packet reaches the OVS of the server, where it traverses the L3 and L2 pipelines to finally reach the server.
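As a Python paraphrase (invented names, example values only), the last-hop behaviour of the nsi=254 flow is the mirror of what table=222 did on the classifier side:

def last_hop_egress(nsh_c1, nsh_c2):
    return {
        "tun_ipv4_dst": nsh_c1,   # VTEP saved by the classifier in table 222
        "tun_id": nsh_c2,         # VNI saved by the classifier in table 222
        "pop_nsh": True,          # remove the NSH header
        "output_port": 2,         # the vxlan port towards the server's OVS
    }

print(last_hop_egress(nsh_c1=0xac130004, nsh_c2=0x1388))   # example values only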

I hope that after these two posts everyone understands a bit better how ODL Carbon works. I would like to thank the Netvirt and SFC teams in ODL, who helped me understand all these concepts and deploy this successfully in OPNFV. In case of questions, you can find me as mbuil in the #opnfv-sfc channel on freenode.

————————————————————————————————-

[1] – ODL Carbon pipeline (basics): https://www.suse.com/communities/blog/opendaylight-carbon-pipeline-description/

[2] – IETF SFC Architecture: https://tools.ietf.org/html/rfc7665

[3] – IETF NSH Draft: https://datatracker.ietf.org/doc/html/draft-ietf-sfc-nsh-12
