Xen networking is a somewhat muddled topic. A quick Google search shows that many people have written about it, with very few of them understanding how Xen networking actually works. This article was written with the goal of defining how the networking works and why certain behavior exists.

This document only applies to SLES 10 SP1 and earlier. The networking pieces for SLES 10 SP2 and later are being reworked to eliminate a good portion of the confusion — specifically netloops are disappearing, which should bring a speed improvement and reduce the complexity.

Specifically, this document addresses the issues of multiple NIC cards and bonding.

backend pieces considerations

Understanding the Xen networking backend pieces will aid in resolving the problems that crop up. The following outlines what happens when the default Xen networking script runs on a single-NIC system:

  1. the script creates a new bridge named xenbr0
  2. “real” ethernet interface eth0 is brought down
  3. the IP and MAC addresses of eth0 are copied to virtual network interface veth0
  4. real interface eth0 is renamed peth0
  5. virtual interface veth0 is renamed eth0
  6. peth0 and vif0.0 are attached to bridge xenbr0 as bridge ports
  7. the bridge, peth0, eth0 and vif0.0 are brought up
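As a very rough sketch, the seven steps above correspond to the brctl/ip commands below. This is not the actual network-bridge script; the run() helper only echoes each command, since the real operations require root and a live eth0, and step 3 (copying eth0's IP and MAC addresses to veth0) is performed internally by the script.

```shell
# Illustrative only: echo the commands network-bridge effectively performs.
run() { echo "# $*"; }
run brctl addbr xenbr0            # 1. create the bridge xenbr0
run ip link set eth0 down         # 2. bring the real eth0 down
                                  # 3. (the script copies eth0's IP/MAC to veth0)
run ip link set eth0 name peth0   # 4. real interface renamed to peth0
run ip link set veth0 name eth0   # 5. virtual interface renamed to eth0
run brctl addif xenbr0 peth0      # 6. attach peth0 ...
run brctl addif xenbr0 vif0.0     #    ... and vif0.0 as bridge ports
run ip link set xenbr0 up         # 7. bring the bridge ...
run ip link set peth0 up          #    ... peth0 ...
run ip link set eth0 up           #    ... eth0 ...
run ip link set vif0.0 up         #    ... and vif0.0 back up
```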

The process works wonderfully if there is only one network device present on the system. When multiple NICs are present, this process can be confused or limitations can be encountered.

In this process, there are a few things to remember:

  • pethX is the physical device, but it has no MAC or IP address
  • xenbrX is the bridge between the internal Xen virtual network and the outside network, it does not have a MAC or IP address
  • vethX is a usable end-point for either Dom0 or DomU and may or may not have an IP or MAC address
  • vifX.X is a floating end-point for vethX’s that is connected to the bridge
  • ethX is a renamed vethX that is connected to xenbrX via vifX.X and has an IP and MAC address


In the process of bringing up the networking, veth and vif pairs are brought up. For each veth device, there is a corresponding vif device. The veth devices are given to the DomU’s, while the corresponding vif device is attached to the bridge. By default, seven of the veth/vif pairs are brought up. Each physical device consumes a veth/vif pair, thereby reducing the number of veth/vifs available for DomU’s.

When a new DomU is started, a free veth/vif pair is used. The veth device is given to the DomU and is presented within the DomU as ethX. (Note: a veth/vif pair is loosely like an ethernet cable. The veth end is given to either Dom0 or DomU and the vif end is attached to the bridge.)

For most installations, the idea of having seven virtual machines run at the same time is impractical. However, for each NIC card there has to be a bridge, peth, eth and vif device. Since eth and vif devices are pseudo devices, the number of netloops is decremented for each physical NIC beyond the assumed single NIC.

  • With one NIC, 7 veth/vif pairs are present
  • Two NICs will reduce the veth/vif pairs available to 5
  • Two NICs bonded will reduce the veth/vif pairs available to 4
  • Three NICs will reduce the veth/vifs available to 3
  • Three NICs bonded presented as a single bond leaves 0 veth/vifs available
  • Four NICs will result in a deficit of -1 veth/vifs
  • Four NICs bonded into one bond results in a deficit of -3 veth/vifs
  • Four NICs bonded into two bonds results in a deficit of -4 veth/vifs

Where most people run into problems is with bonding. The network-multinet script enables the use of bonding. It is easy to see where one could run into trouble with multiple NICs.

The solution is to increase the number of netloop devices, thereby increasing the number of veth/vif pairs available for use.

increase the number of netloops

  1. In /etc/modprobe.d create and open a file for editing called “netloop”
  2. Add the following line to it
    options netloop nloopbacks=32
  3. Save the file
  4. Reboot to activate the setting
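Steps 1-3 amount to writing a single line into a file. A minimal sketch follows; CONF_DIR is a stand-in so the sketch is harmless to run anywhere, while on a real SLES 10 system it would be /etc/modprobe.d.

```shell
# Write the netloop module options file (steps 1-3).
CONF_DIR="${CONF_DIR:-.}"   # stand-in; really /etc/modprobe.d
echo "options netloop nloopbacks=32" > "$CONF_DIR/netloop"
cat "$CONF_DIR/netloop"
# After the reboot (step 4), the active value can be read back from
#   /sys/module/netloop/parameters/nloopbacks
```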

It is recommended to increase the number of netloops in any situation where multiple NICs are present. When a deficit of netloops exists, sporadic and odd behavior has been observed, including a completely broken networking configuration.

increasing the number of bridges

By default, only a single bridge is brought up. The name of this bridge derives from the physical NIC it is attached to. In the case of eth0 it is xenbr0, and in eth1’s case, xenbr1.

Adding more bridges is a trivial operation.

option 1: using a custom wrapper script
This method relies on the Xen network-bridge script to create more bridges. This method does not support bonding.

  1. Open a new file for editing, /etc/xen/scripts/my-bridges
  2. Populate the new file with the following, adding one "$dir/network-bridge" "$@" vifnum=N line per bridge. For example, to add three bridges:
    #!/bin/sh
    dir=$(dirname "$0")
    "$dir/network-bridge" "$@" vifnum=0
    "$dir/network-bridge" "$@" vifnum=1
    "$dir/network-bridge" "$@" vifnum=2
  3. Save and close the file
  4. Make it executable by root
    chmod 0755 /etc/xen/scripts/my-bridges
  5. Create a backup of /etc/xen/xend-config.sxp
  6. Open for editing /etc/xen/xend-config.sxp
  7. Locate the line below and comment it out
    (network-script network-bridge)
  8. Add the following line
    (network-script my-bridges)
  9. Save the file and close it
  10. Reboot to activate
  11. Upon reboot, “ifconfig” should display the new Xen bridges

option 2: use the network-multinet script

The default script that will be used in SLES 10 SP2 is the network-multinet script. The multinet script has some major improvements: it supports the use of bonding, as well as routed, NAT, host-only and no-host networks. As of SLES 10 SP1, it has full support through Novell Technical Services. In SLES 10 SP2, network-multinet will have full support through SuSE development.

  1. Download the network-multinet script and place the files in the following locations:

     network-multinet in /etc/xen/scripts
     xend in /etc/sysconfig
  2. Make the multinet scripts executable
    chmod 0755 /etc/xen/scripts/*multi*
  3. Edit /etc/sysconfig/xend. The file self-documents the configuration
  4. Create a backup of /etc/xen/xend-config.sxp
  5. Open for editing /etc/xen/xend-config.sxp
  6. Locate the line below and comment it out
    (network-script network-bridge)
  7. Add the following line
    (network-script network-multinet)
  8. Save the file and close it
  9. Reboot to activate
  10. Upon reboot, “ifconfig” should display the new Xen bridges
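Steps 4-7 can be scripted with sed. The sketch below works on a throwaway copy of the config so it is safe to run as-is; on a real system CFG would be /etc/xen/xend-config.sxp, and a real backup rather than a temp file is strongly advised.

```shell
# Comment out the default bridge script and enable network-multinet.
CFG=$(mktemp)                                     # stand-in for /etc/xen/xend-config.sxp
echo '(network-script network-bridge)' > "$CFG"   # pretend this is the real file
cp "$CFG" "$CFG.bak"                              # step 4: back it up first
sed -i 's/^(network-script network-bridge)/#&/' "$CFG"   # step 6: comment out the old line
echo '(network-script network-multinet)' >> "$CFG"       # step 7: add the multinet line
cat "$CFG"
```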

routing issues

Another common issue seen in the Xen realm of networking is related to routing. Typically, one or more of the following may be observed:

  • Dom0 is able to ping the gateway, but no further
  • Dom0 is unable to ping other hosts, but DomU’s can
  • Dom0 can only ping hosts on the same subnet, but DomU’s can ping both in and outside the subnet

Troubleshooting this issue is fairly simple. The command “route -n” will identify the problem. The following is an example of a correct setup (the addresses are illustrative; the Iface column is what matters):

Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth0
127.0.0.0       0.0.0.0         255.0.0.0       U     0      0        0 lo
0.0.0.0         192.168.1.1     0.0.0.0         UG    0      0        0 eth0

Incorrect examples:

Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 peth0
169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 peth0
127.0.0.0       0.0.0.0         255.0.0.0       U     0      0        0 lo
0.0.0.0         192.168.1.1     0.0.0.0         UG    0      0        0 peth0


Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 xenbr0
169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 xenbr0
127.0.0.0       0.0.0.0         255.0.0.0       U     0      0        0 lo
0.0.0.0         192.168.1.1     0.0.0.0         UG    0      0        0 xenbr0

The reason the last two examples are incorrect is due to the way Xen networking is set up. When Xen networking is brought up, a network bridge is created with the physical network device as a bridge port. A new virtual/pseudo network device is set up and connected as another bridge port. The virtual device becomes the default network interface, while the bridge is more-or-less a Layer 2 bridge/switch (it is only concerned with the Ethernet frame, which identifies the source and destination MAC addresses), and the physical ethernet device is the link between the bridge and the outside network. Both the bridge and the peth device lack a MAC address and an IP address, making them Layer 2 capable devices only.

IP routing happens at Layer 3 of the OSI model. Since xenbrX and pethX are Layer 2 devices, the correct default route must be defined on a Layer 3 capable device. In the case of Xen networking, the only Layer 3 capable devices are the veth devices. Since ethX inside of Dom0 is a veth device, the default route must be defined on it.

Unique configurations involving multiple bonds or other exotic setups have demonstrated that the default route can be incorrectly set. The fix is straightforward:

  1. Open /etc/sysconfig/network/routes for editing. It may be blank.
  2. Replace any text or add the following:
    default GATEWAY_IP_ADDRESS - -
  3. Save the file
  4. Reboot
  5. Check the route by attempting to ping outside the box
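The whole fix is a one-line routes file. A harmless sketch follows; ROUTES_FILE stands in for /etc/sysconfig/network/routes, and GATEWAY_IP_ADDRESS is the same placeholder used in step 2.

```shell
# Write the default route entry (steps 1-3).
ROUTES_FILE="${ROUTES_FILE:-./routes}"   # stand-in; really /etc/sysconfig/network/routes
echo "default GATEWAY_IP_ADDRESS - -" > "$ROUTES_FILE"
cat "$ROUTES_FILE"
# After the reboot, "route -n" should show the 0.0.0.0 default route on eth0
# (not peth0 or xenbr0), and pinging a host outside the subnet should work.
```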

speed implications of a network bridge

Network bridges are slower than switches or routers. A bridge works by processing all the ethernet frames that it gets on both sides of the bridge; the bridge looks at the ethernet frames and determines what the destination MAC address is. If the destination MAC address is known to be on the other side of the bridge, then the bridge forwards the frame. Otherwise, the frame is not forwarded.
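The per-frame forwarding decision described above can be modeled in a few lines of bash (4+). This toy sketch has nothing to do with the kernel's actual bridge code; it only illustrates the MAC lookup, with the hypothetical fdb array standing in for the bridge's learned forwarding database.

```shell
# Toy model of a learning bridge's per-frame decision.
declare -A fdb=([aa:aa]=portA [bb:bb]=portB)   # learned MAC -> port table
forward() {   # usage: forward <ingress-port> <dst-mac>
  local in=$1 dst=$2
  local out=${fdb[$dst]:-flood}
  if [ "$out" = "$in" ]; then echo drop          # destination is on the same side
  elif [ "$out" = flood ]; then echo flood       # unknown MAC: flood every port
  else echo "forward to $out"                    # known MAC on the other side
  fi
}
forward portA bb:bb   # known MAC on the other side: forwarded
forward portB bb:bb   # known MAC on the same side: dropped
forward portA cc:cc   # unknown MAC: flooded
```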

As a result, the following behavior will be seen:

  • Broadcast traffic will be forwarded through
  • Multicast traffic will be forwarded through
  • There is no increased security as a result of the bridge
  • Packet sniffing will only capture the frames within the bridge and what is forwarded through the bridge
  • The bridge is limited to about 4096 MAC addresses

A natural consequence of the Xen networking design is that each frame that comes across the wire will be evaluated by the bridge to see if it should be passed through. Networks under heavy load may experience particularly high latency in communicating with Xen DomU’s. Again, in SLES 10 SP2 this piece will be reworked. Until then, the following suggestions may help:

  • Place the Xen Dom0 and DomU on a small subnet to limit broadcast messages
  • Instead of Xen bridging use a routed Xen network
  • Limit the scope of multicast domains that the Xen Dom0 is exposed to
  • Pass the network card to DomU’s directly
  • Limit the services running on Dom0 to the minimal amount needed
  • Dedicate a CPU to Dom0 exclusively
Category: SUSE Linux Enterprise Server, Technical Solutions
This entry was posted Friday, 21 March, 2008 at 2:14 pm


  • raghun says:

    Hi There,

    I have tried both the options mentioned above and both are not working.

    When I tried Option1, we are getting only one bridge for eth1. After modifying the “my-bridges” file as below mentioned its working greatly.


    dir=$(dirname "$0")

    "$dir/network-bridge" "$@" vifnum=0 netdev=eth0

    "$dir/network-bridge" "$@" vifnum=1 netdev=eth1

    When I tried Option2, we are not even getting single bridge.
