XEN Networking (inc. bonding) with an IBM BladeCenter



By: samprior

July 11, 2008 2:52 pm


Architecture:

IBM BladeCenter E with LS41 double wide blades. Chassis I/O components include 2 x Nortel Networks Ethernet switch modules and 2 x Cisco Fibre Channel switch modules.

Novell is working hard, along with the rest of the XEN community, to significantly improve and simplify the network configuration required on the host server and virtual machines. To be honest, the default out-of-the-box configuration generally works fine, particularly with paravirtualised virtual machine operating systems such as SLES 10 SP1 and OES 2. However, in a more complex environment you may need to modify a number of files to get XEN networking functioning as you require.

Recently I have seen some articles and forum posts stating that Novell is to integrate the popular network-multinet and multinet-common networking scripts into the default XEN installation in order to improve support for bonding network connections in a XEN environment. However, my understanding, after discussions with Novell Technical Services and other senior Novell technical experts, is that this is no longer the planned route, as there have been issues when using these scripts with Heartbeat. The likelihood is therefore that the standard network-bridge script will instead be improved to better support bonding. I have found the scripts in SP1 more than adequate for bonding; however, they do need some adjustment.

Blade Specifics

This article has been written on the back of an implementation I carried out on blade servers; however, the steps below should work just as well on a standard physical server. Be aware that bonding network connections, regardless of whether or not XEN is being used, is slightly different when using blade servers. For the IBM BladeCenter in particular, note the following (this information is largely an extract from the Linux Foundation web site):

  • On blade servers the bonding driver supports only balance-rr, active-backup, balance-tlb and balance-alb modes. This is largely due to the network topology inside the BladeCenter.
  • Each I/O Module may contain either a switch or a passthrough module (which allows ports to be directly connected to an external switch).
  • Normally, Ethernet Switch Modules (ESMs) are used in I/O modules 1 and 2. In this configuration, the eth0 and eth1 ports of a blade will be connected to different internal switches (in the respective I/O modules).
  • A passthrough module (PM) connects the I/O module directly to an external switch. By using PMs in I/O modules #1 and #2, the eth0 and eth1 interfaces of a blade can be redirected to the outside world and connected to a common external switch.
  • Depending upon the mix of ESMs and PMs, the network will appear to bonding as either a single switch topology (all PMs) or as a multiple switch topology (one or more ESMs, zero or more PMs).
  • Due to the architecture above, the balance-rr mode requires the use of passthrough modules for devices in the bond, all connected to a common external switch. That switch must be configured for “etherchannel” or “trunking” on the appropriate ports, as is usual for balance-rr.
  • The balance-alb and balance-tlb modes will function with either switch modules or passthrough modules (or a mix). The only specific requirement for these modes is that all network interfaces must be able to reach all destinations for traffic sent over the bonding device (i.e., the network must converge at some point outside the BladeCenter).
  • The active-backup mode has no additional requirements. After extensive testing and discussions with IBM, Novell and Nortel, I believe this is the most stable option when using ESMs. However, when an ESM is in place, only the ARP monitor will reliably detect link loss to an external switch. This is nothing unusual, but examination of the BladeCenter cabinet would suggest that the “external” network ports are the ethernet ports for the system, when in fact there is a switch between these “external” ports and the devices on the blade system itself. The MII monitor is only able to detect link failures between the ESM and the blade system. When a passthrough module is in place, the MII monitor does detect failures to the “external” port, which is then directly connected to the blade. I have, however, been able to get link failover to work successfully with the MII monitor option. This has been achieved by a special feature (Trunk Failover) available on some of the IBM switch modules (Nortel Networks in this case) that provides feedback to the internal connections, such that a failure on the external uplinks can be relayed back to the internal server-facing links. This allows the use of the MII monitor to detect an external uplink failure. (See the example bonding options after this list.)
  • The Serial Over LAN (SoL) link is established over the primary ethernet (eth0) only, therefore, any loss of link to eth0 will result in losing your SoL connection. It will not fail over with other network traffic, as the SoL system is beyond the control of the bonding driver.
  • It may be desirable to disable spanning tree on the switch (either the internal Ethernet Switch Module, or an external switch) to avoid fail-over delay issues when using bonding.
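
To make the monitoring choice concrete, here are the two bonding option strings as they would be entered in the configuration described below (a sketch; the ARP target address is a placeholder and should be a reliable device beyond the ESM, such as your default gateway):

    # MII monitoring: checks carrier every 100 ms. With an ESM in place this
    # only sees the blade-to-ESM link, unless Trunk Failover is configured.
    mode=active-backup miimon=100

    # ARP monitoring: sends ARP requests to a target beyond the switch every
    # 100 ms, so external uplink failures are detected as well.
    mode=active-backup arp_interval=100 arp_ip_target=192.168.1.1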

Implementation

Each of my LS41 blade servers has two network cards, each with two ports (therefore four network ports in total). If I place all of these ports into a single active-backup bond I will only ever be using one of the available four ports, so there is only 1Gbps of throughput out of an available 4Gbps. To get around this I intend to create two bonds, each containing two network ports and running in active-backup mode. This will provide stable redundancy for my XEN host server as well as utilising as much of the available bandwidth as possible.
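
The target layout, then, looks like this (illustrative only; the mapping of Linux ethX names to physical ports varies, so verify against the MAC addresses as described in step 9):

    # bond0 = base unit port 1 (ESM1) + MPE unit port 2 (ESM2)  ->  bridge xenbr0
    # bond1 = base unit port 2 (ESM2) + MPE unit port 1 (ESM1)  ->  bridge xenbr1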

  1. Boot the system using the XEN Linux kernel.
  2. In a server console type /etc/xen/scripts/network-bridge stop.
  3. Open YaST and select the Network Card module. Select the method type (Traditional should be used) and then click Next.
  4. Select the first network card and click on the Edit button.
  5. Under the General tab set the Device Activation option to Never. Under the IP setup tab select the None option. Click Next. This will return you to the Overview screen.
  6. Repeat steps 4 and 5 for each network card. Once you have made these changes to all the network cards you will be returned to the Overview screen. Click on Finish to ensure YaST runs the necessary scripts to update the system with the changes.
  7. Repeat step 3.
  8. Click on the Add button. Set the Device Type to bond and the configuration name to 0 (default). Click Next.
  9. Configure the bond to use static IP information; set the hostname, name server and default gateway. In the bonding options drop-down select “mode=active-backup” and then type “miimon=100” into the box after the mode, e.g. mode=active-backup miimon=100. This sets the interval (in milliseconds) at which the system will monitor the link to ensure it is active. Ensure the firewall zone is set correctly under the General tab. Select the two network ports that you want to add to the bond. If you're using an IBM BladeCenter you can use the Hardware VPD page in the IBM AMM to discern which MAC address belongs to which port. Use port one from the base unit (connects to ESM1) and port two from the MPE unit (connects to ESM2). This creates redundancy across the base and MPE units while ensuring each bond can communicate with each switch (this is a requirement of bonding – each bond must be able to see each switch). Click on Finish.
  10. Repeat steps 8 and 9 to add a second bond (for your other two ports), ensuring this time you use 1 as the configuration name and select the correct MAC addresses/ports for this bond. You also need to use a different IP address. Finish and close all YaST screens.
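    For reference, the bond definition YaST writes to /etc/sysconfig/network/ifcfg-bond0 should look roughly like the following sketch (the IP address and slave device names are examples only; yours will differ):

    BOOTPROTO='static'
    IPADDR='192.168.1.10'
    NETMASK='255.255.255.0'
    STARTMODE='onboot'
    BONDING_MASTER='yes'
    BONDING_MODULE_OPTS='mode=active-backup miimon=100'
    BONDING_SLAVE0='eth0'
    BONDING_SLAVE1='eth3'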
  11. We need to use a custom network script to enable bonding on the XEN host server. Below are the contents of the network-samp file attached to this article. This script basically creates two XEN network bridges rather than the standard single bridge (xenbr0), and attaches each of the bonds to a different bridge. I was unable to find a reliable way to attach multiple bonds to the same bridge – anyone got any ideas? Adding both bonds to one bridge created a loop on my network.
    #!/bin/sh
    # Wrapper around the standard network-bridge script: create one
    # bridge per bond instead of the single default bridge (xenbr0).
    dir=$(dirname "$0")
    "$dir/network-bridge" "$@" vifnum=0 netdev=bond0 bridge=xenbr0
    "$dir/network-bridge" "$@" vifnum=1 netdev=bond1 bridge=xenbr1

    I would recommend creating a new file in the /etc/xen/scripts folder, which is the default location for XEN networking scripts. Then re-type the lines above into the file and save it. This avoids any character set/encoding issues between Windows and Linux.

    You must ensure that the file has the correct rights so that it can be executed by the XEN daemon. To do this open a server terminal and type the following commands:

    cd /etc/xen/scripts
    chmod 755 network-samp
  12. We now need to modify the xend-config.sxp file (the main configuration file for XEN, found in /etc/xen) to run our custom script when setting up networking rather than the default script. To do this, open the file in vi or gedit and change the (network-script network-bridge) line to (network-script network-samp).
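    If you prefer to make the change from the command line, something like the following should work (back up the file first; the exact whitespace in your copy of the file may differ):

    cp /etc/xen/xend-config.sxp /etc/xen/xend-config.sxp.bak
    sed -i 's/(network-script network-bridge)/(network-script network-samp)/' /etc/xen/xend-config.sxp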
  13. You can manually enable your new networking configuration; however, to ensure everything is working I would recommend rebooting the server.
  14. Once the server has rebooted you can do the following to ensure the configuration is correct:
    • At the console type brctl show. This should display the two bridges created by our custom script, with bond0 attached to xenbr0 and bond1 attached to xenbr1.
    • From the server, send test pings to a network device via each interface. To do this, start two server terminals. In the first type ping -I xenbr0 <IP_addr>. In the second type ping -I xenbr1 <IP_addr>. From the network device, ping the two IP addresses configured on your bonds. Then try turning off ESMs and removing network connections. A few packets may be lost while the network devices find the alternative route, but this should be minimal (3 or 4 packets).
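    The bonding driver's status file is also worth watching while you test failover; for example (bond0 shown here, repeat for bond1):

    # Shows which slave is currently active and the link status of each slave;
    # watch it update as you pull cables or power off an ESM
    watch cat /proc/net/bonding/bond0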

For ultimate redundancy, ensure you create two network cards in each of your virtual machines. Attach one of them to the first bridge (xenbr0) on your host and the other to the second bridge (xenbr1). This ‘moves’ the redundancy of your network connections to the virtual machine layer.
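
In the virtual machine configuration file this is a one-line setting; a minimal sketch, assuming a guest defined under /etc/xen/vm (the MAC addresses are placeholders using the standard Xen 00:16:3e prefix):

    vif = [ 'mac=00:16:3e:00:00:01,bridge=xenbr0', 'mac=00:16:3e:00:00:02,bridge=xenbr1' ]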

You may need to change the configuration on the ESM/PM modules in your chassis and/or your external network switch before any bonding configuration will work smoothly. That is beyond the scope of this article. Have fun!

Categories: SUSE Linux Enterprise Server, Technical Solutions, Virtualization

Disclaimer: As with everything else at SUSE Conversations, this content is definitely not supported by SUSE (so don't even think of calling Support if you try something and it blows up).  It was contributed by a community member and is published "as is." It seems to have worked for at least one person, and might work for you. But please be sure to test, test, test before you do anything drastic with it.
