The purpose of this article is to document the steps for bonding the bridged network interfaces of virtual guest machines (OES2/SLES 10 SP2) running on SUSE Linux Enterprise Server 11 Xen host servers, using the GUI, to provide load balancing and fault tolerance. There are other very good articles on bonding physical interfaces and on tuning bridged Xen interfaces, as well as a very good article on bonding under SLES 10 SP1 that I did not notice until I was preparing to submit this; I have included the links at the end of this note.
At the time of this writing, the following OS versions are in use: SLES 11 64-bit for the host machines and OES2 SP1/SLES 10 SP2 64-bit for the guest machines, in preparation for migrating our physical NetWare servers to new physical equipment as virtual OES2 Linux servers. Since the computer field is constantly changing, the versions may vary somewhat, but the basic concepts and configuration remain the same.
For starters, download SLES 11 from Novell's site and install it with the Xen option selected in the Software Patterns section. You will then have SLES 11 with Xen installed on the physical (host) box. Near the end of the install, you will notice that DHCP is assigned to the Xen bridged interfaces, not the physical ones. Change the settings on the Xen bridged interfaces to the desired static settings, and also specify DNS and routing information as needed.
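For reference, the static settings YaST writes end up in a sysconfig file on the host. Here is a minimal sketch, assuming the bridge is br0 with eth0 as its port; the addresses are placeholders, not the actual IP information from my setup:

```
# /etc/sysconfig/network/ifcfg-br0 -- Xen bridged interface with static settings
# (placeholder addresses; substitute your own)
BOOTPROTO='static'
STARTMODE='auto'
IPADDR='192.168.1.10'
NETMASK='255.255.255.0'
BRIDGE='yes'
BRIDGE_PORTS='eth0'
```

Editing this file by hand and restarting the network is equivalent to making the change in the YaST GUI.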
After selecting the Xen kernel, either manually or via the boot loader, and rebooting, you should now have the Virtual Machine Manager available within YaST. Install the VM to your specifications (in my case, OES2/SLES 10 SP2) using Virtual Machine Manager. After the guest is installed, highlight the VM in Virtual Machine Manager, click Details, and then select the Hardware tab.
Select Add, then Network card from the drop-down window, and click Forward. On this screen, specify Virtual network, then browse to the unused bridge (e.g., br0 was likely set up as the network bridge during the guest install, so you would now select and add br1), then click Forward and then Finish. You should now see a second NIC listed under the Hardware section on the Details page of the VM.
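For reference, the GUI step above corresponds to an extra interface element in the guest's libvirt/Xen domain definition. A sketch of what it looks like, assuming the second bridge is named br1:

```
<!-- second NIC attached to bridge br1, as added via Virtual Machine Manager -->
<interface type='bridge'>
  <source bridge='br1'/>
</interface>
```

If you prefer the command line, the same interface can be added by editing the guest definition directly rather than through Virtual Machine Manager.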
Close the details page.
Open and start the VM, then go to YaST | Network Devices | Network Card and click Next on the Network Setup Method page. The following page is the Network Card Configuration Overview, where you should now see two (you could add more) Xen Virtual Ethernet Cards (mine were numbered 0 and 1). Next, make sure that they show as Not Configured. If they show as DHCP or otherwise, highlight each one and click Delete; this deletes the configurations but not the Xen Virtual Ethernet Cards themselves.
While still on the Network Card Configuration Overview page, click Add, highlight Bond Network, accept the default configuration number of 0 for the Configuration Name, and click Next.
On the next page, select both of the bond slaves (more than two if you have set them up) and, in the drop-down list for Bond Driver Options, configure them as desired. The links below give good information on the different bonding modes for throughput and failover; use care and caution to make sure you set it up as desired. In my case, I used "mode=balance-rr miimon=100 use_carrier=0". I tested this by pinging the bonded IP address from a remote host while disconnecting the associated cables, verifying that I could still reach the IP of the bonded interface as long as either cable was connected.
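Inside the guest, YaST records these choices in a sysconfig file for the bond. A minimal sketch, assuming the two Xen virtual cards are eth0 and eth1 and using a placeholder address:

```
# /etc/sysconfig/network/ifcfg-bond0 -- bonded interface in the guest
# (placeholder IP; slave names assume eth0/eth1)
BOOTPROTO='static'
STARTMODE='auto'
IPADDR='192.168.1.20'
NETMASK='255.255.255.0'
BONDING_MASTER='yes'
BONDING_SLAVE_0='eth0'
BONDING_SLAVE_1='eth1'
BONDING_MODULE_OPTS='mode=balance-rr miimon=100 use_carrier=0'
```

Here miimon=100 tells the bonding driver to check link state every 100 ms, which is what makes the cable-pull failover test above work.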
The option selected via the drop-down box, and any others you add, will largely depend on your infrastructure and on whether you want to load balance, provide fault tolerance, or both. In my case, I wanted to demonstrate the easiest method to provide both. In the near future, when one of the links is upgraded to 10G while the other remains 1G, I will change these settings to match that environment: I would no longer use balance-rr, but would either make the 10G link the active path with the 1G link as a redundant standby, or load balance across them with roughly 90% of the traffic on the 10G link and 10% on the 1G link. The settings chosen also depend on the switch the links are connected to. If your infrastructure supports it, 802.3ad is a good choice.
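For example, if the switch ports are configured for LACP, the driver options line in the bond's sysconfig file would instead read something like the following (a sketch; the miimon and lacp_rate values shown are common choices, not requirements):

```
BONDING_MODULE_OPTS='mode=802.3ad miimon=100 lacp_rate=fast'
```

Note that 802.3ad requires matching configuration on the switch side; the balance-rr mode used above does not.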
On this page, you must also specify the IP address of the bonded interface, the subnet mask, and routing and DNS information for the server you are setting up. Then select Next and Finish. At this point you should be able to ping the server, and you can use the Network Tools application to verify traffic on the virtual network bridges. Following are very good references to various issues, techniques, and concerns related to bonding.
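Beyond pinging, the bonding driver exposes a status file inside the guest that shows the mode and the state of each slave. A sketch of the checks described above (the address is a placeholder):

```
# inside the guest: confirm the mode, miimon interval, and that both slaves are up
cat /proc/net/bonding/bond0

# from a remote host: ping the bonded address while unplugging each cable
# in turn; replies should continue as long as either link is up
ping 192.168.1.20
```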
- Bonding Multiple Network Interfaces on SLES 10 (/communities/conversations/4028/suse-linux-enterprise-point-service-10-whats-new)
- NIC Bonding with Xen Virtualization (/communities/conversations/2697/mountaccess-files-residing-xen-virtual-machines)
- Bonding (Port Trunking)