Finding working examples of configuring a SLES10 VM server for different networking setups can be difficult and time consuming. This article provides three tested, working examples.
SUSE Linux Enterprise Server 10 VM Network Configuration Examples
Here are some examples of network setups using Xen 3 on SUSE Linux Enterprise Server 10. A good overview of Xen networking can be found at http://wiki.xensource.com/xenwiki/XenNetworking.
Exercise 1 – Use multiple Network cards for load balancing or fault tolerance for Xen DomUs.
Follow the steps in this link to bond multiple NIC cards -> http://docs.hp.com/en/B9903-90043/ch05s05.html?btnPrev=%AB%A0prev. These steps apply to SLES10 as well as SLES9. You may want to set this up while booted to the normal (non-Xen) kernel; once it is verified to work there, boot to the Xen kernel.
Your ifcfg-ethX or ifcfg-eth-id-<mac> files should look something like this (comment out the other entries):
BOOTPROTO='none'
STARTMODE='auto'
UNIQUE='mY_N.xmN71cA9FL7'
USERCONTROL='no'
_nm_name='bus-pci-0000:03:07.0'
PREFIXLEN=
Your newly created ifcfg-bond0 should look something like this:
BOOTPROTO='static'
BROADCAST=
ETHTOOL_OPTIONS=
IPADDR='192.168.0.3'
MTU=
NAME='Bond0'
NETMASK='255.255.252.0'
NETWORK=
REMOTE_IPADDR=
STARTMODE='auto'
BONDING_MASTER='yes'
BONDING_MODULE_OPTS='miimon=100 mode=0' # or mode=1
BONDING_SLAVE0='eth0'
BONDING_SLAVE1='eth2'
Now when using a SLES10 VM, the VM will use “bond0”, which of course is composed of two NICs. The virtual machines will now have multiple interfaces to use, which is especially nice for fault tolerance. For a quick test, boot your VM, ping a host on the network, then unplug one of the interfaces bound to ifcfg-bond0. The pings should continue uninterrupted. Now try the same test with the other NIC, making sure at least one stays connected.
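As a complement to the unplug test, the bonding driver exports its state under /proc/net/bonding in Dom0. A minimal check script (a sketch; it assumes the bond0 name used above):

```shell
#!/bin/sh
# check_bond: print the mode, MII status and enslaved NICs for a bonding
# interface, or a notice when the interface does not exist on this host.
check_bond() {
    bond=$1
    if [ -f "/proc/net/bonding/$bond" ]; then
        grep -E 'Bonding Mode|MII Status|Slave Interface' "/proc/net/bonding/$bond"
    else
        echo "$bond: bonding interface not present on this host"
    fi
}

check_bond bond0
```

When a slave's MII Status flips to "down" after you pull its cable, the bond should still report an "up" status of its own and the pings keep flowing.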
Exercise 2 – Configure a SLES10 VM to use multiple NICS, multiple networks.
Set up your networks on Dom0.
In my case I set up bond0 (with eth0 and eth2 enslaved, from the exercise above) with an IP address on the public network. I then set up eth1 with a 10.x.x.x address on a private network.
Create a wrapper script to set up two bridges for the two different networks. Create /etc/xen/scripts/my-network-script and insert the following:
#!/bin/sh
dir=$(dirname "$0")
"$dir/network-bridge" "$@" vifnum=0 netdev=bond0
"$dir/network-bridge" "$@" vifnum=1 netdev=eth1
Note: You may not always need the netdev= argument here, but I found that with “bond0” the script didn’t work unless I specified it explicitly. Also, don’t forget to chmod +x my-network-script.
Modify /etc/xen/xend-config.sxp and have it use my-network-script.
# (network-script network-bridge)   # comment out this line
(network-script my-network-script)
Modify your SLES10 VM config file (found by default in /etc/xen/vm) and add entries for both bridges.
vif = [ 'mac=00:16:41:06:59:44,bridge=xenbr0','mac=00:16:41:55:59:44,bridge=xenbr1' ]
Reboot your VM host (Dom0), or restart the bridges and Xen manually. Now start your VM (xm create -c DomU_name) or launch it from the YaST2 Virtual Machine Manager (yast2 xen). Once your VM is up, go to its LAN settings (“yast2 lan”) and you should see two NICs. Configure each with the proper IP address for its network.
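To verify from Dom0 that both bridges actually came up, brctl (from the bridge-utils package) lists them along with their attached interfaces. A small sketch, assuming the default xenbr0/xenbr1 bridge names used in the vif line above:

```shell
#!/bin/sh
# show_bridges: list the bridges on this host, or a notice when
# bridge-utils is not installed.
show_bridges() {
    if command -v brctl >/dev/null 2>&1; then
        brctl show    # expect xenbr0 (over bond0) and xenbr1 (over eth1)
    else
        echo "brctl not installed on this host"
    fi
}

show_bridges
```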
Note: To manually stop the xen bridges on your host use:
/etc/xen/scripts/network-bridge stop vifnum=0 netdev=bond0
/etc/xen/scripts/network-bridge stop vifnum=1 netdev=eth1
To restart use -> rcxend restart
Exercise 3 – Configure a VM to access a physical NIC directly.
This uses the real drivers inside the VM and hides the NIC from the Dom0 host, which gives better network performance.
Load the pciback module by typing -> modprobe pciback
Get the PCI ID for the hardware you will be using by typing -> lspci
For example, you may see 0000:02:06.0 for an e100 card.
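The lookup above can be scripted. A sketch that filters lspci output for network devices (lspci is provided by the pciutils package):

```shell
#!/bin/sh
# list_nics: print the PCI IDs and descriptions of network devices,
# with fallbacks when nothing matches or lspci is unavailable.
list_nics() {
    if command -v lspci >/dev/null 2>&1; then
        lspci | grep -i -E 'ethernet|network' || echo "no network device found"
    else
        echo "lspci not installed on this host"
    fi
}

list_nics
```

The PCI ID is the first field of each line; prefix it with the domain (usually 0000:) for the sysfs paths below.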
Unbind the desired device intended for pciback. Go to /sys/bus/pci/drivers and look for the folder named after your driver, for example e100. Inside the folder you will see a symlink for the device, named after your PCI ID. Next ->
echo -n 0000:02:06.0 > /sys/bus/pci/drivers/e100/unbind
Now the symlink you just looked at should be gone.
Now bind the device to pciback.
echo -n 0000:02:06.0 > /sys/bus/pci/drivers/pciback/new_slot
echo -n 0000:02:06.0 > /sys/bus/pci/drivers/pciback/bind
Go to the pciback directory and you should see a new symlink created.
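To confirm which driver owns the device at any point in this procedure, you can resolve the device's "driver" symlink in sysfs instead of browsing the driver directories. A small sketch, using the example PCI ID from the text:

```shell
#!/bin/sh
# driver_of: report which driver currently claims a PCI device by
# resolving the "driver" symlink under /sys/bus/pci/devices.
driver_of() {
    link="/sys/bus/pci/devices/$1/driver"
    if [ -L "$link" ]; then
        printf '%s is bound to %s\n' "$1" "$(basename "$(readlink "$link")")"
    else
        echo "$1 has no driver bound (or device not present)"
    fi
}

driver_of 0000:02:06.0
```

After the unbind/bind steps above, this should report pciback instead of e100.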
Now configure your Virtual Machine (DomU) to use the device.
Add to your conf file -> pci=['0000:02:06.0']
Specify it when loading your domU -> xm create -c vm1 pci=0000:02:06.0
Sample script to automate the above on bootup
#!/bin/bash
# Get the pci id numbers using lspci, then set pci1 to that number
pci1=0000:06:00.1
#pci2=0000:02:06.1
# find the driver name of your NIC under /sys/bus/pci/drivers
driver1=e1000
#driver2=e1000
# enable pciback
modprobe pciback
# hide the device from dom0 so pciback can take control
echo -n $pci1 > /sys/bus/pci/drivers/$driver1/unbind
#echo -n $pci2 > /sys/bus/pci/drivers/$driver2/unbind
sleep 1
# Give the device to pciback: give it a new slot, then bind
echo -n $pci1 > /sys/bus/pci/drivers/pciback/new_slot
#echo -n $pci2 > /sys/bus/pci/drivers/pciback/new_slot
sleep 1
echo -n $pci1 > /sys/bus/pci/drivers/pciback/bind
#echo -n $pci2 > /sys/bus/pci/drivers/pciback/bind
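Assuming you save the script above as /etc/xen/scripts/pciback-hide.sh (a filename chosen here for illustration) and make it executable, one way to run it at boot on SLES is to call it from /etc/init.d/boot.local, which runs at the end of the boot process:

```
# /etc/init.d/boot.local -- executed at the end of boot on SLES
/etc/xen/scripts/pciback-hide.sh
```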
SLES10 Host with SLES10 Virtual Machines