SUSE Linux Enterprise Server 10 VM Network Configuration Examples

By: gldavis

July 28, 2006 12:00 am






Finding working examples of configuring a SLES10 VM server for different networking configurations can be difficult and time consuming. This article presents three tested, working examples.



Here are some examples of network setups using Xen 3 in SUSE Linux Enterprise Server 10. A good overview of Xen networking can be found in the Xen project documentation.

Exercise 1 – Use multiple Network cards for load balancing or fault tolerance for Xen DomUs.

Follow the standard SLES channel bonding procedure to bind multiple NICs; the steps apply to SLES10 as well as SLES9. You may want to set this up while booted to the normal (non-Xen) kernel. Once it is verified to work there, boot to the Xen kernel.
Your ifcfg-ethX or ifcfg-eth-id-<mac> files should look something like this (comment the other settings out).
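A minimal sketch of what a bonding slave's ifcfg file typically looks like on SLES (the slave carries no address of its own; any values beyond these are placeholders to adapt to your setup):

```sh
BOOTPROTO='none'
STARTMODE='off'
# No IPADDR/NETMASK here -- the address is configured on bond0.
# Comment out (or remove) any other settings left from a previous setup.
```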


Your newly created ifcfg-bond0 should look something like this:

MTU=''
NAME='Bond0'
BONDING_MODULE_OPTS='miimon=100 mode=0'   # or mode=1
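For completeness, a fuller ifcfg-bond0 sketch assuming a static address (the IP address and netmask below are placeholders; eth0 and eth2 are the two slaves used in these exercises):

```sh
BOOTPROTO='static'
IPADDR='192.168.1.10'       # placeholder -- use your public address
NETMASK='255.255.255.0'     # placeholder
STARTMODE='onboot'
MTU=''
NAME='Bond0'
BONDING_MASTER='yes'
BONDING_MODULE_OPTS='miimon=100 mode=0'   # or mode=1 for active-backup
BONDING_SLAVE0='eth0'
BONDING_SLAVE1='eth2'
```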

Now when using a SLES10 VM, the VM will use “bond0”, which of course is composed of two NICs. The virtual machines will now have multiple physical interfaces behind their virtual interface, which is especially nice for fault tolerance. For a quick test, boot your VM, ping another host on the network, then unplug one of the interfaces bound to ifcfg-bond0. You should see the pings continue uninterrupted. Now try the same test with the other NIC, making sure at least one remains connected.

Exercise 2 – Configure a SLES10 VM to use multiple NICS, multiple networks.

Set up your networks on Dom0.

In my case I set up bond0 (which has eth0 and eth2 enslaved, from the exercise above) with an IP address on the public network. I then set up eth1 with a 10.x.x.x address on a private network.

Create a wrapper script that sets up two bridges, one for each network. Create /etc/xen/scripts/my-network-script and insert the following:

#!/bin/sh
dir=$(dirname "$0")
"$dir/network-bridge" "$@" vifnum=0 netdev=bond0
"$dir/network-bridge" "$@" vifnum=1 netdev=eth1

Note: You may not always need the netdev= argument here, but I found that with “bond0” the script didn’t work unless it was specified explicitly. Also, don’t forget to chmod +x my-network-script.

Modify /etc/xen/xend-config.sxp and have it use my-network-script.

# (network-script network-bridge)  # comment out this line
(network-script my-network-script)

Modify your SLES10 VM config file (found by default in /etc/xen/vm) and add entries for both bridges.

vif = [ 'mac=00:16:41:06:59:44,bridge=xenbr0','mac=00:16:41:55:59:44,bridge=xenbr1' ]

Reboot your VM host (Dom0), or restart the bridges and Xen manually. Now start your VM (xm create -c DomU_name) or launch it from the YaST2 Virtual Machine Manager (yast2 xen). Once your VM is up, go to its LAN settings (“yast2 lan”) and you should see two NICs. Configure each with the proper IP address for its network.

Note: To manually stop the xen bridges on your host use:

/etc/xen/scripts/network-bridge stop vifnum=0 netdev=bond0
/etc/xen/scripts/network-bridge stop vifnum=1 netdev=eth1

To restart use -> rcxend restart

Exercise 3 – Configure a VM to access a physical NIC directly.

This uses the real drivers inside the VM and hides the NIC from the Dom0 host, which gives better network performance.

Load the pciback module by typing -> modprobe pciback

Get the PCI ID for the hardware you will be using by typing > lspci

     e.g. – You may see 0000:02:06.0 for an e100 card.

Unbind the desired device from its current driver so it can be handed to pciback. Go to /sys/bus/pci/drivers and look for the directory named after your driver, for example e100. Inside it you will see a symlink for the device, named after your PCI ID. Next ->

echo -n 0000:02:06.0 >  /sys/bus/pci/drivers/e100/unbind

Now the symlink you just looked at should be gone.

Now bind the device to pciback.

echo -n 0000:02:06.0 >  /sys/bus/pci/drivers/pciback/new_slot
echo -n 0000:02:06.0 >  /sys/bus/pci/drivers/pciback/bind

Go to the pciback directory and you should see a newly created symlink.

Now configure your Virtual Machine (DomU) to use the device.

Add to your conf file -> pci=['0000:02:06.0']


Specify it when loading your domU -> xm create -c vm1 pci=0000:02:06.0

Sample script to automate the above on bootup

#!/bin/sh
# Get the PCI ID using lspci, then set pci1 to that number.
# Find the driver name of your NIC under /sys/bus/pci/drivers.
pci1='0000:02:06.0'   # example from above -- adjust to your hardware
driver1='e100'        # example from above -- adjust to your hardware

# enable pciback
modprobe pciback

# hide the device from dom0 so pciback can take control
echo -n $pci1 > /sys/bus/pci/drivers/$driver1/unbind
#echo -n $pci2 > /sys/bus/pci/drivers/$driver2/unbind
sleep 1
# give the device to pciback: create a new slot, then bind
echo -n $pci1 > /sys/bus/pci/drivers/pciback/new_slot
#echo -n $pci2 > /sys/bus/pci/drivers/pciback/new_slot
sleep 1
echo -n $pci1 > /sys/bus/pci/drivers/pciback/bind
#echo -n $pci2 > /sys/bus/pci/drivers/pciback/bind


Environment: SLES10 host with SLES10 virtual machines



Disclaimer: As with everything else in the SUSE Blog, this content is definitely not supported by SUSE (so don't even think of calling Support if you try something and it blows up).  It was contributed by a community member and is published "as is." It seems to have worked for at least one person, and might work for you. But please be sure to test, test, test before you do anything drastic with it.