SUSE Conversations


Bonding Multiple Network Interfaces on SLES 10



By: gsanjeev

January 9, 2009 12:03 pm


Author: Sanjeev Gupta

Overview:

This article describes the steps to bond two network interfaces on a SLES 10 server, either to increase network throughput or to achieve NIC failover for high-availability configurations.

  1. Find out whether the network card supports MII (miimon) and ethtool link monitoring. This determines which bonding module options we can use later in the configuration.
    # ethtool eth0

    If you see output like the following, you can use MII monitoring (miimon) mode.

    Settings for eth0
    Current message level: 0x000000ff (255)
    Link Detected: yes
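
    An optional extra check: ethtool can also report which driver a NIC uses and its PCI bus address, which can come in handy later when matching cards to configuration files. The output below is illustrative only; your driver name, version and bus ID will differ.

    # ethtool -i eth0
    driver: bnx2
    version: 1.4.44
    bus-info: 0000:05:00.0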
  2. Configure your network cards in YaST, and configure the first network card with the IP address and other network information that you want the bonded interface to have. Configure the other network card with a dummy IP address; since we won’t be using this dummy configuration anywhere, its value doesn’t matter. (A sample of the resulting configuration file follows.)
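    For orientation, a YaST-generated ifcfg-eth-id-<MAC> file typically contains entries along these lines; the MAC address, bus ID and IP values shown here are placeholders, not values to copy.

    BOOTPROTO='static'
    IPADDR='192.168.1.10'
    NETMASK='255.255.255.0'
    STARTMODE='auto'
    _nm_name='bus-pci-0000:05:00.0'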
  3. Go to a terminal window, cd to /etc/sysconfig/network/, and make a copy of the configuration file of the network card you just configured with the real IP address and other network information. The network configuration file name starts with ifcfg-eth-id*. We will use this file as a template for our bonding configuration. The destination file should be named ifcfg-bond0 for the first bonded pair.
    # cd  /etc/sysconfig/network/
    # ls ifcfg-eth-id*
    # cp ifcfg-eth-id-<your first network card> ifcfg-bond0
    Note: You can use the YaST2 network configuration window to find out which config file belongs to your first network card. Note down the MAC address of the first network card from YaST and compare it with the names of the ifcfg-eth-id-* files, which have the MAC address of the card appended to their names.
  4. We will use the newly created ifcfg-bond0 file as our starting template. Next, we need to discover the PCI bus IDs of the two ‘real’ NICs. To do this, change to /etc/sysconfig/network and type “grep bus-pci ifcfg-eth-id*”.
    # cd  /etc/sysconfig/network
    #  grep bus-pci ifcfg-eth-id*

    You should see something like this.

    :_nm_name='bus-pci-0000:05:00.0'
    :_nm_name='bus-pci-0000:04:00.0'
  5. The above command gives us the bus addresses of the two physical network cards. Using this information, we can now modify our ifcfg-bond0 file to tell it which cards to use. Add a section like this at the end of the ifcfg-bond0 file and save it (a complete example file follows the snippet):
    BONDING_MASTER=yes
    BONDING_SLAVE_0='bus-pci-0000:05:00.0'
    BONDING_SLAVE_1='bus-pci-0000:04:00.0'
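
    For reference, after this edit the complete ifcfg-bond0 might look roughly like the following; the IP address, netmask and bus IDs are placeholders for your own values.

    BOOTPROTO='static'
    IPADDR='192.168.1.10'
    NETMASK='255.255.255.0'
    STARTMODE='auto'
    BONDING_MASTER=yes
    BONDING_SLAVE_0='bus-pci-0000:05:00.0'
    BONDING_SLAVE_1='bus-pci-0000:04:00.0'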
  6. The next step is to tell the system which driver to load when bond0 is referenced. To do this, open the file /etc/modprobe.conf.local.
    # vi /etc/modprobe.conf.local

    Add the following lines at the end of the file and save it:

    alias bond0 bonding
    options bonding miimon=100 mode=0 use_carrier=0

    The above specifies that when bond0 is referenced, the bonding driver should be loaded with the parameters outlined. The ‘miimon=100’ value tells the driver to use MII monitoring, checking every 100 milliseconds for a link failure. The ‘mode’ parameter selects the bonding policy.

    Note: The default is round-robin. The most commonly used mode values are:
    0 Round-robin policy: Transmit in sequential order from the first available slave through the last. This mode provides load balancing and fault tolerance.
    1 Active-backup policy: Only one slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The bond’s MAC address is externally visible on only one port (network adapter) to avoid confusing the switch. This mode provides fault tolerance.
    2 XOR policy: Transmit based on [(source MAC address XOR’d with destination MAC address) modulo slave count]. This selects the same slave for each destination MAC address. This mode provides load balancing and fault tolerance.
    3 Broadcast policy: Transmits everything on all slave interfaces. This mode provides fault tolerance.
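
    If NIC failover for high availability (as mentioned in the overview) matters more to you than load balancing, the same options line with mode=1 selects the active-backup policy described above, for example:

    alias bond0 bonding
    options bonding miimon=100 mode=1 use_carrier=0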
  7. We now need to clear out the old ifcfg files that we no longer need. Delete the ifcfg-eth-id* files in the /etc/sysconfig/network directory (or move them elsewhere as a backup) and restart the network.
    # cd /etc/sysconfig/network
    # mv ifcfg-eth-id* /backup
    # rcnetwork restart
  8. If you have done the configuration correctly, you will see the bond0 interface come up with the correct IP address ‘as bonding master’, followed by two ‘enslaving eth’ lines. Verify the configuration using ifconfig; you will notice that the MAC addresses of all the cards are identical, just as the IP addresses for eth0, eth1 and bond0 are identical. (A more detailed check via /proc is shown below.)
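    If you want more detail than ifconfig gives, the bonding driver also reports its state under /proc. The output below is abbreviated and illustrative; the exact fields vary with the driver version.

    # cat /proc/net/bonding/bond0
    Bonding Mode: load balancing (round-robin)
    MII Status: up
    Slave Interface: eth0
    MII Status: up
    Slave Interface: eth1
    MII Status: up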
  9. Test your configuration by plugging network cables into both network cards and starting a ping to another machine on the network. Unplug the cable from card 2 and verify that the ping continues without interruption. Then plug the cable back into card 2 and remove the cable from card 1. If the ping still runs fine, your configuration and setup are correct.
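    While pulling cables, it can help to watch the bonding state from a second terminal; the bonding driver also logs link changes to the system log, though the exact wording of those messages depends on the driver version.

    # watch -n 1 cat /proc/net/bonding/bond0
    # tail -f /var/log/messages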
  10. If needed, you can repeat the process for a second bond: just add an ‘alias bond1 bonding’ line to modprobe.conf.local and carry on as before (a short sketch follows).
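    As a sketch, the second bond needs its own config file and an extra alias line; the file names and values below follow the same pattern as bond0 and are placeholders for your own second pair of NICs.

    # cp ifcfg-bond0 ifcfg-bond1
      (edit ifcfg-bond1: set a different IPADDR and the BONDING_SLAVE_* bus IDs of the second NIC pair)
    # vi /etc/modprobe.conf.local
      (add: alias bond1 bonding)
    # rcnetwork restart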

Categories: SUSE Linux Enterprise Server, Technical Solutions


7 Comments

  1. By:geeeerad

    Hi,

    I’ve tried this and it doesn’t seem to work for me. I have a total of 3 NICs in my server: 2 are identical Broadcom GB NICs, and one is an Intel Pro 1000. I tried it with the Intel and a Broadcom, and with the two Broadcoms. The setup and everything works perfectly; however, when I unplug one NIC it loses connection. It seems to bond to only one NIC in my case. I waited for about 45 seconds to see if the other one picks up, but it never does. Am I not waiting long enough? I would like to use the Intel and one of the Broadcoms if possible, although at this rate, getting it to work either way would be great. I also tried the TID on Novell’s site to do it through YaST, and that does the same thing. It’s a Dell PowerEdge 2900, brand new, running for about a month now. Hoping someone can point me in the right direction.

    Thanks

    Rob

  2. By:gsanjeev

    It should work with two different NICs, but it’s always good to use the same NICs for this configuration. Can you send me the config files and the output of ifconfig? I will try to help you with this. Also, can you provide the exact model of the NICs and the server?

  3. By:Conz

    Worked like a charm for me on a ProLiant DL360 with Broadcom NICs.
    I did change the mode to 4 (LACP) to create a switch-based link aggregation / EtherChannel, which was preferable in our case.
    The configuration, besides mode=4, is identical.
    I’ve posted the additional modes below for other people who might be looking into setting up Ethernet bonding. Copied from http://www.linuxhorizon.ro/bonding.html

    mode=4 (802.3ad)
    IEEE 802.3ad Dynamic link aggregation. Creates aggregation groups that share the same speed and duplex settings. Utilizes all slaves in the active aggregator according to the 802.3ad specification.

    Prerequisites:
    1. Ethtool support in the base drivers for retrieving the speed and duplex of each slave.
    2. A switch that supports IEEE 802.3ad Dynamic link aggregation. Most switches will require some type of configuration to enable 802.3ad mode.

    mode=5 (balance-tlb)
    Adaptive transmit load balancing: channel bonding that does not require any special switch support. The outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed receiving slave.

    Prerequisite:
    Ethtool support in the base drivers for retrieving the speed of each slave.

    mode=6 (balance-alb)
    Adaptive load balancing: includes balance-tlb plus receive load balancing (rlb) for IPV4 traffic, and does not require any special switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond such that different peers use different hardware addresses for the server.

    The most used are the first four mode types…

    Also, you can use multiple bond interfaces, but for that you must load the bonding module as many times as you need.
    Presuming that you want two bond interfaces, you must configure /etc/modules.conf as follows:

    alias bond0 bonding
    options bond0 -o bond0 mode=0 miimon=100
    alias bond1 bonding
    options bond1 -o bond1 mode=1 miimon=100
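
    Applied to the modprobe.conf.local line from step 6 of the article, the mode=4 setup described above would look something like the following, assuming the switch ports are configured for 802.3ad:

    alias bond0 bonding
    options bonding miimon=100 mode=4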

  4. By:ihadzimahovic

    I have a serious problem and I’ll try to be brief about it.

    To achieve an HA configuration on my two ProLiant 360 servers, and as I have limited resources and no spare storage, I took another HP G4 server and configured it as an NFS server using openSUSE 11.
    Then the next problem came up: in order for VMotion to work, I need a 1 GB link between the storage (HP G4) and my two ESX servers, and I don’t have a GB switch available. So I decided to connect the “storage” directly to the ESX servers with crossover cables.
    I did that, and then a strange thing happened on the storage: only one NIC would work at a time.
    To avoid this problem I decided to bond 2 NIC cards on my storage.
    Now I can ping both of my ESX servers and I managed to attach the storage as NAS to ESX; at that moment everything looked fine.
    Even HA would start up.
    But then I realized that the speed between ESX and the storage is way too slow.
    Pinging the storage I get .130-.150 ms response times, which is, to me, impossibly slow for a direct cable connection.

    Does anyone have any idea why I get such mediocre performance after bonding the NICs?

    It is impossible even to migrate a single VM from one datastore to the newly added NFS store.

    Please heeeeeeelp :)

  5. By:booktrunk

    That was really useful; it worked the first time for me.

    One thing: if you have already configured static routes for the individual cards, you might need to go into /etc/sysconfig/network/routes and change the routes so that they use bond0 as their device instead of the previous network card.

    I did this, restarted the network with rcnetwork restart, and my static routes came back to life as well.
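
    For example, a default route entry in /etc/sysconfig/network/routes that previously pointed at eth0 would change to something like this (the gateway address here is just an example):

    default 192.168.1.1 - bond0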

  6. By:Rachelsdad

    When bonding NICs under various flavors of SuSE, I tend to use YaST (on other distros, I just hack the config files). To do this under YaST:

    Leave the two (or more) NICs to be bonded (slaves) unconfigured, and add an interface.

    Configure the interface as a Bond Network. Assign the static IP and subnet mask to this interface, and set the default route.

    On the main configuration page for the interface, the two (or more) unconfigured NICs should be listed with checkboxes next to them to set them as bonding slaves. Check the desired NICs, and then set the Bond Driver Options as appropriate, e.g., from the drop-down, select a preconfigured option, say, mode=balance-rr miimon=100, or type your own options. Save this, and leave the physical NICs which are to be the slaves unconfigured.

    Finish your YaST network device configuration, and you should end up with a bonded pair (or trio, etc.).

  7. By:ericsi

    Hi,

    I ran the “ethtool eth0” command, but my output was missing the last 3 lines shown in your example output. Any ideas what I should do now?

    Thanks
