Load Balancing SMT servers with a SLES11 SP1 HAE Cluster

By: rhasleton

March 11, 2011 11:26 am






My setup is three servers running in VMware Workstation, using an iSCSI-backed disk for the shared storage. By the end of this guide, you should have a 3-node HAE cluster, with two of the nodes running SMT and the third node load balancing client requests between them.

1. Install HAE onto 3 SLES11 SP1 servers.
      – Follow the SUSE Linux Enterprise High Availability Extension documentation

2. Install SMT onto 2 of those servers.
      – Follow the Subscription Management Tool (SMT) documentation

3. Create an SBD resource on a shared disk (mine is iSCSI, /dev/sdb1)
      – sbd -d /dev/sdb1 create
      – sbd -d /dev/sdb1 dump (to verify)
      – add to /etc/init.d/boot.local -> modprobe softdog
      – create and edit /etc/sysconfig/sbd
            – SBD_DEVICE="/dev/sdb1"
            – SBD_OPTS="-W"
      – From a terminal, open the Cluster GUI
            – passwd hacluster (change the password)
            – crm_gui &
      – In crm_gui, create a primitive STONITH resource (stonith:external/sbd) and point it to /dev/sdb1 (a rough crm shell equivalent is sketched after this step)
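If you prefer the crm shell over crm_gui, roughly equivalent commands look like this (the resource name "stonith-sbd" is my own choice, and the sbd_device parameter should be checked against your HAE version):

      crm configure primitive stonith-sbd stonith:external/sbd \
            params sbd_device="/dev/sdb1"
      crm configure property stonith-enabled="true"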

4. Create an ocfs2 shared mount between the 2 SMT servers.
      – Configure a cloned group and place dlm and o2cb resources inside of it.
            – In crm_gui, highlight Resources, add a Clone (name it "base-services"), inside it a Group, and inside the group a primitive dlm resource (ocf:pacemaker:controld)
            – Add another primitive o2cb resource (ocf:ocfs2:o2cb)
      – Create a constraint so base-services cannot load on node3
            – Highlight Constraints and add a "Location Constraint"
            – Call it "Base-Services-not-on-node3"
            – Resource = base-services
            – Score = -INFINITY (means to never load on this node)
            – Node = node3
      – Start base-services group and make sure they start up.
      – mkfs.ocfs2 -N 8 /dev/sdb2
      – Add TCP port 21064 in your firewall on both nodes (otherwise only 1 node will be able to mount the device).
      – Create an /smt directory just off root (mkdir /smt) on nodes 1 and 2.
      – Edit your base-services clone and add "interleave = true" to the meta attributes, as we will now be adding a filesystem.
      – Now edit the base-group and add another primitive
            – Create the ocfs2 filesystem primitive (ocf:heartbeat:Filesystem). I called mine ‘fs’.
            – Device is /dev/sdb2
            – Directory is /smt
            – fstype is ocfs2
      – Hit “OK” three times to get back to the crm_gui management screen.
      – Notice that now you should have an fs primitive running on Nodes 1 & 2.
      – Type ‘mount’ on both nodes to verify that they have it mounted.
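For reference, a rough crm shell equivalent of the GUI steps in this section would look something like the following (the monitor operation values are illustrative, not taken from the steps above):

      crm configure primitive dlm ocf:pacemaker:controld op monitor interval="60" timeout="60"
      crm configure primitive o2cb ocf:ocfs2:o2cb op monitor interval="60" timeout="60"
      crm configure primitive fs ocf:heartbeat:Filesystem \
            params device="/dev/sdb2" directory="/smt" fstype="ocfs2"
      crm configure group base-group dlm o2cb fs
      crm configure clone base-services base-group meta interleave="true"
      crm configure location Base-Services-not-on-node3 base-services -inf: node3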

5. Change SMT to mirror repos to the shared disk.
      – Edit the /etc/smt.conf file and change the following:
            – MirrorTo=/srv/www/htdocs to MirrorTo=/smt
            – MirrorSRC=true to MirrorSRC=false (only change this if you don’t want to mirror src packages)
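After the edit, the relevant lines in /etc/smt.conf should read as follows (the exact section they sit in may differ slightly by SMT version):

      MirrorTo=/smt
      MirrorSRC=false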

6. Fix Apache to use “FollowSymLinks” to the shared SMT disk.
      – Edit /etc/apache2/default-server.conf
      – In the directory section for the document root (/srv/www/htdocs)
            – Change "Options None" to "Options FollowSymLinks"
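The resulting directory block in default-server.conf should look roughly like this (the other directives in the block stay as they were on your system):

      <Directory "/srv/www/htdocs">
          Options FollowSymLinks
          AllowOverride None
          Order allow,deny
          Allow from all
      </Directory>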

7. Create an LVS resource with a virtual IP in the cluster.
      – Create a new group, LVS
      – Create a primitive for the virtual IP address (ocf:heartbeat:IPaddr2) called VirtIP.
            – ip = (your virtual IP address)
            – Add lvs_support = true if you will use the direct routing method (not NAT or tunneling); otherwise, leave this off.
      – Create a primitive for the IP Load Balancer (ocf:heartbeat:ldirectord) called Director.
            – Go to “Operations” tab, remove the “monitor” op and add a “status” op (interval = 0, timeout = 15)
      – Create a constraint so it only runs on node3
            – Highlight Constraints and add another “Location Constraint”
            – Call it LVS-not-on-node1
            – Resource = LVS
            – Score = -INFINITY
            – Node = node1
      – Repeat for node2. You only need to do this if you don't want the VirtIP and ldirectord running on those nodes. (A crm shell sketch of this group follows.)
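A rough crm shell sketch of the same group and constraints, mirroring the operations described above (substitute your own virtual IP):

      crm configure primitive VirtIP ocf:heartbeat:IPaddr2 \
            params ip="<virtual IP>" lvs_support="true"
      crm configure primitive Director ocf:heartbeat:ldirectord \
            op status interval="0" timeout="15"
      crm configure group LVS VirtIP Director
      crm configure location LVS-not-on-node1 LVS -inf: node1
      crm configure location LVS-not-on-node2 LVS -inf: node2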

8. Set up the IP Load Balancer
      – See the SLES HAE documentation on IP load balancing for background.
      – Start up ‘yast2 iplb’ from a terminal, or go into YaST -> Other -> IPLB -> “Global Configuration”
            – Check Interval = 20 (how often to check if the real server is “up” in seconds)
            – Check Timeout = 5 (how long to wait if a failure occurs before checking again)
            – Failure Count = 2 (how many times to check before removing the real server from the available list)
[With these settings, the IP load balancer checks every 20 seconds whether the real servers are up. A missed response counts as one failure; the balancer waits 5 seconds and checks again. A second failure removes that real server from the list of available servers.]
            – Set auto-reload = yes
            – Set Quiescent = no
            – The “Help” button will explain all the parameters in greater detail.

      – Change to the "Virtual Server Configuration" tab
            – Click the "Add" button
            – Enter your virtual IP address and port (for example, the VirtIP address with port 80) in the Virtual Server box
            – For "Real Servers", click the "Add" button and enter:
                  – node1's IP address and port, followed by gate
                  – node2's IP address and port, followed by gate
            – The 2nd argument for the "Real Servers" can be gate, ipip, or masq. These are the packet-forwarding methods the LVS director will use: direct routing, tunneling, and NAT, respectively.
            – Check Type = Negotiate
            – Service = http
            – Request = test.html
            – Receive = still alive
            – Scheduler = wlc (this is the default according to the man page; man ipvsadm gives more info on each scheduler)
            – Protocol = tcp
            – Fallback = a local fallback address, typically 127.0.0.1:80 (used in case all your real servers are down)

      – Do the same thing as above, but for the https protocol. Change where it reads http to https and 80 to 443.
      – Create a test.html in /srv/www/htdocs on node1 and node2.
            – echo "still alive" > /srv/www/htdocs/test.html
      – The configuration file for the logical virtual server (Load Balancer) is /etc/ha.d/ldirectord.cf and will look similar to this (addresses shown here as placeholders):

autoreload = yes
checkinterval = 20
checktimeout = 5
failurecount = 2
quiescent = no
virtual = <virtual IP>:443
    checktype = negotiate
    fallback = <fallback address>:443
    protocol = tcp
    real = <node1 IP>:443 gate
    real = <node2 IP>:443 gate
    receive = "still alive"
    request = "test.html"
    scheduler = wlc
    service = https
virtual = <virtual IP>:80
    checktype = negotiate
    fallback = <fallback address>:80
    protocol = tcp
    real = <node1 IP>:80 gate
    real = <node2 IP>:80 gate
    receive = "still alive"
    request = "test.html"
    scheduler = wlc
    service = http

9. Change packet forwarding on LVS server.
      – On node3, set net.ipv4.ip_forward = 1 in the /etc/sysctl.conf
      – Run sysctl -p
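A quick sanity check that forwarding is active on node3 (it should print 1):

      node3:~ # sysctl net.ipv4.ip_forward
      net.ipv4.ip_forward = 1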

10. Add dummy virtual IP on the backend Apache/SMT servers and ignore ARP requests
      – On nodes1 and 2, add these parameters in /etc/sysctl.conf:
            – net.ipv4.conf.all.arp_ignore = 1
             – net.ipv4.conf.eth0.arp_ignore = 1
             – net.ipv4.conf.all.arp_announce = 2
            – net.ipv4.conf.eth0.arp_announce = 2
            – run ‘sysctl -p’
      – Edit the /etc/init.d/boot.local and add this line:
            – ifconfig lo:0 <virtual IP> netmask 255.255.255.255 (a reboot is needed for this to take effect, or run the command once by hand)
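To verify the settings on nodes 1 and 2 without rebooting, something like this should do (the loopback alias only shows up once the ifconfig line has been run or the node rebooted):

      sysctl net.ipv4.conf.all.arp_ignore       # should return 1
      sysctl net.ipv4.conf.eth0.arp_announce    # should return 2
      ifconfig lo:0                             # should show the virtual IP on the loopback alias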

11. Create new CA and server certs and share it between nodes.
      – See TID 7006024 for information
      – Delete and re-create Root CA
            – On node1, mv /var/lib/CAM/YaST_Default_CA /tmp
            – yast2 ca_mgm
            – Click “Create Root CA”
             – Call it whatever you want. Default name is YaST_Default_CA.
            – Common name should be set to YaST_Default_CA.
            – Add an email address for the administrator.
            – Fill out other options if desired, then click “Next”.
            – Give it a password, then click “Advanced Options”.
            – Highlight “Subject Alt Name”, then add at least the DNS address of the Virtual IP address set above.
            – Click “Next” then “Create”.
      – Create server certificate
            – Click “Enter CA” and enter the password you just used above.
            – Go to the “Certificates” tab and select “Add Server Certificate”.
            – Common Name is the DNS name of the virtual IP address.
            – Add an email address for the administrator.
            – Fill out any other desired options, then click “Next”.
            – You can now use the CA password or another for the certificate, then click “Advanced Options”.
            – Highlight “Subject Alt Name”, then add at least the DNS address of the physical node you are working on.
            – Click “Ok”, then “Next” then “Create”
      – Export the certificate as a common server certificate, so that apache will now use it.
            – Select the “Export”, then “Export as common server certificate”.
            – Enter the password that was chosen for the server certificate.
            – A message “Certificate has been written as common server certificate” will be displayed.
      – Export the CA certificate to the smt.crt file
            – In the YaST2 CA management module change to the “Description” tab and select “Advanced / Export to File”.
            – Select “Only the Certificate in PEM Format” and enter “/srv/www/htdocs/smt.crt” as the filename.
            – Select “Ok” to export the file.
            – Restart smt (rcsmt restart).
      – Export the CA to use on other SMT servers
            – Go to the "Description" tab of the CA and select "Advanced, Export to File"
            – Choose “Certificate and the Key Encrypted in PEM Format”
            – Give it a file name: /tmp/ca.pem and click “OK”
            – Copy that file to your other SMT nodes (scp /tmp/ca.pem node2:/tmp/)
            – On node2, mv /var/lib/CAM/YaST_Default_CA /tmp
            – yast2 ca_mgm
            – Select “Import CA”
                  – CA name is YaST_Default_CA
                  – “Path of CA Certificate” and “Path of Key” will be the same: /tmp/ca.pem
                  – Enter the password and press “OK”
            – Now follow the same three steps above to create a server certificate and export it for Apache and SMT on this node. (A quick certificate check is sketched below.)
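As an optional sanity check on each node, you can inspect the exported common server certificate with openssl; /etc/ssl/servercerts/servercert.pem is the usual SLES location, so adjust the path if yours differs:

      openssl x509 -in /etc/ssl/servercerts/servercert.pem -noout -subject -dates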

12. Modify /etc/my.cnf file for MySQL Master-to-Master replication setup
      – Sample my.cnf
            # Replication Parameters (place under the [mysqld] section)
            binlog-do-db = smt
            log-slave-updates #for circular replication
            replicate-same-server-id = 0 #To ensure the slave thread doesn’t try to write updates that this node has produced
            auto_increment_increment = 10 #set to number of nodes you have or likely to have
            auto_increment_offset = 1 #set to same as server-id
            master-host = <IP of the other SMT node> #ip address of master server
            master-user = slave_user #user to be used for replication
            master-password = novell #password created for slave_user
            replicate-do-db = smt #only replicate this db (may not put this in final revision)
            relay-log #for circular replication
            relay-log-index #for circular replication
            slave-net-timeout = 30 #decrease timeout from default of 1 hour
            master-connect-retry = 30 #to retry every 30 seconds if connection is broken
            server-id = 1 #increment per node
      – Restart SMT, ‘rcsmt restart’

13. Modify MySQL database for replication
      – Add "MySQL Server" to your firewall rules (or open TCP port 3306)
      – From both servers, log into MySQL:
            – mysql -u root -p
            – GRANT REPLICATION SLAVE ON *.* TO 'slave_user'@'%' IDENTIFIED BY 'some_password'; (put your own password in here)
            – FLUSH PRIVILEGES;
            – USE smt;
            – show master status;

      The output should look something like this:

+-------------------+----------+--------------+------------------+
| File              | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+-------------------+----------+--------------+------------------+
| mysqld-bin.000001 |       98 | smt          |                  |
+-------------------+----------+--------------+------------------+
1 row in set (0.00 sec)

      – Write this information down, as you will need it shortly.

14. After that is done, on each slave node running SMT, go into the MySQL database and follow these instructions
      – mysql -u root -p
      – Enter password:
      – STOP SLAVE;
      – CHANGE MASTER TO MASTER_HOST='<IP of the other node>', MASTER_USER='slave_user', MASTER_PASSWORD='<some_password>', MASTER_LOG_FILE='mysqld-bin.000001', MASTER_LOG_POS=98; (notice the parameters used from above)
            – The MASTER_LOG_FILE and LOG_POS come from the other node. If you are typing this on node1, use the
info gathered from node2 when you typed “show master status;”
      – START SLAVE;
            – With this, hopefully you see "Waiting for master to send event" when you check the slave status (see the check below). Good!
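To confirm replication is healthy on each slave, check the slave status from the mysql client (the field names below are standard MySQL output):

      SHOW SLAVE STATUS\G

Slave_IO_Running and Slave_SQL_Running should both read "Yes", and Slave_IO_State should show "Waiting for master to send event".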

15. Add a repository to one of the nodes, and see if it replicates over to the other node.
      – Use ‘yast2 smt’ or the command line to add a repository.
      – Check ‘smt-catalogs -o’ on nodes 1 and 2. They should both have the new repo if replication is working.

16. Create softlink to where SMT repos live
      – Start the smt mirror process for the repo you just enabled (smt-mirror -d)
      – Create the soft links necessary for clients to be able to access the repositories:
            – cd /srv/www/htdocs/repo
            – ln -s '/smt/repo/$RCE' '$RCE'
            – chown -h smt.www '$RCE'
      – Also, verify that the smt user is the same uid on both nodes, otherwise the rights will not work correctly (id smt).
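A quick way to confirm the link and the uid match on both nodes (purely a verification step, not part of the setup):

      ls -l /srv/www/htdocs/repo    # the $RCE entry should be a symlink into /smt/repo
      id smt                        # compare uid/gid between node1 and node2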

17. Test the load balancer
      – On node3, run ipvsadm -L. You should see output similar to this:

node3:~ # ipvsadm -L
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  <virtual IP or hostname>:http wlc
  -> node1:http                   Route   1      0          0
  -> node2:http                   Route   1      0          0
TCP  <virtual IP or hostname>:https wlc
  -> node1:https                  Route   1      0          0
  -> node2:https                  Route   1      0          0

      – You can also use ipvsadm -Ln to show it with IP Addresses.
      – Use 'ipvsadm -Lc' or 'ipvsadm -Lcn' to see the actual requests coming in (watch -n 1 'ipvsadm -Lcn').
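You can also test from a client machine by fetching the health-check page through the virtual IP (substitute your own address or DNS name; -k skips certificate validation for a quick test):

      curl http://<virtual IP or DNS name>/test.html
      curl -k https://<virtual IP or DNS name>/test.html

Both should return the "still alive" text served by whichever real server the director picked.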

In a true clustered environment, a 4th node would be added to ensure that if node3 went down, the LVS resource would have somewhere to fail over. You can also add more SMT nodes to the mix and create true circular replication with MySQL. Troubleshooting errors in the MySQL database is outside the scope of this article.


Categories: SUSE Linux Enterprise High Availability Extension, Technical Solutions

Disclaimer: As with everything else at SUSE Conversations, this content is definitely not supported by SUSE (so don't even think of calling Support if you try something and it blows up).  It was contributed by a community member and is published "as is." It seems to have worked for at least one person, and might work for you. But please be sure to test, test, test before you do anything drastic with it.

1 Comment

  1. By: loadbalancer

    Maybe you should try an iptables REDIRECT method instead of the loopback adapter method?
    Check out page 13 of the deployment guide for web proxies.