How To Configure DRBD on HA

This document (3299772) is provided subject to the disclaimer at the end of this document.

Environment


SUSE Linux Enterprise Server 10
SUSE Linux Enterprise Server 9
 

Situation

For those unfamiliar with DRBD or HA, please see the Additional Information section for appropriate reading material. DRBD, or Distributed Replicated Block Device, is a method of replicating an entire block device over an existing network; many refer to it as network RAID-1. HA, or High Availability (heartbeat), is a relatively young but highly scalable clustering solution.
 
The configuration outlined in this document configures only two resources: drbd and a filesystem (a mount point). If remote clients will be accessing these resources, particularly the filesystem resource, additional resources may be required for full functionality and redundancy. An IP address resource or a file-sharing daemon resource (e.g., Samba, NFS, or HTTP) may be required, but these fall outside the scope of this document.

Although stonith is an integral piece of any fully-redundant high-availability solution, its configuration is also outside the scope of this document.

It is assumed that the reader understands the ports, protocols, and transport methods involved and has addressed them appropriately.

Resolution

Note:  This document applies ONLY to SLES 9 and early SLES 10 versions and is no longer maintained or supported.

 

For SLES9, the process is as follows:

  1. Use YaST for the initial heartbeat setup, without configuring resources yet.

    1. YaST2 | System | High Availability. If you have a stonith device, this is the place to configure it.
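For reference, YaST writes the heartbeat communication settings to /etc/ha.d/ha.cf. A minimal hand-edited equivalent might look like the sketch below (node names, interface, and timing values are illustrative assumptions, not values taken from this document):

# /etc/ha.d/ha.cf (sketch; adjust names, interface, and timings to your environment)
logfacility     local0
keepalive       2
deadtime        30
bcast           eth0
node            node1 node2
auto_failback   off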

  2. Manually partition the backing device that DRBD will use on each node.

    1. If you're not already familiar with DRBD, note that the backing devices don't have to have identical names (e.g., /dev/sdb2 on one node and /dev/sdc3 on another), but they DO have to be the same physical size (the same number of blocks).
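For example, one way to compare sizes, assuming /dev/sdb1 is the backing partition on both nodes, is to check the block counts the kernel reports on each node:

# run on each node and compare the '#blocks' column for the backing partition
cat /proc/partitions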

  3. Manually create /etc/drbd.conf. An example of a basic configuration file would be:

#
# please have a look at the example configuration file in
# /usr/share/doc/packages/drbd.conf
#

resource r0 {
  protocol C;

  # the host names (node1, node2) must match the output of 'uname -n' on each node
  on node1 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.168.0.10:7788;
    meta-disk internal;
  }

  on node2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.168.0.11:7788;
    meta-disk internal;
  }
}
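Optionally, the syntax of /etc/drbd.conf can be sanity-checked on each node before continuing; 'drbdadm dump' parses the configuration and prints it back, so a parse error shows up here rather than at service start (this is an extra check, not part of the original procedure):

drbdadm dump all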

  4. Copy /etc/drbd.conf to node2. Using scp or even sneakernet should suffice.

  5. Start the DRBD service on both nodes by using either 'modprobe drbd' or 'rcdrbd start'.

  6. Ensure that the DRBD service will start upon boot by using 'chkconfig drbd on'.
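At this point it can be useful to confirm that the two nodes see each other. The kernel module exposes its state in /proc/drbd; the exact output varies by DRBD version, but both nodes should report a Connected state and a Secondary/Secondary role before continuing:

# run on either node
cat /proc/drbd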

  7. Run through the DRBD filesystem setup.

    1. In order to write to the DRBD device, one node needs to be primary. 'drbdadm primary r0', or, if that reports errors or inconsistency, 'drbdsetup /dev/drbd0 primary --do-what-I-say', will accomplish this.

    2. Next, create the filesystem on the DRBD device, for example 'mkfs.ext3 /dev/drbd0' (the filesystem type should match the one used by the Filesystem resource configured later, ext3 in this document).

    3. Wait for the nodes to synchronize, then change the node back to secondary. Run 'rcdrbd status' to check the synchronization progress, then 'drbdadm secondary all' once it has completed.
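Putting these sub-steps together, the sequence on the node chosen as primary might look like the following sketch (the device, resource name, and ext3 filesystem follow the examples in this document; use --do-what-I-say only if the plain 'drbdadm primary' call refuses):

drbdadm primary r0       # make this node primary (or: drbdsetup /dev/drbd0 primary --do-what-I-say)
mkfs.ext3 /dev/drbd0     # create the filesystem on the DRBD device
rcdrbd status            # repeat until synchronization has finished
drbdadm secondary all    # return the node to secondary so heartbeat can manage the primary role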

  8. Modify the heartbeat resource setup with YaST.

    1. YaST | System | High Availability again.

    2. It's really just a matter of adding two resources: 'drbddisk::r0' and 'Filesystem::/dev/drbd0::/data::ext3'. When complete, the haresources entry should look something like the example below.
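With heartbeat v1 on SLES 9, these two resources end up on a single line in /etc/ha.d/haresources. Assuming node1 is the preferred node (an illustrative choice, not one specified in this document), the entry would look roughly like:

node1 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext3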
       

     
  9. Copy /etc/ha.d/haresources to node2. Again, using scp or sneakernet will suffice.

  10. Restart heartbeat using 'rcheartbeat restart' on each node, and you should now have a working high-availability DRBD resource.


 

For SLES10, the process is as follows:

* Note: Prior to configuring DRBD on SLES 10, please ensure that the drbd-kmp package is installed.
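One quick way to verify this is to compare the running kernel flavor with the installed DRBD packages (a sketch; kmp package names vary by kernel flavor, e.g. drbd-kmp-default or drbd-kmp-smp):

uname -r               # shows the running kernel and its flavor
rpm -qa | grep drbd    # lists the installed drbd and drbd-kmp packages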

Please follow steps 1 through 7 as outlined above (steps 8 through 10 are specific to the haresources-based setup on SLES 9 and do not apply here, since resources are configured through the heartbeat GUI instead). Once heartbeat is configured on one node, the configuration can be propagated to the other nodes in the cluster by using the '/usr/lib/heartbeat/ha_propagate' utility.
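A minimal sketch of that propagation step, followed by starting heartbeat on both nodes (assuming ssh/scp access between the nodes):

/usr/lib/heartbeat/ha_propagate    # copies ha.cf and authkeys to the other cluster nodes
rcheartbeat start                  # run on each node
chkconfig heartbeat on             # run on each node so heartbeat starts at boot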

  1. Use the heartbeat GUI '/usr/lib/heartbeat-gui/haclient.py' to create a resource or resource group. In order to log in to the cluster, the user account used must be a member of the haclient system group and login for this user must be enabled (when heartbeat is installed, a system user, hacluster, is created, but login is disabled; see the sketch after the sub-steps below for one way to enable it).

    1. First, create the resource group with a name that expresses the role this group plays or the resources contained therein. Creating an ordered resource group makes sense in this case because the Filesystem resource needs to start after the DRBD resource. The required parameters are the same as for SLES 9; we just use a different utility for configuration.

    2. Create the DRBD resource. In simple cases like this, we'll just use a native resource, drbddisk. As identified in the DRBD configuration, r0 is the only parameter needed.

    3. Create the Filesystem resource. The device, directory, and filesystem type are the only required parameters. However, should other filesystem or OCFS2 configuration options be needed, they can be configured as well.
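As mentioned in step 1, the GUI requires a login as a member of the haclient group. The simplest preparation, sketched below, is to give the pre-created hacluster account a password and then start the GUI:

passwd hacluster                      # enable login for the pre-created GUI account
/usr/lib/heartbeat-gui/haclient.py    # start the heartbeat GUI and connect to the cluster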

  2. When fully configured, the resource group can be started through the GUI. However, once the group is online, the command-line crm_resource utility is suggested for stopping or migrating a resource or resource group because of its speed and usability. Please refer to the crm_resource man page for excellent examples and details.
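A few examples of crm_resource usage (the options follow the heartbeat 2 man page and may vary by version; 'my-drbd-group' is a hypothetical resource group name used only for illustration):

crm_resource -L                        # list the configured resources
crm_resource -W -r my-drbd-group       # show which node currently runs the group
crm_resource -M -r my-drbd-group       # migrate the group away from its current node
crm_resource -U -r my-drbd-group       # remove the migration constraint again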

Additional Information

By default, heartbeat on SLES10 merely logs general information to /var/log/messages. If detailed troubleshooting is required, /etc/ha_logd.cf needs to be configured. Please take a look at /usr/share/doc/packages/heartbeat/ha_logd.cf for configuration parameters and guidelines.
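A minimal ha_logd.cf sketch, assuming the commonly used directives from the packaged example (log paths are illustrative):

# /etc/ha_logd.cf (sketch)
debugfile   /var/log/ha-debug
logfile     /var/log/ha-log
logfacility daemon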
 
Some useful links:

Disclaimer

This Support Knowledgebase provides a valuable tool for SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.

  • Document ID: 3299772
  • Creation Date: 31-Mar-2008
  • Modified Date: 04-Mar-2021
    • SUSE Linux Enterprise Server
