5.2 Configuring Resources

There are three types of RAs (Resource Agents) available with Heartbeat. First, there are legacy Heartbeat 1 scripts. Heartbeat can make use of LSB initialization scripts. Finally, Heartbeat has its own set of OCF (Open Cluster Framework) agents. This documentation concentrates on LSB scripts and OCF agents.

5.2.1 LSB Initialization Scripts

LSB scripts are commonly found in the directory /etc/init.d. They must implement several actions, at least start, stop, restart, reload, force-reload, and status, as explained in http://www.linux-foundation.org/spec/refspecs/LSB_1.3.0/gLSB/gLSB/iniscrptact.html.
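As an illustration of these requirements, the following is a minimal, hypothetical skeleton of an LSB-style script (a real script in /etc/init.d would start and stop an actual daemon). It shows the required actions and the conventional exit codes: 0 for success and for "status" of a running service, 3 for "status" of a stopped service.

```shell
# Hypothetical stub; written to /tmp only for demonstration purposes.
cat > /tmp/skeleton-initscript <<'EOF'
#!/bin/sh
PIDFILE=/tmp/skeleton.pid
case "$1" in
  start)   echo $$ > "$PIDFILE" ;;
  stop)    rm -f "$PIDFILE" ;;
  restart|force-reload) "$0" stop; "$0" start ;;
  reload)  : ;;   # re-read configuration, if any
  status)  [ -f "$PIDFILE" ] && { echo running; exit 0; } \
             || { echo stopped; exit 3; } ;;
  *) echo "Usage: $0 {start|stop|restart|reload|force-reload|status}"; exit 2 ;;
esac
exit 0
EOF
chmod +x /tmp/skeleton-initscript

/tmp/skeleton-initscript start
/tmp/skeleton-initscript status      # prints "running", exit code 0
/tmp/skeleton-initscript stop
/tmp/skeleton-initscript status || echo "exit code: $?"   # prints "stopped", then "exit code: 3"
```

Heartbeat relies on exactly these exit code conventions to decide whether a resource is running.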

The configuration of these services is not standardized. If you intend to use an LSB script with Heartbeat, make sure that you understand how the respective script is configured. You can often find documentation on this in the documentation of the respective package in /usr/share/doc/packages/<package_name>.

When used by Heartbeat, the service must not be touched by any other means: do not start or stop it at boot, at reboot, or manually. However, if you want to check whether the service is configured properly, start it manually, but make sure that it is stopped again before Heartbeat takes over.

Before using an LSB resource, make sure that the configuration of this resource is present and identical on all cluster nodes. The configuration is not managed by Heartbeat. You must take care of that yourself.

5.2.2 OCF Resource Agents

All OCF agents are located in /usr/lib/ocf/resource.d/heartbeat/. These are small programs that have a functionality similar to that of LSB scripts. However, the configuration is always done with environment variables. All OCF Resource Agents are required to have at least the actions start, stop, status, monitor, and meta-data. The meta-data action retrieves information about how to configure the agent. For example, if you want to know more about the IPaddr agent, use the command:

/usr/lib/ocf/resource.d/heartbeat/IPaddr meta-data

The output is a lengthy description in a simple XML format. You can validate it against the ra-api-1.dtd DTD. Basically, this XML format has three sections: first, several common descriptions; second, all available parameters; and last, the available actions of this agent.
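The structure of such meta-data output can be explored with standard tools. The following self-contained sketch uses a shortened, hypothetical excerpt (a real agent prints much more) and extracts the parameter names from the second section:

```shell
# Shortened, hypothetical meta-data excerpt, not a real agent's full output.
cat > /tmp/meta-data.xml <<'EOF'
<resource-agent name="IPaddr">
  <longdesc lang="en">Manages a virtual IPv4 address.</longdesc>
  <parameters>
    <parameter name="ip" unique="1" required="1"/>
    <parameter name="nic" unique="0" required="0"/>
  </parameters>
  <actions>
    <action name="start" timeout="90"/>
    <action name="monitor" timeout="20" interval="5s"/>
  </actions>
</resource-agent>
EOF

# List the configurable parameters found in the parameters section.
grep -o 'parameter name="[^"]*"' /tmp/meta-data.xml | sed 's/.*"\(.*\)"/\1/'
# prints:
#   ip
#   nic
```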

A typical parameter of an OCF RA as shown with the meta-data command looks like this:

<!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd">
<resource-agent name="IPaddr">
  <!-- Some elements omitted -->
  <parameter name="ip" unique="1" required="1">
    <longdesc lang="en">
The IPv4 address to be configured in dotted quad notation, for example
    </longdesc>
    <shortdesc lang="en">IPv4 address</shortdesc>
    <content type="string" default=""/>
  </parameter>
</resource-agent>

This is part of the IPaddr RA. The information about how to configure the parameter of this RA can be read as follows:

The resource-agent element is the root element of every meta-data output.

The name of the nvpair to configure is ip. This RA attribute is mandatory for the configuration.

The description of the parameter is available in a long and a short description tag.

The content of the value of this parameter is a string. There is no default value available for this parameter.

Find a configuration example for this RA at Section 3.0, Setting Up a Simple Resource.

5.2.3 Example Configuration for an NFS Server

To set up the NFS server, three resources are needed: a file system resource, a drbd resource, and a group containing an NFS server and an IP address. You can write each of the resource configurations to a separate file and then load them into the cluster with cibadmin -C -o resources -x resource_configuration_file.
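The loading step can be sketched as a short shell loop. The file names below are arbitrary examples, and the cibadmin invocations are only echoed here because they have to be run on a live cluster node:

```shell
# Hypothetical file names; each file would contain one of the resource
# configurations developed in this section.
for f in /tmp/filesystem.xml /tmp/drbd.xml /tmp/nfs_group.xml; do
  # On a cluster node you would run this command directly:
  echo "cibadmin -C -o resources -x $f"
done
```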

Setting Up a File System Resource

The file system resource is configured as an OCF primitive resource. Its task is to mount a device to a directory on start requests and to unmount it on stop requests. In this case, the device is /dev/drbd0 and the directory to use as mount point is /srv/failover. The file system used is reiserfs.

The configuration for this resource looks like the following:

<primitive id="filesystem_resource" class="ocf" provider="heartbeat" type="Filesystem">
  <instance_attributes id="ia-filesystem_1">
      <nvpair id="filesystem-nv-1" name="device" value="/dev/drbd0"/>
      <nvpair id="filesystem-nv-2" name="directory" value="/srv/failover"/>
      <nvpair id="filesystem-nv-3" name="fstype" value="reiserfs"/>
  </instance_attributes>
</primitive>

Configuring drbd

Before starting with the drbd Heartbeat configuration, set up a drbd device manually. Basically this is configuring drbd in /etc/drbd.conf and letting it synchronize. The exact procedure for configuring drbd is described in the Storage Administration Guide. For now, assume that you configured a resource r0 that may be accessed at the device /dev/drbd0 on both of your cluster nodes.
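A minimal /etc/drbd.conf sketch for such a resource might look as follows. The host names, backing disks, and IP addresses are placeholders that must match your actual nodes; see the Storage Administration Guide for the authoritative procedure:

```
resource r0 {
  protocol C;
  on node1 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.168.1.1:7788;
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.168.1.2:7788;
    meta-disk internal;
  }
}
```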

The drbd resource is an OCF master slave resource. This can be seen in the metadata of the drbd RA. More important, however, is that the actions section of the metadata contains the actions promote and demote. These are mandatory for master slave resources and are commonly not available in other resources.

In Heartbeat, master slave resources may have multiple masters on different nodes. It is even possible to have a master and a slave on the same node. Therefore, configure this resource so that there is exactly one master and one slave, each running on a different node. Do this with the meta attributes of the master_slave resource. Master slave resources are a special kind of clone resource in Heartbeat. Every master and every slave counts as a clone.

<master_slave id="drbd_resource" ordered="false">
  <meta_attributes id="ma-drbd_1">
      <nvpair id="drbd-nv-1" name="clone_max" value="2"/>
      <nvpair id="drbd-nv-2" name="clone_node_max" value="1"/>
      <nvpair id="drbd-nv-3" name="master_max" value="1"/>
      <nvpair id="drbd-nv-4" name="master_node_max" value="1"/>
      <nvpair id="drbd-nv-5" name="notify" value="yes"/>
  </meta_attributes>
  <primitive id="drbd_r0" class="ocf" provider="heartbeat" type="drbd">
    <instance_attributes id="ia-drbd_1">
        <nvpair id="drbd-nv-6" name="drbd_resource" value="r0"/>
    </instance_attributes>
  </primitive>
</master_slave>

The root element of this resource is master_slave. The complete resource is later accessed with the ID drbd_resource.

clone_max defines how many masters and slaves may be present in the cluster.

clone_node_max is the maximum number of clones (masters or slaves) that are allowed to run on a single node.

master_max sets how many masters may be available in the cluster.

master_node_max is similar to clone_node_max and defines how many master instances may run on a single node.

notify is used to inform the cluster before and after a clone of the master_slave resource is stopped or started. This is used to reconfigure one of the clones to be a master of this resource.

The RA that actually does the work inside this master slave resource is the drbd primitive.

The most important parameter this resource needs to know about is the name of the drbd resource to handle.

NFS Server and IP Address

To make the NFS server always available at the same IP address, use an additional IP address alongside the ones the machines use for their normal operation. This additional address is then assigned to the active NFS server on top of the system's own IP address.

The NFS server and the IP address of the NFS server should always be active on the same machine. In this case, the start sequence is not very important. They may even be started at the same time. These are the typical requirements for a group resource.

Before starting the Heartbeat RA configuration, configure the NFS server with YaST. Do not let the system start the NFS server; just set up the configuration file. If you want to do that manually, see the manual page exports(5) (man 5 exports). The configuration file is /etc/exports. The NFS server is configured as an LSB resource.
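A minimal /etc/exports sketch for the fail-over directory used in this example could look like the following line. The client specification and export options are placeholders; see exports(5) for the available options:

```
/srv/failover   *(rw,sync)
```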

Configure the IP address completely with the Heartbeat RA configuration. No additional modification is necessary in the system. The IP address RA is an OCF RA.

<group id="nfs_group">
  <primitive id="nfs_resource" class="lsb" type="nfsserver"/>
  <primitive id="ip_resource" class="ocf" provider="heartbeat" type="IPaddr">
    <instance_attributes id="ia-ipaddr_1">
        <nvpair id="ipaddr-nv-1" name="ip" value=""/>
    </instance_attributes>
  </primitive>
</group>

A group resource may contain several other resources. It must have an ID set.

The nfsserver is simple. It is just the LSB script for the NFS server. The service itself must be configured in the system.

The IPaddr OCF RA does not need any configuration in the system. It is just configured with the following instance_attributes.

There is only one mandatory instance attribute in the IPaddr RA. Further configuration options can be found in the metadata of the RA.