How to tweak your SAP S/4 HANA – Enqueue Replication 2 High Availability Setup – Part 1

How to tweak your SAP S/4 HANA – Enqueue Replication 2 HA Setup

This blog will describe how you can tweak the Enqueue Replication 2 High Availability setup, and how you can benefit from the new features of the Enqueue Replication 2 technology.

Up to now our SAP NetWeaver HA-CLU certification has been based on a two node cluster.

Two examples of using the new Enqueue Replication 2 features come to mind:

  • Dividing cluster nodes over multiple data centers, e.g. on-premise / cloud
    infrastructure with three or more Availability Zones
  • Using the diskless SBD feature for STONITH (Shoot The Other Node In The Head)

 

Change your Two Node Cluster into a Multi Node Cluster

As Fabian has mentioned in his blog (https://www.suse.com/c/suse-ha-sap-certified/) and as described in the Best Practices for S/4 HANA – Enqueue Replication 2 High Availability Cluster (https://www.suse.com/documentation/sles-for-sap-15/#bestpractice), the new ENSA2 can retrieve the enqueue lock table over the network. This new feature makes it possible to set up a multi node cluster for the ASCS / ERS installation.

I will outline the points you have to take care of if you plan to extend your two node cluster to a multi node cluster.

  • OS installation of the new node
  • Patching the new and existing nodes (depending on the installation of the new node)
  • OS preparation of the new node
    • Installing packages
    • SAP preparation
    • Joining the cluster
  • Test the new cluster configuration

 

The following chapters will describe the extension of a two node cluster to a three node cluster. This procedure can be done multiple times to extend your installation to four or five or even more nodes :-).

The New Node

There is no need to use exactly the same hardware as for the ASCS and the ERS, as long as your new system has enough resources to run the ASCS or ERS. It could also be a mixture of bare-metal and/or virtualized hardware infrastructure.

I would recommend using an automated installation which ensures that all your systems are set up identically. Documentation which describes the changes made on top of this automated setup is very helpful. In our example we deploy our machines with an AutoYaST configuration file and run a post-installation script which does the basic configuration; a minimal sketch of such a script is shown below.
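The following lines are only a sketch of what such a post-installation script could cover; the host name, repository server and repository alias are placeholders and have to be adapted to your environment:

## sketch of basic post installation steps (placeholders: host name, repository URL and alias)
# hostnamectl set-hostname <new node hostname>
# zypper ar http://<your repo server>/SLE-15-Updates <repo alias>
# zypper ref
# systemctl enable --now chronyd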

Operating System Preparation New Node

Depending on your infrastructure and installation setup it might be necessary to install the latest updates
and patches on your existing nodes and the new one. If you use frozen update repositories, as SUSE Manager
provides them, it is very useful to add the new system to the same repositories. After the installation
and update procedure the new node must have the same patch level as your existing nodes.
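To check that the new node really consumes the same repositories as the existing nodes, you can for example list the enabled repositories including their URIs on every node and compare the output:

## list repositories including their URIs; run on every node and compare
# zypper lr -u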

It is recommended to install the latest available patches to guarantee system stability and hardening.
Bug fixes and security patches help to avoid unplanned outages and make the system less vulnerable.

For example, use *zypper patch* or *zypper update*, depending on your company rules.

Multiple options are possible:

# zypper patch --category security
## or
# zypper patch --severity important
## or
# zypper patch
## or
# zypper update

Please take care to use a valid DNS, time and network setup, and a consistent patch level:

  • Verify that DNS is working
# ping <hostname>
  • Set up *chrony* (this is best done with *yast2*) and enable it
# yast ntp-client
  • Check the network settings
# ip r

NOTE: We experienced some trouble if no valid default route is set.
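If the default route is missing, you can add it manually for a first test; the gateway address and interface name are placeholders. For a persistent configuration use yast2 or your usual network configuration tooling.

## add a default route temporarily (placeholders: gateway IP, interface)
# ip route add default via <gateway IP> dev <interface>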

  • Verify patch level and installed packages

On the existing cluster nodes:

# rpm -qa | sort >rpm_node_1.log

On the new node:

# rpm -qa | sort >rpm_node_new.log

Copy the result to one node and compare them:

# vimdiff rpm_node_1.log rpm_node_new.log

NOTE: If there are any differences, fix them before you proceed.

  • Installing required HA packages
# zypper in -t pattern ha_sles

Set up and configure the watchdog device on the new machine

A hardware-based watchdog device should preferably be used instead of the software-based watchdog. The following example uses the software device, but the procedure can easily be adapted for a hardware device.

# modprobe softdog
# echo softdog > /etc/modules-load.d/watchdog.conf
# systemctl restart systemd-modules-load
# lsmod | egrep "(wd|dog|i6|iT|ibm)"
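As an additional check you can verify that a watchdog device node has been created:

## the softdog module should have created a watchdog device
# ls -l /dev/watchdog*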
  • Install the package sap-suse-cluster-connector version >=3.1.0 from the SUSE repositories
# zypper in sap-suse-cluster-connector

SAP Preparation on the New Node

With SWPM 2.0 PL, SAP provides a new option which performs all the necessary steps to prepare a host to fit into an existing SAP system. This new option will help us to prepare the new host so that it can later run the ASCS or ERS in the cluster environment.

You need to create the directory structure for all needed SAP resources.

For the SWPM the following information is required:
– profile directory
– password for SAP System Administrator
– UID for sapadm

# cd <path to SWPM>
# ./sapinst

* SWPM product installation path:
** Installing SAP S/4 HANA Server 1809 -> SAP HANA DATABASE -> Installation -> Application Server ABAP -> High-Availability System -> Prepare Additional Host
* Use “/sapmnt/<SID>/profile” for the profile directory
* All passwords: <your secret password>
* UID for sapadm:<your UID>

Post Step Procedure after SAP Preparation Step

Add the user <sid>adm to the unix user group haclient.

# usermod -a -G haclient <sid>adm
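You can verify the new group membership afterwards, for example:

## the haclient group should now be listed for the <sid>adm user
# id <sid>adm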

Create the file /usr/sap/sapservices or copy it from one of your existing cluster nodes.

# LD_LIBRARY_PATH=/usr/sap/hostctrl/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH
# /usr/sap/hostctrl/exe/sapstartsrv pf=/<path to profile>/<SID>_ERS<INO>_<virt. hostname name> -reg
# /usr/sap/hostctrl/exe/sapstartsrv pf=/<path to profile>/<SID>_ASCS<INO>_<virt. hostname name> -reg
## check the file
# cat /usr/sap/sapservices

or

# scp -p <node_ASCS>:/usr/sap/sapservices /usr/sap/

Joining the Cluster

Depending on the STONITH method used, please check that the new node has access to it. This example is based on SBD, so check that the SBD device is available; a short check is sketched below. If the existing cluster uses a different, supported STONITH mechanism, check and verify that for the new cluster node.
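A minimal check could look like the following; the device path is a placeholder and must match the SBD_DEVICE entry in /etc/sysconfig/sbd on the existing cluster nodes:

## dump the SBD metadata from the shared device (placeholder device path)
# sbd -d /dev/disk/by-id/<your SBD device> dump
## reuse the SBD configuration of an existing cluster node
# scp -p <cluster node_1>:/etc/sysconfig/sbd /etc/sysconfig/sbd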

# sleha-join -c <cluster node_1>

After the new node has joined the cluster, the configuration must be adapted to the new situation. Double check that the join was successful and verify /etc/corosync/corosync.conf:

# grep votes -n2 /etc/corosync/corosync.conf

The values expected_votes and two_node should now look like this on all nodes:

  • expected_votes: 3
  • two_node: 0
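On a three node cluster the quorum section of /etc/corosync/corosync.conf typically looks like the following sketch:

quorum {
        provider: corosync_votequorum
        expected_votes: 3
        two_node: 0
}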

 

Post Steps

Based on the new possibility of starting and moving the ASCS and ERS instances between the nodes, a small adaptation is necessary. Modify the cluster configuration and set a new colocation rule with crm:

# crm configure delete col_sap_<SID>_no_both
# crm configure colocation ASCS<INO>_ERS<INO>_separated_<SID> -50000: grp_<SID>_ERS<INO> grp_<SID>_ASCS<INO>

NOTE: Replace <SID> and <INO> with your matching values.
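To double check the result you can display the new constraint; the constraint name follows the naming scheme used above:

## show the new colocation constraint
# crm configure show ASCS<INO>_ERS<INO>_separated_<SID>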

Test the New Cluster Configuration

Cluster tests are highly recommended to verify that the new configuration works as expected.
We have a list of tests in our Best Practice for the basic two node cluster setup:

S/4 HANA – Enqueue Replication 2 High Availability Cluster https://www.suse.com/documentation/sles-for-sap-15/#bestpractice
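As a first smoke test you could, for example, move the ASCS resource group to the new node and watch the cluster reaction; the resource group name follows the naming scheme used above and the node name is a placeholder:

## watch the cluster status in a second terminal
# crm_mon -r
## move the ASCS group to the new node, then remove the generated location constraint
# crm resource move grp_<SID>_ASCS<INO> <new node>
# crm resource clear grp_<SID>_ASCS<INO>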

Finish

Congratulations, you have successfully extended your two node cluster to a three node cluster. This procedure can be adapted to build clusters with more than three nodes. We will adapt the official documentation as soon as possible and try to give an overview of the different cluster configurations with even and odd numbers of nodes.

 

This is Fabian Herschel and Bernd Schubert blogging live from the SAP LinuxLab St. Leon-Rot, Germany. Please also read our other blogs about #TowardsZeroDowntime.
