SUSE Linux Enterprise High Availability Extension 15 GA

Release Notes

SUSE Linux Enterprise High Availability Extension is a suite of clustering technologies that enable enterprises to implement highly available Linux clusters and eliminate single points of failure. This document gives an overview of the features and limitations of SUSE Linux Enterprise High Availability Extension. Where a section does not apply to a particular architecture or product, this is explicitly noted.

These release notes are updated periodically. The latest version is always available at https://www.suse.com/releasenotes. General documentation can be found at: https://www.suse.com/documentation.

Publication Date: 2019-03-29, Version: 15.0.20190329

1 SUSE Linux Enterprise High Availability Extension

SUSE Linux Enterprise High Availability Extension is an affordable, integrated suite of robust open source clustering technologies that enable enterprises to implement highly available Linux clusters and eliminate single points of failure.

Used with SUSE Linux Enterprise Server, it helps firms maintain business continuity, protect data integrity, and reduce unplanned downtime for their mission-critical Linux workloads.

SUSE Linux Enterprise High Availability Extension provides all of the essential monitoring, messaging, and cluster resource management functionality of proprietary third-party solutions, but at a more affordable price, making it accessible to a wider range of enterprises.

It is optimized to work with SUSE Linux Enterprise Server, and its tight integration ensures customers have the most robust, secure, and up to date high availability solution. Based on an innovative, highly flexible policy engine, it supports a wide range of clustering scenarios.

With static or stateless content, the High Availability cluster can be used without a cluster file system. This includes web-services with static content as well as printing systems or communication systems like proxies that do not need to recover data.

Finally, its open source license minimizes the risk of vendor lock-in, and its adherence to open standards encourages interoperability with industry standard tools and technologies.

2 Support Statement for SUSE Linux Enterprise High Availability Extension 15 GA

Support requires an appropriate subscription from SUSE. For more information, see https://www.suse.com/products/highavailability/.

A Geo Clustering for SUSE Linux Enterprise High Availability Extension subscription is needed to receive support and maintenance to run geographical clustering scenarios, including manual and automated setups.

Support for DRBD storage replication is independent of the cluster scenario. It is included as part of the SUSE Linux Enterprise High Availability Extension product and does not require an additional Geo Clustering for SUSE Linux Enterprise High Availability Extension subscription.

General Support Statement

The following definitions apply:

  • L1: Installation and problem determination - technical support designed to provide compatibility information, installation and configuration assistance, usage support, ongoing maintenance, and basic troubleshooting. Level 1 Support is not intended to correct product defect errors.

  • L2: Reproduction of problems - technical support designed to duplicate customer problems, isolate problem areas and potential issues, and provide resolution for problems not resolved by Level 1 Support.

  • L3: Code debugging and problem resolution - technical support designed to resolve complex problems by engaging engineering in the provision of patches and the resolution of product defects that have been identified by Level 2 Support.

SUSE will only support the usage of original (that is, unchanged and not recompiled) packages.

3 What Is New?

SUSE Linux Enterprise High Availability Extension 15 introduces many innovative changes compared to SUSE Linux Enterprise High Availability Extension 12.

Make sure to also review the release notes for the base product, SUSE Linux Enterprise Server 15 GA, which are published at https://www.suse.com/releasenotes/x86_64/SUSE-SLES/15 (these release notes are identical across all supported hardware architectures).

4 Technology Previews

Technology previews are packages, stacks, or features delivered by SUSE which are not supported. They may be functionally incomplete, unstable or in other ways not suitable for production use. They are included for your convenience and give you a chance to test new technologies within an enterprise environment.

Whether a technology preview becomes a fully supported technology later depends on customer and market feedback. Technology previews can be dropped at any time and SUSE does not commit to providing a supported version of such technologies in the future.

Give your SUSE representative feedback, including your experience and use case.

4.1 SCSI Locking on Multipath With mpathpersist Resource Agent

In previous versions, the sg_persist resource agent dealt with SCSI devices directly and could not handle multipath devices.

As a technology preview, the new mpathpersist resource agent allows building HA clusters that use a SCSI locking mechanism on top of multipath devices.
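A minimal configuration sketch is shown below. The device path is a placeholder and the parameter name follows the conventions of the related sg_persist agent; check the agent's metadata (crm ra info mpathpersist) for the exact parameters supported by your version:

    # Hedged sketch: "/dev/mapper/mpatha" is a placeholder multipath device.
    crm configure primitive mpath-lock ocf:heartbeat:mpathpersist \
        params devs="/dev/mapper/mpatha" \
        op monitor interval=60s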

4.2 Container Bundles

Configuring containers as cluster resources often means configuring network and storage resources and using the remote node feature to monitor services running inside the container. Previously, there was no convenient way to configure these resources and features.

As a technology preview, SLE HA 15 GA ships with support for container bundles. Container bundles allow managing Docker and rkt containers together with associated functionality, such as network ranges, port mapping, and storage mapping.
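As an illustration, a bundle definition in the Pacemaker configuration follows roughly this shape (the image name, addresses, and paths below are placeholders, not defaults):

    <bundle id="httpd-bundle">
      <!-- Run two container replicas of a placeholder image -->
      <docker image="registry.example.com/httpd:latest" replicas="2"/>
      <network ip-range-start="192.168.122.131" host-netmask="24">
        <port-mapping id="httpd-port" port="80"/>
      </network>
      <storage>
        <!-- Map a host directory into the container -->
        <storage-mapping id="httpd-root" source-dir="/srv/www"
                         target-dir="/var/www/html" options="rw"/>
      </storage>
      <!-- The resource monitored inside the container via pacemaker_remote -->
      <primitive id="httpd" class="ocf" provider="heartbeat" type="apache"/>
    </bundle>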

4.3 Clustering Support for MD RAID10 Devices

With SUSE Linux Enterprise High Availability Extension 15, clustered MD RAID10 is included as a technology preview. It enables locking and synchronization across multiple systems in the cluster, so all cluster nodes can access the MD devices simultaneously.
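Creating such an array uses mdadm's clustered write-intent bitmap. A sketch follows (the device names are placeholders, and the cluster's DLM must already be running on all nodes):

    # On one node, create the array with a clustered bitmap:
    mdadm --create /dev/md0 --level=10 --raid-devices=4 \
        --bitmap=clustered /dev/sda /dev/sdb /dev/sdc /dev/sdd
    # On the remaining cluster nodes, assemble the same array:
    mdadm --assemble /dev/md0 /dev/sda /dev/sdb /dev/sdc /dev/sdd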

4.4 QDevice/QNetd Support for Corosync Quorum Device

For two-node clusters, quorum cannot normally be established when one node is lost. Therefore in the past, fencing had to be used to resolve split-brain scenarios in two-node clusters.

As a technology preview, Corosync 2.4 in SLE HA 15 GA now includes qnetd, a network quorum device that can provide quorum without requiring fencing. This device acts as a third node that is used only for quorum. It is very lightweight and can therefore be shared among many two-node clusters.
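On the cluster nodes, the quorum device is configured in the quorum section of /etc/corosync/corosync.conf, along these lines (the host address is a placeholder for the shared qnetd server):

    quorum {
        provider: corosync_votequorum
        device {
            model: net
            votes: 1
            net {
                host: 192.0.2.10      # address of the shared qnetd server
                algorithm: ffsplit    # resolve 50/50 splits in favor of one partition
            }
        }
    }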

5 Cluster

5.1 Support for Manual Tickets in Geo Clustering

Previously, granting tickets always required a quorum of the geo cluster. This meant that for a geo cluster with only two sites, it was not possible to grant a ticket if one site was lost. Hence, an arbitrator had to be used in all two-site geo cluster setups.

With SLE HA 15 GA, you can now manually grant tickets to the healthy site if no automatic failover is required in a split-brain scenario. Manual tickets are controlled only by administrator commands, which makes their behavior predictable. However, you must ensure yourself that such a ticket is never granted to more than one site at the same time.

The YaST Geo Cluster module now also allows configuring manual tickets.
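In the booth configuration, a manual ticket is marked by its mode, roughly as follows (the ticket name is a placeholder, and the exact grant command may vary with your booth version):

    # /etc/booth/booth.conf (excerpt)
    ticket = "manual-ticket"
        mode = manual

    # Grant the ticket to the local (healthy) site:
    booth grant manual-ticket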

6 High-Availability Tools

6.1 Probing Guest Nodes for Resource Status

With the new version, Pacemaker now also probes guest nodes for resource status. (Guest nodes are virtual machines that are created by resource agents such as VirtualDomain and that run the pacemaker_remote daemon.) This change unifies probing behavior across all node types: cluster nodes, remote nodes, and guest nodes. It also prevents concurrency violations.

However, if you have configured a location constraint with an -inf score to prevent a resource from running on a guest node, this can lead to problems. For example, if the software required by this resource is not installed on the guest node, probing for resource status might fail.

If you have configured a location constraint with an -inf score to keep a resource off a guest node, additionally prevent Pacemaker from probing the resource on that node by setting the resource-discovery property of the constraint to never. (Limiting resource discovery to allowed nodes in this way can also significantly boost performance if you are using Pacemaker Remote to scale a cluster to hundreds of nodes.)
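With crmsh, such a constraint can be expressed along these lines (the resource and node names are placeholders):

    # Keep web-server off guest1 and skip probing it there:
    crm configure location loc-web-no-guest1 web-server \
        resource-discovery=never -inf: guest1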

For more information, see http://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/1.1/html-single/Pacemaker_Explained/index.html#_deciding_which_nodes_a_resource_can_run_on.

6.2 AutoYaST Support for Geo Clustering

Instead of installing manually, you can also use AutoYaST to clone the HA configuration of existing nodes. However, in the past, the Geo Clustering extension had to be installed manually on all machines.

In SLE HA 15 GA, AutoYaST now has support for Geo Clustering for SUSE Linux Enterprise High Availability Extension as well. This even includes support for the new manual ticket mode (for more information, see Section 5.1, “Support for Manual Tickets in Geo Clustering”).

6.3 IPVS Has Been Moved From the HA Extension to the Base OS

IPVS (IP Virtual Server) implements transport-layer load balancing (Layer 4 LAN switching) in the Linux kernel. In SLES 12 and prior versions, IPVS was shipped only with the SUSE Linux Enterprise High Availability Extension. However, IPVS is increasingly used outside the HA context, for example by Docker.

With SLES 15, IPVS has been moved into the base system. Other HA-related functionality that relies on IPVS remains part of the HA extension.
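For example, a simple round-robin virtual service can now be set up with ipvsadm directly on a base SLES 15 system (all addresses below are placeholders):

    # Create a virtual TCP service with round-robin scheduling:
    ipvsadm -A -t 192.0.2.1:80 -s rr
    # Add two real servers behind it, using NAT (masquerading) mode:
    ipvsadm -a -t 192.0.2.1:80 -r 10.0.0.11:80 -m
    ipvsadm -a -t 192.0.2.1:80 -r 10.0.0.12:80 -m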

6.4 crm Report (hb_report) Configuration File

When creating reports, it may be desired to have certain options set each time. Currently, these have to be documented or maintained externally.

hb_report now allows for a configuration file in which an administrator can configure the report settings once and have them apply to every report generated in the cluster from then on.
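Purely as an illustration (the option names below are assumptions, not documented values; consult the hb_report man page for the actual file location and syntax), such a file might collect the options you would otherwise pass on every invocation:

    # Hypothetical example; see hb_report(8) for the real option names.
    ssh-user = hacluster
    dest = /var/log/crm-reports
    compress = yes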

6.5 Hawk Data Files Installed to /usr/share/hawk

Previous versions of Hawk installed their data files into /srv/www. This does not comply with the FHS (Filesystem Hierarchy Standard) requirements for packages' data file locations and prevents the use of a read-only root file system.

The Hawk data files are now installed to /usr/share/hawk, with some runtime data in /var/lib/hawk.

7 More Information and Feedback

  • Read the READMEs on the media.

  • Get detailed changelog information about a particular package from the RPM (where FILENAME is the name of the RPM):

    rpm --changelog -qp FILENAME.rpm
  • Check the ChangeLog file in the top level of CD1 for a chronological log of all changes made to the updated packages.

  • Find more information in the docu directory of the first medium of the SUSE Linux Enterprise High Availability Extension media. This directory includes a PDF version of the High Availability Guide.

  • https://www.suse.com/documentation contains additional or updated documentation for SUSE Linux Enterprise High Availability Extension 15 GA.

  • Visit https://www.suse.com/products/ for the latest product news from SUSE and https://www.suse.com/download-linux/source-code.html for additional information on the source code of SUSE Linux Enterprise products.

8 How to Obtain Source Code

This SUSE product includes materials licensed to SUSE under the GNU General Public License (GPL). The GPL requires SUSE to provide the source code that corresponds to the GPL-licensed material. The source code is available for download at https://www.suse.com/download-linux/source-code.html. Also, for up to three years after distribution of the SUSE product, upon request, SUSE will mail a copy of the source code. Requests should be sent by e-mail to mailto:sle_source_request@suse.com or as otherwise instructed at https://www.suse.com/download-linux/source-code.html. SUSE may charge a reasonable fee to recover distribution costs.
