SUSE Linux Enterprise Server 12 SP3

Release Notes

This document provides guidance and an overview of high-level general features and updates for SUSE Linux Enterprise Server 12 SP3. Besides architecture- or product-specific information, it also describes the capabilities and limitations of SUSE Linux Enterprise Server 12 SP3.

General documentation can be found at: http://www.suse.com/documentation/sles-12/.

Publication Date: 2017-10-23, Version: 12.3.20171020
1 About the Release Notes
2 SUSE Linux Enterprise Server
2.1 Interoperability and Hardware Support
2.2 Support and Life Cycle
2.3 What Is New?
2.4 Documentation and Other Information
2.5 How to Obtain Source Code
2.6 Support Statement for SUSE Linux Enterprise Server
2.7 Software Requiring Specific Contracts
2.8 Technology Previews
2.9 Modules, Extensions, and Related Products
2.10 Security, Standards, and Certification
3 Installation and Upgrade
3.1 Installation
3.2 Upgrade-Related Notes
3.3 For More Information
4 Architecture Independent Information
4.1 Kernel
4.2 Kernel Modules
4.3 Security
4.4 Networking
4.5 Systems Management
4.6 Storage
4.7 Virtualization
4.8 Miscellaneous
5 AMD64/Intel 64 (x86_64) Specific Information
5.1 System and Vendor Specific Information
6 POWER (ppc64le) Specific Information
6.1 QEMU-virtualized PReP Partition
6.2 kdump: Shorter Time to Filter and Save /proc/vmcore
6.3 Parameter crashkernel Is Now Used for fadump Memory Reservation
6.4 Encryption Improvements Using Hardware Optimizations
6.5 Ceph Client Support on z Systems and POWER
6.6 Memory Reservation Support for fadump in YaST
7 IBM z Systems (s390x) Specific Information
7.1 Virtualization
7.2 Storage
7.3 Network
7.4 Security
7.5 Reliability, Availability, Serviceability (RAS)
7.6 Performance
8 ARM 64-Bit (AArch64) Specific Information
8.1 AppliedMicro X-C1 Server Development Platform (Mustang) Firmware Requirements
8.2 New System-on-Chip Driver Enablement
8.3 Support for OpenDataPlane on Cavium ThunderX and Octeon TX Platforms
8.4 KVM on AArch64
8.5 Toolchain Module Enabled in Default Installation
9 Packages and Functionality Changes
9.1 Updated Packages
9.2 Removed and Deprecated Functionality
9.3 Changes in Packaging and Delivery
9.4 Modules
10 Technical Information
10.1 Kernel Limits
10.2 KVM Limits
10.3 Xen Limits
10.4 File Systems
11 Legal Notices
12 Colophon

1 About the Release Notes

These Release Notes are identical across all architectures, and the most recent version is always available online at http://www.suse.com/releasenotes/.

Some entries may be listed twice if they are important and belong to more than one section.

Release notes usually only list changes that happened between two subsequent releases. Certain important entries from the release notes documents of previous product versions are repeated. To make these entries easier to identify, they contain a note to that effect.

However, repeated entries are provided as a courtesy only. Therefore, if you are skipping one or more service packs, check the release notes of the skipped service packs as well. If you are only reading the release notes of the current release, you could miss important changes.

2 SUSE Linux Enterprise Server

SUSE Linux Enterprise Server is a highly reliable, scalable, and secure server operating system, built to power mission-critical workloads in both physical and virtual environments. It is an affordable, interoperable, and manageable open source foundation. With it, enterprises can cost-effectively deliver core business services, enable secure networks, and simplify the management of their heterogeneous IT infrastructure, maximizing efficiency and value.

The only enterprise Linux recommended by Microsoft and SAP, SUSE Linux Enterprise Server is optimized to deliver high-performance mission-critical services, as well as edge-of-network and web infrastructure workloads.

2.1 Interoperability and Hardware Support

Designed for interoperability, SUSE Linux Enterprise Server integrates into classical Unix as well as Windows environments, supports open standard interfaces for systems management, and has been certified for IPv6 compatibility.

This modular, general purpose operating system runs on four processor architectures and is available with optional extensions that provide advanced capabilities for tasks such as real time computing and high availability clustering.

SUSE Linux Enterprise Server is optimized to run as a high performing guest on leading hypervisors and supports an unlimited number of virtual machines per physical system with a single subscription, making it the perfect guest operating system for virtual computing.

2.2 Support and Life Cycle

SUSE Linux Enterprise Server is backed by award-winning support from SUSE, an established technology leader with a proven history of delivering enterprise-quality support services.

SUSE Linux Enterprise Server 12 has a 13-year life cycle, with 10 years of General Support and 3 years of Extended Support. The current version (SP3) will be fully maintained and supported until 6 months after the release of SUSE Linux Enterprise Server 12 SP4.

If you need additional time to design, validate, and test your upgrade plans, Long Term Service Pack Support can extend the support you get by an additional 12 to 36 months in twelve-month increments, providing a total of 3 to 5 years of support on any given service pack.

For more information, check our Support Policy page https://www.suse.com/support/policy.html or the Long Term Service Pack Support Page https://www.suse.com/support/programs/long-term-service-pack-support.html.

2.3 What Is New?

SUSE Linux Enterprise Server 12 introduces many innovative changes compared to SUSE Linux Enterprise Server 11. Here are some of the highlights:

  • Robustness against administrative errors and improved management capabilities, with full system rollback based on Btrfs as the default file system for the operating system partition and the Snapper technology of SUSE.

  • An overhaul of the installer introduces a new workflow that allows you to register your system and receive all available maintenance updates as part of the installation.

  • SUSE Linux Enterprise Server Modules offer a choice of supplemental packages, ranging from tools for Web Development and Scripting, through a Cloud Management module, all the way to a sneak preview of upcoming management tooling called Advanced Systems Management. Modules are part of your SUSE Linux Enterprise Server subscription, are technically delivered as online repositories, and differ from the base of SUSE Linux Enterprise Server only by their life cycle. For more information about modules, see Section 2.9.1, “Available Modules”.

  • New core technologies like systemd (replacing the time-honored System V-based init process) and Wicked (introducing a modern, dynamic network configuration infrastructure).

  • The open-source database system MariaDB is fully supported now.

  • Support for open-vm-tools together with VMware for better integration into VMware-based hypervisor environments.

  • Linux Containers are integrated into the virtualization management infrastructure (libvirt). Docker is provided as a fully supported technology. For more details, see https://www.suse.com/promo/sle/docker/.

  • Support for the AArch64 architecture (64-bit ARMv8) and the 64-bit Little-Endian variant of the IBM POWER architecture. Additionally, we continue to support the Intel 64/AMD64 and IBM z Systems architectures.

  • GNOME 3.20 gives users a modern desktop environment with a choice of several different look and feel options, including a special SUSE Linux Enterprise Classic mode for easier migration from earlier SUSE Linux Enterprise Desktop environments.

  • For users wishing to use the full range of productivity applications of a Desktop with their SUSE Linux Enterprise Server, we are now offering SUSE Linux Enterprise Workstation Extension (requires a SUSE Linux Enterprise Desktop subscription).

  • Integration with the new SUSE Customer Center, the new central web portal from SUSE to manage Subscriptions, Entitlements, and provide access to Support.

If you are upgrading from a previous SUSE Linux Enterprise Server release, you should review at least the following sections:

2.4 Documentation and Other Information

2.4.1 Available on the Product Media

  • Read the READMEs on the media.

  • Get the detailed change log information about a particular package from the RPM (where <FILENAME>.rpm is the name of the RPM):

    rpm --changelog -qp <FILENAME>.rpm
  • Check the ChangeLog file in the top level of the media for a chronological log of all changes made to the updated packages.

  • Find more information in the docu directory of the media of SUSE Linux Enterprise Server 12 SP3. This directory includes PDF versions of the SUSE Linux Enterprise Server 12 SP3 Installation Quick Start and Deployment Guides. Documentation (if installed) is available below the /usr/share/doc/ directory of an installed system.

2.4.2 Externally Provided Documentation

2.5 How to Obtain Source Code

This SUSE product includes materials licensed to SUSE under the GNU General Public License (GPL). The GPL requires SUSE to provide the source code that corresponds to the GPL-licensed material. The source code is available for download at http://www.suse.com/download-linux/source-code.html. Also, for up to three years after distribution of the SUSE product, upon request, SUSE will mail a copy of the source code. Requests should be sent by e-mail to mailto:sle_source_request@suse.com or as otherwise instructed at http://www.suse.com/download-linux/source-code.html. SUSE may charge a reasonable fee to recover distribution costs.

2.6 Support Statement for SUSE Linux Enterprise Server

To receive support, you need an appropriate subscription with SUSE. For more information, see http://www.suse.com/products/server/services-and-support/.

The following definitions apply:

L1

Problem determination, which means technical support designed to provide compatibility information, usage support, ongoing maintenance, information gathering and basic troubleshooting using available documentation.

L2

Problem isolation, which means technical support designed to analyze data, reproduce customer problems, isolate problem area and provide a resolution for problems not resolved by Level 1 or alternatively prepare for Level 3.

L3

Problem resolution, which means technical support designed to resolve problems by engaging engineering to resolve product defects which have been identified by Level 2 Support.

For contracted customers and partners, SUSE Linux Enterprise Server 12 SP3 and its Modules are delivered with L3 support for all packages, except the following:

SUSE will only support the usage of original (that is, unchanged and un-recompiled) packages.

2.7 Software Requiring Specific Contracts

The following packages require additional support contracts to be obtained by the customer in order to receive full support:

  • PostgreSQL Database

2.8 Technology Previews

Technology previews are packages, stacks, or features delivered by SUSE which are not supported. They may be functionally incomplete, unstable or in other ways not suitable for production use. They are included for your convenience and give you a chance to test new technologies within an enterprise environment.

Whether a technology preview becomes a fully supported technology later depends on customer and market feedback. Technology previews can be dropped at any time and SUSE does not commit to providing a supported version of such technologies in the future.

Give your SUSE representative feedback, including your experience and use case.

2.8.1 Technology Previews for All Architectures

2.8.1.1 Support for KVM Guests Using NVDIMM Devices

As a technology preview, KVM guests can now use NVDIMM devices.

2.8.1.2 QEMU: NVDIMM and Persistent Memory

As a technology preview, QEMU now supports NVDIMM. To use NVDIMM, create a memory device with model=nvdimm. This functionality can be used directly with the qemu command-line tool or via libvirt. However, it is not yet exposed through virt-manager.

NVDIMM supports two access modes:

  • PMEM: NVDIMM is mapped into the CPU's address space, so that the CPU can directly access it like normal memory.

  • BLK: NVDIMM is used as a block device, which avoids occupying the CPU's address space.
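The PMEM mode can be sketched with a direct qemu invocation like the following. This is an illustrative sketch only; the paths, sizes, and backing file are assumptions, not a tested SLES configuration:

```shell
# Enable NVDIMM support on the machine, reserve a hotplug memory slot,
# back the NVDIMM with a file, and expose it as a device with model nvdimm.
qemu-system-x86_64 \
    -machine pc,nvdimm=on \
    -m 2G,slots=2,maxmem=4G \
    -object memory-backend-file,id=mem1,share=on,mem-path=/var/lib/nvdimm0.img,size=1G \
    -device nvdimm,id=nvdimm1,memdev=mem1
```

Inside the guest, such a device would then typically appear as a /dev/pmem* block device.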

2.8.1.3 KVM Nested Virtualization

KVM Nested Virtualization is available in SLE 12 as a technology preview. For more information about nested virtualization, see nested-vmx.txt (https://github.com/torvalds/linux/blob/master/Documentation/virtual/kvm/nested-vmx.txt).

2.8.2 Technology Previews for IBM z Systems (s390x)

2.8.2.1 Exploitation of Shared Memory Communications

As a technology preview, SLES 12 SP3 enables communication through shared memory segments with the 10 Gigabit Ethernet RoCE card:

  • Support for the networking card itself is included in the kernel.

  • The package smc-tools contains additional user-space tools.

This technology should only be used in a trusted network infrastructure.

2.8.3 Technology Previews for POWER (ppc64le)

2.8.3.1 Device Driver ibmvnic

vNIC (Virtual Network Interface Controller) is a new PowerVM virtual networking technology that delivers enterprise capabilities and simplifies network management. It is an efficient, high-performance technology. When combined with an SR-IOV NIC, it provides bandwidth control (Quality of Service, QoS) capabilities at the virtual NIC level. vNIC significantly reduces virtualization overhead, resulting in lower latencies and fewer server resources (CPU, memory) required for network virtualization.

This driver is a Technology Preview in SLES 12 SP3.

2.8.3.2 Support for KVM

With SLES 12 SP3, KVM is now available as a technology preview on OpenPower S822LC systems running OPAL firmware.

2.8.3.3 Inclusion of IBM TPM 2.0 Stack

IBM has developed a TPM 2.0 TSS stack that can exist and be used in parallel to the Intel TPM 2.0 stack.

It is not clear at this time which of them will be the preferable solution on all TPM supporting platforms.

The general guideline of SUSE Linux Enterprise is having one preferred tool to do the job.

The IBM TPM 2.0 stack is shipped as a Technology Preview in addition to the supported Intel TPM 2.0 stack.

2.9 Modules, Extensions, and Related Products

This section comprises information about modules and extensions for SUSE Linux Enterprise Server 12 SP3. Modules and extensions add parts or functionality to the system.

2.9.1 Available Modules

Modules are fully supported parts of SUSE Linux Enterprise Server with a different life cycle and update timeline. They are a set of packages, have a clearly defined scope and are delivered via an online channel only. Release notes for modules are contained in this document, see Section 9.4, “Modules”.

The following modules are available for SUSE Linux Enterprise Server 12 SP3:

Name | Content | Life Cycle
Advanced Systems Management Module | CFEngine, Puppet, Salt, and the Machinery tool | Frequent releases
Certifications Module* | FIPS 140-2 certification-specific packages | Certification-dependent
Containers Module | Docker, tools, prepackaged images | Frequent releases
HPC Module | Tools and libraries related to High Performance Computing (HPC) | Frequent releases
Legacy Module* | Sendmail, old IMAP stack, old Java, … | Until September/October 2017 (except for ksh)
Public Cloud Module | Public cloud initialization code and tools | Frequent releases
Toolchain Module | GNU Compiler Collection (GCC) | Yearly delivery
Web and Scripting Module | PHP, Python, Ruby on Rails | 3 years, ~18 months overlap

* Module is not available for the AArch64 architecture.

For more information about the life cycle of packages contained in modules, see https://scc.suse.com/docs/lifecycle/sle/12/modules.

2.9.2 Available Extensions

Extensions add extra functionality to the system and require their own registration key, usually at additional cost. Extensions are delivered via an online channel or physical media. In many cases, extensions have their own release notes documents that are available from https://www.suse.com/releasenotes/.

The following extensions are available for SUSE Linux Enterprise Server 12 SP3:

Additionally, there are the following extensions which are not covered by SUSE support agreements, available at no additional cost and without an extra registration key:

2.9.3 Derived and Related Products

This section lists derived and related products. In many cases, these products have their own release notes documents that are available from https://www.suse.com/releasenotes/.

2.10 Security, Standards, and Certification

SUSE Linux Enterprise Server 12 SP3 has been submitted to the certification bodies for:

For more information about certification, see https://www.suse.com/security/certificates.html.

3 Installation and Upgrade

SUSE Linux Enterprise Server can be deployed in several ways:

  • Physical machine

  • Virtual host

  • Virtual machine

  • System containers

  • Application containers

3.1 Installation

This section includes information related to the initial installation of SUSE Linux Enterprise Server 12 SP3. For information about installing, see Deployment Guide at https://www.suse.com/documentation/sles-12/book_sle_deployment/data/book_sle_deployment.html.

3.1.1 FCoE Storage Does Not Work with Cavium or QLogic Storage Controllers with FCoE Offload

On a default installation of SLES 12 SP3, there is no support for FCoE storage on systems that use Cavium or QLogic storage controllers with support for FCoE offload.

SUSE will supply a solution for affected customers, which will be published as a kISO (Kernel Update ISO).

For more information on kISOs in general, see https://www.suse.com/communities/blog/kiso-kernel-update-iso/.

3.1.2 Installing Systems from Online Repositories

To install SLES, you need the installation media. If you also mirror the repositories, for example with SMT, you effectively need to download all packages twice: once as part of the media and again from the online repository.

For such scenarios, we provide packages named tftpboot-installation-* in the product repositories. These packages include an installer prepared for a network boot environment (PXE).

To use them, configure the PXE environment (DHCP, TFTP servers) and install the package for the respective product and architecture. Make sure to adjust the included configuration, so that the correct local URLs are passed to the installer.
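As a hypothetical sketch (the exact package name, paths, and URLs below are assumptions; check the product repositories for the actual tftpboot-installation-* package names):

```shell
# Install the network-boot installer package for the product/architecture in use.
zypper install 'tftpboot-installation-SLES-12-SP3-x86_64'

# Then point the PXE configuration (for example pxelinux.cfg/default) at the
# kernel and initrd shipped in the package, and adjust the install= parameter
# so that it references your local mirror, for example:
#   install=http://smt.example.com/repo/SUSE/Products/SLE-SERVER/12-SP3/x86_64/product
```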

3.1.3 Network Interfaces Configured via linuxrc Take Precedence

Tip

This entry has appeared in a previous release notes document.

For some configurations with many network interfaces, it can take several hours until all network interfaces are initialized (see https://bugzilla.suse.com/show_bug.cgi?id=988157). In such cases, the installation is blocked. SLE 12 SP1 and earlier did not offer a workaround for this behavior.

Starting with SLE 12 SP2, you can speed up interactive installations on systems with many network interfaces by configuring them via linuxrc. When a network interface is configured via linuxrc, YaST will not perform automatic DHCP configuration for any interface. Instead, YaST will continue to use the configuration from linuxrc.

To configure a particular interface via linuxrc, add the following to the boot command line before starting the installation:

ifcfg=eth0=dhcp

In the parameter, replace eth0 with the name of the appropriate network interface. The ifcfg option can be used multiple times.
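For example, to configure one interface via DHCP and a second one statically, the boot command line could contain something like the following (interface names and addresses are illustrative; the static form is ip/prefix,gateway,nameserver):

```
ifcfg=eth0=dhcp ifcfg=eth1=192.168.1.10/24,192.168.1.1,192.168.1.1
```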

3.1.4 Warning When Enabling Snapshots on Small Root File Systems

Btrfs file system snapshots take up extra disk space. Previous versions of SLE did not check during installation whether a custom root file system size was appropriate for enabling snapshots.

For Btrfs root file systems with snapshotting, the SLE installer now verifies that the size of the file system at least matches the value of root_base from the product's control.xml. For example, for a default SLES installation, the root file system size is 12 GB. If the file system is smaller, the installer will display a warning, which can be ignored.

3.1.5 SMT: Upgrading Database Schema and Engine

Tip

This entry has appeared in a previous release notes document.

SMT 12 comes with a new database schema and is standardized on the InnoDB database back-end.

In order to upgrade SMT 11 SPx to SMT 12, it is necessary that SMT 11 is configured against SCC (SUSE Customer Center) before initializing the upgrade of SLES and SMT to version 12 SP1 or newer. If the host is upgraded to SLES 12 SP1 or newer without switching to SCC first, the installed SMT instance will no longer work.

Only SMT 11 SP3 can be configured against SCC. Older versions need to be upgraded to version 11 SP3 first.

Whether the schema or database engine must be upgraded is checked during package upgrade and displayed as an update notification. Back up your database before doing the database upgrade. Both the schema and database engine upgrade are done by the utility /usr/bin/smt-schema-upgrade (can be called directly or via systemctl start smt-schema-upgrade) or are done automatically after smt.target restart (computer reboot or via systemctl restart smt.target). However, manual database tuning is required for optimal performance. For details, see https://mariadb.com/kb/en/mariadb/converting-tables-from-myisam-to-innodb/#non-index-issues.

3.1.6 SMT Supports SCC Exclusively

Tip

This entry has appeared in a previous release notes document.

Support for NCC (Novell Customer Center) was removed from SMT. SMT can still serve SLE 11 clients, but must be configured to receive updates from SCC.

Before migrating from SMT 11 SP3, SMT must be reconfigured against SCC. Migration from older versions of SMT is not possible.

3.1.7 Installing with LVM2, Without a Separate /boot Partition

Tip

This entry has appeared in a previous release notes document.

SUSE Linux Enterprise 12 and newer generally supports installation on a linear LVM2 setup without a separate /boot partition, for example to use Btrfs as the root file system and achieve full system snapshots and rollback.

However, this setup is only supported under the following conditions:

  • Only linear LVM2 setups are supported.

  • There must be enough space in the partitioning "label" (the partition table) for the grub2 bootloader first stage files. If the installation of the grub2 bootloader fails, you will have to create a new partition table. CAVEAT: Creating a new partition table destroys all data on the given disk!

For a migration from an existing SUSE Linux Enterprise 11 system with LVM2 to SUSE Linux Enterprise 12 or newer, the /boot partition must be preserved.

3.2 Upgrade-Related Notes

This section includes upgrade-related information for SUSE Linux Enterprise Server 12 SP3. For information about general preparations and supported upgrade methods and paths, see the documentation at https://www.suse.com/documentation/sles-12/book_sle_deployment/data/cha_update_sle.html.

3.2.1 Error on Migration From SP2 to SP3 When HPC Module Is Selected

When the High Performance Computing module is selected, the following error message may be encountered during Migration from SLES 12 SP2 to SLES 12 SP3:

Can't get available migrations from server: SUSE::Connect::ApiError: The requested products '' are not activated on the system.
'/usr/lib/zypper/commands/zypper-migration' exited with status 1

The problem can be resolved by re-registering the HPC module using the following two commands:

  • rpm -e sle-module-hpc-release-POOL sle-module-hpc-release

  • SUSEConnect -p sle-module-hpc/12/x86_64

These commands can also be performed before migration as a preventive measure.

3.2.2 Automatic Log Rotation Will Be Disabled After Upgrade

If the package logrotate was installed or updated before systemd-presets-branding-SLE, automatic log rotation will be disabled after the upgrade to SLES 12 SP3.

Enable the logrotate systemd timer manually. To do so, run the following commands as root:

  1. systemctl enable logrotate.timer

  2. systemctl restart logrotate.timer

3.2.3 Online Migration with Live Patching Enabled

The SLES online migration process reports package conflicts when Live Patching is enabled and the kernel is being upgraded. This applies when crossing the boundary between two Service Packs.

To prevent the conflicts, before starting the migration, execute the following as a super user:

zypper rm $(rpm -qa kgraft-patch-*)

3.2.4 Online Migration: Checking the Status of Registered Products

It is common that during the lifecycle of a system installation, registered extensions and modules are removed from the system without also deactivating them on the registration server.

To prevent errors and unexpected behavior during an online migration, the status of installed products needs to be checked before the migration to allow reinstalling or deactivating products.

A new step has been added to the online migration workflow. It checks for registered products that are not currently installed on the system and allows:

  • Trying to install the products from the available repositories (Install).

  • Deactivating the products in SCC (Deactivate).

3.2.5 Updating Registration Status After Rollback

Tip

This entry has appeared in a previous release notes document.

When performing a service pack migration, it is necessary to change the configuration on the registration server to provide access to the new repositories. If the migration process is interrupted or reverted (via restoring from a backup or snapshot), the information on the registration server is inconsistent with the status of the system. This may lead to you being prevented from accessing update repositories or to wrong repositories being used on the client.

When a rollback is done via Snapper, the system will notify the registration server to ensure access to the correct repositories is set up during the boot process. If the system was restored any other way or the communication with the registration server failed for any reason (for example, because the server was not accessible due to network issues), trigger the rollback on the client manually by calling snapper rollback.

We suggest always checking that the correct repositories are set up on the system, especially after refreshing the service using zypper ref -s.
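For example, the following commands can help verify the state after a rollback (the output depends on the system and is omitted here):

```shell
# Show the registration status of the base product and of modules/extensions
SUSEConnect --status-text

# List the configured repositories together with their URLs
zypper lr --url

# Refresh all services and repositories
zypper ref -s
```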

3.2.6 /tmp Cleanup from sysconfig Automatically Migrated into systemd Configuration

Tip

This entry has appeared in a previous release notes document.

By default, systemd cleans temporary directories daily, and it does not honor sysconfig settings in /etc/sysconfig/cron, such as TMP_DIRS_TO_CLEAR. Therefore, these sysconfig settings must be migrated to the systemd configuration to avoid data loss or unwanted behavior.

When updating to SLE 12 or newer, the variables in /etc/sysconfig/cron will be automatically migrated into an appropriate systemd configuration (see /etc/tmpfiles.d/tmp.conf). The following variables are affected:

MAX_DAYS_IN_TMP
MAX_DAYS_IN_LONG_TMP
TMP_DIRS_TO_CLEAR
LONG_TMP_DIRS_TO_CLEAR
CLEAR_TMP_DIRS_AT_BOOTUP
OWNER_TO_KEEP_IN_TMP
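For illustration, a sysconfig setting such as MAX_DAYS_IN_TMP="10" roughly corresponds to the Age field of a tmpfiles.d entry. The generated file could contain lines like the following (the values are examples, not the shipped defaults):

```
# /etc/tmpfiles.d/tmp.conf
# Type  Path      Mode  UID   GID   Age
d       /tmp      1777  root  root  10d
d       /var/tmp  1777  root  root  30d
```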

3.3 For More Information

For more information, see Section 4, “Architecture Independent Information” and the sections relating to your respective hardware architecture.

4 Architecture Independent Information

Information in this section pertains to all architectures supported by SUSE Linux Enterprise Server 12 SP3.

4.1 Kernel

4.1.1 Support for Scalable MCA (SMCA)

As more functionality is added to hardware beginning with AMD family 0x17, being able to track it requires an enhanced approach to Machine Check Architecture (MCA).

SLE 12 SP3 now supports AMD's Scalable MCA (SMCA). SMCA is a specification which enriches the error information logged by the hardware to allow for improved error handling, better diagnosability, and future scalability.

4.1.2 Update Repositories for kGraft Live Patching Are Now Specific to Service Packs

Starting with SLE 12 SP3, the update repositories supplying kernel patches that can be applied using kGraft are split up by Service Pack version. This allows for easier maintenance and reduces the chance of complications during Service Pack upgrades.

4.1.3 Support for Intel Kaby Lake Processors

SLE 12 SP3 now contains support for Intel processors from the generation code-named Kaby Lake.

4.1.4 Support for Intel Xeon Phi Knights Landing Coprocessors

SLE 12 SP3 now supports Intel Xeon Phi coprocessors from the product line code-named Knights Landing.

4.1.5 NVDIMM: Support for Device DAX (Direct Access)

Device DAX is the device-centric analogue of File System DAX: It allows memory ranges to be allocated and mapped without the need for an intervening file system. This feature can improve the performance of both KVM guests and databases, such as MSSQL, that use raw I/O access to NVDIMM.

4.2 Kernel Modules

An important requirement for every enterprise operating system is the level of support available for specific environments. Kernel modules are the most relevant connector between hardware (controllers) and the operating system.

For more information about the handling of kernel modules, see the SUSE Linux Enterprise Administration Guide.

4.2.1 Support for Matrox G200eH3 Graphics Chips

SLE 12 SP3 includes a driver to enable Matrox G200eH3 graphics chips that will be used in HPE Gen10 servers.

4.2.2 hpwdt Driver (HPE Watchdog) Has Been Updated

SLE 12 SP3 includes an updated version of the HPE watchdog driver hpwdt to enable support for the upcoming HPE Gen10 Servers.

4.3 Security

4.3.1 SELinux Enablement

Tip

This entry has appeared in a previous release notes document.

SELinux capabilities have been added to SUSE Linux Enterprise Server (in addition to other frameworks, such as AppArmor). While SELinux is not enabled by default, customers can run SELinux with SUSE Linux Enterprise Server if they choose to.

SELinux Enablement includes the following:

  • The kernel ships with SELinux support.

  • We will apply SELinux patches to all “common” userland packages.

  • The libraries required for SELinux (libselinux, libsepol, libsemanage, etc.) have been added to SUSE Linux Enterprise.

  • Quality Assurance is performed with SELinux disabled—to make sure that SELinux patches do not break the default delivery and the majority of packages.

  • The SELinux-specific tools are shipped as part of the default distribution delivery.

  • SELinux policies are not provided by SUSE. Supported policies may be available from the repositories in the future.

  • Customers and Partners who have an interest in using SELinux in their solutions are encouraged to contact SUSE to evaluate their necessary level of support and how support and services for their specific SELinux policies will be granted.

By enabling SELinux in our code base, we add community code to offer customers the option to use SELinux without replacing significant parts of the distribution.

4.3.2 TPM-Capable UEFI Bootloader

SLES 12 SP3 has TPM support in the bootloader used on UEFI systems.

4.4 Networking

4.4.1 Support for the IDNA2008 Standard for Internationalized Domain Names

The original method for implementing Internationalized Domain Names was IDNA2003. This has been replaced by the IDNA2008 standard, the use of which is mandatory for some top-level domains.

The network utilities wget and curl have been updated to support IDNA2008 through the use of libidn2. This update also affects consumers of the libcurl library.

4.4.2 No Support for Samba as Active Directory-Style Domain Controller

Tip

This entry has appeared in a previous release notes document.

The version of Samba shipped with SLE 12 GA and newer does not include support to operate as an Active Directory-style domain controller. This functionality is currently disabled, as it lacks integration with system-wide MIT Kerberos.

4.5 Systems Management

4.5.1 System Clone AutoYaST XML Reflects Btrfs Snapshot State

In previous versions of SLE 12, when using yast clone_system, AutoYaST would always enable snapshots for Btrfs Volumes, regardless of whether they were enabled on the original system.

Starting with SLE 12 SP3, yast clone_system creates an AutoYaST XML file that accurately reflects the snapshot state of Btrfs volumes.

4.5.2 "Register Extensions or Modules Again" Has Been Removed from YaST

The button Register Extensions or Modules Again has been removed from the YaST registration module.

This option was redundant: It is still possible to register modules or extensions again with a different SCC account or using a different registration server (SCC or SMT).

Additionally, the option to filter out beta versions is now only visible if the server provides beta versions; otherwise, the check box is hidden.

4.5.3 The YaST Module for SSH Server Configuration Has Been Removed

The YaST module for configuring an SSH server, which was present in SLE 11, is not part of SLE 12. It does not have any direct successor.

The module SSH Server only supported configuring a small subset of all SSH server capabilities. Therefore, its functionality can be replaced by a combination of two YaST modules: the /etc/sysconfig Editor and the Services Manager. This also applies to system configuration via AutoYaST.

4.5.4 Sudo Has Been Updated from 1.8.10p3 to 1.8.19p2

Sudo has been updated from version 1.8.10p3 to 1.8.19p2. This update fixes many bugs and security vulnerabilities and also brings several enhancements. For more information, read the changelog file in /usr/share/doc/packages/sudo/NEWS.

4.5.5 YaST: Default Auto-Refresh Status for Local Repositories Is "Off"

In previous versions of SLE 12, when installing from a USB drive or external disk, the repository linking to the installation media was set to auto-refresh. This meant that when the USB drive or external disk had been removed and you tried to work with YaST or Zypper, you were asked to insert the external medium again.

In the YaST version shipped with SLE 12 SP3, the default auto-refresh status for local repositories (USB drives, hard disks, or dir://) has been changed to off, which avoids checking the now usually unnecessary repository.

4.5.6 All Snapper Commands Support the Option --no-dbus

Normally, the snapper command line tool uses DBus to connect to snapperd which does most of the actual work. This allows non-root users to work with Snapper.

However, there are situations when using DBus is not possible, for example, when chrooted on the rescue system or when DBus itself is broken after an update. This can limit the usefulness of Snapper as a disaster recovery tool. Therefore, some Snapper commands already supported the --no-dbus option, bypassing DBus and snapperd.

In the version of Snapper shipped with SLE 12 SP3, all Snapper commands support the --no-dbus option.
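For example, from a chroot in the rescue system, where snapperd and DBus are unavailable, commands can be run directly (the snapshot number below is only an illustration):

```shell
# List snapshots without going through DBus/snapperd:
snapper --no-dbus list

# Delete a snapshot, still bypassing DBus
# (snapshot number 42 is hypothetical):
snapper --no-dbus delete 42
```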

4.5.7 ntp 4.2.8

Tip

This entry has appeared in a previous release notes document.

ntp was updated to version 4.2.8.

  • The NTP server ntpd no longer synchronizes with its peers when the peers are specified by their host names in /etc/ntp.conf.

  • The output of ntpq --peers lists IP numbers of the remote servers instead of their host names.

Name resolution for the affected hosts works otherwise.

Parameter changes

The meanings of some parameters for the sntp command-line tool have changed, and some parameters have been dropped; for example, sntp -s is now sntp -S. Review any sntp usage in your own scripts for required changes.

After having been deprecated for several years, ntpdc is now disabled by default for security reasons. It can be re-enabled by adding the line enable mode7 to /etc/ntp.conf, but preferably ntpq should be used instead.

4.5.8 Support for Setting Kdump Low-Memory and High-Memory Allocation on the YaST Command Line

In the past, YaST supported setting high-memory and low-memory amounts for the kernel parameter crashkernel only from the ncurses or Qt interfaces.

You can now set these memory amounts on the command line too. To do so, use, for example, yast kdump startup enable alloc_mem=256,768. The first number represents the low-memory amount, the second number represents the high-memory amount. Therefore, the example is equivalent to setting crashkernel=256,low crashkernel=768,high on the kernel command line.

4.5.9 Salt Configuration with AutoYaST

With SLE 12 SP3, it is possible to configure Salt clients using AutoYaST. To use this feature, you need the package salt-minion which is not available in the standard SLES product. However, you can install this dependency from the SLE Module Advanced Systems Management.
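A minimal sketch of the relevant autoinst.xml snippet, assuming the SLE Module Advanced Systems Management repository is available during installation (element names follow the standard AutoYaST software section; adapt to your profile):

```xml
<software>
  <packages config:type="list">
    <!-- salt-minion comes from the Advanced Systems Management module -->
    <package>salt-minion</package>
  </packages>
</software>
```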

4.5.10 Zypper Option --plus-content Has Been Enhanced

The zypper option --plus-content was enhanced to allow specifying disabled repositories by name or alias also. Additionally, it can now be used with the zypper refresh command, to refresh either specified or all disabled repositories without the need to enable them.
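For example, a disabled repository can now be refreshed by alias without permanently enabling it. A hedged sketch ("debug-repo" is a hypothetical repository alias):

```shell
# Temporarily use the disabled repository "debug-repo"
# and refresh it, leaving it disabled afterward:
zypper --plus-content debug-repo refresh
```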

4.5.11 YaST: iSCSI Authentication Has Been Redesigned

In the past, the user interface for iSCSI authentication offered by YaST was not optimal. Additionally, not every option was explained in the help.

In SLE 12 SP3, the YaST module iSCSI Initiator and Target comes with the following enhancements:

  • Clearer terminology:

    • For discovery sessions, No Authentication is now called No Discovery Authentication.

      For login sessions, Use Authentication is now called Use Login Authentication, whereas No Authentication is now called No Login Authentication.

    • Incoming Authentication is now called Authentication by Initiators on the initiator side, whereas it is called Authentication by Targets on the target side.

    • Outgoing Authentication is now called Authentication by Targets on the initiator side, whereas it is called Authentication by Initiators on the target side.

  • No Login Authentication can now be used to log in to targets without authentication.

  • The help now explains password options.

4.5.12 systemd Daemon

Tip

This entry has appeared in a previous release notes document.

SLE 12 has moved to systemd, a new way of managing services. For more information, see the SUSE Linux Enterprise Admin Guide, Section The systemd Daemon (https://www.suse.com/documentation/sles-12/).

4.6 Storage

4.6.1 Automatic Cleanup of Snapshots Created by Rollbacks

In SLES 12 SP2 and before, you had to manually delete snapshots created by rollbacks at an appropriate time to avoid filling up the storage.

Starting with SLE 12 SP3, this process has been automated. During a rollback, Snapper sets the cleanup algorithm number for the snapshot corresponding to the previous default subvolume and for the backup snapshot of the previous default subvolume.

For more information, see http://snapper.io/2017/05/10/automatic-cleanup-after-rollback.html.

4.6.2 Establishing an NVMe-over-Fabrics Connection

To be able to establish an NVMe-over-Fabrics connection with the Linux kernel provided with the SLE 12 SP3 media, you need to delete or rename the file /etc/nvme/hostid.

To restore this file when the kernel update that fixes this issue is released, generate a new host ID by running:

uuidgen > /etc/nvme/hostid

4.6.3 Root File System Conversion to Btrfs Not Supported

Tip

This entry has appeared in a previous release notes document.

In-place conversion of an existing Ext2/Ext3/Ext4 or ReiserFS file system is supported for data mount points, provided it is not the root file system and the file system has at least 20 % free space available.

SUSE does not recommend or support in-place conversion of OS root file systems. In-place conversion to Btrfs of root file systems requires manual subvolume configuration and additional configuration changes that are not automatically applied for all use cases.

To ensure data integrity and the highest level of customer satisfaction, when upgrading, maintain existing root file systems. Alternatively, reinstall the entire operating system.

4.6.4 /var/cache on an Own Subvolume for Snapshots and Rollback

Tip

This entry has appeared in a previous release notes document.

/var/cache contains very volatile data, like the Zypper cache with RPM packages in different versions for each update. As a result of storing data that is mostly redundant but highly volatile, the amount of disk space a snapshot occupies can increase very fast.

To solve this, move /var/cache to a separate subvolume. On fresh installations of SLE 12 SP2 or newer, this is done automatically. To convert an existing root file system, perform the following steps:

  1. Find out the device name (/dev/sda2, /dev/sda3 etc.) of the root file system: df /

  2. Identify the parent subvolume of all the other subvolumes. For SLE 12 installations, this is a subvolume named @. To check if you have a @ subvolume, use: btrfs subvolume list / | grep '@'. If the output of this command is empty, you do not have a subvolume named @. In that case, you may be able to proceed with subvolume ID 5 which was used in older versions of SLE.

  3. Now mount the requisite subvolume.

    • If you have a @ subvolume, mount that subvolume to a temporary mount point: mount <root_device> -o subvol=@ /mnt

    • If you don't have a @ subvolume, mount subvolume ID 5 instead: mount <root_device> -o subvolid=5 /mnt

  4. /mnt/var/cache can already exist and could be the same directory as /var/cache. To avoid data loss, move it: mv /mnt/var/cache /mnt/var/cache.old

  5. In either case, create a new subvolume: btrfs subvol create /mnt/var/cache

  6. If there is now a directory /var/cache.old, move it to the new location: mv /var/cache.old/* /mnt/var/cache. If that is not the case, instead do: mv /var/cache/* /mnt/var/cache/

  7. Optionally, remove /mnt/var/cache.old: rm -rf /mnt/var/cache.old

  8. Unmount the subvolume from the temporary mount point: umount /mnt

  9. Add an entry to /etc/fstab for the new /var/cache subvolume. Use an existing subvolume as a template to copy from. Make sure to leave the UUID untouched (this is the root file system's UUID) and change the subvolume name and its mount point consistently to /var/cache.

  10. Mount the new subvolume as specified in /etc/fstab: mount /var/cache
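The steps above can be condensed into the following sketch, assuming the root device is /dev/sda2 and a parent subvolume named @ exists (adjust both to your system before running anything):

```shell
df /                                      # 1: find the root device
btrfs subvolume list / | grep '@'         # 2: check for the @ subvolume
mount /dev/sda2 -o subvol=@ /mnt          # 3: mount the parent subvolume
mv /mnt/var/cache /mnt/var/cache.old      # 4: preserve existing data
btrfs subvol create /mnt/var/cache        # 5: create the new subvolume
mv /mnt/var/cache.old/* /mnt/var/cache/   # 6: move the data back
rm -rf /mnt/var/cache.old                 # 7: optional cleanup
umount /mnt                               # 8: unmount the temporary mount
# 9: add a /var/cache entry to /etc/fstab (copy an existing subvolume
#    line, keep the UUID, change subvol= and the mount point), then:
mount /var/cache                          # 10: mount the new subvolume
```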

4.6.5 Support for Arbitrary Btrfs Subvolume Structure in AutoYaST

To set up a system with a non-default Btrfs subvolume structure with AutoYaST, you can now specify an arbitrary Btrfs subvolume structure in autoinst.xml.
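For illustration, a partition entry in autoinst.xml might define subvolumes like this (the paths and the copy_on_write flag are examples only; check the AutoYaST documentation for the full schema):

```xml
<partition>
  <mount>/</mount>
  <filesystem config:type="symbol">btrfs</filesystem>
  <subvolumes config:type="list">
    <!-- example subvolume without copy-on-write -->
    <subvolume>
      <path>var/cache</path>
      <copy_on_write config:type="boolean">false</copy_on_write>
    </subvolume>
    <subvolume>
      <path>srv</path>
    </subvolume>
  </subvolumes>
</partition>
```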

4.6.6 Snapper: Cleanup Rules Based on Fill Level

Tip

This entry has appeared in a previous release notes document.

Some programs do not respect the special disk space characteristics of a Btrfs file system containing snapshots. This can result in unexpected situations where no free space is left on a Btrfs filesystem.

Snapper can watch the disk space of snapshots that have automatic cleanup enabled and can try to keep the amount of disk space used below a threshold.

If snapshots are enabled, the feature is enabled for the root file system by default on new installations.

For existing installations, the system administrator must enable quota and set limits for the cleanup algorithm to use this new feature. This can be done using the following commands:

  1. snapper setup-quota

  2. snapper set-config NUMBER_LIMIT=2-10 NUMBER_LIMIT_IMPORTANT=4-10

For more information, see the man pages of snapper and snapper-configs.

4.7 Virtualization

4.7.1 KVM

4.7.1.1 KVM Now Supports up to 288 vCPUs

KVM now supports up to 288 vCPUs in a virtual machine.

4.7.1.2 Support for AVIC (Advanced Virtual Interrupt Controller)

In the past, interrupts from the LAPIC (Local Advanced Programmable Interrupt Controller) on AMD processors had to be virtualized in software, which did not yield optimal performance.

The version of KVM shipped with SLE 12 SP3 can use AVIC (Advanced Virtual Interrupt Controller), a hardware feature in recent AMD processors, to provide a virtualized LAPIC to the guest. This improves the virtualization performance.

AVIC is a set of components to present a virtualized LAPIC to guests, thus allowing most LAPIC accesses and interrupt delivery to the guests directly. The AVIC architecture also leverages the existing IOMMU interrupt redirection mechanism to deliver peripheral device interrupts to guests directly.

4.8 Miscellaneous

4.8.1 GNOME: Support for Chinese, Japanese, Korean Installed and Configured Automatically

When first logging in to GNOME on SLES 12 SP3 with the Workstation Extension or SLED 12 SP3, gnome-initial-setup will ask Chinese, Japanese, and Korean users for their preferred input method.

gnome-initial-setup is configured to run directly after the first login, but not before the GDM interface starts. This behavior is set in the GDM configuration file /etc/gdm/custom.conf with the line InitialSetupEnable=False. Do not change this setting; otherwise, a system without a normal user will not be able to display the expected GDM login window.

5 AMD64/Intel 64 (x86_64) Specific Information

Information in this section pertains to the version of SUSE Linux Enterprise Server 12 SP3 for the AMD64/Intel 64 architectures.

5.1 System and Vendor Specific Information

5.1.1 Intel* Omni-Path Architecture (OPA) Host Software

Intel Omni-Path Architecture (OPA) host software is fully supported in SUSE Linux Enterprise Server 12 SP3.

Intel OPA provides Host Fabric Interface (HFI) hardware with initialization and setup for high performance data transfers (high bandwidth, high message rate, low latency) between compute and I/O nodes in a clustered environment.

For installation instructions, see the Intel Omni-Path Architecture documentation at https://www.intel.com/content/dam/support/us/en/documents/network-and-i-o/fabric-products/Intel_OP_Software_SLES_12_3_RN_J71758.pdf.

5.1.2 Support for Both TPM 1.2 and 2.0

Over recent years, TPM 2.0 variants have become more common. Because TPM 2.0 uses a different API, it requires new libraries, tools, and bootloader support.

SLE 12 SP2 and also SP3 provide equal support for TPM 1.2 and TPM 2.0 utilities and booting.

6 POWER (ppc64le) Specific Information

Information in this section pertains to the version of SUSE Linux Enterprise Server 12 SP3 for the POWER architecture.

6.1 QEMU-virtualized PReP Partition

On POWER, the PReP partition which contains the bootloader has no unique identifier other than the serial number of the disk on which it was created. When virtualized with QEMU, QEMU does not provide any disk serial number unless you explicitly specify one.

This means that when running under QEMU, the PReP partition of an installation does not have any unique identification. In consequence, the partition name can change when a disk is added or removed from the virtual machine or when the storage configuration otherwise changes. This can lead to system errors when reinstalling or updating the bootloader.

If you expect the storage configuration of a QEMU virtual machine on POWER to change over the lifetime of the installation, we recommend sidestepping this issue: Before the initial installation, assign a unique serial number to each disk in a QEMU virtual machine.

6.2 kdump: Shorter Time to Filter and Save /proc/vmcore

The updated makedumpfile tool shipped with SLES 12 SP3 supports multithreading. This can be leveraged to reduce the time spent in the capture kernel by following these steps:

  1. Set KDUMP_CPUS=[CPUS] in the file /etc/sysconfig/kdump. Replace [CPUS] with the number of CPUs to use in the Kdump kernel.

  2. Set MAKEDUMPFILE_OPTIONS="--num-threads [CPUs-1]". Using one CPU less than there are total active CPUs can improve performance.

  3. Set KDUMPTOOL_FLAGS=NOSPLIT in the file /etc/sysconfig/kdump.

  4. Restart kdump.service.
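Taken together, the settings in /etc/sysconfig/kdump might look as follows on a machine where four CPUs are to be used in the capture kernel (the numbers are illustrative):

```shell
# /etc/sysconfig/kdump (illustrative values)
KDUMP_CPUS=4
MAKEDUMPFILE_OPTIONS="--num-threads 3"   # one less than KDUMP_CPUS
KDUMPTOOL_FLAGS=NOSPLIT

# Afterward, restart the service:
# systemctl restart kdump.service
```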

6.3 Parameter crashkernel Is Now Used for fadump Memory Reservation

Starting with SLE 12 SP3, to reserve memory for fadump, use the parameter crashkernel instead of the deprecated parameter fadump_reserve_mem. The offset for fadump is calculated in the kernel. Therefore, if you provide an offset in the parameter crashkernel=, it is ignored.
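For example, a kernel command line enabling fadump with a memory reservation might look like this (the size is illustrative; any offset appended to the crashkernel value would be ignored):

```
fadump=on crashkernel=2048M
```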

6.4 Encryption Improvements Using Hardware Optimizations

The performance of kernel XTS mode on POWER platforms has been improved in SLES 12 SP3 by exploiting instruction set enhancements. On POWER8, it now runs up to 20 times faster than in SLES 12 SP2. Kernel CBC and CTR modes were already optimized in a previous release.

To ensure that your kernel is using the accelerated POWER kernel crypto implementations, verify that the module vmx_crypto has been loaded:

lsmod | grep vmx_crypto

6.5 Ceph Client Support on z Systems and POWER

On SLES 12 SP2 and SLES 12 SP3, z Systems and POWER machines can now function as SUSE Enterprise Storage (Ceph) clients.

This support is possible because the kernels for z Systems and POWER now have the relevant modules for CephFS and RBD enabled. The Ceph client RPMs for z Systems and POWER are included in SLE 12 SP3. Additionally, the QEMU packages for z Systems and POWER are now built against librbd.

6.6 Memory Reservation Support for fadump in YaST

Memory to be reserved for firmware-assisted dumps (also known as fadump, available on the POWER architecture) can now be specified in the Kdump module of YaST.

7 IBM z Systems (s390x) Specific Information

Information in this section pertains to the version of SUSE Linux Enterprise Server 12 SP3 for the IBM z Systems architecture. For more information, see http://www.ibm.com/developerworks/linux/linux390/documentation_novell_suse.html.

IBM zEnterprise 196 and IBM zEnterprise 114 are referred to below as z196 and z114, respectively.

7.1 Virtualization

7.1.1 qeth Device Driver Has Accelerated set_rx_mode Implementation

Improved initialization of qeth network devices in layer 2 and layer 3 allows Linux instances to boot faster.

7.2 Storage

7.2.1 parted Augmented with Partitioning Functionality as Provided by z Systems Tools

The partitioning utility parted now includes partitioning functionality for FBA and ECKD DASDs. This brings parted up to par with the functionality provided by z Systems tools.

7.2.2 DASD Channel Path-Aware Error Recovery

The DASD driver can now exclude paths from normal operation if other channel paths are available.

7.2.3 New dasdfmt Quick Format Mode

With the new quick format mode, you can define DASD volumes with a pre-formatted track layout. This significantly reduces the deployment time of DASD volumes.

7.2.4 Ceph Client Support on z Systems and POWER

On SLES 12 SP2 and SLES 12 SP3, z Systems and POWER machines can now function as SUSE Enterprise Storage (Ceph) clients.

This support is possible because the kernels for z Systems and POWER now have the relevant modules for CephFS and RBD enabled. The Ceph client RPMs for z Systems and POWER are included in SLE 12 SP3. Additionally, the QEMU packages for z Systems and POWER are now built against librbd.

7.2.5 GPFS Partition Type in fdasd

The new partition type "GPFS" in the fdasd tool supports fast identification and handles partitions that contain GPFS Network Shared Disks.

7.2.6 LUN Scanning Enabled by Default

Tip

This entry has appeared in a previous release notes document.

Unlike in SLES 11, LUN scanning is enabled by default in SLES 12 and newer. Instead of having a user-maintained whitelist of FibreChannel/SCSI disks that are brought online to the guest, the system now polls all targets on a fabric. This is especially helpful on systems with hundreds of zFCP disks and exclusive zoning.

However, on systems with few disks and an open fabric, this can lead to long boot times or access to inappropriate disks. It can also lead to difficulties offlining and removing disks.

To disable LUN scanning, set the boot parameter zfcp.allow_lun_scan=0.

For LUN Scanning to work properly, the minimum storage firmware levels are:

  • DS8000 Code Bundle Level 64.0.175.0

  • DS6000 Code Bundle Level 6.2.2.108

7.3 Network

7.3.1 snIPL: Hardening

Secure connections for snIPL enable Linux to remotely handle a greater variety of environments.

7.4 Security

7.4.1 libica with DRBG Random Number Generation

The package libica now includes a DRBG (Deterministic Random Bit Generator) that is compliant with the updated security specification NIST SP 800-90A for pseudo-random number generation.

7.4.2 Toleration Support for New Cryptography Hardware

SLES 12 SP3 includes support for using new cryptography hardware in toleration mode. This allows cryptographic operations to be performed as on older hardware, which eases migration to the new hardware.

7.5 Reliability, Availability, Serviceability (RAS)

7.5.1 Stable PCI Identifiers Using UIDs

To maintain persistent configurations for PCI devices, SLES 12 SP3 now provides stable and unique identifiers for PCI functions for as long as the I/O configuration (IOCDS and HCD) remains stable.

7.5.2 Hardware Breakpoint Support in GDB

When code needs to be treated as read-only, software breakpoints cannot be used. GDB can now use hardware breakpoints for debugging.

7.6 Performance

7.6.1 Support for 2 GB Memory Pages

Applications with huge memory sets can use 2 GB large memory pages for improved memory handling.

7.6.2 Extended CPU Topology to Support Drawers

Addressing CPUs across drawers improves scheduling and performance analysis on IBM z Systems z13 and later hardware.

8 ARM 64-Bit (AArch64) Specific Information

Information in this section pertains to the version of SUSE Linux Enterprise Server 12 SP3 for the AArch64 architecture.

8.1 AppliedMicro X-C1 Server Development Platform (Mustang) Firmware Requirements

In between SUSE Linux Enterprise Server 12 SP2 and SP3, some AppliedMicro X-Gene drivers and the corresponding Device Tree bindings were changed in an incompatible way. X-C1 devices that successfully boot SUSE Linux Enterprise Server 12 SP2 may be unable to install SP3 without changes. Symptoms include a crash in the mdio-xgene network driver.

The updated X-Gene drivers in SP3 require the Device Tree provided by the vendor's firmware version 3.06.25 or later. To install SLES 12 SP3, you first need to ensure that the AppliedMicro TianoCore bootloader firmware is updated according to the instructions provided by the vendor. For any questions about obtaining and upgrading this firmware, contact the hardware vendor.

After updating the firmware, it may in turn no longer be possible to run SLES 12 SP2 unless the firmware is downgraded again.

8.2 New System-on-Chip Driver Enablement

Drivers for the following additional System-on-Chip platforms have been enabled in the SP3 kernel:

  • AppliedMicro X-Gene 3

  • Cavium ThunderX2 CN99xx

  • HiSilicon Hi1616

  • Marvell Armada 7K/8K

  • Qualcomm Centriq 2400 series

  • Rockchip RK3399

8.3 Support for OpenDataPlane on Cavium ThunderX and Octeon TX Platforms

This release supports OpenDataPlane (ODP) API version 1.11.0.0, also known as Monarch LTS.

Platform Compatibility

This release is compatible with the generic AArch64 platforms and the following Cavium platforms:

  • ThunderX (CN88XX)

  • Octeon TX (CN81XX, CN83XX)

System Requirements

Your system needs to meet certain requirements before ODP can be used. The general requirements are as follows:

  • CUnit shared library in the root file system (required for running unit tests and for configure to work properly)

  • vfio, thunder-nicpf, and BGX modules loaded into the kernel (typically, no driver needs to be loaded manually, since all mentioned modules are compiled into the kernel image)

  • Hugetlbfs must be mounted with a considerable number of pages added to it (a minimum of 256 MB of memory). ODP ThunderX uses huge pages for maximum performance by eliminating TLB misses. For some hardware-related cases, physically contiguous memory is needed. Therefore, the ODP ThunderX memory allocator tries to allocate a contiguous memory area.

    If the ODP application has startup problems, we recommend increasing the hugepage pool by adding more pages to the pool than required.

  • NIC VFs need to be bound to the VFIO framework

8.4 KVM on AArch64

Tip

This entry has appeared in a previous release notes document.

KVM virtualization has been enabled and is supported on some system-on-chip platforms for mutually agreed-upon partner-specific use cases. It is only supported on partner certified hardware and firmware. Not all QEMU options and backends are available on AArch64. The same statement is applicable for other virtualization tools shipped on AArch64.

8.5 Toolchain Module Enabled in Default Installation

Tip

This entry has appeared in a previous release notes document.

The system compiler (GCC 4.8) is not supported on the AArch64 architecture. To work around this, you previously had to enable the Toolchain module manually and use the GCC version from that module.

On AArch64, the Toolchain module is now automatically pre-selected after registering SLES during installation. This makes the latest SLE compilers available on all installations. You then only need to make sure that you actually use that compiler.

Important
Important: When Using AutoYaST, Make Sure to Enable Toolchain Module

Be aware that when using AutoYaST to install, you have to explicitly add the Toolchain module into the XML installation profile.

9 Packages and Functionality Changes

This section comprises changes to packages, such as additions, updates, removals and changes to the package layout of software. It also contains information about modules available for SUSE Linux Enterprise Server. For information about changes to package management tools, such as Zypper or RPM, see Section 4.5, “Systems Management”.

9.1 Updated Packages

9.1.1 GnuTLS Has Been Updated to Version 3.3

Some programs require GnuTLS version 3.3 or newer to work.

The upgrade from GnuTLS 3.2 to GnuTLS 3.3 does not change the major version of libgnutls28, so existing programs will continue to work.

The library libgnutls-xssl.so was not used by other programs and has been removed.

9.1.2 Open vSwitch Has Been Updated to Version 2.7.0

Open vSwitch has been updated to 2.7.0.

Important changes include:

  • Various OpenFlow bug fixes

  • Improved support for OpenFlow

  • Support for new OpenFlow extensions

  • Performance improvements

  • Support for IPsec tunnels has been removed

  • Changes relating to DPDK:

    • Support for DPDK 16.11

    • Support for jumbo frames

    • Support for rx checksum offload

    • Support for port hotplugging

For more detailed information about changes between version 2.6.0 and 2.7.0, see https://github.com/openvswitch/ovs/blob/master/NEWS.

9.1.3 Upgrading PostgreSQL Installations from 9.1 to 9.4

Tip

This entry has appeared in a previous release notes document.

To upgrade a PostgreSQL server installation from version 9.1 to 9.4, the database files need to be converted to the new version.

Note: System Upgrade from SLE 11

On SLE 12, there are no PostgreSQL 8.4 or 9.1 packages. This means you must first migrate PostgreSQL from 8.4 or 9.1 to 9.4 on SLE 11 before upgrading the system from SLE 11 to SLE 12.

Newer versions of PostgreSQL come with the pg_upgrade tool that simplifies and speeds up the migration of a PostgreSQL installation to a new version. Formerly, it was necessary to dump and restore the database files, which was much slower.

To work, pg_upgrade needs to have the server binaries of both versions available. To allow this, we had to change the way PostgreSQL is packaged as well as the naming of the packages, so that two or more versions of PostgreSQL can be installed in parallel.

Starting with version 9.1, PostgreSQL package names on SUSE Linux Enterprise products contain numbers indicating the major version. In PostgreSQL terms, the major version consists of the first two components of the version number, for example, 9.1, 9.3, and 9.4. So, the packages for PostgreSQL 9.3 are named postgresql93, postgresql93-server, etc. Inside the packages, the files were moved from their standard location to a versioned location such as /usr/lib/postgresql93/bin or /usr/lib/postgresql94/bin. This avoids file conflicts if multiple packages are installed in parallel. The update-alternatives mechanism creates and maintains symbolic links that cause one version (by default the highest installed version) to re-appear in the standard locations. By default, database data is stored under /var/lib/pgsql/data on SUSE Linux Enterprise.

The following preconditions have to be fulfilled before data migration can be started:

  1. If not already done, the packages of the old PostgreSQL version (9.1) must be upgraded to the latest release through a maintenance update.

  2. The packages of the new PostgreSQL major version need to be installed. For SLE 12, this means installing postgresql94-server and all the packages it depends on. Because pg_upgrade is contained in the package postgresql94-contrib, this package must be installed as well, at least until the migration is done.

  3. Unless pg_upgrade is used in link mode, the server must have enough free disk space to temporarily hold a copy of the database files. If the database instance was installed in the default location, the needed space in megabytes can be determined by running the following command as root: du -hs /var/lib/pgsql/data. If space is tight, it might help to run the VACUUM FULL SQL command on each database in the PostgreSQL instance to be migrated which might take very long.
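With these preconditions met, a typical invocation looks like the following sketch, run as the postgres user. The data directory names, especially data.old, are assumptions; adapt them to how your old instance's data was preserved:

```shell
su - postgres
pg_upgrade \
  --old-bindir /usr/lib/postgresql91/bin \
  --new-bindir /usr/lib/postgresql94/bin \
  --old-datadir /var/lib/pgsql/data.old \
  --new-datadir /var/lib/pgsql/data
```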

Upstream documentation about pg_upgrade, including step-by-step instructions for performing a database migration, can be found under file:///usr/share/doc/packages/postgresql94/html/pgupgrade.html (if the postgresql94-docs package is installed), or online at http://www.postgresql.org/docs/9.4/static/pgupgrade.html. Note that the online documentation explains how to install PostgreSQL from the upstream sources (which is not necessary on SLE) and uses other directory names (/usr/local instead of the update-alternatives-based paths described above).

For background information about the inner workings of pg_upgrade and a performance comparison with the old dump-and-restore method, see http://momjian.us/main/writings/pgsql/pg_upgrade.pdf.

9.1.4 ntp 4.2.8

Tip

This entry has appeared in a previous release notes document.

ntp was updated to version 4.2.8.

  • The NTP server ntpd no longer synchronizes with its peers when the peers are specified by their host names in /etc/ntp.conf.

  • The output of ntpq --peers lists IP numbers of the remote servers instead of their host names.

Name resolution for the affected hosts works otherwise.

Configure ntpd to not run in chroot mode by setting

NTPD_RUN_CHROOTED="no"

in /etc/sysconfig/ntp. Then restart the service with:

systemctl restart ntpd

Due to the architecture of ntpd, it does not start reliably in a chroot environment. Furthermore, the daemon drops all capabilities except for the one needed to open sockets on reserved ports, so chroot is not required. If policy requirements mandate this, AppArmor can be used to further limit the process in what it can do.

Additional Information

The meanings of some parameters have changed; for example, sntp -s is now sntp -S.

After having been deprecated for several years, ntpdc is now disabled by default for security reasons. It can be re-enabled by adding the line enable mode7 to /etc/ntp.conf, but preferably ntpq should be used instead.

9.1.5 MariaDB Replaces MySQL

Tip

This entry has appeared in a previous release notes document.

MariaDB is a backward-compatible replacement for MySQL.

If you update from SLE 11 to SLE 12 or later, create a manual backup of the databases before the system update. This helps if the database fails to start after the update because of changes to the storage engine's on-disk layout.
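Such a backup can be created with mysqldump while the old server is still running; a minimal sketch (the target file name is an example):

```shell
# Dump all databases into a single file before starting the system update
# (example file name; run with sufficient MySQL privileges):
mysqldump --all-databases > /root/mysql-backup-before-update.sql
```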

After the update to SLE 12 or later, a manual step is required to get the database running again (this way, you will quickly notice if something went wrong):

touch /var/lib/mysql/.force_upgrade
rcmysql start
# => redirecting to systemctl start mysql.service
rcmysql status
# => Checking for service MySQL:
# => ...

9.2 Removed and Deprecated Functionality

9.2.1 Docker Compose Has Been Removed from the Containers Module

Docker Compose is not supported as a part of SUSE Linux Enterprise Server 12. While it was temporarily included as a Technology Preview, testing showed that the technology was not ready for enterprise use.

SUSE's focus is on Kubernetes, which provides better value in terms of features, extensibility, stability, and performance.

9.2.2 Packages and Features to Be Removed in the Future

9.2.2.1 libcgroup1 Deprecated Starting with SLE 12 SP4

Most functionality of libcgroup1 is also provided by systemd. In fact, the cgroup handling of libcgroup1 can conflict with that of systemd.

Starting with SLE 12 SP4, libcgroup1 will be deprecated. Therefore, you should start migrating to the equivalent functionality in systemd with SLE 12 SP3.

For more information, see https://www.suse.com/support/kb/doc/?id=7018741.
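As an example of such a migration, resource limits that were previously configured with libcgroup tools can be set through systemd's own cgroup support. A sketch, assuming a hypothetical unit named example.service:

```shell
# Limit CPU and memory consumption of a unit via systemd's cgroup handling
# instead of libcgroup (example.service is a placeholder name):
systemctl set-property example.service CPUQuota=50% MemoryLimit=1G
```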

9.2.2.2 Use /etc/os-release Instead of /etc/SuSE-release
Tip

This entry has appeared in a previous release notes document.

Starting with SLE 12, the /etc/SuSE-release file has been deprecated. Do not use it to identify a SUSE Linux Enterprise system anymore. This file will be removed in a future Service Pack or release.

To determine the release, use the file /etc/os-release instead. This file is a cross-distribution standard to identify Linux systems. For more information about the syntax, see the os-release man page (man os-release).
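Because /etc/os-release consists of shell-compatible KEY=VALUE assignments, it can simply be sourced; a minimal sketch:

```shell
# /etc/os-release is a list of shell-compatible variable assignments,
# so sourcing it makes ID, VERSION_ID, PRETTY_NAME, etc. available:
. /etc/os-release
echo "This is ${PRETTY_NAME:-an unknown distribution} (ID=${ID}, VERSION_ID=${VERSION_ID})"
```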

9.3 Changes in Packaging and Delivery

9.3.1 OFED-related Packages Replaced by Packages From New Upstream

In SLE 12 SP2 and earlier, the OFED (OpenFabric Enterprise Distribution) stack came directly from OFED.

Since the release of SLES 12 SP2, most of this stack has been upstreamed to the Linux RDMA project. This has resulted in an influx of contributions to the project and a much-improved source base.

With SLE 12 SP3, we have updated the OFED stack to the version from the new upstream. This has brought the following package changes:

  • The package rdma is now called rdma-core.

  • All -rdmav2 packages (providers for specific RDMA hardware) are integrated into the libibverbs package.

  • libibverbs itself is in the libibverbs1 package.

  • mlx4 and mlx5 are still shipped as separate packages, under the names libmlx4-1 and libmlx5-1, as they can be used standalone.

  • libibcm-devel, libibumad-devel, librdmacm-devel, and libibverbs-devel are all provided by the rdma-core-devel package.

  • The static libraries are not provided anymore.

9.3.2 Support for Intel OPA Fabrics Moved to mvapich2-psm2 Package

Tip

This entry has appeared in a previous release notes document.

The version of the package mvapich2-psm originally shipped with SLES 12 SP2 and SLES 12 SP3 exclusively supported Intel Omni-Path Architecture (OPA) fabrics. In SLES 12 SP1 and earlier, this package supported the use of Intel True Scale fabrics instead.

This issue is fixed by a maintenance update that provides an additional package named mvapich2-psm2 which supports only Intel OPA fabrics, while the original package mvapich2-psm again supports only Intel True Scale fabrics.

If you are currently using mvapich2-psm together with Intel OPA fabrics, make sure to switch to the new package mvapich2-psm2 after this maintenance update.
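The switch can be performed with zypper; a sketch:

```shell
# Replace the True-Scale-only package with the OPA-capable one:
zypper install mvapich2-psm2
zypper remove mvapich2-psm
```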

9.4 Modules

This section contains information about important changes to modules. For more information about available modules, see Section 2.9.1, “Available Modules”.

9.4.1 libgcrypt11 Available from the Legacy Module

The Legacy module now provides a package for libgcrypt11. This enables running applications built on SLES 11 against libgcrypt11 on SLES 12.

10 Technical Information

This section contains information about system limits, a number of technical changes and enhancements for the experienced user.

When talking about CPUs, we use the following terminology:

CPU Socket

The visible physical entity, as it is typically mounted to a motherboard or an equivalent.

CPU Core

The (usually not visible) physical entity as reported by the CPU vendor.

On IBM z Systems, this is equivalent to an IFL (Integrated Facility for Linux).

Logical CPU

This is what the Linux Kernel recognizes as a "CPU".

We avoid the term "thread" (which is sometimes used for this), as it would be ambiguous later in this document.

Virtual CPU

A logical CPU as seen from within a Virtual Machine.

10.1 Kernel Limits

http://www.suse.com/products/server/technical-information/#Kernel

This table summarizes the limits that exist in the kernel and related utilities of SUSE Linux Enterprise Server 12 SP3.

Limit (SLES 12 SP3, Linux 4.4)    AMD64/Intel 64   IBM z Systems    POWER              AArch64
                                  (x86_64)         (s390x)          (ppc64le)          (ARMv8)
CPU bits                          64               64               64                 64
Maximum number of logical CPUs    8192             256              2048               128
Maximum amount of RAM             > 1 PiB/64 TiB   10 TiB/256 GiB   1 PiB/64 TiB       256 TiB/n.a.
(theoretical/certified)
Maximum amount of user space/     128 TiB/128 TiB  n.a.             512 TiB [1]/2 EiB  256 TiB/128 TiB
kernel space

The following limits apply across all architectures:

Maximum amount of swap space: up to 29 * 64 GB (x86_64) or 30 * 64 GB (other architectures)

Maximum number of processes: 1048576

Maximum number of threads per process: the upper limit depends on memory and other parameters (tested with more than 120,000)

Maximum size per block device: up to 8 EiB

FD_SETSIZE: 1024

[1] By default, the user-space memory limit on the POWER architecture is 128 TiB. However, you can explicitly request mmaps of up to 512 TiB.

10.2 KVM Limits

SLES 12 SP3 Virtual Machine (VM) Limits

Maximum VMs per host

Unlimited, provided that the total number of virtual CPUs in all guests does not exceed 8 times the number of CPU cores in the host

Maximum Virtual CPUs per VM

288

Maximum Memory per VM

4 TiB

Virtual Host Server (VHS) limits are identical to those of SUSE Linux Enterprise Server.
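The vCPU overcommit guideline above can be checked with simple shell arithmetic. A sketch; note that nproc reports logical CPUs, so on SMT hosts, divide by the number of hardware threads per core to obtain the core count:

```shell
# Guideline: total vCPUs across all guests <= 8 x host CPU cores.
# nproc reports logical CPUs; on SMT hosts, divide by the number of
# hardware threads per core to get the actual core count.
cores=$(nproc)
echo "vCPU budget for this host (assuming 1 thread per core): $((cores * 8))"
```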

10.3 Xen Limits

With SUSE Linux Enterprise Server 11 SP2, we removed the 32-bit hypervisor as a virtualization host. 32-bit virtual guests are not affected and remain fully supported with the provided 64-bit hypervisor.

SLES 12 SP3 Virtual Machine (VM) Limits

Maximum number of virtual CPUs per VM

64

Maximum amount of memory per VM

16 GiB x86_32, 511 GiB x86_64

SLES 12 SP3 Virtual Host Server (VHS) Limits

Maximum number of physical CPUs

256

Maximum number of virtual CPUs

256

Maximum amount of physical memory

5 TiB

Maximum amount of Dom0 physical memory

500 GiB

Maximum number of block devices

12,000 SCSI logical units

  • PV:  Paravirtualization

  • FV:  Full virtualization

For more information about acronyms, see the virtualization documentation provided at https://www.suse.com/documentation/sles-12/.

10.4 File Systems

https://www.suse.com/products/server/technical-information/#FileSystem

10.4.1 Comparison of Supported File Systems

SUSE Linux Enterprise was the first enterprise Linux distribution to support journaling file systems and logical volume managers, back in 2000. Later, we introduced XFS to Linux, which today is seen as the primary workhorse for large-scale file systems, systems with heavy load, and multiple parallel reading and writing operations. With SUSE Linux Enterprise 12, we took the next step and started using the copy-on-write file system Btrfs as the default for the operating system, to support system snapshots and rollback.

+ supported
– unsupported

Feature                       Btrfs      XFS        Ext4       ReiserFS 2  OCFS2 3
Data/metadata journaling      N/A 1      – / +                 – / +       – / +
Journal internal/external     N/A 1      + / +      + / –
Offline extend/shrink         + / +      – / –      + / +      + / –
Online extend/shrink          + / +      + / –      + / –      + / –       + / –
Inode allocation map          B-tree     B+-tree    table      u. B*-tree  table
Sparse files                  +
Tail packing                  +          +
Defrag                        +
ExtAttr/ACLs                  + / +
Quotas                        +
Dump/restore                  +
Block size default            4 KiB (all file systems)
Maximum file system size      16 EiB     8 EiB      1 EiB      16 TiB      4 PiB
Maximum file size             16 EiB     8 EiB      1 EiB      1 EiB       4 PiB
Support in products           SLE        SLE        SLE        SLE         SLE HA

  • 1 Btrfs is a copy-on-write file system. Rather than journaling changes before writing them in-place, it writes them to a new location and then links the new location in. Until the last write, the new changes are not committed. Due to the nature of the file system, quotas are implemented based on subvolumes (qgroups).

    The default block size varies between host architectures: 64 KiB is used on POWER, 4 KiB on most other systems. The actual size in use can be checked with the command getconf PAGE_SIZE.

  • 2 ReiserFS is supported for existing file systems. The creation of new ReiserFS file systems is discouraged.

  • 3 OCFS2 is fully supported as part of the SUSE Linux Enterprise High Availability Extension.
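The page-size check mentioned in footnote 1 can be run directly:

```shell
# Print the memory page size in bytes: 4096 on x86_64, 65536 on POWER.
getconf PAGE_SIZE
```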

The maximum file size above can be larger than the file system's actual size due to the use of sparse blocks. Note that unless a file system comes with large file support (LFS), the maximum file size on a 32-bit system is 2 GiB (2^31 bytes). Currently, all of our standard file systems (including Ext3 and ReiserFS) have LFS, which gives a theoretical maximum file size of 2^63 bytes. The numbers in the table above assume that the file systems are using a 4 KiB block size. When using different block sizes, the results are different, but 4 KiB reflects the most common standard.
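The effect of sparse blocks can be demonstrated with truncate: the file's apparent size exceeds the space it actually occupies on disk.

```shell
# Create a 1 GiB sparse file: the apparent size is 1 GiB, but almost no
# disk blocks are allocated until data is actually written.
truncate -s 1G /tmp/sparse-demo.img
stat -c 'apparent size: %s bytes, allocated 512-byte blocks: %b' /tmp/sparse-demo.img
rm /tmp/sparse-demo.img
```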

In this document: 1024 Bytes = 1 KiB; 1024 KiB = 1 MiB; 1024 MiB = 1 GiB; 1024 GiB = 1 TiB; 1024 TiB = 1 PiB; 1024 PiB = 1 EiB. See also http://physics.nist.gov/cuu/Units/binary.html.

NFSv4 with IPv6 is only supported for the client side. An NFSv4 server with IPv6 is not supported.

The version of Samba shipped with SUSE Linux Enterprise Server 12 SP3 delivers integration with Windows 7 Active Directory domains. In addition, we provide the clustered version of Samba as part of SUSE Linux Enterprise High Availability Extension 12 SP3.

10.4.2 Supported Btrfs Features

The following table lists supported and unsupported Btrfs features across multiple SLES versions.

+ supported
– unsupported

Feature                       SLES 11 SP4  SLES 12 GA  SLES 12 SP1  SLES 12 SP2  SLES 12 SP3
Copy on Write                 +            +           +            +            +
Snapshots/Subvolumes          +            +           +            +            +
Metadata Integrity            +            +           +            +            +
Data Integrity                +            +           +            +            +
Online Metadata Scrubbing     +            +           +            +            +
Automatic Defragmentation     –            –           –            –            –
Manual Defragmentation        +            +           +            +            +
In-band Deduplication         –            –           –            –            –
Out-of-band Deduplication     +            +           +            +            +
Quota Groups                  +            +           +            +            +
Metadata Duplication          +            +           +            +            +
Multiple Devices              –            +           +            +            +
RAID 0                        –            +           +            +            +
RAID 1                        –            +           +            +            +
RAID 10                       –            +           +            +            +
RAID 5                        –            –           –            –            –
RAID 6                        –            –           –            –            –
Hot Add/Remove                –            +           +            +            +
Device Replace                –            –           –            –            –
Seeding Devices               –            –           –            –            –
Compression                   –            –           +            +            +
Big Metadata Blocks           –            +           +            +            +
Skinny Metadata               –            +           +            +            +
Send Without File Data        –            +           +            +            +
Send/Receive                  –            –           –            +            +
Inode Cache                   –            –           –            –            –
Fallocate with Hole Punch     –            –           –            +            +

12 Colophon

Thanks for using SUSE Linux Enterprise Server in your business.

The SUSE Linux Enterprise Server Team.
