SUSE Linux Enterprise Server 15 SP2

Release Notes

SUSE Linux Enterprise Server is a modern, modular operating system for both multimodal and traditional IT. This document provides a high-level overview of features, capabilities, and limitations of SUSE Linux Enterprise Server 15 SP2 and highlights important product updates.

These release notes are updated periodically. The latest version of these release notes is always available at https://www.suse.com/releasenotes. General documentation can be found at https://documentation.suse.com/sles/15-SP2.

Publication Date: 2020-07-29, Version: 15.2.20200729

1 About the Release Notes

These Release Notes are identical across all architectures, and the most recent version is always available online at https://www.suse.com/releasenotes.

Entries may be listed twice if they are important and belong to more than one section.

Release notes usually only list changes that happened between two subsequent releases. Certain important entries from the release notes of previous product versions are repeated. To make these entries easier to identify, they contain a note to that effect.

However, repeated entries are provided as a courtesy only. Therefore, if you are skipping one or more service packs, check the release notes of the skipped service packs as well. If you are only reading the release notes of the current release, you could miss important changes.

2 SUSE Linux Enterprise Server

SUSE Linux Enterprise Server 15 SP2 is a multimodal operating system that paves the way for IT transformation in the software-defined era. The modern and modular OS helps simplify multimodal IT, makes traditional IT infrastructure efficient and provides an engaging platform for developers. As a result, you can easily deploy and transition business-critical workloads across on-premise and public cloud environments.

SUSE Linux Enterprise Server 15 SP2, with its multimodal design, helps organizations transform their IT landscape by bridging traditional and software-defined infrastructure.

2.1 Interoperability and Hardware Support

Designed for interoperability, SUSE Linux Enterprise Server integrates into classical Unix and Windows environments, supports open standard interfaces for systems management, and has been certified for IPv6 compatibility.

This modular, general-purpose operating system runs on four processor architectures and is available with optional extensions that provide advanced capabilities for tasks such as real-time computing and high availability clustering.

SUSE Linux Enterprise Server is optimized to run as a high-performing guest on leading hypervisors and supports an unlimited number of virtual machines per physical system with a single subscription. This makes it the perfect guest operating system for virtual computing.

2.2 What Is New?

2.2.1 General Changes in SLE 15

SUSE Linux Enterprise Server 15 introduces many innovative changes compared to SUSE Linux Enterprise Server 12. The most important changes are listed below.

Migration from openSUSE Leap to SUSE Linux Enterprise Server

Starting with SLE 15, we support migrating from openSUSE Leap 15 to SUSE Linux Enterprise Server 15. Even if you decide to start out with the free community distribution, you can later easily upgrade to a distribution with enterprise-class support.

Extended Package Search

Use the new Zypper command zypper search-packages to search across all SUSE repositories available for your product even if they are not yet enabled. This functionality makes it easier for administrators and system architects to find the software packages needed. To do so, it leverages the SUSE Customer Center.
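For example, to find out which module or repository provides a package that is not installed yet (the package name below is only an illustration; the command requires a registered system):

```
# zypper search-packages postgresql12
```

The output lists matching packages together with the module or repository that provides them, so you know which module to enable.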

Software Development Kit

With SLE 15, the Software Development Kit is now integrated into the products. Development packages are packaged alongside regular packages. In addition, the Development Tools module contains tools for development.

RMT Replaces SMT

SMT (Subscription Management Tool) has been removed. Instead, RMT (Repository Mirroring Tool) now allows mirroring SUSE repositories and custom repositories. You can then register systems directly with RMT. In environments with tightened security, RMT can also proxy other RMT servers.

Major updates to the software selection:
Salt

SLE 15 can be managed via Salt, making it integrate better with modern management solutions, such as SUSE Manager.

Python 3

As the first enterprise distribution, SLE 15 offers full support for Python 3 development in addition to Python 2.

Directory Server

389 Directory Server replaces OpenLDAP as the LDAP directory service.

2.2.2 Changes in 15 SP2

SUSE Linux Enterprise Server 15 SP2 introduces changes compared to SUSE Linux Enterprise Server 15 SP1. The most important changes are listed below.

Media Changes

The Unified Installer and Packages DVDs known from SUSE Linux Enterprise Server 15 SP1 are deprecated and have been replaced by the following media:

  • Online Installation Media: All SUSE Linux Enterprise 15 products can be installed from this standalone medium after entering a registration key. The necessary packages are fetched from online repositories only. For information about available modules, see Section 3.1, “Modules in the SLE 15 SP2 Product Line”.

  • Full Installation Media: All SUSE Linux Enterprise 15 products can be installed from this medium without a network connection, for offline installation scenarios. The medium contains all necessary packages. It consists of directories with module repositories, which need to be added manually as needed. RMT (Repository Mirroring Tool) and SUSE Manager provide additional options for disconnected or managed installation.

Kernel

SLE 15 SP2 includes the Linux 5.3 kernel. This new kernel release includes upstream features such as 16 million additionally usable IPv4 addresses, utilization clamping support in the task scheduler, power-efficient userspace waiting with the x86_64 umwait instructions, and many more.

Vagrant Boxes

SLES 15 SP2 and SLED 15 SP2 are available as Vagrant boxes. You can obtain boxes for the following architectures:

  • SUSE Linux Enterprise Server:

    • x86_64: libvirt and VirtualBox

    • AArch64: libvirt

  • SUSE Linux Enterprise Desktop:

    • x86_64: libvirt and VirtualBox

For more information, see Section 5.13.7, “Vagrant”.

2.3 Important Sections of This Document

If you are upgrading from a previous SUSE Linux Enterprise Server release, you should review at least the following sections:

2.4 Security, Standards, and Certification

SUSE Linux Enterprise Server 15 SP2 has been submitted to the certification bodies for:

For more information about certification, see https://www.suse.com/security/certificates.html.

2.5 Documentation and Other Information

2.5.1 Available on the Product Media

  • Read the READMEs on the media.

  • Get the detailed change log information about a particular package from the RPM (where FILENAME.rpm is the name of the RPM):

    rpm --changelog -qp FILENAME.rpm
  • Check the ChangeLog file in the top level of the media for a chronological log of all changes made to the updated packages.

  • Find more information in the docu directory of the media of SUSE Linux Enterprise Server 15 SP2. This directory includes PDF versions of the SUSE Linux Enterprise Server 15 SP2 Installation Quick Start Guide.

2.5.2 Online Documentation

2.6 Support and Life Cycle

SUSE Linux Enterprise Server is backed by award-winning support from SUSE, an established technology leader with a proven history of delivering enterprise-quality support services.

SUSE Linux Enterprise Server 15 has a 13-year life cycle, with 10 years of General Support and 3 years of Extended Support. The current version (SP2) will be fully maintained and supported until 6 months after the release of SUSE Linux Enterprise Server 15 SP3.

If you need additional time to design, validate, and test your upgrade plans, Long Term Service Pack Support can extend the support duration. You can buy an additional 12 to 36 months in twelve-month increments. This means you receive a total of 3 to 5 years of support per Service Pack.

For more information, check our Support Policy page at https://www.suse.com/support/policy.html or the Long Term Service Pack Support page at https://www.suse.com/support/programs/long-term-service-pack-support.html.

2.7 Support Statement for SUSE Linux Enterprise Server

To receive support, you need an appropriate subscription with SUSE. For more information, see https://www.suse.com/support/programs/subscriptions/?id=SUSE_Linux_Enterprise_Server.

The following definitions apply:

L1

Problem determination, which means technical support designed to provide compatibility information, usage support, ongoing maintenance, information gathering and basic troubleshooting using available documentation.

L2

Problem isolation, which means technical support designed to analyze data, reproduce customer problems, isolate the problem area, and provide a resolution for problems not resolved by Level 1, or to prepare for Level 3.

L3

Problem resolution, which means technical support designed to resolve problems by engaging engineering to resolve product defects which have been identified by Level 2 Support.

For contracted customers and partners, SUSE Linux Enterprise Server is delivered with L3 support for all packages, except for the following:

SUSE will only support the usage of original packages. That is, packages that are unchanged and not recompiled.

2.7.1 General Support

To learn about supported features and limitations, refer to the following sections in this document:

2.7.2 Software Requiring Specific Contracts

Certain software delivered as part of SUSE Linux Enterprise Server may require an external contract. Check the support status of individual packages using the RPM metadata that can be viewed with rpm, zypper, or YaST.

Major packages and groups of packages affected by this are:

  • PostgreSQL (all versions, including all subpackages)

2.8 Technology Previews

Technology previews are packages, stacks, or features delivered by SUSE to provide glimpses into upcoming innovations. Technology previews are included for your convenience to give you a chance to test new technologies within your environment. We would appreciate your feedback! If you test a technology preview, please contact your SUSE representative and let them know about your experience and use cases. Your input is helpful for future development.

Technology previews come with the following limitations:

  • Technology previews are still in development. Therefore, they may be functionally incomplete, unstable, or in other ways not suitable for production use.

  • Technology previews are not supported.

  • Technology previews may only be available for specific hardware architectures. Details and functionality of technology previews are subject to change. As a result, upgrading to subsequent releases of a technology preview may be impossible and require a fresh installation.

  • Technology previews can be removed from a product at any time. This may be the case, for example, if SUSE discovers that a preview does not meet the customer or market needs, or does not comply with enterprise standards.

2.8.1 Technology Previews for All Architectures

  • Maven 3.6.2 has been added to SUSE Linux Enterprise Server 15 SP2 as a Technology Preview.

2.8.1.1 New Kernel Process Scheduling Variant

As a technology preview, SUSE Linux Enterprise Server 15 SP2 offers the new kernel variant kernel-preempt for latency-sensitive workloads. The settings of kernel-preempt support timely reaction to external events and precise timing at the cost of overall system throughput. This kernel variant is available for x86-64 and AArch64 hardware architectures.

2.8.2 Technology Previews for Arm 64-Bit (AArch64)

2.8.2.1 etnaviv Drivers for Vivante GPUs Have Been Added

The NXP* Layerscape* LS1028A/LS1018A System-on-Chip (SoC) contains a Vivante GC7000UL Graphics Processor Unit (GPU), and the NXP i.MX 8M SoC contains a Vivante GC7000L GPU.

As a technology preview, the SUSE Linux Enterprise Server for Arm 15 SP2 kernel includes etnaviv, a Direct Rendering Infrastructure (DRI) driver for Vivante GPUs, and the Mesa-dri package contains a matching etnaviv_dri graphics driver library. Together they can avoid the need for third-party drivers and libraries.

To use them, the Device Tree passed by the bootloader to the kernel needs to include a description of the Vivante GPU for the kernel driver to get loaded. You may need to contact your hardware vendor for a bootloader firmware upgrade.
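As a rough illustration only, the GPU node in such a Device Tree source might look like the following; the node name, addresses, clock phandle, and interrupt number here are hypothetical, while the compatible string is the one the upstream etnaviv binding matches on:

```
gpu@f0c00000 {
        compatible = "vivante,gc";
        reg = <0x0 0xf0c00000 0x0 0x10000>;
        interrupts = <GIC_SPI 220 IRQ_TYPE_LEVEL_HIGH>;
        clocks = <&clockgen 4 0>;
        clock-names = "core";
};
```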

2.8.2.2 lima Driver for Arm Mali Utgard GPUs Has Been Added

The Xilinx* Zynq* UltraScale*+ MPSoC contains an Arm* Mali*-400 Graphics Processor Unit (GPU).

Previously, this GPU needed third-party drivers and libraries from your hardware vendor.

As a technology preview, the SUSE Linux Enterprise Server for Arm 15 SP2 kernel includes lima, a Direct Rendering Infrastructure (DRI) driver for Mali Utgard microarchitecture GPUs, such as the Mali-400, and the Mesa-dri package contains a matching lima_dri graphics driver library.

To use them, the Device Tree passed by the bootloader to the kernel needs to include a description of the Mali GPU for the kernel driver to get loaded. You may need to contact your hardware vendor for a bootloader firmware upgrade.

Note: The panfrost driver for Mali Midgard microarchitecture GPUs is available, too (Section 8.3.12, “Graphics Driver for Arm Mali Midgard Has Been Added”).

2.8.2.3 mali-dp Driver for Arm Mali Display Processors Has Been Added

The NXP* Layerscape* LS1028A/LS1018A System-on-Chip contains an Arm* Mali*-DP500 Display Processor.

As a technology preview, the SUSE Linux Enterprise Server for Arm 15 SP2 kernel includes mali-dp, a Direct Rendering Manager (DRM) driver for Mali Display Processors. It has undergone only limited testing because it requires an accompanying physical-layer driver for DisplayPort* output (see Section 8.4.3, “No DisplayPort Graphics Output on NXP LS1028A and LS1018A”).

2.8.2.4 Btrfs Filesystem Has Been Enabled in U-Boot Bootloader

For Raspberry Pi* devices, SUSE Linux Enterprise Server for Arm 12 SP3 and later include Das U-Boot as the bootloader, in order to align the boot process with other platforms. By default, it loads GRUB as a UEFI application from a FAT-formatted partition, and GRUB then loads the Linux kernel and ramdisk from a filesystem such as Btrfs.

As a technology preview, SUSE Linux Enterprise Server for Arm 15 SP2 adds a Btrfs driver to U-Boot for the Raspberry Pi (package u-boot-rpiarm64). This allows its commands ls and load to access files on Btrfs-formatted partitions on supported boot media, such as microSD and USB.

The new U-Boot command btrsubvol lists Btrfs subvolumes. For example:

U-Boot> btrsubvol mmc 0:3
ID 256 parent 5 name /@
ID 257 parent 256 name /@/.snapshots
ID 258 parent 257 name /@/.snapshots/1/snapshot
ID 272 parent 257 name /@/.snapshots/2/snapshot
ID 292 parent 257 name /@/.snapshots/21/snapshot
ID 293 parent 257 name /@/.snapshots/22/snapshot
ID 294 parent 257 name /@/.snapshots/23/snapshot
ID 297 parent 257 name /@/.snapshots/24/snapshot
ID 298 parent 257 name /@/.snapshots/25/snapshot
ID 300 parent 257 name /@/.snapshots/26/snapshot
ID 301 parent 257 name /@/.snapshots/27/snapshot
ID 302 parent 257 name /@/.snapshots/28/snapshot
ID 305 parent 257 name /@/.snapshots/29/snapshot
ID 306 parent 257 name /@/.snapshots/30/snapshot
ID 259 parent 256 name /@/home
ID 260 parent 256 name /@/opt
ID 261 parent 256 name /@/root
ID 262 parent 256 name /@/srv
ID 263 parent 256 name /@/tmp
ID 264 parent 256 name /@/var
ID 265 parent 256 name /@/usr/local
ID 266 parent 256 name /@/boot/grub2/arm64-efi

2.8.2.4.1 Recovering From a Deleted or Broken GRUB Executable

If GRUB (efi/boot/bootaa64.efi) on the FAT filesystem is damaged or deleted, the system will no longer automatically boot, and you may end up at a U-Boot command prompt.

You can now load kernel and ramdisk files from a Btrfs filesystem directly, within U-Boot.

Warning: Only Use for Disaster Recovery

This boot method should only be used temporarily for disaster recovery.

When GRUB is bypassed, any kernel command-line arguments that would normally be set as defaults by GRUB will not be set automatically. You need to manually specify them via the bootargs environment variable within U-Boot.

In particular, this means the root partition will not be set automatically, and any arguments set in the /etc/default/grub config file will be missing. This also affects any kernel command-line arguments you configured via YaST.

The following example assumes a 32 GB microSD card is inserted as boot medium. If you inserted the card while U-Boot was running, you will need to re-scan the devices first. You can then list the available MMC devices to obtain the index number of the SD card slot as opposed to the on-board SDIO interface:

U-Boot> mmc rescan
U-Boot> mmc list
mmcnr@7e300000: 1
emmc2@7e340000: 0 (SD)

Note: Commands for USB Boot Media

If you normally boot from a USB mass storage device rather than an SD card, equivalent commands to the above exist, such as usb reset, usb tree, and usb storage.

In the following examples, replace mmc 0 with usb and the index number these commands report for your USB device.

The following assumes the partition layout of the SUSE Linux Enterprise Server for Arm 15 SP2 image for the Raspberry Pi: a combined Raspberry Pi bootloader and EFI system partition first, followed by a swap partition and the root partition:

U-Boot> part list mmc 0

Partition Map for MMC device 0  --   Partition Type: DOS

Part    Start Sector    Num Sectors     UUID            Type
  1     2048            131072          abcdef01-01     0c
  2     133120          2048000         abcdef01-02     82
  3     2181120         60151080        abcdef01-03     83  (1)

(1) This is the root partition, with partition number 3.

Next, obtain a unique identifier for your root partition, and set any needed kernel command-line arguments:

U-Boot> fsuuid mmc 0:3 myrootfs
U-Boot> env set bootargs "console=ttyS0,115200 console=tty0 root=UUID=$myrootfs" (1) (2)

(1) This redirects kernel messages to the default UART serial pins and further output to a graphical screen. Change as necessary for your setup.

(2) Specifying a device name, such as root=/dev/mmcblk0p3, is unreliable: depending on the probe order of kernel drivers, the device may come up as mmcblk1p3 instead. This will manifest as the ramdisk not finding the root filesystem and waiting indefinitely:

[*     ] A start job is running for dev-mmcblk0p3.device (1min 23s / unlimited)

Normally GRUB would use the filesystem’s Universally Unique Identifier (UUID) as root=UUID=01234567-89ab-cdef-0123-456789abcdef, as shown in this example.

Alternatively, specify the partition’s UUID as root=PARTUUID=abcdef01-03. You can either copy it from the partition list output above, or store it in an environment variable myrootpart for usage as root=PARTUUID=$myrootpart like this:

U-Boot> part uuid mmc 0:3 myrootpart

Compare the chapter Using UUIDs to Mount Devices in the Storage Administration Guide at https://documentation.suse.com/sles/15-SP2/html/SLES-all/cha-uuid.html.

Finally, browse for, then load kernel and ramdisk, and enter the boot command:

U-Boot> ls mmc 0:3 boot/ (1)
<DIR>         92  Wed Mar 18 17:47:51 2020  grub2
<DIR>          0  Mon Mar 09 17:39:36 2020  efi
<DIR>        654  Tue Feb 11 22:42:27 2020  vc
<   >         11  Mon Mar 09 17:38:30 2020  mbrid
<   >       1725  Sun Apr 05 00:31:01 2020  boot.readme
<   >         65  Tue Apr 07 22:24:54 2020  .Image-5.3.18-12-default.hmac
<   >   25207280  Tue Apr 07 22:24:53 2020  Image-5.3.18-12-default
<   >    5007081  Tue Apr 07 20:02:24 2020  System.map-5.3.18-12-default
<   >     224106  Tue Apr 07 18:38:28 2020  config-5.3.18-12-default
<   >     394485  Tue Apr 07 20:36:53 2020  symvers-5.3.18-12-default.gz
<   >        207  Tue Apr 07 20:36:53 2020  sysctl.conf-5.3.18-12-default
<   >   10367190  Tue Apr 07 21:09:44 2020  vmlinux-5.3.18-12-default.gz
<SYM>         23  Tue Feb 11 22:44:11 2020  Image -> Image-5.3.18-12-default
<SYM>         24  Tue Feb 11 22:44:11 2020  initrd -> initrd-5.3.18-12-default
<   >    8998380  Sun Apr 12 12:40:09 2020  initrd-5.3.18-12-default
U-Boot> load mmc 0:3 $kernel_addr_r boot/Image (2)
25207280 bytes read in 1666 ms (14.4 MiB/s)
U-Boot> load mmc 0:3 $ramdisk_addr_r boot/initrd
8998380 bytes read in 657 ms (13.1 MiB/s)
U-Boot> booti $kernel_addr_r $ramdisk_addr_r:$filesize $fdtcontroladdr (3)

(1) To load files from a Btrfs snapshot, navigate to the snapshot directory instead of boot/, for example, .snapshots/42/snapshot/boot/. Use the btrsubvol command to obtain information on available snapshots.

(2) Unless you modified them, the symbolic links Image and initrd point to the files of the kernel-default or kernel-preempt package that was installed last. This can differ from the default GRUB menu entry, which defaults to the highest installed kernel version and can be overridden via the /etc/default/grub config file or YaST. Uninstalling a kernel package can leave these symbolic links pointing to files that are no longer present. Either check the target of the symbolic links with the ls command before use, or, better, specify the full filename with the desired version and flavor.

(3) The use of the (hexadecimally formatted) filesize environment variable relies on the ramdisk file having been loaded last.

Note: Rescue System as Alternative

If neither the local GRUB nor any of the installed and snapshotted kernels is bootable, you can try to load GRUB from a SUSE Linux Enterprise Server for Arm 15 SP2 installation medium and enter the Rescue System from there.

To temporarily change the boot order, for example, to USB and DHCP before SD:

U-Boot> env set boot_targets "usb0 dhcp mmc0"
U-Boot> boot

To manually boot GRUB from the network, try something like this:

U-Boot> env set ipaddr 192.168.0.100
U-Boot> env set netmask 255.255.255.0
U-Boot> env set gatewayip 192.168.0.1
U-Boot> tftpboot $kernel_addr_r 192.168.0.2:path/to/bootaa64.efi
U-Boot> bootefi $kernel_addr_r $fdtcontroladdr

To restore the default boot method of using GRUB as UEFI application, run from the booted system as user root:

# update-bootloader --reinit

For more information, see section Boot Problems in the Administration Guide at https://documentation.suse.com/sles/15-SP2/html/SLES-all/cha-trouble.html#sec-trouble-boot.

2.8.2.4.2 More Flexibility For Boot Scripts

In the default U-Boot environment, U-Boot boot.scr boot script files take precedence over UEFI efi/boot/bootaa64.efi files, such as GRUB, on a given partition. Assuming the partition layout of the SUSE Linux Enterprise Server for Arm 15 SP2 image for the Raspberry Pi, this means that a /boot/efi/boot.scr file (created with mkimage from the u-boot-tools package) can run a custom boot script before or instead of GRUB.
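Such a boot script is a plain text file of U-Boot commands wrapped in a small image header. As a sketch, assuming the U-Boot commands are in a file boot.txt (the file name is arbitrary), it could be compiled and installed as root like this:

```
# mkimage -A arm64 -T script -C none -n "custom boot script" -d boot.txt /boot/efi/boot.scr
```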

You can now author boot scripts that also load files from a Btrfs filesystem.

Examples might include:

  • Chain-loading another boot script (source).

  • Importing environment variables from a text file (env import).

  • Applying Device Tree Overlays (fdt apply) before booting into GRUB.

    Note, however, that extraconfig.txt with dtoverlay= is recommended instead; see Section 8.2, “Boot and Driver Enablement for Raspberry Pi”.

2.8.3 Technology Previews for Intel 64/AMD64 (x86-64)

2.8.3.1 haltpoll Driver and Governor for Latency-Sensitive Virtual Guests Have Been Added

On bare metal, a task waiting for a spinlock can use the mwait instruction to detect a change. This avoids an expensive Inter-Processor Interrupt (IPI) when a waiting task must be woken. On virtual guests, mwait is difficult to emulate and IPIs are generally required (though this cost can be reduced with halt_poll_ns).

The SUSE Linux Enterprise Server 15 SP2 kernel for x86_64 includes haltpoll, a guest driver that polls a virtual CPU within the guest for an auto-tuned duration. It is introduced with the following support status:

  • Supported for SAP HANA on KVM use cases.

  • As a technology preview for all other use cases.

haltpoll improves the performance of some latency-sensitive, virtualized applications. haltpoll can only be used on physical hosts with a recent x86_64 CPU.

To use it:

  • On the physical host, the QEMU command that starts the virtual machine must contain the parameter -cpu host,kvm-hint-dedicated=on. virsh allows specifying this parameter using <hint-dedicated state='on'/> and <cpu mode='host-passthrough' check='none'/>. For more information, see the libvirt documentation at https://libvirt.org/formatdomain.html#elementsFeatures.

  • Load the driver in the virtual guest: modprobe cpuidle-haltpoll. If it cannot be loaded, check journalctl -k. If something went wrong, you may see an -ENODEV error.

If you are using libvirt/virsh, verify that the kvm-hint-dedicated parameter is actually passed to QEMU. There are two complementary ways of checking whether the parameter is successfully applied:

  • On the host: Check the qemu command in the process list.

  • On the guest: Check whether the QEMU KVM parameter above is active with cpuid (from the package cpuid): If it is active, cpuid -1 -l 0x40000001 will show that the first bit of edx is set: edx=0x00000001.
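The bit test can also be scripted. The following is a minimal sketch that extracts the low bit of edx from a cpuid-style output line; the sample line and its register values are assumed for illustration, not captured from a real guest:

```shell
# Sample register line in the style printed by `cpuid -1 -l 0x40000001`
# (values assumed for illustration):
line="0x40000001 0x00: eax=0x01000064 ebx=0x00000000 ecx=0x00000000 edx=0x00000001"

edx=${line##*edx=}        # strip everything up to and including "edx="
if [ $(( edx & 1 )) -eq 1 ]; then
    echo "kvm-hint-dedicated is active"
else
    echo "kvm-hint-dedicated is NOT active"
fi
```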

2.8.3.2 Nested Virtualization in KVM

As a technology preview, KVM in SUSE Linux Enterprise Server 15 SP2 supports nested virtualization, that is, KVM guests running within other KVM guests. Nested virtualization has advantages in scenarios such as the following:

  • For managing your own virtual machines directly with your hypervisor of choice in cloud environments.

  • For enabling the live migration of hypervisors and their guest virtual machines as a single entity.

  • For software development and testing.

For more information, see https://documentation.suse.com/sles/15-SP2/html/SLES-all/cha-vt-installation.html#sec-vt-installation-nested-vms.

3 Modules, Extensions, and Related Products

This section contains information about modules and extensions for SUSE Linux Enterprise Server 15 SP2. Modules and extensions add functionality to the system.

3.1 Modules in the SLE 15 SP2 Product Line

The SLE 15 SP2 product line is made up of modules that contain software packages. Each module has a clearly defined scope. Modules differ in their life cycles and update timelines.

The modules available within the product line based on SUSE Linux Enterprise 15 SP2 at the release of SUSE Linux Enterprise Server 15 SP2 are listed in the Modules and Extensions Quick Start at https://documentation.suse.com/sles/15-SP2/html/SLES-all/art-modules.html.

Not all SLE modules are available with a subscription for SUSE Linux Enterprise Server 15 SP2 itself (see the column Available for).

For information about the availability of individual packages within modules, see https://scc.suse.com/packages.

3.2 Available Extensions

Extensions add extra functionality to the system and require their own registration key, usually at additional cost. Most extensions have their own release notes documents that are available from https://www.suse.com/releasenotes.

The following extensions are available for SUSE Linux Enterprise Server 15 SP2:

The following extension is not covered by SUSE support agreements; it is available at no additional cost and without an extra registration key:

4 Installation and Upgrade

SUSE Linux Enterprise Server can be deployed in several ways:

  • Physical machine

  • Virtual host

  • Virtual machine

  • System containers

  • Application containers

4.1 Installation

This section includes information related to the initial installation of SUSE Linux Enterprise Server 15 SP2.

Important: Installation Documentation

The following release notes contain additional notes regarding the installation of SUSE Linux Enterprise Server. However, they do not document the installation procedure itself.

For installation documentation, see the Deployment Guide at https://documentation.suse.com/sles/15-SP2/html/SLES-all/book-sle-deployment.html.

4.1.1 New Media Layout

The set of media has changed with 15 SP2. There are still two different installation media, but the way they can be used has changed:

  • You can install with registration using either the online media (as with SUSE Linux Enterprise Server 15 SP1) or the full media.

  • You can install without registration using the full media. The installer has been added to the full media and the full media can now be used universally for all types of installations.

  • You can install without registration using the online media. Point the installer at the required SLE repositories, combining the install= and instsys= boot parameters:

    • With the install= parameter, select a path that contains either just the product repository or the full content of the media.

    • With the instsys= parameter, point at the installer itself, that is, /boot/ARCHITECTURE/root on the media.

    For more information about the parameters, see https://en.opensuse.org/SDB:Linuxrc#p_install.
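Combining both parameters, an entry at the boot prompt might look like the following; the server name and path are placeholders for a location where the content of the media is mirrored:

```
install=https://example.com/sle15sp2/ instsys=https://example.com/sle15sp2/boot/x86_64/root
```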

4.1.2 Proposed Partition Table on Raspberry Pi

With previous versions of SUSE Linux Enterprise Server for Arm, when installing on a Raspberry Pi* machine with an SD card that had no ESP/firmware partition, such a partition had to be created manually.

With 15 SP2, the installer proposes an MS-DOS-type partition table for the SD card. This new partitioning proposal can be used out of the box without further changes.

4.1.3 Disabling UEFI Secure Boot with AutoYaST

By default, AutoYaST enables Secure Boot based on its availability and firmware settings. However, in some cases, it may be desirable to force disabling it unconditionally.

In this case, it is now possible to disable UEFI Secure Boot via the AutoYaST profile. For more information, see https://documentation.suse.com/sles/15-SP2/single-html/SLES-autoyast/.

4.1.4 AutoYaST Support of Btrfs File Systems Spread over Multiple Devices

AutoYaST now supports Btrfs file systems that are spread over more than a single partition or device. This includes support for both cloning a system (to create a profile) and support for applying such a profile during an auto-installation.

4.1.5 Quickly Switch Boot Menu Language Between English and Chinese

The graphical menu of the installation media now contains a quick switch between English and Chinese. You can switch between these two languages using the F8 key.

4.2 Upgrade-Related Notes

This section includes upgrade-related information for SUSE Linux Enterprise Server 15 SP2.

Important: Upgrade Documentation

The following release notes contain additional notes regarding the upgrade of SUSE Linux Enterprise Server. However, they do not document the upgrade procedure itself.

For upgrade documentation, see the Upgrade Guide at https://documentation.suse.com/sles/15-SP2/html/SLES-all/book-sle-upgrade.html.

4.2.1 Make Sure the Current System Is Up-To-Date Before Upgrading

Upgrading the system is only supported from the most recent patch level. Make sure the latest system updates are installed by either running zypper patch or by starting the YaST module Online-Update. An upgrade on a system that is not fully patched may fail.

4.2.2 Skipping Service Packs Requires LTSS

Skipping service packs during an upgrade is only supported if you have a Long Term Service Pack Support contract. Otherwise you need to first upgrade to SP1 before upgrading to SP2.

4.3 JeOS: Just Enough Operating System

SUSE Linux Enterprise Server JeOS is a slimmed-down form factor of SUSE Linux Enterprise Server that is ready to run in virtualization environments and the cloud. With SUSE Linux Enterprise Server JeOS, you can choose the right-sized SUSE Linux Enterprise Server option to fit your needs.

We provide different virtual disk images for JeOS: .qcow2 for KVM, Xen, and OpenStack, .vhdx for Hyper-V, and .vmdk for VMware environments. All JeOS images set up the same disk size (24 GB) for the JeOS system. Due to the nature of the different image formats, the size of the JeOS image files differs.

4.3.1 JeOS Disks Are Now Mounted By UUID Instead of By Label

All SUSE Linux Enterprise Server JeOS (and openSUSE JeOS) image flavors now use the Mount by UUID setting for disks.

Switching to Mount by UUID for all JeOS images has the following benefits:

  • Matches the default setting of a regular SLE installation.

  • Uses the same setting for all SUSE Linux Enterprise Server JeOS (and openSUSE JeOS) images.

This change only affects the JeOS images based on 15 SP2. Previous images are not affected, even if upgraded or migrated to 15 SP2: they keep their existing mount-by setting.
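For illustration, the difference corresponds to the following kinds of /etc/fstab entries (the label name and UUID shown are made-up examples; query the real UUID with blkid):

```
# Mount by label (pre-15 SP2 JeOS images; label name is illustrative):
LABEL=ROOT  /  btrfs  defaults  0  0

# Mount by UUID (15 SP2 JeOS images; UUID is a made-up example):
UUID=0a1b2c3d-1234-4e5f-8a9b-0123456789ab  /  btrfs  defaults  0  0
```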

4.3.2 kiwi-templates-SLES15-JeOS Has Been Renamed to kiwi-templates-JeOS

We have simplified the name of the package that contains the SUSE Linux Enterprise Server JeOS image templates to kiwi-templates-JeOS. This package is available in the Development Tools Module.

4.3.3 XEN JeOS Image Has Been Removed in Favor of kvm-and-xen JeOS Image

In previous releases, there were two JeOS images for Xen platforms:

  • For Xen full virtualization (HVM), there was the kvm-and-xen image.

  • For Xen paravirtualization (PV), there was a separate XEN image with different Linux kernel and GRUB2 packages.

Starting with SLES JeOS 15 SP2, the kvm-and-xen image should be used for both environments.

The XEN image does not exist anymore because it has become redundant:

  • Linux 4.4 and later include support for operating as a Xen-paravirtualized guest (DomU) as part of the default Linux kernel. The separate Xen flavor of the Linux kernel package has become unnecessary.

  • The GRUB2 version provided by the host system is responsible for booting the virtual machine. The GRUB2 packages specific to Xen paravirtualization are not needed anymore.

    As before, booting JeOS with Xen paravirtualization on a non-SUSE host (Dom0) is not supported.

4.4 For More Information

For more information, see Section 5, “General Features & Fixes” and the sections relating to your respective hardware architecture.

5 General Features & Fixes

Information in this section applies to all architectures supported by SUSE Linux Enterprise Server 15 SP2.

5.1 Authentication

5.1.1 389 Directory Server (389-ds) Administrative Tools

The various tools available in the lib389 package for administering the 389-ds server are now available in SUSE Linux Enterprise Server.

5.2 Containers

5.2.1 Support for podman

Starting with SUSE Linux Enterprise Server 15 SP2, podman is a supported container engine. However, certain features of podman are currently not supported:

  • The varlink remote API

  • Rootless containers

  • Support for cgroups v2

  • Any CNI plugin other than the default bridge plug-in

  • Automatic generation of systemd units via podman generate systemd

  • Pod management via podman pod … and podman play

  • The podman container diff command
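Within these limits, a minimal supported workflow looks like the following sketch (run as root, since rootless containers are not supported; the image reference is an example):

```
# Pull a base image and run a one-off container (default bridge network):
podman pull registry.suse.com/suse/sle15:15.2
podman run --rm registry.suse.com/suse/sle15:15.2 cat /etc/os-release

# List local images and running containers:
podman images
podman ps
```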

5.3 Databases

5.3.1 MariaDB Has Been Updated to Version 10.4

MariaDB has been updated to version 10.4.

For information about changes between MariaDB 10.2 and 10.4, see the upstream release notes and the corresponding upgrade notes.

With MariaDB 10.4, we have introduced the following file and packaging changes:

  • The mysql binaries now have mariadb-named variants provided as symbolic links (for example, mariadb-dumpslow is a symbolic link to mysqldumpslow).

  • There is a rcmariadb link for compatibility.

  • libmysqld.so has been renamed to libmariadbd.so and its containing package libmysqld has been renamed to libmariadbd.

  • The systemd services mariadb.service and mariadb@.service have been enhanced.

  • The options innodb_file_format (the option was removed from MariaDB 10.3.1) and innodb_file_per_table=ON (now redundant) have been removed from my.cnf.

  • The option sql_mode has been removed from my.cnf (because NO_ENGINE_SUBSTITUTION and STRICT_TRANS_TABLES have been enabled by default since version 10.2.4).

  • MariaDB is now built with support for the Open Query GRAPH computation engine (OQGRAPH).

5.3.2 PostgreSQL 12 Has Been Added

PostgreSQL 12 has been added to SUSE Linux Enterprise Server. PostgreSQL 10 remains available in SUSE Linux Enterprise Server 15 SP2.

For information about changes between PostgreSQL 10 and 12, see the upstream release notes.

With PostgreSQL 12, there are the following packaging changes:

  • Functionality that was available in the package postgresql10-devel is now split into postgresql12-devel (for building database clients) and postgresql12-server-devel (for building server extensions).

  • There is a new optional package called postgresql12-llvmjit.

All new packages have an accompanying noarch package without a version number in its name, such as postgresql-server-devel and postgresql-llvmjit.

5.3.3 RabbitMQ Server 3.8.3 Has Been Added

RabbitMQ Server 3.8.3 has been added to SUSE Linux Enterprise Server 15 SP2. For more information about RabbitMQ, see the RabbitMQ releases notes (https://github.com/rabbitmq/rabbitmq-server/releases/tag/v3.8.3).

5.4 Development

5.4.1 jq Has Been Updated to Version 1.6

SUSE Linux Enterprise Server 15 SP2 now includes the JSON query tool jq in version 1.6. For more information about this release, see the upstream release notes (https://github.com/stedolan/jq/releases/tag/jq-1.6).

5.4.2 Supported Java Versions

The following Java implementations are available in SUSE Linux Enterprise Server 15 SP2:

Name (Package Name)           Version  Module       Support
OpenJDK (java-11-openjdk)     11       Base System  SUSE, L3, until 2025-06-30
OpenJDK (java-1_8_0-openjdk)  1.8.0    Legacy       SUSE, L3, until 2023-06-30
IBM Java (java-1_8_0-ibm)     1.8.0    Legacy       External, until 2025-04-30

5.4.3 LLVM Has Been Updated to Version 9

We have updated LLVM to version 9 to support new hardware. For more information, see the release notes of LLVM 9 (https://releases.llvm.org/9.0.0/docs/ReleaseNotes.html).

5.4.4 PHP Has Been Updated to Version 7.4

We upgraded PHP to version 7.4 to provide you with the latest release. To learn more about PHP version 7.4, we recommend reading the PHP release announcement (https://www.php.net/releases/7_4_0.php) and the 7.3.x to 7.4.x migration guide (https://www.php.net/manual/en/migration74.php).

5.4.5 Python 2 Is Deprecated

The python executable is only provided via the Python 2 module, not via the default repositories.

With SUSE Linux Enterprise Server 15 SP1, SUSE has started to phase out support for Python 2 in SLE. Within the standard distribution, only Python 3 (executable name python3) is available. Python 2 (executable names python2 and python) is only provided via the Python 2 SLE module. This module is disabled by default.

Python scripts usually expect the python executable (without a version number) to refer to the Python 2.x interpreter. If the Python 3 interpreter is started instead, this can lead to misbehaving applications. For this reason, SUSE has decided to not ship a symbolic link /usr/bin/python pointing to the Python 3 executable.

To run Python 2 scripts, make sure to enable the Python 2 module and install the package python.
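On a registered system, this can be done with SUSEConnect and Zypper. The exact product identifier below is an assumption; verify it with SUSEConnect --list-extensions first:

```
# List available modules/extensions and their activation commands:
sudo SUSEConnect --list-extensions

# Enable the Python 2 module (product string is an assumption):
sudo SUSEConnect -p sle-module-python2/15.2/x86_64

# Install the Python 2 interpreter, which provides /usr/bin/python:
sudo zypper install python
```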

5.5 Desktop

5.5.1 GNOME Desktop Update

The GNOME Desktop (and associated applications) has been updated from version 3.26 to version 3.34. This update brings many visual improvements, performance improvements, and new features. Among those, you might notice visual refreshes for a number of applications, including the desktop itself and the icon set, custom folders in the application overview, redesigned control panels, and a new on-screen keyboard.

5.5.2 Remote Desktop Packages Update

Various packages used for remote desktop have been updated: xrdp to 0.9.11 and xorgxrdp to 0.2.11.

5.5.3 Qt5 Update

The Qt5 libraries have been updated to the latest 5.12 LTS branch.

5.5.4 GStreamer Update

The GStreamer multimedia framework has been updated to version 1.16.2. Among various bug fixes and features, this version includes support for WebRTC.

5.5.5 libxml++ Support

The libxml++ libraries are available and supported in SUSE Linux Enterprise Server 15 SP2.

5.5.6 Use update-alternatives to Set Display Manager and Desktop Session

In SUSE Linux Enterprise Server 12 SP5 and earlier, you could use /etc/sysconfig or the YaST module /etc/sysconfig Editor to define the display manager (also called the login manager) and desktop session. Starting with SUSE Linux Enterprise Server 15 GA, the values are not defined using /etc/sysconfig anymore but with the alternatives system.

To change the defaults, use the following alternatives:

  • Display manager: default-displaymanager

  • Wayland session: default-waylandsession.desktop

  • X desktop session: default-xsession.desktop

For example, to check the value of default-displaymanager, use:

sudo update-alternatives --display default-displaymanager

To switch the default-displaymanager to xdm, use:

sudo update-alternatives --set default-displaymanager \
  /usr/lib/X11/displaymanagers/xdm

To enable graphical management of alternatives, use the YaST module Alternatives that can be installed from the package yast2-alternatives.

5.6 File Systems

5.6.1 Comparison of Supported File Systems

SUSE Linux Enterprise was the first enterprise Linux distribution to support journaling file systems and logical volume managers back in 2000. Later, we introduced XFS to Linux, which today is seen as the primary workhorse for large-scale file systems, systems with heavy load, and multiple parallel reading and writing operations. With SUSE Linux Enterprise 12, we took the next step of innovation and started using the copy-on-write file system Btrfs as the default for the operating system, to support system snapshots and rollback.

y = supported

n = unsupported

Feature                        Btrfs            XFS       Ext4          OCFS 2 (1)
Supported in product           SLE              SLE       SLE           SLE HA
Data/metadata journaling       N/A (2)          n / y     y / y         n / y
Journal internal/external      N/A (2)          y / y     y / y         y / n
Journal checksumming           N/A (2)          y         y             y
Subvolumes                     y                n         n             n
Offline extend/shrink          y / y            n / n     y / y         y / n (3)
Inode allocation map           B-tree           B+-tree   Table         B-tree
Sparse files                   y                y         y             y
Tail packing                   n                n         n             n
Small files stored inline      y (in metadata)  n         y (in inode)  y (in inode)
Defragmentation                y                y         y             n
Extended file attributes/ACLs  y / y            y / y     y / y         y / y
User/group quotas              n / n            y / y     y / y         y / y
Project quotas                 n                y         y             n
Subvolume quotas               y                N/A       N/A           N/A
Data dump/restore              n                y         n             n
Block size default             4 KiB (4)
Maximum file system size       16 EiB           8 EiB     1 EiB         4 PiB
Maximum file size              16 EiB           8 EiB     1 EiB         4 PiB

(1) OCFS 2 is fully supported as part of the SUSE Linux Enterprise High Availability Extension.

(2) Btrfs is a copy-on-write file system. Instead of journaling changes before writing them in place, it writes them to a new location and then links the new location in. Until the last write, the changes are not "committed". Because of the nature of the file system, quotas are implemented based on subvolumes (qgroups).

(3) To extend an OCFS 2 file system, the cluster must be online but the file system itself must be unmounted.

(4) The default block size varies with the host architecture: 64 KiB is used on POWER, 4 KiB on other systems. The actual size in use can be checked with the command getconf PAGE_SIZE.

Additional Notes

The maximum file size above can be larger than the file system’s actual size because of the use of sparse blocks. All standard file systems on SUSE Linux Enterprise Server support LFS (Large File Support), which gives a theoretical maximum file size of 2^63 bytes.

The numbers in the table above assume that the file systems are using a 4 KiB block size which is the most common standard. When using different block sizes, the results are different.

In this document:

  • 1024 Bytes = 1 KiB

  • 1024 KiB = 1 MiB

  • 1024 MiB = 1 GiB

  • 1024 GiB = 1 TiB

  • 1024 TiB = 1 PiB

  • 1024 PiB = 1 EiB

See also http://physics.nist.gov/cuu/Units/binary.html (http://physics.nist.gov/cuu/Units/binary.html).
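As a quick sanity check, each prefix above is 1024 times the previous one; shell arithmetic confirms the byte values:

```shell
# 1 MiB and 1 GiB expressed in bytes (successive powers of 1024):
echo $(( 1024 * 1024 ))          # 1 MiB = 1048576 bytes
echo $(( 1024 * 1024 * 1024 ))   # 1 GiB = 1073741824 bytes
```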

Some file system features are available in SUSE Linux Enterprise Server 15 SP2 but are not supported by SUSE. By default, the file system drivers in SUSE Linux Enterprise Server 15 SP2 will refuse to mount file systems that use unsupported features (in particular, in read-write mode). To enable unsupported features, set the module parameter allow_unsupported=1 in /etc/modprobe.d or write the value 1 to /sys/module/MODULE_NAME/parameters/allow_unsupported. However, note that setting this option will render your kernel and thus your system unsupported.
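For example, a modprobe drop-in file enabling unsupported features could look like the following sketch (MODULE_NAME and the file name are placeholders; remember that this renders the system unsupported):

```
# /etc/modprobe.d/99-allow-unsupported.conf (file name is an example)
options MODULE_NAME allow_unsupported=1
```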

5.6.2 Supported Btrfs Features

The following table lists supported and unsupported Btrfs features across multiple SLES versions.

y = supported

n = unsupported

Feature                                SLES 11 SP4  SLES 12 SP3  SLES 12 SP4  SLES 12 SP5  SLES 15 GA  SLES 15 SP1  SLES 15 SP2
Copy on Write                          y            y            y            y            y           y            y
Free Space Tree (Free Space Cache v2)  n            n            n            n            n           y            y
Snapshots/Subvolumes                   y            y            y            y            y           y            y
Swap Files                             n            n            n                         n           y            y
Metadata Integrity                     y            y            y            y            y           y            y
Data Integrity                         y            y            y            y            y           y            y
Online Metadata Scrubbing              y            y            y            y            y           y            y
Automatic Defragmentation              n            n            n            n            n           n            n
Manual Defragmentation                 y            y            y            y            y           y            y
In-band Deduplication                  n            n            n            n            n           n            n
Out-of-band Deduplication              y            y            y            y            y           y            y
Quota Groups                           y            y            y            y            y           y            y
Metadata Duplication                   y            y            y            y            y           y            y
Changing Metadata UUID                 n            n            n                         n           y            y
Multiple Devices                       n            y            y            y            y           y            y
RAID 0                                 n            y            y            y            y           y            y
RAID 1                                 n            y            y            y            y           y            y
RAID 5                                 n            n            n            n            n           n            n
RAID 6                                 n            n            n            n            n           n            n
RAID 10                                n            y            y            y            y           y            y
Hot Add/Remove                         n            y            y            y            y           y            y
Device Replace                         n            n            n            n            n           n            n
Seeding Devices                        n            n            n            n            n           n            n
Compression                            n            y            y            y            y           y            y
Big Metadata Blocks                    n            y            y            y            y           y            y
Skinny Metadata                        n            y            y            y            y           y            y
Send Without File Data                 n            y            y            y            y           y            y
Send/Receive                           n            y            y            y            y           y            y
Inode Cache                            n            n            n            n            n           n            n
Fallocate with Hole Punch              n            y            y            y            y           y            y

5.7 Hardware

5.7.1 Pure Userspace X Drivers Are Not Supported Anymore

Under SLES 15 SP2 and later, only drivers with support for kernel mode-setting will continue to work. Pure userspace X drivers do not work anymore and are not supported. In particular, this affects the virtualization-related qxl and vmware drivers.

5.7.2 Support for Modes of Intel Optane DC Persistent Memory

With SUSE Linux Enterprise Server 15 SP2, Intel Optane DIMMs can be used in different modes on YES-certified platforms:

  • In App Direct Mode, the Intel Optane memory is used as fast persistent storage, an alternative to SSDs and NVMe devices. Data is persistent: It is kept when the system is powered off.

    App Direct Mode has been supported since SLE 12 SP4.

  • In Memory Mode, the Intel Optane memory serves as a cost-effective, high-capacity alternative to DRAM. In this mode, separate DRAM DIMMs act as a cache for the most frequently accessed data, while the Optane DIMMs provide large memory capacity. However, compared with DRAM-only systems, this mode is slower under random access workloads. If you run applications that lack Optane-specific enhancements to take advantage of this mode, memory performance may decrease. Data is not persistent: It is lost when the system is powered off.

    Memory Mode has been supported since SLE 15 SP1.

  • In Mixed Mode, the Intel Optane memory is partitioned, so it can serve in both modes simultaneously.

    Mixed Mode has been supported since SLE 15 SP1.

Not all certified platforms support all modes mentioned above. Direct hardware-related questions to your hardware partner. SUSE works with all major hardware vendors to make using Intel Optane a smooth experience at the OS and open-source infrastructure level.

5.7.3 Support for Pensando Ethernet Adapter Family

SUSE Linux Enterprise Server 15 SP2 includes a kernel driver for Pensando* PCI Ethernet controllers.

5.8 Kernel

5.8.1 Kernel Upgraded to Version 5.3

In alignment with the established cycle, the operating system kernel was upgraded to version 5.3 in this service pack.

5.8.2 Persistent Naming for SCSI Devices Is Now Active

On modern SUSE Linux Enterprise Server systems, device drivers are probed asynchronously. This has resulted in traditional device names like /dev/sda or eth0 becoming non-deterministic. To solve this, persistent device naming was introduced. For example, for network devices like eth0, udev would rename the interfaces to be consistent with device MAC addresses. For disk devices like /dev/sda, persistent device name links in /dev/disk/by-id have been introduced.

In the Linux 5.3 kernel, asynchronous probing has been added to more drivers. The sd driver handling SCSI disk devices uses asynchronous probing, too. The result is that /dev/sdX names even within individual SCSI hosts become non-deterministic.

You might now see a device order like:

# lsscsi

[0:2:0:0]    disk    Lenovo   RAID 930-8i-2GB  5.08  /dev/sdd
[0:2:1:0]    disk    Lenovo   RAID 930-8i-2GB  5.08  /dev/sdc
[0:2:2:0]    disk    Lenovo   RAID 930-8i-2GB  5.08  /dev/sde

When using persistent device names as recommended, this will not pose a problem. Modern tools are well-equipped to handle this.

However, some tools might expect a fixed device order or a predefined /dev/sdX name which then will fail to operate properly. In such cases, it is possible to disable asynchronous probing for specific drivers:

  1. Find out the name of the driver(s) used for the device that you want to disable asynchronous probing for:

    # lsscsi -H
    
    [0]    megaraid_sas

    The first number in the first column of the lsscsi output above corresponds to the SCSI host number.

  2. Add the following option to the kernel command line:

    scsi_mod.disable_async_probing=<driver>[,<driver>]

    Given the lsscsi output from the example above, you would add the following to the kernel command line: scsi_mod.disable_async_probing=megaraid_sas.

5.8.3 Booting Without Enabling Swap

If a swap device is not available and the system cannot enable it during boot, booting may fail completely.

To make such a system reliably bootable, you can disable the activation of swap devices. Append the following options on the kernel command line:

systemd.device_wants_unit=off systemd.mask=swap.target

This prevents activation of all swap units. You can also mask only specific swap units, for example:

systemd.mask=dev-sda1.swap

5.8.4 efivar Has Been Updated from v35 to v37

With efivar v37, installing the OS and booting from NVDIMM devices is now supported.

5.8.5 Kernel Real-time Group Scheduling Configuration Changed

The scheduler allows reserving runtime proportion to tasks with real-time priority. As an extension, the CONFIG_RT_GROUP_SCHED build option further allows distributing this real-time allocation among cgroups. However, there are limitations in the current mainline kernel implementation of this feature.

Aligned with upstream recommendations, the SUSE Linux Enterprise Server kernel is now shipped with the CONFIG_RT_GROUP_SCHED build option disabled. The cpu.rt_period_us and cpu.rt_runtime_us CPU cgroup attributes have been removed.

5.8.6 Kernel Package Clean-up Reimplemented

Kernel-purge functionality has been integrated into zypper. The original /usr/sbin/purge-kernels script has been removed from the dracut package and replaced by the new zypper purge-kernels command. There is a new package purge-kernels-service that is responsible for running kernel package clean-up upon boot.

5.8.7 Reflink Support on XFS

Copy-on-write data extent sharing (reflinks), known from Btrfs, is now fully supported on XFS. This feature primarily allows for better storage space utilization.

Reflinks are available only on file systems formatted with the -m reflink=1 option of mkfs.xfs. The duperemove utility can be used for data deduplication on a reflink-enabled file system. Note that this feature is not yet compatible with file system DAX and is available on SUSE Linux Enterprise Server 15 SP2 and later. Earlier releases of SUSE Linux Enterprise Server can read reflink-enabled XFS file systems but not write to them. Read-write mounts of reflink-enabled XFS file systems are strongly discouraged on these systems.
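A sketch of the workflow (the device path is a placeholder; these commands require root and a spare block device):

```
# Create an XFS file system with reflink support:
mkfs.xfs -m reflink=1 /dev/sdX1
mount /dev/sdX1 /mnt

# Create a space-efficient copy that shares data extents with the original:
cp --reflink=always /mnt/image.raw /mnt/image-clone.raw

# Deduplicate identical data that already exists on the file system:
duperemove -dr /mnt
```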

5.8.8 Support for squashfs Version 3.x Legacy Formats Has Been Removed From the Kernel

squashfs 3.x file systems could last be created with SLE 11 GA. Since SLE 11 SP1, the tools produce the squashfs 4.0 format. Until now, the SLE kernel included convenience code that allowed mounting file systems with the old format. Starting with SLE 15 SP2, however, the operating system kernel supports only the squashfs 4.0 format.

To migrate a squashfs file system from squashfs 3.x to squashfs 4.0:

  1. Make sure the tool package squashfs is installed:

    sudo zypper in squashfs
  2. Unpack the old squashfs file into a local directory:

    unsquashfs -d squashfs-root rootfs.squashfs3
  3. Repack the directory contents using the squashfs 4.0 format:

    mksquashfs squashfs-root rootfs.squashfs4

5.8.9 Add Support for Large Increment per Cycle Events for AMD’s Family 17h and Up

The core AMD PMU has a 4-bit wide per-cycle increment for each performance monitor counter, which works for most events. However, for AMD Zen CPUs and later processors, some events can occur more than 15 times in a cycle. SUSE Linux Enterprise Server 15 SP2 adds support for handling such Large Increment per Cycle events.

5.8.10 EDAC (Error Detection and Correction) Support for AMD’s Family 19h

SUSE Linux Enterprise Server 15 SP2 adds code to support EDAC (Error Detection and Correction) for the new AMD Zen 3 CPU architecture.

5.8.11 Systemtap Updated to Version 4.2

Systemtap has been upgraded to version 4.2 to match the upgraded kernel.

5.8.12 The Legacy Microcode Loading Interface Has Been Removed

The legacy /dev/cpu/microcode microcode loading interface has been removed. SUSE Linux Enterprise Server updates CPU microcode during system boot through the early loading method in the initial system ramdisk.

Loading microcode at runtime is discouraged in most scenarios and should only be used when absolutely necessary. For example, when early microcode loading is impossible as CPU features enabled by the microcode fail to be detected properly by the rest of the running system.

5.8.13 Kernel Limits

This table summarizes the various limits which exist in our recent kernels and utilities (if related) for SUSE Linux Enterprise Server 15 SP2.

SLES 15 SP2 (Linux 5.3)                        AMD64/Intel 64 (x86_64)  IBM Z (s390x)   POWER (ppc64le)    ARMv8 (AArch64)
CPU bits                                       64                       64              64                 64
Maximum number of logical CPUs                 8192                     256             2048               768
Maximum amount of RAM (theoretical/certified)  > 1 PiB/64 TiB           10 TiB/256 GiB  1 PiB/64 TiB       256 TiB/n.a.
Maximum amount of user space/kernel space      128 TiB/128 TiB          n.a.            512 TiB (1)/2 EiB  256 TiB/256 TiB
Maximum amount of swap space                   Up to 29 * 64 GB or up to 30 * 64 GB, depending on the architecture
Maximum number of processes                    1048576 (all architectures)
Maximum number of threads per process          Upper limit depends on memory and other parameters; tested with more than 120,000 (2)
Maximum size per block device                  Up to 8 EiB on all 64-bit architectures
FD_SETSIZE                                     1024 (all architectures)

(1) By default, the user space memory limit on the POWER architecture is 128 TiB. However, you can explicitly request mmaps up to 512 TiB.

(2) The total number of all processes and all threads on a system may not be higher than the "maximum number of processes".

5.9 Networking

In addition to the notes in this section, also see the networking-related notes in other sections of this document.

5.9.1 Apache: Support for Key IDs in SSLCertificateKeyFile in mod_ssl

The Apache module mod_ssl now includes a backported patch that enables support for key IDs (via a PKCS#11 URL) in the SSLCertificateKeyFile directive.

5.9.2 Samba

The version of Samba shipped with SUSE Linux Enterprise Server 15 SP2 delivers integration with Windows Active Directory domains. In addition, we provide the clustered version of Samba as part of SUSE Linux Enterprise High Availability Extension 15 SP2.

5.9.3 NFSv4

NFSv4 with IPv6 is only supported for the client side. An NFSv4 server with IPv6 is not supported.

5.9.4 More Services Support TLS 1.3 — TLS 1.1 and 1.0 Are No Longer Recommended for Use

The TLS 1.0 and 1.1 standards have been superseded by TLS 1.2 and TLS 1.3. TLS 1.2 has been available for a considerable time now.

Packages using GnuTLS and Mozilla NSS already supported TLS 1.3 in earlier SUSE Linux Enterprise 15 service packs. Starting with SUSE Linux Enterprise 15 SP2, we also enable TLS 1.3 in packages that use OpenSSL.

We recommend no longer using TLS 1.0 and TLS 1.1, as SUSE plans to disable these protocols in a future service pack.

5.9.5 New GeoIP Database Sources

The GeoIP database allows approximately geo-locating users by their IP address. In the past, the company MaxMind made such data available for free in its GeoLite Legacy databases. On January 2, 2019, MaxMind discontinued the GeoLite Legacy databases, now offering only the newer GeoLite2 databases for download. To comply with new data protection regulation, since December 30, 2019, GeoLite2 database users are required to comply with an additional usage license. This change means users now need to register for a MaxMind account and obtain a license key to download GeoLite2 databases. For more information about these changes, see the MaxMind blog (https://blog.maxmind.com/2019/12/18/significant-changes-to-accessing-and-using-geolite2-databases/).

Previous versions of SUSE Linux Enterprise Server included the GeoIP package of tools that are only compatible with GeoLite Legacy databases. With SUSE Linux Enterprise Server 15 SP2, we introduce the following new packages to deal with the changes to the GeoLite service:

  • libmaxminddb: A library for working with the GeoLite2 format.

  • geoipupdate: The official Maxmind tool for downloading GeoLite2 databases. To use this tool, set up the configuration file with your MaxMind account details. This configuration file can also be generated on the Maxmind web page. For more information, see https://dev.maxmind.com/geoip/geoip2/geolite2/ (https://dev.maxmind.com/geoip/geoip2/geolite2/).

  • geolite2legacy: A script for converting GeoLite2 CSV data to the GeoLite Legacy format.

  • geoipupdate-legacy: A convenience script that downloads GeoLite2 data, converts it to the GeoLite Legacy format, and stores it in /var/lib/GeoIP. With this script, applications developed for use with the legacy geoip-fetch tool will continue to work.
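For geoipupdate, the configuration file follows the upstream MaxMind format; the values below are placeholders for your own account data:

```
# /etc/GeoIP.conf (values are placeholders; use your MaxMind account data)
AccountID 999999
LicenseKey 000000000000
EditionIDs GeoLite2-Country GeoLite2-City
```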

The following SUSE Linux Enterprise Server packages use GeoIP data in the GeoLite2 format:

  • bind

  • nginx

  • wireshark

5.9.6 Intel* Omni-Path Architecture (OPA) Host Software

Intel Omni-Path Architecture (OPA) host software is fully supported in SUSE Linux Enterprise Server 15 SP2. Intel OPA provides Host Fabric Interface (HFI) hardware with initialization and setup for high performance data transfers (high bandwidth, high message rate, low latency) between compute and I/O nodes in clustered environments. For documentation about installing Intel Omni-Path Architecture, see: https://cdrdv2.intel.com/v1/dl/getContent/627110 (https://cdrdv2.intel.com/v1/dl/getContent/627110).

5.9.7 Wireshark Has Been Updated to Version 3.2

Wireshark has been updated from version 2.4, now at its end of life upstream, to version 3.2. The new version brings the following changes:

  • Added support for new GeoIP databases (Section 5.9.5, “New GeoIP Database Sources”)

  • Added support for 111 new protocols, including WireGuard, LoRaWAN, TPM 2.0, 802.11ax, and QUIC

  • Added Swedish and Ukrainian UI translations

  • Improved support for existing protocols such as HTTP/2

  • Improved analytics functionality

  • Improved usability

  • Security fixes

5.10 Security

5.10.1 AppArmor Has Been Updated to Version 2.13

AppArmor has been updated to version 2.13. The main new feature is a boot speed-up enabled by using precompiled profiles and caching.

For information about changes in AppArmor 2.13, see the AppArmor release notes (https://gitlab.com/apparmor/apparmor/-/wikis/Release_Notes_2.13).

5.11 Storage

This section comprises notes related to storage software that is not delivered as part of the Linux kernel. For features integrated into the kernel, see Section 5.8, “Kernel”.

5.11.1 Additional libstoragemgmt Plugins

Plugins for HP (hpsa) and LSI (megaraid) hardware are now available for the libstoragemgmt package.

5.11.2 Boot Will Succeed Even If Encrypted Network Devices Are Unavailable

In previous versions of SUSE Linux Enterprise Server, when a file system relied on an encrypted network block device (such as iSCSI), systemd would wait for the network block device before setting up networking. If the cryptsetup device could not be initialized, the boot process could fail, leaving you in emergency mode. This happened because there was no way to indicate in /etc/crypttab that a cryptsetup device requires network.

To prevent this scenario from happening, the new option _netdev has been introduced in /etc/crypttab. It can be used to mark cryptsetup devices as requiring network. Such devices will only be initialized after networking is available, similarly to /etc/fstab entries marked with _netdev. New versions of the YaST installer are aware of this option and will use it accordingly.

If your system was installed with an older version of the YaST installer, the option might be missing. In this case, add the option manually to relevant entries of your /etc/crypttab file.
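An entry for an encrypted volume on a network device could then look like the following sketch (the mapping name is illustrative, and the UUID is a placeholder):

```
# /etc/crypttab format: name  device  keyfile  options
cr_iscsi  /dev/disk/by-uuid/<UUID>  none  _netdev
```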

5.11.3 LVM2 Has Been Updated to 2.03.05

LVM2 has been updated to version 2.03.05. To simplify the design, lvmetad was removed. Its functionality was consolidated into the improved LVM2 operations.

During boot, pvscan is now fast enough to handle a large number of disks without lvmetad. Native disk scanning is reduced and parallelized, which makes LVM comparable in performance to, and often faster than, setups using lvmetad.

5.12 Systems Management

5.12.1 YaST NTP Client Module and systemd-timer

Starting with SUSE Linux Enterprise Server 15 SP2, the YaST module for NTP client configuration sets up a systemd timer (instead of a cron job) to perform the regular time synchronization when the NTP service is not configured to run as a daemon.

5.12.2 Networking Technologies Removed from the YaST Network Module

The following networking technologies are no longer supported by the YaST module for network configuration:

  • PCMCIA

  • token ring

  • FDDI

  • myrinet

  • arcnet

  • xp (IA64-specific)

  • ESCON (IBM Z-specific)

5.12.3 YaST sysctl Settings Location

Starting with SUSE Linux Enterprise Server 15 SP2, YaST writes sysctl settings to a separate file called /etc/sysctl.d/70-yast.conf.

This helps reduce conflicts with applications that override system settings.
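
Files in /etc/sysctl.d/ are applied in lexical order, with later files overriding earlier ones, so local overrides belong in a file that sorts after 70-yast.conf. The file name and setting below are illustrative:

```
# /etc/sysctl.d/90-local.conf, applied after 70-yast.conf
vm.swappiness = 10
```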

5.12.4 New Zypper Options

Unlike other Zypper commands, zypper download did not allow specifying the repository to download a package from.

With this release, it is possible to specify the repository the same way as with other commands using the --repo option (or its alias --from).
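
For example, to download a package from one specific repository only (the repository alias and package name are illustrative):

```
zypper download --repo SLE-Module-Basesystem15-SP2-Pool curl
```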

5.12.5 Changes for Snapper Plug-in for Zypper

The Snapper plug-in for Zypper has been rewritten from Python to C. Among other changes, this brings a different implementation of regular expressions.

If you use regular expressions in /etc/snapper/zypp-plugin.conf, they may stop working correctly in some cases. This is true for regular expressions that rely on syntax that differs between the previous Python implementation and the new POSIX implementation.

In general, using wildcards instead of regular expressions is strongly recommended.
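
If you need to check whether a pattern relies on Python-only syntax, the following snippet illustrates the kind of divergence to watch for (the package names are made up): shorthand classes such as \d and lookahead assertions are the Python re constructs a POSIX engine will not understand.

```python
import re

# Python-specific constructs: the \d shorthand class and a lookahead
# assertion. A POSIX ERE engine understands neither of them.
python_only = r"kernel-\d+(?=-default)"
assert re.search(python_only, "kernel-5-default") is not None

# A portable spelling using a bracket expression, valid in both
# Python and POSIX extended regular expressions:
portable = r"kernel-[0-9]+"
assert re.search(portable, "kernel-5-default") is not None
```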

5.12.6 xfs_scrub_all Has Been Removed

The script for scrubbing all XFS file systems in the system has been removed from the distribution. The SLE kernel does not support XFS scrubbing, meaning the script could only be used with a custom kernel.

5.12.7 Extended Package Search in YaST

The YaST software management module can only install packages from enabled modules or repositories. In the past, finding out which module needed to be enabled for a specific package could be tricky. In SLE 15 SP2, if the system is registered against SUSE Customer Center, the software management module can also search for packages in disabled modules.

5.12.8 File Name of Archives Generated by supportconfig Has Changed

Archive files created by supportconfig now use the file name prefix scc_ instead of nts.

5.12.9 AutoYaST Will Not Propose Insecure Settings When Cloning a System

In past versions of SLE, when cloning a system, AutoYaST used relatively permissive settings for package signature handling: it would accept unsigned files and files without a checksum, and it would use unknown and untrusted GPG keys. These default settings could be a security risk.

In SLE 15 SP2, AutoYaST uses stricter signature handling settings by default. If needed, adjust those settings manually.
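
If a profile must deliberately keep one of the permissive settings, it can be set explicitly in the general section. The following is a sketch, assuming the standard signature-handling elements of an AutoYaST profile:

```
<general>
  <signature-handling>
    <!-- deliberately permissive; a security trade-off -->
    <accept_unsigned_file config:type="boolean">true</accept_unsigned_file>
    <!-- keep the stricter default for unknown GPG keys -->
    <accept_unknown_gpg_key config:type="boolean">false</accept_unknown_gpg_key>
  </signature-handling>
</general>
```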

For more information, see the AutoYaST Guide (https://documentation.suse.com/sles/15-SP2/single-html/SLES-autoyast/#CreateProfile-General-signature).

5.12.10 Support for IP Filtering in systemd

Support for IP filtering, as described in http://0pointer.net/blog/ip-accounting-and-access-lists-with-systemd.html (http://0pointer.net/blog/ip-accounting-and-access-lists-with-systemd.html) is now available in systemd on SUSE Linux Enterprise Server 15 SP2.
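
For example, a service can be limited to local traffic with a drop-in such as the following (the service name is illustrative):

```
# /etc/systemd/system/example.service.d/ip-filter.conf
[Service]
IPAddressDeny=any
IPAddressAllow=localhost
```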

5.12.11 Default TasksMax Limit Disabled in systemd

The TasksMax limit for both user and system slices has been removed because it caused issues with large database or JVM workloads.
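
If your environment still needs a global cap, one can be set explicitly, for example via DefaultTasksMax (the value is illustrative):

```
# /etc/systemd/system.conf.d/50-tasksmax.conf
[Manager]
DefaultTasksMax=16384
```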

5.12.12 Salt 3000

Salt has been upgraded to upstream version 3000, plus a number of patches, backports and enhancements by SUSE. In particular, CVE-2020-11651 and CVE-2020-11652 fixes are included in our release.

As part of this upgrade, cryptography is now managed by the Python-M2Crypto library (which is itself based on the well-known OpenSSL library).

We intend to regularly upgrade Salt to more recent versions.

For more details about changes relevant to your manually created Salt states, see the Salt 3000 upstream release notes (https://docs.saltstack.com/en/latest/topics/releases/3000.html).

Salt 3000 is the last version of Salt which will support the old syntax of the cmd.run module.

5.13 Virtualization

For more information about acronyms used below, see https://documentation.suse.com/sles/15-SP2/html/SLES-all/book-virt.html (https://documentation.suse.com/sles/15-SP2/html/SLES-all/book-virt.html).

5.13.1 Supported Host Environments (Hypervisors)

Support status of SUSE Linux Enterprise Server 15 running as a guest operating system on top of various virtualization hosts (hypervisors).

The following SUSE host environments are supported:

  • SLES 11 SP4: Xen and KVM

  • SLES 12 SP1 to SP5: Xen and KVM

  • SLES 15 GA to SP2: Xen and KVM

  • SLES 15 SP1 to SP2: WSL 2

The following third-party host environments are supported:

  • VMware ESXi 6.5, 6.7

  • Microsoft Windows 2008 R2 SP1+, 2012+, 2012 R2+, 2016, 2019

  • Citrix XenServer 7.0, 7.1, 8.0

  • Oracle VM 3.4

The level of support is as follows:

  • Support for SUSE host operating systems is full L3 (both for the guest and host) in accordance with the respective product life cycle (https://www.suse.com/lifecycle/).

  • SUSE provides full L3 support for SUSE Linux Enterprise Server guests within third-party host environments. Support for the host and cooperation with SUSE Linux Enterprise Server guests must be provided by the host system’s vendor.

5.13.2 Supported Guest Operating Systems

Support status of guest operating systems running virtualized on top of SUSE Linux Enterprise Server.

The following guest operating systems are fully supported (L3 in accordance with the respective product life cycle (https://www.suse.com/lifecycle/)):

  • SLES 11 SP4

  • SLES 12 SP1, SP2, SP3, SP4, SP5

  • SLES 15 GA, SP1, SP2

  • OES 11 SP2, 2015, 2015 SP1, 2018, 2018 SP1, 2018 SP2

  • NetWare 6.5 SP8 (32-bit only)

  • Windows Server 2008 SP2+, 2008 R2 SP1+, 2012+, 2012 R2+, 2016, 2019

The following guest operating systems are supported as a technology preview (L2, fixes if reasonable):

  • SLED 15 SP1, SP2

The following Red Hat guest operating systems are supported on a best-effort basis for all customers (L2, fixes if reasonable) and fully supported for customers with Expanded Support (L3):

  • RHEL 5.11+, 6.9+, 7.7+, 8.0+

The following Microsoft guest operating systems are supported on a best-effort basis (L2, fixes if reasonable):

  • Windows 8+, 8.1+, 10+

All guest operating systems are supported both fully virtualized and paravirtualized, with two exceptions: Windows guests are only supported fully virtualized, and OES and NetWare guests are only supported paravirtualized.

All guest operating systems are supported both in 32-bit and 64-bit environments, unless stated otherwise (NetWare).

5.13.3 Supported VM Migration Scenarios

SUSE Linux Enterprise Server supports migrating a virtual machine from one physical host to another.

5.13.3.1 Offline Migration Scenarios

SUSE Linux Enterprise Server supports offline migration (the VM needs to be shut down prior to the migration), from SLE 12 to SUSE Linux Enterprise Server 15 SP2. The following host operating system combinations are fully supported (L3 in accordance with the respective product life cycle (https://www.suse.com/lifecycle/)) for migrating guests from one host to another:

  • SLES 12 SP3 to SLES 12 SP4

  • SLES 12 SP3 to SLES 12 SP5

  • SLES 12 SP3 to SLES 15

  • SLES 12 SP4 to SLES 12 SP5

  • SLES 12 SP4 to SLES 15 (KVM only)

  • SLES 12 SP4 to SLES 15 SP1

  • SLES 12 SP5 to SLES 15 SP1

  • SLES 15 GA to SLES 15 SP1

  • SLES 15 GA to SLES 15 SP2

  • SLES 15 SP1 to SLES 15 SP2

5.13.3.2 Live Migration Scenarios

Support status of various live migration scenarios when running virtualized on top of SLES. Please also refer to the supported live migration requirements in the official Virtualization Guide (https://documentation.suse.com/sles/15-SP2/html/SLES-all/book-virt.html).

The following host operating system combinations are fully supported (L3 in accordance with the respective product life cycle (https://www.suse.com/lifecycle/)) for live-migrating guests from one host to another:

  • SLE 12 SP3 to SLE 12 SP4

  • SLE 12 SP4 to SLE 12 SP4

  • SLE 12 SP4 to SLE 12 SP5

  • SLE 12 SP4 to SLE 15 (KVM only)

  • SLE 12 SP4 to SLE 15 SP1

  • SLE 12 SP5 to SLE 12 SP5

  • SLE 12 SP5 to SLE 15 SP1

  • SLE 15 GA to SLE 15 GA

  • SLE 15 GA to SLE 15 SP1

  • SLE 15 SP1 to SLE 15 SP1

  • SLE 15 SP1 to SLE 15 SP2

  • SLE 15 SP2 to SLE 15 SP2

Note
Note: Xen Live Migration

Live migration between SLE 11 and SLE 12 is not supported because of the different toolstacks. See the SLES 12 release notes (https://www.suse.com/releasenotes/x86_64/SUSE-SLES/12/#fate-317306) for more details.

5.13.4 KVM

5.13.4.1 Important Changes
  • Added support for Intel Cooper Lake CPUs

  • Stopped using system memory barriers, as they were a blocker for using QEMU in the context of containers. (SUSE now builds the package with --disable-membarrier.)

  • Support for SDL has been dropped; obsoletes directives have been added for qemu-audio-sdl and qemu-ui-sdl.

  • QEMU has been updated to version 4.2 (http://wiki.qemu.org/ChangeLog/4.2 (http://wiki.qemu.org/ChangeLog/4.2)).

5.13.4.2 KVM Limits

Supported (and tested) virtualization limits of a SUSE Linux Enterprise Server 15 host running Linux guests on x86-64. For information about KVM limits on SUSE Linux Enterprise Server for Arm, see Section 8.3.2, “KVM Virtual CPU Limit Increased”. For other operating systems, refer to the specific vendor.

Virtual Machine Limits
  • Maximum virtual CPUs per VM: 288

  • Maximum memory per VM: 4 TiB

Host Limits

5.13.5 Xen

5.13.5.1 Update to Xen 4.13

Xen has been updated to version 4.13. Among other things, this update contains new features and bug fixes from upstream.

5.13.5.2 Xen Limits

With SUSE Linux Enterprise Server 11 SP2, we removed the 32-bit hypervisor as a virtualization host. 32-bit virtual guests are not affected and are fully supported with the provided 64-bit hypervisor.

Virtual Machine Limits
  • Maximum virtual CPUs per VM: 128 (HVM), 64 (HVM Windows guest) or 512 (PV)

  • Maximum memory per VM: 2 TB (64-bit guest), 16 GB (32-bit guest with PAE)

Host Limits

5.13.6 libvirt

Libvirt has been updated to version 6.0.x (https://www.libvirt.org/news.html (https://www.libvirt.org/news.html)).

5.13.6.1 Important Changes
  • Removed the --listen option from LIBVIRTD_ARGS in /etc/sysconfig/libvirtd, because it is incompatible with socket activation.

  • Added the --timeout option for consistency with upstream.

  • libvirtd now supports systemd socket activation. libvirtd.socket and libvirtd-ro.socket are now enabled along with libvirtd.service. libvirtd will still start at boot in case there are guests that need to be autostarted, but it will exit after --timeout xxx seconds of inactivity. systemd will start it again when there are connections on the sockets.

  • Added TSX_CTRL and TAA_NO bits for IA32_ARCH_CAPABILITIES MSR (CVE-2019-11135).

  • Added SLE 15 and SLE 12 service pack support to virt-create-rootfs.

  • Added support for parallel migration, which allows memory pages to be processed in parallel by several threads and sent to the destination host using several connections at the same time (virsh migrate vm-name --live --parallel --parallel-connections 2).

  • Xen: Added support for the credit2 scheduler parameters (see https://wiki.xenproject.org/wiki/Credit2_Scheduler (https://wiki.xenproject.org/wiki/Credit2_Scheduler) for more information)

  • Xen: libvirtd shutdowns will now be inhibited when domains are running

5.13.6.2 osinfo-db Has Been Updated
  • osinfo-db now supports more guests.

  • The hwdata package now provides up-to-date information on usb.ids and pci.ids. Prior to version 1.7.x, libosinfo included its own, outdated copies of this information.

5.13.6.3 spice-gtk PulseAudio Back-end Is Deprecated

The PulseAudio back-end of spice-gtk is deprecated and will be removed in a future release.

5.13.7 Vagrant

Vagrant (https://www.vagrantup.com/) is a tool that provides a unified workflow for the creation, deployment and management of virtual development environments. It provides an abstraction layer for various virtualization providers (like VirtualBox, VMware or libvirt) via a simple configuration file that allows developers and operators to quickly spin up a VM running Linux or any other operating system.

A new VM can be launched with Vagrant via the following set of commands. The example uses the Vagrant Box for openSUSE Tumbleweed:

vagrant init opensuse/Tumbleweed.x86_64
vagrant up
# your box is now going to be downloaded and started
vagrant ssh
# and now you've got ssh access to the new VM
5.13.7.1 Vagrant Boxes for SUSE Linux Enterprise Server and SUSE Linux Enterprise Desktop

Starting with SUSE Linux Enterprise Server 15 SP2, we are providing official Vagrant Boxes for SUSE Linux Enterprise Server and SUSE Linux Enterprise Desktop for x86_64 and AArch64 (only for SLES using the libvirt provider). These boxes come with the bare minimum of packages to reduce their size and are not registered, thus users need to register the boxes prior to further provisioning.

These boxes are only available for direct download from https://download.suse.com (https://download.suse.com). Therefore, downloaded boxes must be added to Vagrant manually as follows:

vagrant box add --name SLES-15-SP2 SLES15-SP2-Vagrant.x86_64-15.2-libvirt-*.vagrant.libvirt.box

The box is then available under the name SLES-15-SP2 and can be used like other Vagrant boxes:

vagrant init SLES-15-SP2
vagrant up
vagrant ssh
5.13.7.2 AArch64 Support

The SUSE Linux Enterprise Server box is also available for the AArch64 architecture using the libvirt provider. It has been preconfigured for use on SUSE Linux Enterprise Server on AArch64 and might not launch on other operating systems without additional settings. Running it on architectures other than AArch64 is not supported.

In case the box fails to start with a libvirt error message, add the following to your Vagrantfile and adjust the variables according to the guest operating system:

  config.vm.provider :libvirt do |libvirt|
    libvirt.driver = "kvm"
    libvirt.host = "localhost"
    libvirt.uri = "qemu:///system"
    libvirt.features = ["apic"]
    # path to the UEFI loader for AArch64
    libvirt.loader = "/usr/share/qemu/aavmf-aarch64-code.bin"
    libvirt.video_type = "vga"
    libvirt.cpu_mode = "host-passthrough"
    libvirt.machine_type = "virt-3.1"
    # path to the QEMU AArch64 emulator
    libvirt.emulator_path = "/usr/bin/qemu-system-aarch64"
  end

5.13.8 AMD SEV Tools

SUSE has worked with AMD to improve the AMD SEV Tool (https://github.com/AMDESE/sev-tool (https://github.com/AMDESE/sev-tool)).

5.13.9 Others

5.13.9.1 Improved Windows Subsystem for Linux (WSL) Images for SUSE Linux Enterprise Server

Starting with SUSE Linux Enterprise Server 15 SP2, we are providing official WSL 2 images for SUSE Linux Enterprise Server for x86-64. You can find all SUSE images in the Microsoft store at https://www.microsoft.com/en-us/search/shop/Apps?q=%22SUSE+Linux+Enterprise%22 (https://www.microsoft.com/en-us/search/shop/Apps?q=%22SUSE+Linux+Enterprise%22).

The new images are made possible by using the latest WSL-DistroLauncher. Images are now fully built in the Open Build Service (OBS), using the native WSL support of KIWI.

The SUSE Linux Enterprise Server image for Windows Subsystem for Linux (WSL) now uses yast2-firstboot instead of the first-boot wizard provided by upstream. This means the initial setup now has the SUSE look and feel.

5.13.10 open-vm-tools Has Been Updated to 11.1.x

Version 11.1.x of open-vm-tools has a new subpackage for a Service Discovery Plugin called open-vm-tools-sdmp. For more information, see the upstream release notes (https://github.com/vmware/open-vm-tools/blob/stable-11.1.x/ReleaseNotes.md).

6 POWER-Specific Features & Fixes (ppc64le)

Information in this section applies to SUSE Linux Enterprise Server for POWER 15 SP2.

6.1 Speed of ibmveth Interface Not Reported Accurately

The ibmveth interface is a paravirtualized interface. When communicating between LPARs within the same system, the interface’s speed is limited only by the system’s CPU and memory bandwidth. When the virtual Ethernet is bridged to a physical network, the interface’s speed is limited by the speed of that physical network.

Unfortunately, the ibmveth driver has no way of automatically determining whether it is bridged to a physical network and what the speed of that link is. ibmveth therefore reports its speed as a fixed value of 1 Gb/s, which in many cases is inaccurate. To determine the actual speed of the interface, use a benchmark. Using ethtool, you can then set a more accurate displayed speed.
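
For example, if benchmarking shows the bridged link effectively behaves like a 10 Gb/s link, the displayed speed could be adjusted as follows (the interface name and value are illustrative):

```
ethtool -s eth0 speed 10000
```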

7 IBM Z-Specific Features & Fixes (s390x)

Information in this section applies to SUSE Linux Enterprise Server for IBM Z and LinuxONE 15 SP2. For more information, see https://www.ibm.com/developerworks/linux/linux390/documentation_suse.html (https://www.ibm.com/developerworks/linux/linux390/documentation_suse.html)

7.1 Hardware

7.1.1 Support for IBM z15 in binutils, glibc and gdb

Binutils, glibc and gdb have been updated to support instructions introduced with IBM z15.

7.1.2 Compression Improvements for zlib

The zlib library has been updated to exploit the IBM z15 compression capabilities.

7.1.3 Compression Improvements for gzip

The gzip tool has been updated to exploit the IBM z15 compression capabilities.

7.1.4 Performance Counters for IBM z15 (CPU-MF)

For optimized performance tuning, the CPU-measurement counter facility now supports the IBM z15 counters, including the MT-diagnostic counter set, which was originally introduced with IBM z14.

7.2 Network

7.2.1 qeth: Support for HiperSockets Multi-Write

Multi-Write allows transferring multiple 64 KB buffers with a single instruction. This reduces CPU utilization, speeds up data transfer, and reduces receiver-side interrupts.

7.2.2 Degraded Performance on RoCE ConnectX-4 Hardware

With the default settings of SUSE Linux Enterprise Server 15 SP1 and 15 SP2, the performance of RoCE ConnectX-4 hardware on IBM z14 and IBM z15 systems is degraded compared to SUSE Linux Enterprise Server 15 GA.

To improve performance to the same level as with SUSE Linux Enterprise Server 15 GA, set the following flag for all RoCE Ethernet interfaces: ethtool --set-priv-flags DEVNAME rx_striding_rq. This needs to be done for each RoCE interface and at each boot.

7.3 Performance

7.3.1 CPU-MF/perf: Export Sampling Data for Post-Processing

Enhances the hardware sampling in the perf PMU driver to export additional information for improved perf tool post-processing. Displays the address and function name from where a sample was taken.

7.4 Security

7.4.1 Support for SHA3 via CPACF (MSA6)

Added kernel support for hardware acceleration of the SHA3 algorithm via CPACF (MSA6).

7.4.2 New Tool zcryptstats to Extract Crypto Measurement Data

Added a new tool zcryptstats to the s390-tools package to obtain and display measurement data from crypto adapters for capacity planning.

7.4.3 openCryptoki: Exploit PRNO Pseudo-Random Numbers in ICA, CCA and EP11 Tokens

Support for a NIST compliant pseudo-random number generator that can be seeded with true random numbers based on CPACF functions for the NIST curves P256, P384, and P521.

7.4.4 Support for AES Cipher Keys in pkey and paes Modules and zkey

The generation and transformation of AES cipher keys is now supported in the pkey and paes modules and zkey.

7.4.5 Support for the Crypto Express7S Crypto Card

Added support for the new IBM z15 Crypto Express7S crypto card.

7.4.6 openCryptoki: Support for SHA*-RSA_PKCS_PSS Mechanisms in libica Token

Added support for the following mechanisms to the libica token of openCryptoki: CKM_SHA256_RSA_PKCS_PSS, CKM_SHA384_RSA_PKCS_PSS, CKM_SHA512_RSA_PKCS_PSS.

7.4.7 libica: Support for Elliptic Curve Cryptography (ECC) via CPACF MSA9

Use functions provided by IBM z15 with CPACF MSA9 to implement, for example, EC key generation (PCC) and ECDSA sign/verify functions (including the Ed25519, Ed448, X25519, and X448 curves).

7.4.8 openssl-ibmca: Support for Elliptic Curve Cryptography (ECC) via CPACF MSA9

Use functions provided by IBM z15 with CPACF MSA9 to implement, for example, EC key generation (PCC) and ECDSA sign/verify functions (including the Ed25519, Ed448, X25519, and X448 curves).

7.4.9 zkey: Enhanced Consistency Checks

Various zkey enhancements have been added based on customer experiences, including checks to ensure that all HSMs used to encrypt a volume use the same master keys.

7.4.10 openSSL: Support for Elliptic Curve Cryptography (ECC) via CPACF MSA9

Use functions provided by IBM z15 with CPACF MSA9 to implement, for example, EC key generation (PCC) and ECDSA sign/verify functions (including the P256, P384, P521, Ed25519, Ed448, X25519 and X448 curves).

7.4.11 Kernel Address Space Layout Randomization (KASLR)

With kernel address space layout randomization (KASLR), the kernel can be loaded to a random location in memory. This offers protection against certain security attacks that rely on knowledge of the kernel addresses.

7.4.12 Installer Enhancements for Encrypting Partitions

The installer has been enhanced to support dm-crypt with protected keys for encrypting partitions.

7.4.13 Support for Secure Execution

SUSE Linux Enterprise Server 15 SP2 supports the IBM Secure Execution for Linux technology introduced with IBM z15 and LinuxONE III. For more information, see https://www.ibm.com/support/knowledgecenter/linuxonibm/com.ibm.linux.z.lxse/lxse_t_secureexecution.html (https://www.ibm.com/support/knowledgecenter/linuxonibm/com.ibm.linux.z.lxse/lxse_t_secureexecution.html).

7.5 Storage

7.5.1 zdsfs: Online VTOC Refresh

A Linux application can now access new data sets that were created after zdsfs was mounted without the need to remount zdsfs.

7.5.2 Installer Support for I/O Device Pre-Configuration

YaST now allows the user to process device configuration data obtained from the IBM Dynamic Partition Manager at boot time.

7.5.3 Split DIF and DIX Boot Time Controls

Enables the user to separately configure DIF and DIF+DIX integrity protection mechanisms for zFCP-attached SCSI devices.

7.6 Virtualization

The following new features are supported in SUSE Linux Enterprise Server 15 SP2 under KVM:

7.6.1 New CPU Model IBM z14 ZR1

Provide the CPU model for the IBM z14 ZR1 to enable KVM guests to transparently exploit new hardware features on the z14 ZR1.

7.6.2 New CPU Model IBM z15

Provide the CPU model for the IBM z15 to enable KVM guests to transparently exploit new hardware features on the z15.

7.6.3 libvirt CPU Model Comparison and Baselining APIs for IBM Z

Enabled APIs for libvirt CPU model comparison and baselining for IBM Z that will allow customers to optimize utilization of heterogeneous KVM host environments.

7.6.4 DASD Passthrough Support

Enables KVM guests to directly access ECKD DASD devices using CCW passthrough. This allows exploiting advanced features such as HyperPAV and reserve/release. IPL from DASD is provided as a technology preview in SUSE Linux Enterprise Server 15 SP2.

7.6.5 Interrupt Support for Crypto Passthrough

Added interrupt support to improve performance and CPU utilization.

7.6.6 Secure Linux Boot Toleration

Linux operating system images using a secure boot on-disk format can now be run in KVM without modifications, lowering overall administrative overhead.

8 Arm 64-Bit-Specific Features & Fixes (AArch64)

Information in this section applies to SUSE Linux Enterprise Server for Arm 15 SP2.

8.1 System-on-Chip Driver Enablement

SUSE Linux Enterprise Server for Arm 15 SP2 includes driver enablement for the following System-on-Chip (SoC) chipsets:

  • AMD* Opteron* A1100

  • Ampere* X-Gene*, eMAG*, Altra*

  • AWS* Graviton, Graviton2

  • Broadcom* BCM2837/BCM2710, BCM2711

  • Fujitsu* A64FX

  • Huawei* Kunpeng* 916, Kunpeng 920

  • Marvell* ThunderX*, ThunderX2*, ThunderX3*; OCTEON TX*; Armada* 7040, Armada 8040

  • Mellanox* BlueField*

  • NVIDIA* Tegra* X1, Tegra X2

  • NXP* i.MX 8M; Layerscape* LS1027A/LS1017A, LS1028A/LS1018A, LS1043A, LS1046A, LS1088A, LS2080A/LS2040A, LS2088A, LX2160A

  • Qualcomm* Centriq* 2400

  • Rockchip RK3399

  • Socionext* SynQuacer* SC2A11

  • Xilinx* Zynq* UltraScale*+ MPSoC

Note
Note

Driver enablement is done as far as available and requested. Refer to the following sections for any known limitations.

Some systems might need additional drivers for external chips, such as a Power Management Integrated Chip (PMIC), which may differ between systems with the same SoC chipset.

For booting, systems need to fulfill either the Server Base Boot Requirements (SBBR) or the Embedded Base Boot Requirements (EBBR), that is, the Unified Extensible Firmware Interface (UEFI) either implementing the Advanced Configuration and Power Interface (ACPI) or providing a Flat Device Tree (FDT) table. If both are implemented, the kernel will default to the Device Tree; the kernel command line argument acpi=force can override this default behavior.

Check for SUSE YES! certified systems, which have undergone compatibility testing.

8.2 Boot and Driver Enablement for Raspberry Pi

Bootloaders, drivers and a supported microSD card image of SUSE Linux Enterprise Server for Arm 15 SP2 for Raspberry Pi* devices are available. A template for the SUSE Linux Enterprise Server for Arm 15 SP2 Raspberry Pi image is available as the RaspberryPi profile in the kiwi-templates-JeOS package and can be used to derive custom appliances.

8.2.1 New Features

In addition to the Raspberry Pi Compute Module 3, the Compute Module 3+ is now also supported. It uses the BCM2837 System-on-Chip silicon revision B0, same as Raspberry Pi 3 Model B+.

Also enabled is the Raspberry Pi 3 Model A+, with BCM2837 B0 silicon revision. Compared to Model B+ it offers a reduced feature set and less RAM.

Initial enablement is provided for Raspberry Pi 4 Model B, which uses a new BCM2711 System-on-Chip. Some limitations apply.

In the provided U-Boot bootloader the Btrfs filesystem is now enabled, offering additional flexibility for partitioning, scripting and recovery (Section 2.8.2.4, “Btrfs Filesystem Has Been Enabled in U-Boot Bootloader”).

Starting with SUSE Linux Enterprise Server for Arm 15 SP1, the .iso installation media allow booting directly from USB storage devices on supported boards, such as Raspberry Pi 3 Model B+. The Unified Installer in 15 SP2 now simplifies installation from USB to microSD by offering a default partitioning proposal for a bootable installation target, avoiding the need for manual partitioning in the most common scenarios. For more details on the boot process please refer to the SUSE Linux Enterprise Server Deployment Guide.

8.2.2 Upgrade Considerations

The bootloader package u-boot-rpi3 has been replaced with a new u-boot-rpiarm64 package that covers both Raspberry Pi 3 and 4 generations.

The package containing the Kiwi templates for the image has been renamed from kiwi-templates-SLES15-JeOS to kiwi-templates-JeOS (Section 4.3.2, “kiwi-templates-SLES15-JeOS Has Been Renamed to kiwi-templates-JeOS”).

If you upgraded from the SUSE Linux Enterprise Server for Arm 12 SP3 image for the Raspberry Pi, an X11 configuration file /etc/X11/xorg.conf.d/20-kms.conf that disables acceleration may be left over. You can safely remove this file in later versions (see Section 8.4.5, “No 3D Graphics Acceleration for Broadcom VideoCore IV”).

8.2.3 Expansion Boards

Raspberry Pi 3 Model A+ and B/B+ as well as Raspberry Pi 4 Model B all offer a 40-pin General Purpose I/O connector, with multiple software-configurable functions such as UART, I²C and SPI. This pin mux configuration along with any external devices attached to the pins is defined in the Device Tree which is passed by the bootloader to the kernel.

SUSE does not currently provide support for any particular Hardware Attached on Top (HAT) board or other expansion boards attached to the 40-pin GPIO connector. However, insofar as drivers for pin functions and for attached chipsets are included in SUSE Linux Enterprise, they can be used. SUSE does not provide support for making changes to the Device Tree, but successful changes will not affect the support status of the operating system itself. Be aware that errors in the Device Tree can stop the system from booting successfully or can even damage the hardware.

The bootloader and firmware in SUSE Linux Enterprise Server for Arm 15 SP2 support Device Tree Overlays. The recommended way of configuring GPIO pins is to create a file extraconfig.txt on the FAT volume (/boot/efi/extraconfig.txt in the SUSE image) with a line dtoverlay=filename-without-.dtbo per Overlay. For more information about the syntax, see the documentation by the Raspberry Pi Foundation: https://www.raspberrypi.org/documentation/configuration/device-tree.md (https://www.raspberrypi.org/documentation/configuration/device-tree.md)
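
For example, to apply a hypothetical overlay file /boot/efi/overlays/example.dtbo, extraconfig.txt would contain:

```
# /boot/efi/extraconfig.txt
dtoverlay=example
```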

If not already shipped in the /boot/efi/overlays/ directory (installed by raspberrypi-firmware-dt package), .dtbo files can be obtained from the manufacturer of the HAT or compiled from self-authored sources.

8.2.4 For More Information

For more information, see the Raspberry Pi Quick Start guide at https://documentation.suse.com/sles/15-SP2/html/SLES-rpi-quick/ (https://documentation.suse.com/sles/15-SP2/html/SLES-rpi-quick/).

8.3 New Features

8.3.1 Logical CPU Limit Increased

The SUSE Linux Enterprise Server for Arm 15 SP2 kernel raises the limit of logical CPUs (physical cores and threads) from 480 to 768 CPUs.

8.3.2 KVM Virtual CPU Limit Increased

The number of virtual CPUs (vCPUs) that can be assigned to a guest with the Kernel-based Virtual Machine (KVM) is limited by the interrupt controllers emulated by QEMU for the guest virtual machine (VM). SUSE Linux Enterprise Server for Arm 15 SP1 and earlier were therefore unable to assign all the logical host CPUs of a Marvell* ThunderX2* to a single guest VM, affecting the YES! System Test Kit.

SUSE Linux Enterprise Server for Arm 15 SP2 raises the limit from 123 to 512 virtual CPUs per KVM guest VM.

Note
Note

Please observe any recommendations regarding vCPU over-commit in relation to physical cores (compare Section 5.13.4.2, “KVM Limits”).

8.3.3 NUMA Node Limit Increased

The SUSE Linux Enterprise Server for Arm 15 SP2 kernel raises the limit of Non-Uniform Memory Access (NUMA) nodes from 4 to 64 nodes.

8.3.4 KVM Enablement for Armv8.2-SVE Scalable Vector Extensions

The Armv8.2 Scalable Vector Extensions (SVE) were initially enabled in the SUSE Linux Enterprise Server for Arm 15 SP1 kernel. The Kernel-based Virtual Machine (KVM) in SUSE Linux Enterprise Server for Arm 15 SP2 is now enabled for SVE, too.

8.3.5 Driver Enablement for Arm Neoverse N1

The Arm* Neoverse* N1 CPU core is enabled since a SUSE Linux Enterprise Server for Arm 15 SP1 kernel maintenance update.

Note
Note

While the Neoverse N1 CPU core is enabled, the Arm Neoverse N1 SDP system is not enabled (Section 8.4.1, “Arm Neoverse N1 SDP Not Supported”).

8.3.6 Driver Enablement for Ampere Altra

SUSE Linux Enterprise Server for Arm 15 SP2 adds enablement for servers based on Ampere* Altra* System-on-Chip (SoC). The Altra SoC uses Arm* Neoverse* N1 CPU cores (Section 8.3.5, “Driver Enablement for Arm Neoverse N1”).

8.3.7 Driver Enablement for AWS Graviton

Amazon EC2* A1 instances based on the AWS* Graviton System-on-Chip (SoC) are enabled since a SUSE Linux Enterprise Server for Arm 15 kernel maintenance update. The AWS Graviton SoC uses Arm* Neoverse* Cosmos (Cortex*-A72) CPU cores.

A1.metal instances are enabled since a SUSE Linux Enterprise Server for Arm 15 SP1 kernel maintenance update.

8.3.8 Driver Enablement for AWS Graviton2

Amazon EC2* instances based on the AWS* Graviton2 System-on-Chip (SoC), such as M6g, are enabled since a SUSE Linux Enterprise Server for Arm 15 SP1 kernel maintenance update. The AWS Graviton2 SoC uses Arm* Neoverse* N1 CPU cores (Section 8.3.5, “Driver Enablement for Arm Neoverse N1”).

8.3.9 Driver Enablement for Marvell ThunderX3

SUSE Linux Enterprise Server for Arm 15 SP2 adds enablement for servers based on Marvell* ThunderX3* System-on-Chip (SoC). The ThunderX3 SoC uses custom cores compliant with Armv8.3+ and the Server Base System Architecture (SBSA).

8.3.10 Driver Enablement for NVIDIA Jetson

SUSE Linux Enterprise Server for Arm 15 SP2 adds enablement for the NVIDIA* Jetson* TX1 and Jetson Nano System-on-Module (SoM) using the NVIDIA Tegra* X1 System-on-Chip (SoC), as well as for the Jetson TX2/TX2i SoM using the Tegra X2 SoC.

The SUSE Linux Enterprise Server for Arm installation media are known not to boot with bootloaders based on U-Boot v2016.07 from NVIDIA L4T R32.3.1 and earlier. After GRUB is loaded from the SUSE installation medium, a kernel is selected from the GRUB menu, and the kernel and ramdisk have been loaded, you will encounter this error:

EFI stub: Booting Linux Kernel...
EFI stub: EFI_RNG_PROTOCOL unavailable, no randomness supplied
EFI stub: ERROR: Could not determine UEFI Secure Boot status.
EFI stub: Using DTB from configuration table
EFI stub: ERROR: Failed to install memreserve config table!
EFI stub: Exiting boot services and installing virtual address map...
EFI stub: ERROR: Failed to update FDT and exit boot services

Contact your hardware vendor, or NVIDIA as the module vendor, to learn how to obtain and flash an EBBR 1.0-compliant bootloader for use with SUSE Linux Enterprise Server for Arm.

8.3.11 Driver Enablement for NXP LS1028A

SUSE Linux Enterprise Server for Arm 15 SP2 adds initial enablement for the NXP* Layerscape* LS1028A family of System-on-Chip (SoC) chipsets. LS1027A is a SoC variant without the 3D Graphics Processor Unit (GPU), DisplayPort* PHY and LCD controller found on LS1028A SoC. LS1017A and LS1018A are single-core variants of dual-core LS1027A and LS1028A SoCs respectively.

Known limitations for LS1028A and LS1018A built-in graphics are detailed in Section 8.4.3, “No DisplayPort Graphics Output on NXP LS1028A and LS1018A”. As a consequence, the Display Processor driver (mali-dp) and the 3D GPU driver (etnaviv) are available as technology previews (Section 2.8.2.3, “mali-dp Driver for Arm Mali Display Processors Has Been Added” and Section 2.8.2.1, “etnaviv Drivers for Vivante GPUs Have Been Added”).

The FlexCAN driver (flexcan) is not yet ready for the LS1028A SoC family (Section 8.4.4, “No CAN Interfaces on NXP Layerscape”).

8.3.12 Graphics Driver for Arm Mali Midgard Has Been Added

The Rockchip RK3399 System-on-Chip contains an Arm* Mali*-T864 Graphics Processor Unit (GPU).

Previously, this GPU needed third-party drivers and libraries from your hardware vendor.

The SUSE Linux Enterprise Server for Arm 15 SP2 kernel includes panfrost, a Direct Rendering Infrastructure (DRI) driver for Arm Mali Midgard microarchitecture GPUs, such as Mali-T864, and the Mesa-dri package contains a matching panfrost_dri graphics driver library.

To use them, the Device Tree passed by the bootloader to the kernel must include a description of the Mali GPU, otherwise the kernel driver will not be loaded. You may need to contact your hardware vendor for a bootloader firmware upgrade.
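Whether the firmware's Device Tree describes a GPU can be checked on a running system; a minimal sketch, assuming the standard /proc/device-tree layout (node names vary by firmware, so the grep pattern is only illustrative):

```shell
# If the bootloader's Device Tree describes the Mali GPU, a matching node
# usually appears under /proc/device-tree; the panfrost driver can then bind.
if ls /proc/device-tree 2>/dev/null | grep -qi gpu; then
    echo "Device Tree exposes a GPU node"
else
    echo "No GPU node found - a bootloader firmware upgrade may be needed"
fi
# Whether the driver actually loaded can be confirmed with: lsmod | grep panfrost
```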

Note

The lima driver for Mali Utgard microarchitecture GPUs is available as a technology preview (Section 2.8.2.2, “lima Driver for Arm Mali Utgard GPUs Has Been Added”).

8.3.13 SPI Driver for Socionext SynQuacer Has Been Added

The SUSE Linux Enterprise Server for Arm 15 SP2 kernel adds a Serial Peripheral Interface (SPI) driver for the Socionext* SynQuacer* SC2A11 System-on-Chip.

8.4 Known Limitations

8.4.1 Arm Neoverse N1 SDP Not Supported

The Arm* Neoverse* N1 System Development Platform (SDP) contains a custom System-on-Chip, whose PCIe controller has multiple known errata.

The SUSE Linux Enterprise Server for Arm 15 SP2 kernel does not include any quirks to handle these errata, leading to bus faults when enumerating the PCIe root complex. Installation from SUSE Linux Enterprise Server for Arm 15 SP2 media is therefore not possible.

For more information, see https://git.linaro.org/landing-teams/working/arm/arm-reference-platforms.git/about/docs/n1sdp/pcie-support.rst (https://git.linaro.org/landing-teams/working/arm/arm-reference-platforms.git/about/docs/n1sdp/pcie-support.rst).

Note that this is not an issue with the Neoverse N1 CPU core itself (Section 8.3.5, “Driver Enablement for Arm Neoverse N1”). Other Neoverse N1 based System-on-Chip chipsets are not affected (see Section 8.3.8, “Driver Enablement for AWS Graviton2” and Section 8.3.6, “Driver Enablement for Ampere Altra”).

8.4.2 No CPU Frequency Scaling on Fujitsu A64FX

Servers based on the Fujitsu* A64FX System-on-Chip do not support Collaborative Processor Performance Control (CPPC). This means the CPUs will always run at maximum performance, irrespective of their load.

Contact your hardware vendor to find out whether third-party drivers are available for SUSE Linux Enterprise Server for Arm 15 SP2.
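Whether frequency scaling is available at all can be checked via sysfs; a minimal sketch, assuming the standard Linux cpufreq sysfs layout (with CPPC unsupported, the kernel registers no cpufreq policies):

```shell
# Count the cpufreq policies the kernel registered. Without CPPC (as on
# Fujitsu A64FX), this prints 0 and the cores run at a fixed frequency.
policies=$(ls -d /sys/devices/system/cpu/cpufreq/policy[0-9]* 2>/dev/null | wc -l)
echo "cpufreq policies available: $policies"
```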

8.4.3 No DisplayPort Graphics Output on NXP LS1028A and LS1018A

The NXP* Layerscape* LS1028A/LS1018A System-on-Chip contains an Arm* Mali*-DP500 Display Processor, whose output is connected to a DisplayPort* TX Controller (HDP-TX) based on Cadence* High Definition (HD) Display Intellectual Property (IP).

A Direct Rendering Manager (DRM) driver for the Arm Mali-DP500 Display Processor is available as a technology preview (Section 2.8.2.3, “mali-dp Driver for Arm Mali Display Processors Has Been Added”).

However, no HDP-TX physical-layer (PHY) controller driver is available yet. Therefore, no graphics output is available, for example, on the DisplayPort* connector of the NXP LS1028A Reference Design Board (RDB).

Contact the chip vendor NXP to find out whether third-party graphics drivers are available for SUSE Linux Enterprise Server for Arm 15 SP2.

Alternatively, contact your hardware vendor to find out whether a bootloader update that implements graphics output is available, allowing you to use efifb framebuffer graphics in SUSE Linux Enterprise Server for Arm 15 SP2 instead.

Note

The Vivante GC7000UL GPU driver (etnaviv) is available as a technology preview (Section 2.8.2.1, “etnaviv Drivers for Vivante GPUs Have Been Added”).

8.4.4 No CAN Interfaces on NXP Layerscape

The NXP* Layerscape* LS1028A and LX2160A System-on-Chip families both contain multiple FlexCAN Controller Area Network (CAN) controllers.

The SUSE Linux Enterprise Server for Arm 15 SP2 kernel does not include the flexcan driver.

8.4.5 No 3D Graphics Acceleration for Broadcom VideoCore IV

The vc4 Direct Rendering Manager (DRM) driver for the Broadcom* VideoCore* IV Graphics Processor Unit (GPU) has not reached enterprise-grade stability expectations for 3D acceleration on Raspberry Pi* 3 devices under memory pressure.

Since SUSE Linux Enterprise Server for Arm 15 (and 12 SP4), the Mesa-dri package has not included the vc4_dri graphics library, forcing a fallback to 2D-only accelerated graphics.

8.5 Deprecation of Early Marvell ThunderX2 Silicon Support

Marvell* ThunderX2* System-on-Chip silicon revisions Ax had errata for the SATA controller. Silicon revisions B0 and later are not affected.

SUSE Linux Enterprise Server for Arm 12 SP3 and later include kernel patches with a recommended workaround. This allowed evaluation of early server systems with the affected silicon revisions.

An upcoming version of SUSE Linux Enterprise Server for Arm will drop the patches with those workarounds. Production servers should not be affected by this change. For early systems with pre-production silicon, check with your hardware vendor whether CPU upgrade kits are available.

8.6 Btrfs Subvolume for /boot/grub2/arm64-efi Missing After System Upgrade

If you upgraded an AArch64 system with a Btrfs root file system from SUSE Linux Enterprise Server for Arm 12 SP3 or 15 GA, the subvolume for /boot/grub2/arm64-efi is missing. This results in boot failures when you try to boot the system from an old snapshot:

error: symbol grub_efi_allocate_pages not found
Note

SUSE Linux Enterprise Server for Arm 15 GA for the Raspberry Pi is not affected by this.

Manually add the missing subvolume to make sure snapshots work correctly. Note that snapshots created before the fix remain unbootable. Boot the system from the default (that is, latest) snapshot and follow these steps:

  1. Create the missing Btrfs subvolume, move the architecture-specific GRUB files into it and configure it to be mounted on subsequent boots. Run the following command as user root:

    # mksubvolume /boot/grub2/arm64-efi
  2. Update the GRUB bootloader to ensure the new subvolume configuration takes effect:

    # update-bootloader --reinit

    Newly created snapshots will now be aware of the new subvolume and thus bootable.

9 Removed and Deprecated Features and Packages

This section lists features and packages that were removed from SUSE Linux Enterprise Server or will be removed in upcoming versions.

9.1 Removed Features and Packages

The following features and packages have been removed in this release.

9.2 Deprecated Features and Packages

The following features and packages are deprecated and will be removed with a future service pack of SUSE Linux Enterprise Server.

  • Node.js 8 is deprecated and will be removed with SUSE Linux Enterprise Server 15 SP3.

  • Support for System V init.d scripts is deprecated and will be removed with the next major version of SUSE Linux Enterprise Server. systemd will automatically convert System V init.d scripts to service files (using systemd-sysv-generator). To convert to systemd service files permanently, use the generated files directly from systemctl.

    In consequence, this deprecation also causes the following changes:

    • The /etc/init.d/halt.local initscript is deprecated. Use systemd service files instead.

    • rcSERVICE controls of systemd services are deprecated. Use systemd service files instead.

    • insserv.conf is deprecated.
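The automatic conversion performed by systemd-sysv-generator can be inspected on a running system; a minimal sketch, assuming a systemd-based system (the service name foo is hypothetical):

```shell
# systemd-sysv-generator writes the units it generates from init.d scripts
# under /run; listing the directory shows which scripts were converted.
ls /run/systemd/generator.late/ 2>/dev/null || echo "no generated units"
# To keep a converted service permanently as a native unit, its generated
# unit file can be saved under /etc/systemd/system, for example:
#   systemctl cat foo.service > /etc/systemd/system/foo.service
```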

10 Obtaining Source Code

This SUSE product includes materials licensed to SUSE under the GNU General Public License (GPL). The GPL requires SUSE to provide the source code that corresponds to the GPL-licensed material. The source code is available for download at https://www.suse.com/products/server/download/ (https://www.suse.com/products/server/download/) on Medium 2. For up to three years after distribution of the SUSE product, upon request, SUSE will mail a copy of the source code. Send requests by e-mail to sle_source_request@suse.com (mailto:sle_source_request@suse.com). SUSE may charge a reasonable fee to recover distribution costs.
