SUSE Linux Enterprise Server 12 SP1

Release Notes

This document provides guidance on and an overview of the high-level features and updates of SUSE Linux Enterprise Server 12 SP1. Besides architecture- and product-specific information, it also describes the capabilities and limitations of SLES 12 SP1.

If you are skipping one or more service packs, check the release notes of the skipped service packs as well. Release notes usually only list changes that happened between two subsequent releases. If you are only reading the release notes of the current release, you could miss important changes.

General documentation can be found at: http://www.suse.com/documentation/sles-12/.

Publication Date: 2017-06-05, Version: 12.1.20170602
1 SUSE Linux Enterprise Server
1.1 What's New?
1.2 Documentation and Other Information
1.3 How to Obtain Source Code
1.4 Support Statement for SUSE Linux Enterprise Server
1.5 Derived and Related Products
1.6 Security, Standards, and Certification
2 Installation and Upgrade
2.1 Creation of Snapshots During the Upgrade Process
2.2 Installation
2.3 Upgrade-Related Notes
2.4 For More Information
3 Infrastructure, Package and Architecture Specific Information
3.1 Architecture Independent Information
3.2 Systems Management
3.3 Performance Related Information
3.4 Storage
3.5 Virtualization
4 AMD64/Intel64 64-Bit (x86_64) Specific Information
4.1 Virtualization
5 POWER (ppc64le) Specific Information
5.1 Starting X After Upgrading to SLES 12 SP1
5.2 SystemTap Probes Support on ppc64le
5.3 Virtual Ethernet: Large Send / Receive Offload for ibmveth
5.4 Container support for Docker on IBM Power
5.5 vmalloc address translation support in makedumpfile for ppc64le arch
5.6 YaST Support to Configure Firmware-assisted Dump for ppc64le
6 System z (s390x) Specific Information
6.1 Hardware
6.2 Virtualization
6.3 Storage
6.4 Network
6.5 Security
6.6 Reliability, Availability, Serviceability (RAS)
6.7 Performance
6.8 Miscellaneous
7 Driver Updates
7.1 Other Drivers
8 Packages and Functionality Changes
8.1 New Packages
8.2 Updated Packages
8.3 Removed and Deprecated Functionality
8.4 Changes in Packaging and Delivery
8.5 Modules
9 Technical Information
9.1 Virtualization: Network Devices Supported
9.2 Virtualization: Devices Supported for Booting
9.3 Virtualization: Supported Disks Formats and Protocols
9.4 Kernel Limits
9.5 KVM Limits
9.6 Xen Limits
9.7 File Systems
10 Legal Notices
11 Colophon

1 SUSE Linux Enterprise Server

SUSE Linux Enterprise Server is a highly reliable, scalable, and secure server operating system, built to power mission-critical workloads in both physical and virtual environments. It is an affordable, interoperable, and manageable open source foundation. With it, enterprises can cost-effectively deliver core business services, enable secure networks, and simplify the management of their heterogeneous IT infrastructure, maximizing efficiency and value.

The only enterprise Linux recommended by Microsoft and SAP, SUSE Linux Enterprise Server is optimized to deliver high-performance mission-critical services as well as edge-of-network and web infrastructure workloads.

Designed for interoperability, SUSE Linux Enterprise Server integrates into classical Unix as well as Windows environments, supports open standard interfaces for systems management, and has been certified for IPv6 compatibility.

This modular, general purpose operating system runs on three processor architectures and is available with optional extensions that provide advanced capabilities for tasks such as real time computing and high availability clustering.

SUSE Linux Enterprise Server is optimized to run as a high performing guest on leading hypervisors and supports an unlimited number of virtual machines per physical system with a single subscription, making it the perfect guest operating system for virtual computing.

SUSE Linux Enterprise Server is backed by award-winning support from SUSE, an established technology leader with a proven history of delivering enterprise-quality support services.

SUSE Linux Enterprise Server 12 has a 13-year life cycle, with 10 years of General Support and 3 years of Extended Support. The current version (SP1) will be fully maintained and supported until 6 months after the release of SUSE Linux Enterprise Server 12 SP2. If you need additional time to design, validate, and test your upgrade plans, Long Term Service Pack Support can extend the support you receive by an additional 12 to 36 months, in twelve-month increments, giving you a total of 3 to 5 years of support on any given service pack.

For more information, check our Support Policy page https://www.suse.com/support/policy.html or the Long Term Service Pack Support Page https://www.suse.com/support/programs/long-term-service-pack-support.html.

1.1 What's New?

SUSE Linux Enterprise Server 12 introduces a number of innovative changes. Here are some of the highlights:

  • Robustness against administrative errors and improved management capabilities, with full system rollback based on Btrfs as the default file system for the operating system partition and SUSE's Snapper technology.

  • An overhaul of the installer introduces a new workflow that allows you to register your system and receive all available maintenance updates as part of the installation.

  • SUSE Linux Enterprise Server Modules offer a choice of supplemental packages, ranging from tools for Web Development and Scripting, through a Cloud Management module, all the way to a sneak preview of SUSE's upcoming management tooling called Advanced Systems Management. Modules are part of your SUSE Linux Enterprise Server subscription, are technically delivered as online repositories, and differ from the base of SUSE Linux Enterprise Server only by their life cycle.

  • New core technologies like systemd (replacing the time honored System V based init process) and wicked (introducing a modern, dynamic network configuration infrastructure).

  • The open source database system MariaDB is fully supported now.

  • Support for the open-vm-tools together with VMware for better integration into VMware based hypervisor environments.

  • Linux Containers are integrated into the virtualization management infrastructure (libvirt). Docker is provided as a fully supported technology. For more details, see https://www.suse.com/promo/sle/docker/.

  • Support for the 64-bit Little-Endian variant of IBM's POWER architecture, in addition to continued support for the Intel 64 / AMD64 and IBM System z architectures.

  • GNOME 3.10 (or just GNOME 3), giving users a modern desktop environment with a choice of several different look-and-feel options, including a special SUSE Linux Enterprise Classic mode for easier migration from earlier SUSE Linux Enterprise desktop environments.

  • For users wishing to use the full range of productivity applications of a Desktop with their SUSE Linux Enterprise Server, we are now offering the SUSE Linux Enterprise Workstation Extension (needs a SUSE Linux Enterprise Desktop subscription).

  • Integration with the new SUSE Customer Center, SUSE's central web portal to manage subscriptions and entitlements and to provide access to support.

If you are upgrading from a previous SUSE Linux Enterprise Server release, you should review at least the following sections:

1.1.1 SMT: Supported Products

SMT (Subscription Management Tool) now supports the SLE 10, 11, and 12 product families.

1.2 Documentation and Other Information

1.2.1 Available on the Product Media

  • Read the READMEs on the media.

  • Get the detailed change log information about a particular package from the RPM (where <FILENAME>.rpm is the name of the RPM):

    rpm --changelog -qp <FILENAME>.rpm
  • Check the ChangeLog file in the top level of the media for a chronological log of all changes made to the updated packages.

  • Find more information in the docu directory of the media of SUSE Linux Enterprise Server 12 SP1. This directory includes PDF versions of the SUSE Linux Enterprise Server 12 SP1 Installation Quick Start and Deployment Guides. Documentation (if installed) is available below the /usr/share/doc/ directory of an installed system.

  • These Release Notes are identical across all architectures, and the most recent version is always available online at http://www.suse.com/releasenotes/. Some entries are listed twice, if they are important and belong to more than one section.

1.2.2 Externally Provided Documentation

1.3 How to Obtain Source Code

This SUSE product includes materials licensed to SUSE under the GNU General Public License (GPL). The GPL requires SUSE to provide the source code that corresponds to the GPL-licensed material. The source code is available for download at http://www.suse.com/download-linux/source-code.html. Also, for up to three years after distribution of the SUSE product, upon request, SUSE will mail a copy of the source code. Requests should be sent by e-mail to sle_source_request@suse.com or as otherwise instructed at http://www.suse.com/download-linux/source-code.html. SUSE may charge a reasonable fee to recover distribution costs.

1.4 Support Statement for SUSE Linux Enterprise Server

To receive support, customers need an appropriate subscription with SUSE; for more information, see http://www.suse.com/products/server/services-and-support/.

1.4.1 General Support Statement

The following definitions apply:

L1

Problem determination, which means technical support designed to provide compatibility information, usage support, ongoing maintenance, information gathering, and basic troubleshooting using available documentation.

L2

Problem isolation, which means technical support designed to analyze data, duplicate customer problems, isolate the problem area, and provide a resolution for problems not resolved by Level 1, or alternatively to prepare for Level 3.

L3

Problem resolution, which means technical support designed to resolve problems by engaging engineering to resolve product defects which have been identified by Level 2 Support.

For contracted customers and partners, SUSE Linux Enterprise Server 12 SP1 and its Modules are delivered with L3 support for all packages, except the following:

  • Technology Previews

  • sound, graphics, fonts and artwork

  • packages that require an additional customer contract

  • packages provided as part of the Software Development Kit (SDK)

SUSE will only support the usage of original (that is, unchanged and not recompiled) packages.

1.4.1.1 Docker Orchestration Is Not Supported

Starting with Docker 1.12, orchestration (swarm) is now a part of the Docker engine, as available from the SLES Containers module. This feature is not supported.

1.4.1.2 Wayland Libraries Are Not Supported on SLES 12 GA and SP1

Wayland is not supported on SLES 12 GA and SLES 12 SP1. While some Wayland libraries are available, they should not be installed and are not supported by SUSE.

1.4.1.3 Support for Korn Shell (ksh) Extended Until 2022

Support for the legacy package ksh in SLE was originally slated to end in 2017. However, many customers still depend on ksh.

Support for ksh has been extended until the end of 2022.

Beyond that time, you can use the mksh implementation of Korn Shell (package mksh). However, as mksh is based on pdksh, there are certain functional differences. For example, its handling of pipelines is similar to Bash.

1.4.1.4 L3 Support for OpenJDK

OpenJDK is now L3-supported.

1.4.2 Technology Previews

Technology previews are packages, stacks, or features delivered by SUSE. These features are not supported. They may be functionally incomplete, unstable or in other ways not suitable for production use. They are mainly included for customer convenience and give customers a chance to test new technologies within an enterprise environment.

Whether a technical preview will be moved to a fully supported package later depends on customer and market feedback. A technical preview does not automatically result in support at a later point in time. Technical previews can be dropped at any time, and SUSE is not committed to providing a technical preview later in the product cycle.

Give your SUSE representative feedback, including your experience and use case.

1.4.2.1 Virtual Machine Sandbox

virt-sandbox provides a way for the administrator to run a command within a confined virtual machine using the QEMU/KVM or LXC libvirt virtualization drivers. The default sandbox domain only allows applications to read and write stdin, stdout, and any file descriptors handed to them; they are not allowed to open any other files. To make the sandbox usable, enable SELinux on your system. For more information, see http://sandbox.libvirt.org/quickstart/#System_service_sandboxes.

1.4.2.2 KVM Nested Virtualization

KVM Nested Virtualization is available in SLE 12 as a technology preview. For more information about nested virtualization, see nested-vmx.txt (https://github.com/torvalds/linux/blob/master/Documentation/virtual/kvm/nested-vmx.txt).

1.4.2.3 Support LibStorageMgmt to Manage Storage Hardware

LibStorageMgmt was introduced in 2012 and offers an open infrastructure for modules, which can even include (closed-source) vendor-specific tools.

SLES provides LibStorageMgmt, a library for storage management, to integrate better with storage vendors. It is an infrastructure that facilitates management automation and ease of use, and takes advantage of vendor-supported features that improve storage performance and space utilization.

1.4.2.4 Technology Previews: System z (s390x)

1.4.2.4.1 Technology Preview: Extended CPU Performance Metrics in HYPFS for Linux z/VM Guests

HYPFS has been extended to also provide the "diag 0C" data for Linux z/VM guests, which makes it possible to distinguish "management time" spent as part of the CPU load.

1.4.2.4.2 Technology Preview: LPAR Watchdog for IBM z Systems

SLES 12 SP1 contains an enhanced watchdog driver for IBM z Systems. The driver can be used for Linux running on logical partitions (LPARs) and Linux running as a guest under z/VM. The update provides automatic reboot and automatic dump capabilities if a Linux system becomes unresponsive.

1.4.3 Software Requiring Specific Contracts

The following packages require additional support contracts to be obtained by the customer in order to receive full support:

  • PostgreSQL Database

1.5 Derived and Related Products

1.6 Security, Standards, and Certification

SUSE Linux Enterprise Server 12 SP1 has been submitted to the certification bodies for:

For more information about certification, refer to https://www.suse.com/security/certificates.html.

2 Installation and Upgrade

SUSE Linux Enterprise Server can be deployed in several ways:

  • Physical machine

  • Virtual host

  • Virtual machine

  • System containers

  • Application containers

2.1 Creation of Snapshots During the Upgrade Process

The upgrade process is safer than ever. When upgrading using the DVD, if snapshots are enabled and the root file system is Btrfs, two new snapshots will be created:

  • One before any change is made to the system.

  • Another one just after the upgrade process is finished.

If something goes wrong, the system can be easily restored to a known state using Snapper.
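As a sketch, the snapshots can be inspected, and the system rolled back, with the snapper command-line tool (snapshot numbers are system-specific; the rollback line is shown commented out):

```shell
# Guarded so the commands are skipped on systems without Snapper.
if command -v snapper >/dev/null 2>&1; then
    snapper list || true    # shows the "pre"/"post" snapshot pair created by the upgrade
    # To restore the pre-upgrade state, roll back to the "pre" snapshot, e.g.:
    # snapper rollback 42   # "42" is a placeholder snapshot number
    snapper_status=listed
else
    echo "snapper is not installed"
    snapper_status=skipped
fi
```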

2.2 Installation

2.2.1 AutoYaST Uses Wrong crashkernel Value During First Boot

If you install using AutoYaST with a profile that, in the <kdump> section, has <add_crash_kernel> set to true but that does not include <crash_kernel>, an incorrect kernel parameter will be set initially. During the first boot, that is, the second AutoYaST stage, the following will be set:

crashkernel=""

AutoYaST will correct this automatically. However, the corrected value will only go into effect after a reboot.

To solve this, reboot the system. Afterwards, Kdump will be configured properly.
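To avoid the initial incorrect value altogether, the profile can set the crashkernel reservation explicitly. A minimal sketch of the relevant <kdump> profile section (the 256M reservation is an example value, not a recommendation):

```xml
<kdump>
  <!-- enable the crashkernel boot parameter -->
  <add_crash_kernel config:type="boolean">true</add_crash_kernel>
  <!-- set the reservation explicitly so no empty value is written -->
  <crash_kernel>256M</crash_kernel>
</kdump>
```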

2.2.2 When Installing Extensions via DVD During OS Installation, There Is No Way to Specify the Registration Code

During the installation of SUSE Linux Enterprise Server 12 SP1, you can choose to install extensions from DVD. When doing so, you will not be asked to register the extension by entering a registration code and will therefore also miss out on updates.

Consider installing extensions from the registration server instead. Alternatively, register the extension after the installation by running:

SUSEConnect -p <PRODUCT_NAME>/<VERSION>/<ARCHITECTURE> -r <REGISTRATION_CODE>
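For example, the available extension identifiers can first be listed and then used with the command above; the product triplet in this sketch (the SDK) and the registration code are placeholders:

```shell
# Guarded so the commands are skipped on systems without SUSEConnect.
if command -v SUSEConnect >/dev/null 2>&1; then
    SUSEConnect --list-extensions || true   # shows product/version/architecture triplets
    # Then register, for example the SDK (the code is a placeholder):
    # SUSEConnect -p sle-sdk/12.1/x86_64 -r YOUR_REGISTRATION_CODE
    sc_status=listed
else
    echo "SUSEConnect is not installed"
    sc_status=skipped
fi
```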

2.2.3 Creating an AutoYaST XML File

In the YaST installer, on the page Installation Settings, the headline Clone System Configuration and the button Export Configuration allow creating an AutoYaST XML file. However, generating files using these options will result in an invalid package selection and network configuration. This means that when the file is used for an installation, that installation could fail.

To create valid AutoYaST XML files, use an installed system. Ensure that the latest YaST updates have been installed and run:

yast clone_system

2.2.4 Building a Silver Image for Use in Private Clouds

When deploying SLES to a private cloud, or to any virtualization infrastructure, it is common practice to build silver images with the requisite software pre-installed and pre-configured for the intended environment.

When such a silver image gets cloned and deployed to a cloud platform, it will receive network MAC addresses different from the instance it was originally installed on. Therefore, clean up any mapping associated with persistent network device names:

rm -f /etc/udev/rules.d/70-persistent-net.rules

2.2.5 SMT: Upgrading Database Schema and Engine

SMT 12 comes with a new database schema and is standardized on the InnoDB database back-end.

To upgrade SMT 11 SPx to SMT 12, SMT 11 must be configured against SCC (SUSE Customer Center) before the upgrade of SLES and SMT to version 12 SP1 or newer is initiated. If the host is upgraded to SLES 12 SP1 or newer without switching to SCC first, the installed SMT instance will no longer work.

Only SMT 11 SP3 can be configured against SCC. Older versions need to be upgraded to version 11 SP3 first.

Whether the schema or the database engine must be upgraded is checked during the package upgrade and displayed as an update notification. Back up your database before performing the database upgrade. Both the schema and the database engine upgrade are performed by the utility /usr/bin/smt-schema-upgrade (which can be called directly or via systemctl start smt-schema-upgrade), or they are performed automatically after a restart of smt.target (computer reboot or systemctl restart smt.target). However, manual database tuning is required for optimal performance. For details, see https://mariadb.com/kb/en/mariadb/converting-tables-from-myisam-to-innodb/#non-index-issues.
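As a sketch, the backup and upgrade steps might look as follows (the database name smt and already-configured credentials, e.g. in /root/.my.cnf, are assumptions; adjust to your setup):

```shell
# Guarded so the steps are skipped where SMT and MariaDB are not installed.
if command -v smt-schema-upgrade >/dev/null 2>&1 && command -v mysqldump >/dev/null 2>&1; then
    # Back up the SMT database first:
    mysqldump smt > "/var/tmp/smt-backup-$(date +%F).sql"
    smt-schema-upgrade      # or: systemctl start smt-schema-upgrade
    smt_status=upgraded
else
    echo "SMT tools are not installed"
    smt_status=skipped
fi
```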

2.2.6 SMT Supports SCC Exclusively

Support for NCC (Novell Customer Center) was removed from SMT. SMT can still serve SLE 11 clients, but must be configured to receive updates from SCC.

Before migrating from SMT 11 SP3, SMT must be reconfigured against SCC. Migration from older versions of SMT is not possible.

2.2.7 Automatic Configuration of Snapper for LVM with AutoYaST

In SLE 12, AutoYaST configures Snapper if the root file system uses Btrfs, but only if LVM is not used for the root file system.

In SLES 12 SP1, AutoYaST always configures Snapper if the root file system uses Btrfs. If Snapper is not wanted, disable this feature in the AutoYaST profile.

2.2.8 Using an HTTP Proxy for Registration During the Installation

In networks that enforce the usage of a proxy server to access remote Web sites, registration was not possible during installation. This resulted in an initial installation without updated packages. Updates had to be installed in a separate step after system installation had been finished.

Before the installation begins, a proxy server can be configured in the boot loader by pressing F4. This enables both product registration and online updates during the installation workflow.
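If supported by your installer version, the proxy can also be passed directly on the boot command line via the linuxrc proxy option; the host and port in this sketch are placeholders:

```
proxy=http://proxy.example.com:3128
```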

2.2.9 Installing into a Snapper-Controlled Btrfs Subvolume

Prior to SUSE Linux Enterprise 12 SP1, after the first rollback of the system the original root volume was no longer reachable and would never be removed automatically. This resulted in a disk space leak.

Starting with SP1, YaST installs the system into a subvolume controlled by Snapper.

2.2.10 Floppy Support Removed from yast2-bootloader

Probing floppy devices can slow down the boot process on some machines that have a timeout even if there is no floppy device.

yast2-bootloader no longer tries to detect floppy devices, nor does it support them.

If booting from a floppy disk is needed, GRUB 2 can be configured manually.

2.2.11 CJK Languages Support in Text-mode Installation

CJK languages (Chinese, Japanese, and Korean) do not work properly during text-mode installation if the framebuffer is not used (Text Mode selected in boot loader).

There are three ways to resolve this issue:

  • Use English or another non-CJK language for installation, then switch to the CJK language later on the running system using YaST+System+Language.

  • Use your CJK language during installation, but do not choose Text Mode in the boot loader: under F3 Video Mode, select one of the other VGA modes instead. Select the CJK language of your choice using F2 Language, add textmode=1 to the boot loader command line, and start the installation.

  • Use graphical installation (or install remotely via SSH or VNC).

2.3 Upgrade-Related Notes

This section includes upgrade-related information for this release.

2.3.1 Automatic Removal of Add-on Products Must Be Manually Confirmed

When updating SUSE Linux Enterprise Server 12 to the Service Pack 1 release, any add-on products that have a hard dependency on the SUSE Linux Enterprise 12 GA release will be flagged for automatic removal. For example, this includes driver kits provided on https://drivers.suse.com. On the installation settings page, a message will be displayed, similar to the following:

Update Options
* Some products are marked for automatic removal.
* Contact the vendor of the removed add-on to provide you with a new
  installation media
* Or select the appropriate online extension or module in the registration step
* Or to continue with product upgrade go to the software selection and mark
  the product (the -release package) for removal.
* Product SUSE Linux Enterprise Server 12 will be updated to SUSE Linux
  Enterprise Server 12 SP1 RC2
* Error: Product [NAME OF ADD-ON PRODUCT] will be automatically removed.
* Only update installed packages

Here, [NAME OF ADD-ON PRODUCT] stands for the name of the add-on product to be removed.

If the add-on product named on the error line should not be removed, contact the vendor of the add-on for an update compatible with Service Pack 1 or verify that the correct extensions or modules are selected for update. In this state, the upgrade cannot continue.

If you want to remove the add-on product, manually delete the *-release packages associated with the add-on product.

YaST GUI:

  1. Click on Packages.

  2. Go to the Search pane.

  3. Search for release.

  4. Find the *-release packages flagged with a small red X underlined with three dots.

  5. Right click on the packages and select Delete.

  6. At this point the packages will be flagged with a large red X.

  7. Select Accept and confirm any package changes.

YaST on the command line:

  1. Select Packages.

  2. Search for release.

  3. Delete all *-release packages flagged with "a-" by selecting them and pressing the - key.

  4. The "a" will be removed and the packages will be flagged with "-".

  5. Select Accept and confirm any package changes.

The error message will be replaced with a warning and the upgrade can continue.
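To identify the affected products up front, the installed *-release packages can also be listed from a shell; a sketch:

```shell
# Guarded so the query is skipped on systems without RPM.
if command -v rpm >/dev/null 2>&1; then
    rpm -qa '*-release' | sort   # one release package per installed product or add-on
    rpm_status=listed
else
    echo "rpm is not available"
    rpm_status=skipped
fi
```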

2.3.2 Updating Registration Status After Rollback

When performing a service pack migration, it is necessary to change the configuration on the registration server to provide access to the new repositories. If the migration process is interrupted or reverted (via restoring from a backup or snapshot), the information on the registration server is inconsistent with the status of the system. This may lead to you being prevented from accessing update repositories or to wrong repositories being used on the client.

When a rollback is done via Snapper, the system will notify the registration server to ensure access to the correct repositories is set up during the boot process. If the system was restored any other way or the communication with the registration server failed for any reason (for example, because the server was not accessible due to network issues), trigger the rollback on the client manually by calling snapper rollback.

We suggest always checking that the correct repositories are set up on the system, especially after refreshing the service using zypper ref -s.

2.3.3 Lower Version Numbers in SUSE Linux Enterprise 12 SP1 Than in Version 12

When upgrading from SUSE Linux Enterprise Server or Desktop 12 to SUSE Linux Enterprise Server or Desktop 12 SP1, you may experience a version downgrade of specific software packages, including the Linux Kernel.

This is expected behavior. It is important to remember that the version number is not sufficient to determine which bug fixes are applied to a software package.

All SLE 12 SP1 software packages and updates are contained in the SLE 12 SP1 repositories. No packages from SLE 12 repositories are needed for installation or upgrade.

In case you do add SLE 12 update repositories, be aware of one characteristic of the repository concept: Version numbers in the SLE 12 update repository can be higher than those in the SLE 12 SP1 repository. Thus, if you update with the SLE 12 repositories enabled, you may receive the SLE 12 version of a package instead of the SLE 12 SP1 version.

Using package versions from a lower product version or SP can result in unwanted side effects. If you do not need them, switch off all SLE 12 repositories.

Only keep old repositories if your system depends on a specific older version of a package. If you need a package from a lower product version or SP though, and thus have SLE 12 repositories enabled, make sure that the packages you intended to upgrade have actually been upgraded.

2.3.4 Online Migration with Debuginfo Packages Not Supported

Online migration from SLE 12 to SLE 12 SP1 is not supported if debuginfo packages are installed.

2.3.5 New Method of Online Migration Between Service Packs

In the past, you could use YaST Wagon to migrate between Service Packs. YaST Wagon is now unsupported.

You can now use either the YaST Online Migration module or zypper migration.

To learn more about migrating between Service Packs, see the section Service Pack Migration in the Deployment Guide: https://www.suse.com/documentation/sles-12/book_sle_deployment/data/cha_update_spmigration.html.

Note that when performing the SP migration, both YaST and Zypper will install all recommended packages. Especially in the case of custom minimal installations, this may increase the installation size of the system significantly.

There are two ways to change this behavior:

  • To change the default behavior of YaST, adjust /etc/zypp/zypp.conf and set the variable solver.onlyRequires = true. This changes the behavior of all package operations, such as the installation of patches or new packages. For YaST, this is the only solution.

  • To change the default behavior of Zypper, adjust /etc/zypp/zypp.conf and set the variable solver.onlyRequires = true and make sure installRecommends is not set to true. This changes the behavior of all package operations, such as the installation of patches or new packages.

  • To change the behavior of Zypper for a single invocation, add the parameter --no-recommends to your command line.
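A sketch of the /etc/zypp/zypp.conf settings described above:

```
## /etc/zypp/zypp.conf (sketch)
# Install only required packages; affects YaST and Zypper alike:
solver.onlyRequires = true
# For Zypper, also make sure recommended packages are not explicitly enabled:
# installRecommends = false
```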

2.4 For More Information

For more information, see Section 3, “Infrastructure, Package and Architecture Specific Information”.

3 Infrastructure, Package and Architecture Specific Information

3.1 Architecture Independent Information

3.1.1 Kernel

3.1.1.1 Opt-in Memory cgroup Isolation

Memory cgroups help to put workloads into separate groups, where each workload's memory usage can be restricted by a separate memory limit (the hard limit, memory.limit_in_bytes). Memory pressure in one group is limited to this group, provided the system as a whole is not under memory pressure. This can be used for basic memory isolation of different workloads. Such a configuration, however, requires that all processes are capped by a limited memory cgroup, and these limits must be configured so that the system is not put under global memory pressure.

Starting with SLE 12 SP1, the memory cgroup offers a new mechanism that allows easier opt-in workload isolation. A memory cgroup can define a so-called low limit (memory.low_limit_in_bytes), which works as a protection from memory pressure. Workloads that need to be isolated from outside memory management activity should set the value to the expected Resident Set Size (RSS) plus some headroom. If a memory pressure condition triggers on the system and the particular group is still under its low limit, its memory is protected from reclaim. As a result, workloads outside of the cgroup do not need the aforementioned capping.
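A sketch of opting a workload into this low-limit protection via the cgroup v1 interface (the group name and the 2 GiB value are examples; the writes require root and a kernel providing memory.low_limit_in_bytes):

```shell
cg=/sys/fs/cgroup/memory/protected-workload
# Guarded so the writes are skipped where the v1 memory controller is absent
# or not writable (e.g. in unprivileged containers).
if [ -w /sys/fs/cgroup/memory ] && mkdir -p "$cg" 2>/dev/null; then
    # Expected RSS plus some headroom, here 2 GiB:
    echo $((2 * 1024 * 1024 * 1024)) > "$cg/memory.low_limit_in_bytes" 2>/dev/null || true
    echo $$ > "$cg/tasks" 2>/dev/null || true   # move the current shell into the group
    cg_status=configured
else
    echo "memory cgroup not configurable here"
    cg_status=skipped
fi
```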

3.1.2 Kernel Modules

An important requirement for every enterprise operating system is the level of support customers receive for their environment. Kernel modules are the most relevant connector between hardware ("controllers") and the operating system.

For more information about the handling of kernel modules, see the SUSE Linux Enterprise Administration Guide.

3.1.2.1 Memory Compression with zswap

Usually, when a system's physical memory is exceeded, the system moves some memory onto reserved space on a hard drive, called "swap" space. This frees physical memory space for additional use. However, this process of "swapping" memory onto (and off) a hard drive is much slower than direct memory access, so it can slow down the entire system.

The zswap driver inserts itself between the system and the swap hard drive, and instead of writing memory to a hard drive, it compresses memory. This speeds up both writing to swap and reading from swap, which results in better overall system performance while using swap.

To enable the zswap driver, write 1 or Y to the file /sys/module/zswap/parameters/enabled.

Storage Back-ends

There are two back-ends available for storing compressed pages, zbud (the default), and zsmalloc. The two back-ends each have their own advantages and disadvantages:

  • The effective compression ratio of zbud cannot exceed 50 percent. That is, it can store at most two compressed pages in one physical page. If the workload's pages do not compress to half a page or less, zbud will not be able to save any memory.

  • zsmalloc can achieve better compression ratios. However, it is more complex and its performance is less predictable.

  • zsmalloc does not free pages when the limit set in /sys/module/zswap/parameters/max_pool_percent is reached. This is reflected by the counter /sys/kernel/debug/zswap/reject_reclaim_fail.

It is not possible to give a general recommendation on which storage back-end should be used, as the decision is highly dependent on workload. To change the storage back-end, write either zbud or zsmalloc to the file /sys/module/zswap/parameters/zpool. Pick the back-end before enabling zswap. Changing it later is unsupported.

Setting zswap Memory

Compressed memory still uses a certain amount of memory, so zswap has a limit to the amount of memory which will be stored compressed, which is controllable through the file /sys/module/zswap/parameters/max_pool_percent. By default, this is set to 20, which indicates zswap will use 20 percent of the total system physical memory to store compressed memory.

The zswap memory limit has to be configured carefully. Setting the limit too high can lead to premature out-of-memory situations that would not occur without zswap, if memory is filled by non-swappable, non-reclaimable pages. This includes mlocked memory and pages locked by drivers and other kernel users.

For the same reason, compression and decompression can also hurt performance: if, for example, the current workload's working set would fit into 90 percent of the available RAM, but 20 percent of RAM is already occupied by zswap, the missing 10 percent of uncompressed RAM is constantly swapped in and out of the memory area compressed by zswap. Meanwhile, the rest of the zswap-compressed memory holds pages that were swapped out earlier and are currently unused. There is no mechanism for gradual writeback of those unused pages that would let the uncompressed memory grow.

Freeing zswap Memory

zswap will only free its pages in certain situations:

  • The processes using the pages free the pages or exit

  • When the storage back-end zbud is in use, zswap will also free memory when its configured memory limit is exceeded. In this case, the oldest zswap pages are written back to disk-based swap.

Memory Allocation Issues

In theory, it can happen that zswap does not yet exceed its memory limit but already fails to allocate memory to store compressed pages. In that case, it refuses to compress any new pages, and they are swapped to disk immediately. To confirm whether this issue is occurring, check the value of /sys/kernel/debug/zswap/reject_alloc_fail.

3.1.3 Security

3.1.3.1 Secure Hash Algorithm 2 in OpenLDAP Password Operations

SHA-2 is a set of hash functions offering a higher level of security than its predecessor SHA-1. This OpenLDAP update makes it possible to use SHA-2 in LDAP-hashed passwords.

Before using the new hash functions, the OpenLDAP configuration has to be adjusted to load a new module located at:

/usr/lib/openldap/pw-sha2.la

Here is an example based on slapd.conf style configuration:

moduleload /usr/lib/openldap/pw-sha2.la

If you are using slapd.conf style configuration, the OpenLDAP server must be restarted for the new module to take effect.
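As an illustration, the module load could be combined with a default hash scheme for new password operations. Note that the scheme name {SSHA256} is an assumption here, based on the schemes commonly provided by the pw-sha2 module, and is not taken from this document:

```
# Hypothetical slapd.conf excerpt: load the SHA-2 module and make a
# salted SHA-256 scheme the default for new password operations.
moduleload /usr/lib/openldap/pw-sha2.la
password-hash {SSHA256}
```

Verify the exact scheme names supported by your pw-sha2 build before relying on them.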

3.1.3.2 Zypper Can List Already Installed and Unneeded Patches by CVE Number

The Zypper subcommands zypper list-patches and zypper patch are aware of the CVE metadata inside the update information. However, zypper search is not able to search for CVE numbers in the patch metadata. zypper list-patches only lists patches applicable to your system by default.

To find out whether a fix for a specific CVE number is installed or necessary for your system, you need to search for the CVE number in all patches, including installed and unneeded ones. To do so, for example, run:

zypper list-patches -a --cve="CVE-2015-7547"

(Note: This entry was corrected. Previously, it incorrectly stated that zypper search has the desired functionality.)

3.1.3.3 Incorrect SSL/TLS Certificate Verification Considered a Defect

Bugs and vulnerabilities in software packages could be discovered relating to verification of SSL/TLS certificates, intermediate certificates, certificate chains, or certificate revocation lists. These elements protect the security, confidentiality, and integrity of the user's systems and activities.

Any incorrect verification of SSL/TLS certificates, intermediate certificates, certificate chains, or certificate revocation lists will be considered a security issue and will be corrected, even at the potential cost of backward compatibility in customer scenarios.

3.1.3.4 Password Protection Behavior for Boot Entries

With SUSE Linux Enterprise 12, access to the boot menu was very restricted: a password was needed even to select a different boot entry.

With SP1, the old behavior known from SLE 11 is back. Now again, a password is needed only for modifying a boot loader entry. Anyone can boot any entry and the default boot entry is automatically used if the timeout has passed.

There is now a new configuration option in the yast2-bootloader dialog, where you can enable the restricted boot behavior. Once enabled, a password is also needed to select a different boot entry.

3.1.3.5 SELinux Enablement

SELinux capabilities have been added to SUSE Linux Enterprise Server (in addition to other frameworks, such as AppArmor). While SELinux is not enabled by default, customers can run SELinux with SUSE Linux Enterprise Server if they choose to.

SELinux Enablement includes the following:

  • The kernel ships with SELinux support.

  • We will apply SELinux patches to all “common” userland packages.

  • The libraries required for SELinux (libselinux, libsepol, libsemanage, etc.) have been added to SUSE Linux Enterprise.

  • Quality Assurance is performed with SELinux disabled—to make sure that SELinux patches do not break the default delivery and the majority of packages.

  • The SELinux-specific tools are shipped as part of the default distribution delivery.

  • SELinux policies are not provided by SUSE. Supported policies may be available from the repositories in the future.

  • Customers and Partners who have an interest in using SELinux in their solutions are encouraged to contact SUSE to evaluate their necessary level of support and how support and services for their specific SELinux policies will be granted.

By enabling SELinux in our code base, we add community code to offer customers the option to use SELinux without replacing significant parts of the distribution.

3.1.4 Networking

3.1.4.1 No Support for Samba as Active Directory-Style Domain Controller

The version of Samba shipped with SLE 12 GA and newer does not include support to operate as an Active Directory-style domain controller. This functionality is currently disabled, as it lacks integration with system-wide MIT Kerberos.

3.1.4.2 Customizing the Name of Network Interfaces

You can customize the name of a network interface in YaST using the Network Settings dialog.

Customization is only supported with types eth, bond, bridge, and vlan. Other device types cannot be renamed.

3.1.4.3 Intel 10GbE PCI Express Adapters: Setting MAC Address of a VF Interface

When using the ixgbe and ixgbevf drivers (Intel 10GbE PCI Express adapters) in the context of SRIOV, it is possible to set the MAC address of a VF interface via two methods:

  • through the PF: This would typically be done on the virtualization host using a command such as ip link set p4p1 vf 0 mac d6:2f:a7:28:78:c2

  • through the VF: This would typically be done on the virtualization guest using a command such as ip link set eth0 address d6:2f:a7:28:78:c2

Initially, either method is permitted. However, after the administrator has explicitly configured a MAC address for a VF through its PF, the ixgbe driver disallows further changes of the MAC address through the VF. For example, if an attempt is made to change the MAC address through the VF on a guest after the MAC address for this device has been set on the host, the host logs a warning of the following form:

[  884.838134] ixgbe 0000:08:00.0: p4p1: VF 0 attempted to override administratively set MAC address
[  884.838138] Reload the VF driver to resume operations

To avoid this problem, either avoid configuring an address for the VF through the PF (on the virtualization host) and let a trusted guest set whatever MAC address is desired, or set the desired MAC address through the PF such that further changes through the VF are not needed.

3.1.4.4 Enabling NFSv2 Support

nfs-utils 1.2.9 changed the default so that NFSv2 is not served unless explicitly requested.

If your clients still depend on NFSv2, enable it on the server by setting

NFSD_OPTIONS="-V2"
MOUNTD_OPTIONS="-V2"

in /etc/sysconfig/nfs. After restarting the service, check whether version 2 is available with the command:

cat /proc/fs/nfsd/versions
==>
+2 +3 +4 +4.1 -4.2
3.1.4.5 Davfs2 FUSE Based Support Added

Accessing WebDAV-enabled server resources is now also supported through the FUSE-based davfs2 utility.

3.1.4.6 NFSv4-Only Configuration

NFS traditionally requires a number of different services to be running including nfsd, lockd, statd, and rpcbind (previously known as portmap). The more network services are running, the greater the possible attack surface, so reducing the number of services is encouraged when possible.

NFSv4 is a more focused protocol than previous versions and does not need as many services to be running. Disabling NFSv3 (and lower) can reduce the attack surface. Setting NFS3_SERVER_SUPPORT=no in /etc/sysconfig/nfs disables NFSv3 support and ensures that lockd and statd are not started. It does not prevent rpcbind from starting, though. This is partly because rpcbind is needed by other services, including NIS (also known as Yellow Pages), and automatically determining that none of these are required is problematic.

If rpcbind is not needed (so only NFSv4 is used), it can be disabled with the command:

systemctl disable rpcbind.socket
3.1.4.7 Support for netgroups in snmp

Host-based security so far needed hosts to be listed explicitly. For larger groups of hosts, this gets cumbersome and hard to maintain.

net-snmp now also allows specifying netgroups in host-based access restriction patterns by prefixing them with @ (for example, @netgroupname).

With this change, configurations like the following become much simpler:

rocommunity public monitoringnode1
rocommunity public monitoringnode2
rocommunity public monitoringnode3

If all nodes from above are part of a netgroup called monitoringnodes, you can now use a single line:

rocommunity public @monitoringnodes
3.1.4.8 snmpstatus Allows Suppressing Status Messages of Network Interfaces

snmpstatus can report some NIC states wrongly, causing a flood of snmp messages.

snmpstatus now has two new optional flags -Si and -Sn which allow you to suppress certain messages. For details, see the man page of snmpstatus.

3.2 Systems Management

3.2.1 The YaST Module for SSH Server Configuration Has Been Removed

The YaST module for configuring an SSH server, which was present in SLE 11, is not part of SLE 12. It does not have a direct successor.

The SSH Server module only supported configuring a small subset of all SSH server capabilities. Therefore, its functionality can be replaced by a combination of two YaST modules: the /etc/sysconfig Editor and the Services Manager. This also applies to system configuration via AutoYaST.

3.2.2 Return Codes of Zypper

In some conditions, Zypper returned an exit value of 0 even if it was aborted. This happened if a user decided to abort because of conflicts or if Zypper ran in a non-interactive mode, but was waiting for user input.

In these cases, Zypper will now return a suitable error code.

3.2.3 Recommended Packages (Weak Dependencies)

In the past, the YaST Qt package manager UI automatically installed recommended packages for already installed packages.

The YaST Qt packager UI no longer defaults to installing recommended packages for already installed packages. The persistent option controlling this feature has been moved to a one-time option Extras/Install All Matching Recommended Packages (https://bugzilla.suse.com/show_bug.cgi?id=902394).

For newly installed packages, the weak dependencies are still installed by default, but this can now be disabled in the UI with the option Dependencies/Install Recommended Packages (https://bugzilla.suse.com/show_bug.cgi?id=900853).

The YaST Qt packager UI now also uses the same configuration file, /etc/sysconfig/yast2, as the ncurses package manager UI (https://bugzilla.suse.com/show_bug.cgi?id=900853).

We do not recommend disabling the installation of recommended packages during installation unless you are trying to install a minimal system. If you do so, make sure to review the package selection. The functionality associated with patterns expects recommended packages to be installed, too. If any of them are missing, some functionality of the patterns may not be available.
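For unattended setups, this behavior can presumably also be pre-set in /etc/sysconfig/yast2. The variable names below reflect common SLE 12-era defaults and are an assumption here; verify them against the comments in the file itself:

```
# Hypothetical /etc/sysconfig/yast2 excerpt (verify variable names in the
# file's own comments): whether newly installed packages pull in their
# recommended packages, and whether recommendations of already installed
# packages are re-evaluated.
PKGMGR_RECOMMENDED="yes"
PKGMGR_REEVALUATE_RECOMMENDED="no"
```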

3.2.4 Rollback of Service Packs

After a rollback of a Service Pack to the previous version, SCC and SMT need to be informed about this to reset the access rules for the old product repositories.

During a Service Pack migration, SCC and SMT change the access rules for this machine and switch it from the old Service Pack to the new one. If you roll back from a Service Pack, SCC and SMT will not grant access to the old repositories again until they are told to do so.

If the rollback is done via Snapper (that is, using Btrfs snapshots), services and repositories are adjusted automatically. If this fails, or the rollback is done in other ways (for example, using VMware/KVM/LVM snapshots, or restoring from a backup), this needs to be done manually using:

SUSEConnect --rollback

3.2.5 SUSEConnect Now Autorefreshes the Services

When a new product gets activated with the SUSEConnect command line tool, the added service will be configured to autorefresh periodically. This is now fully compatible with YaST's behavior.

3.2.6 Installing Only Patches from a Certain Category with a Certain Severity

For some systems, it may be desired to limit patches that are installed to those that are most necessary.

The version of Zypper shipped with SLE 12 SP1 allows more fine-grained control over which patches are installed, using the --category and --severity parameters. For example, to install only critical security patches, use:

zypper patch --category security --severity critical

3.2.7 New Option --sync for snapper Delete Command

Btrfs frees disk space asynchronously after deleting snapshots. So if the user deletes Snapper-controlled snapshots, the user must either wait or manually call several Btrfs commands to have the disk space actually freed.

The delete command of Snapper has a new option --sync that triggers Btrfs to free the disk space and waits until the disk space is actually freed.

3.2.8 Boot Menu Entry 'Failsafe' Has Been Removed

The boot menu entry 'Failsafe' has been removed from the list of default boot menu entries, because in case of an error, the right combination of boot parameters to fix the issue needs to be found anyway.

SLE 12 also supports booting from snapshots which are meant to provide a fallback in case the system is not booting after a change.

3.2.9 Re-Introduce bootcycle Functionality for GRUB 2

The bootcycle package from SLE 11 was re-introduced in SLE 12 SP1.

3.2.10 Read-Only Root File System

It is possible to run SUSE Linux Enterprise 12 on a shared read-only root file system. A read-only root setup consists of the read-only root file system, a scratch file system, and a state file system. The /etc/rwtab file defines which files and directories on the read-only root file system are replaced by which files on the state and scratch file systems for each system instance.

The readonlyroot kernel command line option enables read-only root mode; the state= and scratch= kernel command line options determine the devices on which the state and scratch file systems are located.

To set up a system with a read-only root file system:

  • Set up a scratch file system.

  • Set up a file system to use for storing persistent per-instance state.

  • Adjust /etc/rwtab as needed.

  • Add the appropriate kernel command line options to your boot loader configuration.

  • Replace /etc/mtab with a symlink to /proc/mounts as described below.

  • (Re)boot the system.
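The mappings in /etc/rwtab might look like the following. The entry types shown (empty, files, dirs) follow common rwtab conventions and the paths are illustrative only; verify both against the rwtab(5) manual page on your system:

```
# Hypothetical /etc/rwtab excerpt: each line maps a path on the read-only
# root to writable storage on the scratch or state file system.
empty /tmp                 # recreated empty on writable storage
files /etc/resolv.conf     # file copied to writable storage, per instance
dirs  /var/lib/nfs         # whole directory tree made writable
```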

Replace /etc/mtab with the appropriate symbolic links:

ln -sf /proc/mounts /etc/mtab

See the rwtab(5) manual page for more information and http://www.redbooks.ibm.com/abstracts/redp4322.html for limitations on System z.

3.2.11 YaST Snapper Module Uses Snapper DBus Interface

The YaST Snapper module bypassed the snapperd daemon while working on Snapper-controlled snapshots. This could lead to inconsistent data between YaST Snapper and Snapper.

The YaST Snapper module now uses the DBus interface when working on Snapper-controlled snapshots.

3.2.12 openlmi-bmc

lmi-bmc is a CIM-XML provider based on the OpenLMI framework. This provider publishes information about the service processor via CIM-XML. The information currently includes:

  • IP addresses

  • MAC address

  • VLAN ID

  • Firmware Version

  • List of supported protocols

  • Interface mode (shared vs dedicated)

3.2.13 Zypp History Now Includes Patch Installation

The Zypp history file now includes information about patch installation. The lines in the history are tagged |command| and show the user, command line and optional user data of the process that triggered the commit.

3.3 Performance Related Information

3.3.1 The sapconf Package: Dependency Changes and New Features

sapconf is a software package for automated system tuning, dedicated to SUSE Linux Enterprise users who wish to run SAP software products. The new release of sapconf comes with important dependency changes and brings new features.

In previous releases, sapconf automatically tuned a system for SAP Netweaver products only. It provided a system service (sapconf.service) and an executable (/usr/sbin/sapconf). The executable file mimics a SysV-style init script and executes an action (start, restart, try-restart, reload, status, stop) based on command line parameter input.

In this release, the implementation of sapconf is revamped to offer more features and tuning profiles for the system tuning daemon (tuned.service), while keeping compatibility with the previous releases.

If you have been using a previous release of sapconf and plan to upgrade to SLES 12 SP1, check that the tuned package is installed after running the distribution upgrade (as root: zypper install tuned). In certain cases, a distribution upgrade may ignore the dependency on the tuned package, resulting in a non-functioning sapconf. If the package tuned is not installed, sapconf will prompt you to install the missing package when it runs.

As with previous releases, invoking sapconf with the action start activates the SAP NetWeaver tuning profile only if no SAP-product tuning profile (sap-hana, sap-netweaver) is active at the time. If an SAP-product tuning profile is already active, sapconf simply makes sure that tuning is activated, without forcibly switching to the sap-netweaver tuning profile.

With this release, sapconf comes with two more actions: "hana" and "b1" (Business One). Invoking sapconf with either action name (for example, "sapconf hana") activates the SAP HANA tuning profile. SAP HANA and Business One use an identical method for system tuning, so there is only one SAP HANA tuning profile for both cases.

The system service sapconf.service remains in place and makes sure that system tuning is automatically applied upon reboot.

For more information, see the following new or updated manual pages:

  • sapconf (8)

  • tuned-adm (8)

  • tuned-profiles-sap-hana (7)

  • tuned-profiles-sap-netweaver (7)

3.3.2 Using the "noop" I/O Scheduler for Multipathing and Virtualization Guests

For advanced storage configurations, like 4-way multipath to an array, you end up with an environment where both the host OS and the storage array are scheduling I/O. These schedulers commonly compete with each other and ultimately degrade performance. Because the storage array has the best view of what the storage is doing at any given time, enabling the noop scheduler on the host tells the OS to get out of the way and let the array handle all of the scheduling.

Following the same rationale, also for block devices within virtualization guests the noop I/O scheduler should be used.

To change the I/O scheduler for a specific block device, use:

echo [scheduler name] > /sys/block/[device]/queue/scheduler
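Settings written to sysfs do not survive a reboot. One common approach to make the choice persistent (an assumption here, not prescribed by this document) is a udev rule that applies the scheduler whenever a matching disk appears:

```
# Hypothetical udev rule, e.g. /etc/udev/rules.d/60-io-scheduler.rules:
# set the noop scheduler for all sd* disks when they are added.
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="noop"
```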

For more information, see the SUSE Linux Enterprise System Analysis and Tuning Guide.

3.3.3 NFS Tuning

On systems with a high NFS load, connections may block.

To work around such performance regressions with NFSv4, you could open more than one TCP connection to the same physical host. This could be accomplished with the following mount options:

To request that the transport is not shared, use:

mount -o sharetransport=N server:/path /mountpoint

where N is a unique number. If N is different for two mounts, they will not share the transport. If N is the same, they might (if they are for the same server, etc.).

3.4 Storage

3.4.1 Root File System Conversion to Btrfs Not Supported

In-place conversion of an existing Ext2/Ext3/Ext4 or ReiserFS file system to Btrfs is supported for data mount points, provided that it is not the root file system and the file system has at least 20 % free space available.

SUSE does not recommend or support in-place conversion of OS root file systems. In-place conversion to Btrfs of root file systems requires manual subvolume configuration and additional configuration changes that are not automatically applied for all use cases.

To ensure data integrity and the highest level of customer satisfaction, when upgrading, maintain existing root file systems. Alternatively, reinstall the entire operating system.

3.4.2 Safer Btrfs Balance Operation

First-time users of Btrfs often run the following command without any options:

btrfs balance start

This results in a long I/O-intensive operation that has a noticeable impact on system performance.

The btrfs balance command requires additional options to perform a full file system re-balance. For more information, see the man page btrfs-balance(8).

3.4.3 Loopback Mounting of Images with 4k blocksize

It is now possible to loopback mount images with 4k blocksize.

3.4.4 Ceph RADOS Block Device (RBD) Kernel Module

The Ceph RADOS Block Device (RBD) kernel module has been updated to include the following features:

  • Support for block discard requests, allowing for significantly improved space utilization. RBD can be notified of unused file system blocks via the fstrim command or otherwise. RBD is now capable of forwarding these requests to the underlying OSDs, to ensure that unused space is freed up within the Ceph RADOS pool. This feature is available for immediate use with existing RBD images and pools.

  • Prefix OSD writes with allocation hints in an effort to reduce fragmentation.

  • Enable message signatures by default for improved message integrity.

  • Support for erasure coded pools. Images residing on erasure coded RADOS pools can now be mapped locally with RBD.

3.4.5 Setting up kdump on Systems with Many Devices

Although the required crash kernel reservation size does not depend on the total installed RAM (contrary to popular belief), it does depend on the number of attached devices. A system connected to multiple SANs with hundreds of LUNs requires more RAM for this purpose than a system with a single internal SATA disk.

With SLE 12 SP1, it is possible to configure very large crash kernel reservations. The amount can be configured during installation or at any later time using the YaST Kdump module. See the System Analysis and Tuning Guide for more information on choosing a suitable value.
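Outside of YaST, the reservation ultimately takes the form of a crashkernel= kernel command-line parameter. As an illustrative example (the value and surrounding options are assumptions, not taken from this document), in /etc/default/grub:

```
# Hypothetical /etc/default/grub excerpt: reserve 512 MiB above 4 GiB for
# the crash kernel; the suitable value depends on the number of devices.
GRUB_CMDLINE_LINUX_DEFAULT="splash=silent quiet crashkernel=512M,high"
```

After editing, regenerate the boot configuration with grub2-mkconfig -o /boot/grub2/grub.cfg and reboot.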

3.4.6 Creating New Subvolumes Underneath the / Hierarchy

Creating a new subvolume underneath the / hierarchy after system installation and after the first snapshot of the / file system is supported. However, if the new subvolume is permanently mounted via /etc/fstab (that is, in a way that the new subvolume is also available in future snapshots), the snapshot in which the subvolume was created can no longer be deleted.

Example:

  • The root file system is mounted from /@/.snapshots/212/snapshot

  • A new subvolume is created as /mynewsubvol

  • This technically translates to /@/.snapshots/212/snapshot/mynewsubvol

  • The respective /etc/fstab entry looks like this: /dev/sda /mynewsubvol btrfs subvol=/@/.snapshots/212/snapshot/mynewsubvol 0 0

  • Removing Snapshot 212 will fail

Technical Reason: Subvolumes in Btrfs always need an origin in the filesystem tree, and this origin is the original point where the subvolume has been created, that is, literally the path. In our example:

/@/.snapshots/212/snapshot/mynewsubvol

A subvolume which should be permanently mounted via /etc/fstab should be created from an origin which is not a snapshot itself. This is why the /@/ subvolume has been created: It serves as an independent root for permanent subvolumes such as /var, /srv, etc.

3.4.7 LVM: Using lvmetad to Activate Volumes

With SLES 12 SP1, lvmetad is now enabled by default. With lvmetad, LVM commands work faster, because they do not need to scan the disks for metadata but can query lvmetad for the disk information instead.

3.4.8 YaST iSCSI Client Does Not Change Startup Mode of Already Connected Targets

Previously, there was no way to keep the startup mode ('automatic', 'manual', or 'on boot') of already connected targets: using either 'Discovery' on the 'Discovered Targets' screen or 'Add' on the 'Connected Targets' screen reset the startup mode to the default 'manual'.

It is now possible to detect additional targets using the 'Add' button on the 'Connected Targets' screen without changing the startup mode of the targets that are already connected.

3.4.9 pNFS Client Block Mode Support

SLE 12 already supported pNFS File and Objects models as a pNFS client, but not block mode.

The pNFS client included with SLE 12 SP1 now supports block mode.

Block-mode pNFS involves the NFS client accessing the data directly using iSCSI or fibre channel (or similar) rather than using NFS. NFS is used for metadata management, for creating and deleting files and for finding out where in the block devices the data is. Once the client knows where the data is and has permission from the server, it accesses the storage device directly.

The client needs to be running blkmapd, which uses device-mapper to stitch together various block devices into a form that the NFS client can use.

blkmapd can be started with:

systemctl enable nfs-blkmap

3.4.10 Configuring iSNS Server Address Required

If the target-isns package is installed, you can use YaST to configure the iSNS Server Address.

Alternately, edit /etc/target-isns.conf after installation: Set the parameter isns_server to the IP address of the server where an iSNS daemon is running.
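The document names the isns_server parameter; the key-value form and the address below are illustrative assumptions to be checked against the comments in the shipped configuration file:

```
# Hypothetical /etc/target-isns.conf excerpt: point target-isns at the
# host running the iSNS daemon (the address shown is illustrative).
isns_server = 192.168.122.10
```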

3.4.11 Multitier Block I/O Caching

While flash-based storage is fast, traditional rotational hard disks are slow but provide an excellent price per gigabyte. Multitier I/O caching means implementing a cache on the fast, but smaller and more expensive, flash-based storage to accelerate read and write operations.

SUSE Linux Enterprise Server implements two different ways of performing caching between flash and rotational devices:

  • bcache

  • dm-cache

Both caching solutions provide the following caching modes:

  • Write-through

  • Write-back

  • Pass-through/Write-around

dm-cache requires a backing device (HDD), a caching device (SSD, NVMe) and a metadata device. bcache only requires a backing device and a caching device.

Both systems only work on block devices. Thus, network filesystems like NFS cannot be used as backing stores.

3.5 Virtualization

3.5.1 Virtual Machine Driver Pack 2.3

SUSE Linux Enterprise Virtual Machine Driver Pack is a set of paravirtualized device drivers for Microsoft Windows operating systems. These drivers improve the performance of unmodified Windows guest operating systems that are run in virtual environments created using Xen or KVM hypervisors with SUSE Linux Enterprise Server 11 SP4 and SUSE Linux Enterprise Server 12 SP1. Paravirtualized device drivers are installed in virtual machine instances of operating systems and represent hardware and functionality similar to the underlying physical hardware used by the system virtualization software layer.

The new features of SUSE Linux Enterprise Virtual Machine Driver Pack 2.3 include:

  • Support for SUSE Linux Enterprise Server 12 SP1

  • Support for SUSE Linux Enterprise Server 11 SP4

  • Support for new Microsoft Windows releases

  • The drivers have been changed to the unified driver model first seen in Windows 2012r2. This makes the Xen to KVM transition much easier, as there is no need to change drivers.

  • Named pipes are now supported in guest agent.

  • Memory stats are now reported in the balloon driver (virtio).

  • Balloon driver fixes: Ballooning crash fix (Xen and virtio).

  • Hibernation fix (virtio).

  • Block driver fixes: Codepage 83 is now supported.

  • SCSI driver fixes: hibernation fix (virtio), blue screen due to pagefault fix (virtio).

  • Block/SCSI drivers: In the absence of a "real" serial number, "0" (character zero) is used rather than the previous eight space character string.

  • Xen ballooning fix: migrations now complete without freezing or crashing the VM.

  • Setup fixes: can now upgrade from VMDP 1.7 correctly. Setup now correctly identifies the running hypervisor when VMs are installed on SLES 12 SP1.

For more information on VMDP 2.3, see the official documentation.

3.5.2 KVM

3.5.2.1 RDMA-Based Live Guest Migration

RDMA-based (Remote Direct Memory Access) live guest migration helps make the migration of VM guests more deterministic under heavy load because it significantly reduces latency and increases throughput over TCP/IP.

3.5.2.2 Containment of SR-IOV Device Errors When VFs Are Assigned to Guests

If an SR-IOV device encounters an error and any of the VFs belonging to that device are assigned to guests, the affected guests will be brought down without impacting any other running guests or the host. The VM guests can be brought up again after the host driver for the SR-IOV device has recovered the device.

3.5.2.3 KVM: Supervisor Mode Access Prevention (SMAP)

SMAP prevents the kernel from accessing user space and misusing user space contents as trusted. This feature can be exposed to a KVM guest OS.

3.5.2.4 KVM: Make Guests NUMA Aware

The host NUMA topology can now be reflected and specified in the VM Guest OS using the numa element.

QEMU now supports pinning memory to host NUMA nodes. The existing options -mem-prealloc, -mem-path, -machine mem-merge, and -machine dump-guest-core are subsumed by the QOM objects memory-backend-ram and memory-backend-file, and by the memdev option of -numa node.
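As an illustrative sketch of the new syntax (a plausible invocation fragment, not taken from this document; verify against the QEMU documentation for your version), memory for one guest NUMA node could be bound to host node 0 like this:

```
# Hypothetical QEMU command-line fragment: back guest NUMA node 0 with
# 1 GiB of host memory bound to host NUMA node 0.
qemu-system-x86_64 ... \
  -object memory-backend-ram,id=ram-node0,size=1G,policy=bind,host-nodes=0 \
  -numa node,nodeid=0,cpus=0-1,memdev=ram-node0
```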

3.5.2.5 KVM: VM Guest OS with Memory Hotplugging Capability

QEMU now supports memory hotplugging using the new pc-dimm device and the QOM objects memory-backend-ram and memory-backend-file.

3.5.3 XEN

3.5.3.1 Enabling Indirect Descriptors in the blkfront Module

The blkfront driver originally shipped with SLES 12 and SLES 12 SP1 (Paravirtualization: xenblk, Hardware Virtual Machine: xen-vbd) does not enable indirect descriptors in environments not hosted on SUSE systems (such as those from Amazon, Oracle, etc.). This can lead to lower data bandwidth on single volumes in Xen.

To allow improving data bandwidth, an additional blkfront driver (Paravirtualization: xen-blkfront, Hardware Virtual Machine: xen-vbd-upstream) has been added to the packages kernel-xen and xen-default-kmp.

The original blkfront module has been preserved and will continue to be the default blkfront module for both Paravirtualization and Hardware Virtual Machine environments. modprobe rules can then be used to determine which blkfront driver is used.

3.5.3.2 GRUB Does Not Support vfb/vkbd Any More

The version of GRUB shipped with SLES 12 SP1 and SP2 does not support vfb/vkbd any more. This means that in Xen paravirtualized machines, there is no graphical display available while GRUB is active.

To be able to see and interact with GRUB, switch to the text-based Xen console: modify the kernel parameters of the PV guest by adding console=hvc0 xencons=tty, and connect with the libvirt toolstack command virsh console DOMAINNAME.

3.5.3.3 Xen: Supervisor Mode Access Prevention (SMAP)

SMAP prevents the kernel from accessing user space and misusing user space contents as trusted. This feature can be exposed to a Xen guest OS.

3.5.4 Containers

3.5.4.1 zypper-docker, the Updater for Docker Images

To discover whether a Docker container needed an update, manually running zypper lu inside it was required. After patching, the changes had to be committed to make them persistent, and the container had to be restarted. This was necessary for each container individually.

Use zypper-docker to list and apply updates to your Docker images. This ensures any container based on the given image will receive the updates.
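
A typical workflow might look like the following sketch; the image names are illustrative, and the subcommand names should be checked against the zypper-docker documentation on your system:

```shell
# List pending package updates for an image:
zypper-docker list-updates suse/sles12sp1:latest

# Build an updated image from it under a new tag:
zypper-docker update suse/sles12sp1:latest suse/sles12sp1:updated
```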

3.5.4.2 Docker Image: Support for SLES 12 SP1

The SLES 12 SP1 Docker image has been added to the Containers Module.

3.5.4.3 Docker Image: Support for SLES 11 SP4

The SLES 11 SP4 Docker image has been added to the Containers Module.

3.5.5 libvirt

3.5.5.1 SUSE Enterprise Storage (Powered by Ceph) Client

This update provides the functionality required for SUSE Linux Enterprise Server 12 to act as a client for SUSE Enterprise Storage.

qemu can now use storage provided by the SUSE Enterprise Storage Ceph cluster via the RADOS Block Device (rbd) back-end. Applications can now be enhanced to directly incorporate object or block storage backed by the SUSE Enterprise Storage cluster, by linking with the librados and librbd client libraries.

Also included is the rbd tool to manage RADOS block devices mapped via the rbd kernel module, for use as a standard generic block device.

3.5.5.2 Support for virtlockd in the libvirt Xen Driver

The libvirt Xen drivers integrate the support of virtlockd to provide locking of virtual machine resources such as disks. This prevents simultaneous use of resources by multiple virtual machines.

3.5.6 Others

3.5.6.1 virt-manager Update

virt-manager version 1.2.1 is included in SLE 12 SP1. Check virt-manager 1.2.1 NEWS (https://git.fedorahosted.org/cgit/virt-manager.git/tree/NEWS?id=v1.2.1) for more information.

3.5.6.2 Libguestfs: add Python bindings

Python bindings for libguestfs are now provided with the guestfs tools.

3.5.6.3 virt-manager: hide PCI/SR-IOV devices already claimed/used by VMs

virt-manager now hides PCI/SR-IOV devices that are unavailable for PCI passthrough/SR-IOV because other VMs already claim those devices.

3.5.6.4 virt-manager: Setting Up SR-IOV Devices During VM Guest OS Creation

virt-manager provides the ability to set up SR-IOV devices at VM Guest OS creation time.

4 AMD64/Intel64 64-Bit (x86_64) Specific Information

4.1 Virtualization

4.1.1 Inclusion of virt-top Tools

virt-top is a top-like utility for showing statistics for virtualized domains. Many keys and command line options are the same as for ordinary top.

5 POWER (ppc64le) Specific Information

5.1 Starting X After Upgrading to SLES 12 SP1

On SLES 12 on the POWER architecture, the display manager is configured not to start a local X server by default. On SLES 12 SP1 installations on this architecture, the default setting has been changed: The display manager now starts an X server.

To avoid problems during upgrade, the SLES 12 setting is not changed automatically. If you want the display manager to start an X server after the upgrade, open /etc/sysconfig/displaymanager and edit:

DISPLAYMANAGER_STARTS_XSERVER="no"

to read:

DISPLAYMANAGER_STARTS_XSERVER="yes"
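
The same change can be made non-interactively with sed. The sketch below operates on a temporary copy so it is self-contained; on a real system, run the sed command against /etc/sysconfig/displaymanager as root:

```shell
# Demonstrate the edit on a temporary copy of the sysconfig file:
f=$(mktemp)
echo 'DISPLAYMANAGER_STARTS_XSERVER="no"' > "$f"
sed -i 's/^DISPLAYMANAGER_STARTS_XSERVER=.*/DISPLAYMANAGER_STARTS_XSERVER="yes"/' "$f"
result=$(cat "$f")
echo "$result"
rm -f "$f"
```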

Despite the default setting, due to a bug, GDM started an X server on SLES 12 (https://bugzilla.suse.com/show_bug.cgi?id=919723). This problem has been fixed with a maintenance update for SLES 12 and SLES 12 SP1.

5.2 SystemTap Probes Support on ppc64le

With SLES 12 SP1, the SystemTap tool can now probe symbols on the ppc64le platform.

5.3 Virtual Ethernet: Large Send / Receive Offload for ibmveth

SLES 12 SP1 contains an update to the ibmveth driver that enables GRO (Generic Receive Offload) and TSO (TCP Segmentation Offload) for both IPv4 and IPv6.

This feature allows Linux to send large packets, resulting in a performance improvement.

5.4 Container support for Docker on IBM Power

Provides the infrastructure and tool set, built with gccgo, to manage and deploy applications based on Docker images.

5.5 vmalloc address translation support in makedumpfile for ppc64le arch

The makedumpfile tool supports filtering out sensitive data (for example, security keys and confidential data) from a vmcore file through its '--eppic' and '--config' options. Using these options requires vmalloc address translation support. This feature adds vmalloc address translation support to the makedumpfile tool on ppc64le, helping to filter sensitive kernel data out of a vmcore file before passing it on.

5.6 YaST Support to Configure Firmware-assisted Dump for ppc64le

Earlier, manual steps were needed to configure fadump. This feature adds support for configuring firmware-assisted dump to yast2-kdump.

In the YaST kdump dialog, selecting "Enable Kdump" and "Use Firmware-Assisted Dump" configures the system with fadump.
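
The resulting setting can also be inspected in /etc/sysconfig/kdump. The variable name below is the one used by the SUSE kdump package; verify it on your system:

```
# /etc/sysconfig/kdump (excerpt)
KDUMP_FADUMP="yes"
```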

6 System z (s390x) Specific Information

For more information, see http://www.ibm.com/developerworks/linux/linux390/documentation_novell_suse.html

IBM zEnterprise 196 (z196) and IBM zEnterprise 114 (z114) are referred to below as z196 and z114.

6.1 Hardware

6.1.1 zEDC Compression

zEDC compression can be used via the GenWQE (Generic Workqueue Engine) device driver. The underlying GenWQE hardware is a PCIe card that performs zlib-style compression and decompression according to RFC 1950, RFC 1951, and RFC 1952.

6.1.2 snIPL interface to control dynamic CPU capacity

Remote control of the capacity of target systems in HA setups allows maintaining bandwidth during failure situations and removes the need to keep unused capacity activated during normal operation.

6.1.3 PCI infrastructure enablement for IBM System z

This feature provides prerequisites for the System z specific PCI support.

6.1.4 SE/HMC file access

Files on the USB/DVD drive of the SE/HMC of z Systems hardware can now be accessed.

6.1.5 10GbE RoCE Express Feature

SLES 12 SP1 supports the 10GbE RoCE Express feature on zEC12, zBC12 and IBM z13 via the Ethernet device using TCP/IP traffic without restrictions. Before using this feature on an IBM z13, make sure that the minimum required service is applied: z/VM APAR UM34525 and HW ycode N98778.057 (bundle 14).

SLES 12 SP1 includes RDMA enablement and DAPL/OFED for s390x as a technology preview, but these can only be used on LPAR when running on an IBM zEC12 or zBC12. They cannot be used on an IBM z13.

6.1.6 SMT Base Support for Linux on z Systems

With SLES 12 SP1, Linux on z Systems includes support for SMT (Simultaneous Multithreading), which is available with IBM z13 or later hardware.

6.1.7 New cputype Command

Often it is difficult to correlate the commonly used name of the processor with the IBM model number for that processor.

The cputype command is now included in the s390-tools package. The cputype command prints both the IBM model number as well as the more commonly used processor name.

6.2 Virtualization

6.2.1 At the End of the Installation of SLES 12 SP1 in zKVM, Do Not Allow the System to Reboot

At the end of the installation of SLES 12 SP1, you will be asked to reboot the system. However, on zKVM, immediately rebooting means that the installer will start anew.

Instead of rebooting the system, make sure that the system shuts down. You can then adapt your VM configuration for the changed kernel/initrd parameters. After that, you can reboot into the newly installed system as usual.

6.2.2 qclib: Improved Capacity Query for LPARs, z/VM, and KVM

This feature provides a library to determine the capacity of an environment for accounting purposes. Applications can query system and capacity details regarding the system, LPAR, z/VM, and KVM environments in terms of available CPUs (shares).

If you are using z/VM 6.3, we strongly suggest installing the APAR VM65419 (or higher) to improve the output of qclib.

6.2.3 vhostmd/vm-dump-metrics

Improved performance handling by querying host performance metrics from within a VM is now enabled via vm-dump-metrics, which is available in the vhostmd package.

6.2.4 Container support for Docker on IBM z Systems

Provides the infrastructure and tool set, built with gccgo, to manage and deploy applications based on Docker images.

6.2.5 Support of Long Names for Linux Guests

The limitation of eight characters for Linux guest names has been removed; long names with up to 255 characters can now be reflected in /proc/sysinfo.

6.3 Storage

6.3.1 Reintroduce selected zfcp unit sysfs attributes for scsi devices

This makes it possible to manually trigger LUN recovery and to export debug data for zfcp-attached SCSI devices that were attached by automatic LUN scan.

6.3.2 Enforcing Partitioned DASDs for LVM

Because the first sector of a DASD has a special format, it is not possible to use unpartitioned DASDs for LVM. In YaST, it was possible to select an unpartitioned DASD for LVM in the Expert Partitioner, but creating the physical volume failed later on.

YaST now disallows selecting an unpartitioned DASD for LVM.

6.4 Network

6.4.1 qeth: Accurate ethtool Output

Provides improved monitoring and service via a more timely and accurate display of settings and values in ethtool when running on hardware that supports the improved query of network cards.

6.4.2 Query OSA Address Table

Provides infrastructure to gather and display OSA and TCP/IP configuration information via the "OSA Query Address Table" hardware function, easing the administration of OSA and TCP/IP configuration information.

6.5 Security

6.5.1 Support of the CEX5S crypto card and more than 16 crypto domains

The cryptographic device driver includes support for the CEX5S crypto card introduced with the IBM z13. The number of supported crypto domains has been extended from 16 to up to 256.

6.5.2 In-kernel crypto: DRBG support

The generation of pseudo-random numbers includes enhanced support for the Deterministic Random Bit Generator (DRBG) according to the updated security specification NIST SP 800-90A.

6.6 Reliability, Availability, Serviceability (RAS)

6.6.1 Linux support for concurrent Flash MCL updates

Enables applying concurrent hardware microcode level upgrades (MCL) without impacting I/O operations to the Flash storage media, and notifies users of the changed Flash hardware service level.

6.6.2 Enhanced watchdog support for KVM

Linux on z Systems now includes DIAG288 watchdog support, which provides improved high availability and RAS characteristics for Linux KVM guests.

6.6.3 Improve zfcp auto port scan resiliency

The Fibre Channel port scan behavior of the zfcp device driver has been improved to avoid excessive scanning and scan bursts. Scan bursts can occur in large virtual server environments.

6.6.4 OSA-Express5s Cards Support in qethqoat

Support for OSA-Express5s cards has been added to the qethqoat tool, which is part of the s390-tools package. This enhancement extends the serviceability of network and card setups for OSA-Express5s cards.

6.7 Performance

6.7.1 Hot-patching Support for Linux on System z Binaries

Hot-patch support in gcc implements online patching of multi-threaded code for Linux on System z binaries. Specific functions can be selected for hot-patching using a function attribute, or hot-patching can be enabled for all functions via the command-line option -mhotpatch. Because enabling hot-patching has a negative impact on software size and performance, it is recommended to use hot-patching for specific functions only and not to enable hot-patch support in general.

For online documentation, see http://gcc.gnu.org/onlinedocs/gcc/.

6.7.2 Software reference bit handling for memory operations

Memory handling on z Systems is now based on software reference bit handling, which replaces the z-specific "storage key operations". This provides enhanced commonality with other platforms and improved performance, especially for Linux running as a KVM guest.

6.7.3 SIMD instruction support in Linux kernel, dump tools, toolchain and LLVM

The Linux kernel now provides support for user space applications to use SIMD (Single Instruction Multiple Data) instructions that operate on vector registers and are available with IBM z13 and later hardware.

Vector registers are now part of kernel dumps. The crash tool is extended to display the vector registers of the kernel dump and the zgetdump tool is enhanced for converting dump formats.

For gcc support, the gcc 5.2 add-on is required. The add-on is part of the Toolchain Module.

SIMD support has been added to LLVM, resulting in improved performance of the software-emulated 3D graphics stack (mesa / llvmpipe).

6.7.4 SMT support in stand alone dump and dump tools

The stand-alone dumper and dump tools include support for SMT, which is available with the IBM z13 and later hardware.

6.7.5 IBM z13 instruction support in toolchain

The Linux toolchain (gcc, binutils, and gdb) now supports the hardware instructions that were introduced with the IBM z13. With these instructions, full hardware functionality and improved performance are available. For gcc support, the gcc 5.2 add-on is required. The add-on is part of the Toolchain Module.

6.7.6 Utilizing the SIMD Vector Instructions introduced with the IBM z Systems z13

User space applications can now utilize the SIMD vector instructions introduced with the IBM z Systems z13.

6.7.7 GDB Transactional Diagnostic Block support

GDB now displays the program exception TDB (Transactional Diagnostic Block) from a core file, as well as during a debug session, when debugging code that uses transactions.

6.7.8 The perf Program to Capture Detailed Performance Data

Performance issues sometimes require more details about CPU cycles used by workloads, shared libraries, kernel and device drivers.

With support for the CPU-measurement sampling facility the perf program can be used to capture more detailed performance data. It includes the following key features to sample workloads:

  • Sampling of CPU cycles

  • Basic sampling - snapshot of various PSW bits and instruction address at specific time interval

  • Support for raw sample data - sampling data is made available to the perf program and can be post-processed by external applications

  • Support for diagnostic sampling - provides a snapshot of hardware-model dependent information
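
A typical sampling session with perf (requires appropriate privileges) might look like this:

```shell
# Sample CPU cycles system-wide for ten seconds, then view the report:
perf record -e cycles -a -- sleep 10
perf report
```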

6.8 Miscellaneous

6.8.1 Architecture Level Set

Newer machines have new instructions and better instruction schedulers.

The default for the system compiler is to generate code for z196 / z114 and do the scheduling for zEC12.

7 Driver Updates

7.1 Other Drivers

7.1.1 Support for New Intel Processors

This Service Pack adds support for the following Intel processors:

  • Intel® Xeon® processor E3 v4 product family

  • Intel® Xeon® processor E5 v4 product family

  • Intel® Xeon® processor E7 v4 product family

  • Intel® Xeon® Processor D-1500 Family

  • 6th Generation Intel® Core™ processor family

8 Packages and Functionality Changes

This section comprises changes to packages, such as additions, updates, removals and changes to the package layout of software. It also contains information about modules available for SUSE Linux Enterprise Server. For information about changes to package management tools, such as Zypper or RPM, see Section 3.2, “Systems Management”.

8.1 New Packages

8.1.1 OpenJDK 8 was added to SLE12 SP1

SLE 12 only supported version 7 of OpenJDK. There was no solution for customers needing a higher version of Java.

In SLE 12 SP1, OpenJDK 8 was added as an alternative Java version. OpenJDK 7 remains available additionally. To choose the right version for your use case, use update-alternatives.
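
Switching between the installed Java versions can be sketched as follows (run as root):

```shell
# Show the available Java alternatives and select one interactively:
update-alternatives --config java

# Or list the registered alternatives non-interactively:
update-alternatives --list java
```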

8.1.2 Filtering the systemd Journal with the YaST2 Journal Module

Since SLE 12 SP1, YaST includes a new journal module, which enables users and system administrators to take advantage of the advanced filtering capabilities of the systemd journal.

The new module displays the log entries in a table, with a search box providing grep-like live searching. In addition, it allows filtering the entries in the list by date and time, unit, file, or priority.

In short, the module offers all the advantages of the old (and still present) log viewer, with some extra systemd-powered goodies.

8.1.3 pax Binary Replaced with spax from the star Package

The original provider of the pax binary was the package pax. However, this binary was not LSB-compatible and is not maintained upstream.

In SLE 12 SP1, pax was replaced with spax (from the package star). For backwards compatibility, this package also provides a symbolic link for pax.

The new command spax provides many of the same options as the former pax. However, the following options are not supported: -0, -B, -D, -E, -G, -O, -P, -T, -U, -Y, -Z. spax does not provide options beyond those offered by pax.

Additionally, the formats supported by the option -x have changed:

  • Formats supported by pax -x were: bcpio, cpio, sv4cpio, sv4crc, tar, ustar.

  • Current spax -x formats are: v7tar, tar, star, gnutar, ustar, xstar, xustar, exustar, pax, suntar, bin, cpio, odc, asc, crc.

It is also possible that other options have slightly different behavior than you are used to. For more information, see the spax manual pages.
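
For common archiving tasks, spax accepts the familiar pax-style invocation. The paths below are illustrative:

```shell
# Write a directory tree to a ustar-format archive:
spax -w -x ustar -f /tmp/backup.tar /etc/sysconfig

# List the archive's contents:
spax -f /tmp/backup.tar
```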

8.1.4 Additional FreeRADIUS Packages

SLE 12 GA shipped with a freeradius version that was stripped down to its core, removing certain functionality that was available in SLE 11.

SLE 12 now ships with additional packages that fix these feature omissions:

  • freeradius-server-python, freeradius-server-perl: Allow writing FreeRadius modules in Python/Perl.

  • freeradius-server-krb5, freeradius-server-ldap: Modules for authentication via Kerberos 5 and LDAP.

  • freeradius-server-mysql, freeradius-server-postgresql, freeradius-server-sqlite: Database backend modules for the frontend SQL module (rlm_sql) in the corresponding SQL databases.

8.1.5 Support for Shibboleth Added

The packages necessary to set up Shibboleth authentication were added.

8.2 Updated Packages

8.2.1 Qt 5 Has Been Updated to 5.5.1

The Qt 5 libraries were updated to 5.5.1. Qt 5.5.1 includes new features and security fixes for known vulnerabilities over Qt 5.3.2 (the version initially shipped in SLE 12).

Among other security fixes, the new version includes a fix for the Qt WebEngine's Weak Diffie-Hellman vulnerability (CVE-2015-4000).

New features include:

  • Update of Qt WebEngine, which updates the included Chromium snapshot to version 40

  • New modules to extend 3D APIs (Qt Canvas 3D and Qt 3D)

  • Improvements in the QML engine which is the basis of Qt Quick

  • Improvements in the Qt Multimedia module

  • Many other features and bugfixes

8.2.2 Puppet Has Been Updated from 3.6.2 to 3.8.5

Puppet has been updated from 3.6.2 to 3.8.5. All releases between these two versions should only bring Puppet 3 backward-compatible features and bug and security fixes.

For more information, read the release notes of the intermediate Puppet versions.

In particular, you should pay attention to the upgrade notes and warnings they contain.

8.2.3 Tar: Extended Attributes

The tar version in SLES and SLED 12 (SP0) was not handling extended attributes properly.

A maintenance update for tar fixes this issue. This update introduces new package dependencies:

  • libacl1

  • libselinux1

Both of these packages are already required by other core packages in a SLE installation.

8.2.4 Wireshark Updated to 1.12.x

The Wireshark 1.10.x series of releases was discontinued upstream and no longer receives security updates or bug fixes.

Wireshark was updated to the 1.12.x series of releases, providing fixes for security issues as well as new and updated protocol support and dissectors.

8.2.5 KSH 93v Replaced with KSH 93u

In the Legacy Module for SUSE Linux Enterprise 12, we shipped KSH 93v. However, the 93v branch was not fully stable yet.

With SLE 12 SP1, we release KSH 93u, which is more stable than version 93v. In order to provide a regular update path from 93v to 93u, a higher version number (93vu) has been used for this update.

8.2.6 Upgrading PostgreSQL Installations from 9.1 to 9.4

To upgrade a PostgreSQL server installation from version 9.1 to 9.4, the database files need to be converted to the new version.

Note: System Upgrade from SLE 11

On SLE 12, there are no PostgreSQL 8.4 or 9.1 packages. This means that you must first migrate PostgreSQL from 8.4 or 9.1 to 9.4 on SLE 11 before upgrading the system from SLE 11 to SLE 12.

Newer versions of PostgreSQL come with the pg_upgrade tool, which simplifies and speeds up the migration of a PostgreSQL installation to a new version. Formerly, it was necessary to dump and restore the database files, which was much slower.

To work, pg_upgrade needs to have the server binaries of both versions available. To allow this, we had to change the way PostgreSQL is packaged as well as the naming of the packages, so that two or more versions of PostgreSQL can be installed in parallel.

Starting with version 9.1, PostgreSQL package names on SUSE Linux Enterprise products contain numbers indicating the major version. In PostgreSQL terms, the major version consists of the first two components of the version number, for example, 9.1, 9.3, and 9.4. So, the packages for PostgreSQL 9.3 are named postgresql93, postgresql93-server, etc. Inside the packages, the files were moved from their standard location to a versioned location such as /usr/lib/postgresql93/bin or /usr/lib/postgresql94/bin. This avoids file conflicts if multiple packages are installed in parallel.

The update-alternatives mechanism creates and maintains symbolic links that cause one version (by default the highest installed version) to re-appear in the standard locations. By default, database data is stored under /var/lib/pgsql/data on SUSE Linux Enterprise.
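
The parallel layout can be inspected from the shell; the paths follow the scheme described above, and the exact set of installed versions will differ per system:

```shell
# Versioned locations allow parallel installation:
ls /usr/lib/postgresql93/bin /usr/lib/postgresql94/bin

# The standard locations are symbolic links managed by update-alternatives:
ls -l /usr/bin/psql
```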

The following preconditions have to be fulfilled before data migration can be started:

  1. If not already done, the packages of the old PostgreSQL version (9.3) must be upgraded to the latest release through a maintenance update.

  2. The packages of the new PostgreSQL major version need to be installed. For SLE 12, this means installing postgresql94-server and all the packages it depends on. Because pg_upgrade is contained in the package postgresql94-contrib, this package must be installed as well, at least until the migration is done.

  3. Unless pg_upgrade is used in link mode, the server must have enough free disk space to temporarily hold a copy of the database files. If the database instance was installed in the default location, the needed space in megabytes can be determined by running the following command as root: du -hs /var/lib/pgsql/data. If space is tight, it might help to run the VACUUM FULL SQL command on each database in the PostgreSQL instance to be migrated which might take very long.
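
With the preconditions in place, a migration run can be sketched as follows. This is an outline only, assuming an upgrade from 9.3 to 9.4 with the SUSE default directories described above; consult the pg_upgrade documentation before running it:

```shell
# Run as the postgres user, with both server versions installed:
mv /var/lib/pgsql/data /var/lib/pgsql/data.old
/usr/lib/postgresql94/bin/initdb -D /var/lib/pgsql/data
pg_upgrade \
  --old-bindir=/usr/lib/postgresql93/bin \
  --new-bindir=/usr/lib/postgresql94/bin \
  --old-datadir=/var/lib/pgsql/data.old \
  --new-datadir=/var/lib/pgsql/data
```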

Upstream documentation about pg_upgrade, including step-by-step instructions for performing a database migration, can be found under file:///usr/share/doc/packages/postgresql94/html/pgupgrade.html (if the postgresql94-docs package is installed), or online under http://www.postgresql.org/docs/9.4/static/pgupgrade.html. NOTE: The online documentation explains how you can install PostgreSQL from the upstream sources (which is not necessary on SLE) and also uses other directory names (/usr/local instead of the update-alternatives-based path described above).

For background information about the inner workings of pg_upgrade and a performance comparison with the old dump and restore method, see http://momjian.us/main/writings/pgsql/pg_upgrade.pdf.

8.2.7 ntp 4.2.8

ntp was updated to version 4.2.8.

  • The NTP server ntpd no longer synchronizes with its peers when the peers are specified by their host names in /etc/ntp.conf.

  • The output of ntpq --peers lists the IP addresses of the remote servers instead of their host names.

Name resolution for the affected hosts works otherwise.

If these symptoms occur, configure ntpd not to run in chroot mode by setting

NTPD_RUN_CHROOTED="no"

in /etc/sysconfig/ntp. Then restart the service with:

systemctl restart ntpd

Due to the architecture of ntpd, it does not start reliably in a chroot environment. Furthermore, the daemon drops all capabilities except the one needed to open sockets on reserved ports, so a chroot is not required. If policy requirements mandate additional confinement, AppArmor can be used to further limit what the process can do.

Additional Information

The meaning of some parameters has changed, for example sntp -s is now sntp -S.

After having been deprecated for several years, ntpdc is now disabled by default for security reasons. It can be re-enabled by adding the line enable mode7 to /etc/ntp.conf, but preferably ntpq should be used instead.

8.2.8 Dependency on libHBAAPI Removed from fcoe-utils

The package fcoe-utils used to depend on the back-end libraries libHBAAPI and libhbalinux. The sole purpose of these two libraries is reading informational and statistical data from Sysfs.

The commands fcoeadm and fcping from the package fcoe-utils have been rewritten to directly read the needed information from Sysfs without a third-party back-end library.

8.2.9 MariaDB Packaging Improvements

New Helper Script and systemd Unit Files

SLE 12 SP1 ships with a new /usr/lib/mysql/mysql-systemd-helper script and the following native systemd unit files:

  • mysql.service

  • mysql.target

  • mysql@.service

The reason was to replace the rc.mysql-multi init script with native systemd services that call mysql-systemd-helper.

Restart on Failure

Within the unit files, the option Restart=on-failure is set. This means that the MariaDB instance is automatically restarted on failure.

Managing Services Using systemd

Instead of chkconfig mysql on (etc.), use native systemd commands now, for example:

systemctl enable mysql

Multiple Instances of MariaDB

After setting multiple instances in /etc/my.cnf, they can be managed with:

mysqld_multi

For more information, see mysqld_multi --help (same as SP0).
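
A minimal multi-instance layout in /etc/my.cnf might look like the following sketch; the group names follow the mysqld_multi convention, and all paths and ports are illustrative:

```
[mysqld_multi]
mysqld     = /usr/bin/mysqld_safe
mysqladmin = /usr/bin/mysqladmin

[mysqld1]
datadir = /var/lib/mysql-one
socket  = /var/run/mysql/mysql-one.sock
port    = 3306

[mysqld2]
datadir = /var/lib/mysql-two
socket  = /var/run/mysql/mysql-two.sock
port    = 3307
```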

8.2.10 Tomcat Updated to Version 8

In Service Pack 1 of SUSE Linux Enterprise 12, Tomcat 7 was replaced by Tomcat 8 to allow access to newer features.

8.2.11 Python Was Updated to Version 2.7.9

The Python script interpreter was updated to version 2.7.9. A key feature is the improved SSL module which can better check X.509 certificates used in TLS/SSL communication.

If certificate validation is enabled, the Python SSL module will no longer work with TLS/SSL installations that rely on self-signed certificates or are set up improperly.

For compatibility reasons, TLS/SSL certificate validation remains disabled by default.

8.2.12 New Package: IBM Java 8

Starting with SLES 12 SP1, IBM Java 8 is available for SLES 12.

For documentation and features see: http://www-01.ibm.com/support/docview.wss?uid=swg21696670 (http://www-01.ibm.com/support/docview.wss?uid=swg21696670)

8.3 Removed and Deprecated Functionality

8.3.1 Docker Compose Has Been Removed from the Containers Module

Docker Compose is not supported as a part of SUSE Linux Enterprise Server 12. While it was temporarily included as a Technology Preview, testing showed that the technology was not ready for enterprise use.

SUSE's focus is on Kubernetes which provides better value in terms of features, extensibility, stability and performance.

8.3.2 Packages Removed with SUSE Linux Enterprise Server 12

The following packages were removed with the major release of SUSE Linux Enterprise Server 12:

8.3.2.1 Nagios Server Now Part of a SUSE Manager Subscription

Support for Icinga (a successor of Nagios) will not be part of the SUSE Linux Enterprise Server 12 subscription.

Fully supported Icinga packages for SUSE Linux Enterprise Server 12 will be available as part of a SUSE Manager subscription. In the SUSE Manager context we will be able to deliver better integration into the monitoring frameworks.

More frequent updates on the monitoring server parts than in the past are planned.

8.3.2.2 YaST Modules Dropped Starting with SUSE Linux Enterprise 12

The following YaST modules or obsolete features of modules are not available in the SUSE Linux Enterprise 12 code base anymore:

  • yast2-phone-services

  • yast2-repair

  • yast2-network: DSL configuration

  • yast2-network: ISDN configuration

  • yast2-network: modem support

  • yast2-backup and yast2-restore

  • yast2-apparmor: incident reporting tools

  • yast2-apparmor: profile generating tools

  • yast2-*creator (moved to SDK)

  • YaST installation into directory

  • yast2-x11

  • yast2-mouse

  • yast2-irda (IrDA)

  • YaST Boot and Installation server modules

  • yast2-fingerprint-reader

  • yast2-profile-manager

8.3.3 Packages Removed with SUSE Linux Enterprise Server 12 SP1

The following packages were removed with the release of SUSE Linux Enterprise Server 12 SP1:

8.3.3.1 Xen: blktap Superseded by blktap2

The blktap driver is no longer maintained upstream and is thus also no longer supported on SLE 12 SP1.

You should now use the blktap2 driver.

8.3.3.2 wpa_supplicant Replaces xsupplicant

In SUSE Linux Enterprise 12 SP1 and 12 SP2, xsupplicant was removed entirely.

For pre-authentication of systems via the network (including RADIUS), and specifically for wireless connections, install the wpa_supplicant package. wpa_supplicant replaces xsupplicant and provides better stability, better security, and a broader range of authentication options.

8.3.4 Packages and Features to Be Removed in the Future

8.3.4.1 Deprecate DMSVSMA for snIPL for SLES 12 SP1

The RPC protocol is used only with old z/VM versions, which are going out of service.

Support for remote access via the RPC protocol for snIPL to z/VM hosts is deprecated starting with SLES 12 SP1. It is recommended to use remote access to z/VM hosts via SMAPI, as provided by the supported z/VM 5.4 and z/VM 6.x versions. For details about setting up your z/VM system for API access, see z/VM Systems Management Application Programming, SC24-6234.

8.4 Changes in Packaging and Delivery

8.4.1 Graphviz PDF output Requires X11

To be able to create PDFs, graphviz needs the package graphviz-gnome. Therefore, graphviz previously required that package unconditionally. This is unwanted on systems where X11 is not installed.

The graphviz package now only pulls in graphviz-gnome if X11 is installed.

This means that on systems without X11, the graphviz tools cannot create PDFs.

8.5 Modules

  Module Name                           Content                                          Life Cycle
  Web and Scripting Module              PHP, Python, Ruby on Rails                       3 years, ~18 months overlap
  Legacy Module                         Sendmail, old Java, …                            3 years
  Public Cloud Module                   Public cloud initialization code and tools       Continuous integration
  Toolchain Module                      GCC                                              Yearly delivery
  Advanced Systems Management Module    cfengine, puppet and the new "machinery" tool    Continuous integration

For more information about the life cycle of packages contained in modules, see https://scc.suse.com/docs/lifecycle/sle/12/modules.

8.5.1 Available Extensions and Modules

  • SUSE Linux Enterprise High Availability Extension 12 SP1 x86_64

  • SUSE Linux Enterprise High Availability GEO Extension 12 SP1 x86_64

  • SUSE Linux Enterprise Workstation Extension 12 SP1 x86_64

  • SUSE Linux Enterprise Software Development Kit 12 SP1 x86_64

  • Advanced Systems Management Module 12 x86_64

  • Containers Module 12 x86_64

  • Legacy Module 12 x86_64

  • Public Cloud Module 12 x86_64

  • Toolchain Module 12 x86_64

  • Web and Scripting Module 12 x86_64

For more information about Extensions and Modules, see the product documentation and SUSE Linux Enterprise Server 12 Modules (https://www.suse.com/docrep/documents/huz0a6bf9a/suse_linux_enterprise_server_12_modules_white_paper.pdf).

9 Technical Information

This section contains information about system limits and a number of technical changes and enhancements for the experienced user.

When talking about CPUs, we use the following terminology:

CPU Socket

The visible physical entity, as it is typically mounted to a motherboard or an equivalent.

CPU Core

The (usually not visible) physical entity as reported by the CPU vendor.

On System z this is equivalent to an IFL.

Logical CPU

This is what the Linux Kernel recognizes as a "CPU".

We avoid the word "thread" (which is sometimes used), as it would become ambiguous in the discussion below.

Virtual CPU

A logical CPU as seen from within a Virtual Machine.

9.1 Virtualization: Network Devices Supported

SLES 12 supports the following virtualized network drivers:

  • Full virtualization: Intel e1000

  • Full virtualization: Realtek 8139

  • Paravirtualized: QEMU Virtualized NIC Card (virtio, KVM only)

9.2 Virtualization: Devices Supported for Booting

SLES 12 supports booting VM guests from:

  • Parallel ATA (PATA/IDE)

  • Advanced Host Controller Interface (AHCI)

  • Floppy Disk Drive (FDD)

  • virtio-blk

  • virtio-scsi

  • Preboot eXecution Environment (PXE) ROMs (for supported Network Interface Cards)

Booting from USB and PCI pass-through devices is not supported.

9.3 Virtualization: Supported Disk Formats and Protocols

The following disk formats support read-write access (RW):

  • raw

  • qed (KVM only)

  • qcow2

The following disk formats support read-only access (RO):

  • vmdk

  • vpc

  • vhd / vhdx

The following protocols can be used for read-only access (RO) to images:

  • http, https

  • ftp, ftps, tftp

When using Xen, the qed format is not displayed as a selectable storage format in virt-manager.

Note
Note: Parameter Unprivileged SG_IO (unpriv_sgio) Is Not Supported

The parameter for unprivileged SG_IO (unpriv_sgio) depends on non-standard kernel patches that are not included in the SLES 12 kernel. Trying to attach a disk using this parameter will result in an error.

9.4 Kernel Limits

http://www.suse.com/products/server/technical-information/#Kernel

This table summarizes the various limits that exist in the kernel and related utilities for SUSE Linux Enterprise Server 12.

| SLES 12 (kernel 3.12)             | x86_64            | s390x            | ppc64le        |
|-----------------------------------|-------------------|------------------|----------------|
| CPU bits                          | 64                | 64               | 64             |
| max. # Logical CPUs               | 8192              | 256              | 2048           |
| max. RAM (theoretical / certified)| > 1 PiB / 64 TiB  | 10 TiB / 256 GiB | 1 PiB / 64 TiB |
| max. user-/kernelspace            | 128 TiB / 128 TiB | φ / φ            | 2 TiB / 2 EiB  |
| max. swap space                   | up to 29 * 64 GB (x86_64) or 30 * 64 GB (other architectures) |
| max. # processes                  | 1048576           |
| max. # threads per process        | depends on memory and other parameters (tested with more than 120000) |
| max. size per block device        | up to 8 EiB on all 64-bit architectures |
| FD_SETSIZE                        | 1024              |
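A few of the limits above can be inspected on a running system. This is a sketch using `getconf` and standard procfs entries; the values shown depend on the system:

```shell
# Inspect some kernel limits on a live system (output varies):
getconf PAGE_SIZE                  # base page size, e.g. 4096 on x86_64
cat /proc/sys/kernel/pid_max       # ceiling on process IDs
cat /proc/sys/kernel/threads-max   # system-wide thread limit
```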

9.5 KVM Limits

SLES 12 GA Virtual Machine (VM) Limits

| Maximum VMs per host                 | unlimited (as long as the total number of virtual CPUs across all guests is no greater than 8 times the number of CPU cores in the host) |
| Maximum virtual CPUs per VM          | 160 |
| Maximum memory per VM                | 4 TiB |
| Maximum virtual block devices per VM | 20 virtio-blk, 4 IDE |
| Maximum network cards per VM         | 8 |

Virtual Host Server (VHS) limits are identical to those of SUSE Linux Enterprise Server.
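The overcommit rule above can be turned into a quick calculation. Note that `nproc` counts logical CPUs rather than CPU cores, so on SMT systems this sketch overestimates the supported budget:

```shell
# Approximate the supported total vCPU budget across all guests:
# 8 virtual CPUs per host CPU core (nproc counts logical CPUs,
# so this is an upper-bound illustration only).
cores=$(nproc)
budget=$((cores * 8))
echo "total vCPU budget: $budget"
```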

9.6 Xen Limits

As of SUSE Linux Enterprise Server 11 SP2, the 32-bit hypervisor is no longer shipped as a virtualization host. 32-bit virtual guests are not affected and are fully supported with the provided 64-bit hypervisor.

SLES 12 GA Virtual Machine (VM) Limits

| Maximum VMs per host                 | 64 |
| Maximum virtual CPUs per VM          | 64 |
| Maximum memory per VM                | 16 GiB (x86_32), 511 GiB (x86_64) |
| Maximum virtual block devices per VM | 100 PV, 100 FV with PV drivers, 4 FV (emulated IDE) |

SLES 12 GA Virtual Host Server (VHS) Limits

| Maximum physical CPUs        | 256 |
| Maximum virtual CPUs         | 256 |
| Maximum physical memory      | 5 TiB |
| Maximum Dom0 physical memory | 500 GiB |
| Maximum block devices        | 12,000 SCSI logical units |
| Maximum iSCSI devices        | 128 |
| Maximum network cards        | 8 |
| Maximum VMs per CPU core     | 8 |
| Maximum VMs per VHS          | 64 |
| Maximum virtual network cards| 64 (across all VMs in the system) |

In Xen 4.4, the hypervisor bundled with SUSE Linux Enterprise Server 12, Dom0 can see and handle a maximum of 512 logical CPUs. The hypervisor itself, however, can only access up to 256 logical CPUs and schedule those for the VMs.

For more information about the following acronyms, refer to the official Virtualization Documentation.

  • PV: Para Virtualization

  • FV: Full Virtualization

9.7 File Systems

https://www.suse.com/products/server/technical-information/#FileSystem

9.7.1 Protection of Hard Links and Symbolic Links Enabled by Default

A long-standing class of security issues is the time-of-check/time-of-use race on symbolic links and hard links, most commonly seen in world-writable directories like /tmp. The common method of exploiting this flaw is to cross privilege boundaries when a given symbolic link or hard link is followed, that is, to have a root process follow a link created by another user.

Additionally, on systems without separated partitions, unauthorized users could "pin" vulnerable setuid/setgid files against being upgraded by the administrator by creating hard links to them or linking to special files.

In /usr/lib/sysctl.d/50-default.conf, this behavior can now be adjusted using the options fs.protected_hardlinks and fs.protected_symlinks:

  • When set to 0, links can be followed unrestrictedly.

  • When set to 1, links may only be followed when they are outside a sticky world-writable directory, or when the UID of the link and of the follower match, or when the directory owner matches the link's owner.

SUSE now sets both options to 1 by default.
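The active values can be read straight from /proc, and a drop-in file under /etc/sysctl.d/ is the usual way to override them (the file name below is illustrative):

```shell
# 1 means the protection is enabled (the SLES default):
cat /proc/sys/fs/protected_hardlinks
cat /proc/sys/fs/protected_symlinks

# To override, create e.g. /etc/sysctl.d/51-links.conf containing:
#   fs.protected_hardlinks = 1
#   fs.protected_symlinks = 1
# and apply it with: sysctl --system
```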

9.7.2 Btrfs: Compression Support

On-the-fly compression is now supported with Btrfs. It can be turned on through mount-time options passed to the file system. See the Btrfs section in the mount(8) manual page for details.
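As a sketch, the option could be enabled permanently via /etc/fstab or applied with a remount; the device and mount point below are placeholders:

```shell
# Illustrative /etc/fstab entry enabling zlib compression on a
# Btrfs volume (device and mount point are placeholders):
#
#   /dev/sdb1  /data  btrfs  compress=zlib  0 0
#
# For an already-mounted Btrfs file system, the option can be
# applied with a remount:
#
#   mount -o remount,compress=zlib /data
#
# Note that compression only affects data written after the
# option takes effect.
```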

9.7.3 Comparison of Supported File Systems

SUSE Linux Enterprise was the first enterprise Linux distribution to support journaling file systems and logical volume managers, back in 2000. Later we introduced XFS to Linux, which today is seen as the primary workhorse for large-scale file systems, systems with heavy load, and multiple parallel read and write operations. With SUSE Linux Enterprise 12, we are taking the next step of innovation and are using the copy-on-write file system Btrfs as the default for the operating system, to support system snapshots and rollback.

+: supported; -: unsupported.

| Feature                    | Btrfs *   | XFS     | Ext4  | Reiserfs *** | OCFS2 ** |
|----------------------------|-----------|---------|-------|--------------|----------|
| Data/Metadata Journaling   | N/A       | - / +   |       | - / +        | - / +    |
| Journal internal/external  | N/A       | + / +   | + / - |              |          |
| Offline extend/shrink      | + / +     | - / -   | + / + | + / -        |          |
| Online extend/shrink       | + / +     | + / -   | + / - | + / -        | + / -    |
| Inode-Allocation-Map       | B-tree    | B+-tree | table | u. B*-tree   | table    |
| Sparse Files               | +         |         |       |              |          |
| Tail Packing               | +         | -       | +     | -            |          |
| Defrag                     | +         | -       |       |              |          |
| ExtAttr / ACLs             | + / +     |         |       |              |          |
| Quotas                     | +         |         |       |              |          |
| Dump/Restore               | -         | +       | -     |              |          |
| Blocksize default          | 4 KiB (varies by architecture; see *) |
| max. Filesystem size [1]   | 16 EiB    | 8 EiB   | 1 EiB | 16 TiB       | 4 PiB    |
| max. File size [1]         | 16 EiB    | 8 EiB   | 1 EiB | 1 EiB        | 4 PiB    |
| Support Status             | SLE       | SLE     | SLE   | SLE          | SLE HA   |

* Btrfs is a copy-on-write file system. Rather than journaling changes before writing them in place, it writes them to a new location and then links the new location in. Until the last write, the new changes are not "committed". Due to the nature of the file system, quotas are implemented based on subvolumes ("qgroups"). The blocksize default varies with the host architecture: 64 KiB is used on ppc64le, 4 KiB on most other systems. The actual size in use can be checked with the command "getconf PAGE_SIZE".

 

** OCFS2 is fully supported as part of the SUSE Linux Enterprise High Availability Extension.

 

*** Reiserfs is supported for existing file systems; the creation of new Reiserfs file systems is discouraged.

The maximum file size above can be larger than the file system's actual size due to the use of sparse blocks. Note that unless a file system comes with large file support (LFS), the maximum file size on a 32-bit system is 2 GiB (2^31 bytes). Currently all of our standard file systems (including ext3 and ReiserFS) have LFS, which gives a theoretical maximum file size of 2^63 bytes. The numbers in the table above assume that the file systems use a 4 KiB block size. When using different block sizes, the results are different, but 4 KiB reflects the most common standard.

In this document: 1024 Bytes = 1 KiB; 1024 KiB = 1 MiB; 1024 MiB = 1 GiB; 1024 GiB = 1 TiB; 1024 TiB = 1 PiB; 1024 PiB = 1 EiB. See also http://physics.nist.gov/cuu/Units/binary.html.
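As a quick arithmetic check, these prefixes expand to the following byte counts (64-bit shell arithmetic):

```shell
# Binary unit prefixes expressed in bytes:
echo $((1024))                        # 1 KiB = 1024
echo $((1024 * 1024))                 # 1 MiB = 1048576
echo $((1024 * 1024 * 1024))          # 1 GiB = 1073741824
echo $((1024 * 1024 * 1024 * 1024))   # 1 TiB = 1099511627776
```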

NFSv4 with IPv6 is only supported on the client side. An NFSv4 server with IPv6 is not supported.

This version of Samba delivers integration with Windows 7 Active Directory Domains. In addition we provide the clustered version of Samba as part of SUSE Linux Enterprise High Availability 11 SP3.

9.7.4 Supported Btrfs Features

The following table lists supported and unsupported Btrfs features across multiple SLES versions.

+: supported; -: unsupported.

| Feature                   | SLES 11 SP4 | SLES 12 GA | SLES 12 SP1 |
|---------------------------|-------------|------------|-------------|
| Copy on Write             | +           | +          | +           |
| Snapshots/Subvolumes      | +           | +          | +           |
| Metadata Integrity        | +           | +          | +           |
| Data Integrity            | +           | +          | +           |
| Online Metadata Scrubbing | +           | +          | +           |
| Automatic Defragmentation | -           | -          | -           |
| Manual Defragmentation    | +           | +          | +           |
| In-band Deduplication     | -           | -          | -           |
| Out-of-band Deduplication | +           | +          | +           |
| Quota Groups              | +           | +          | +           |
| Metadata Duplication      | +           | +          | +           |
| Multiple Devices          | -           | +          | +           |
| RAID 0                    | -           | +          | +           |
| RAID 1                    | -           | +          | +           |
| RAID 10                   | -           | +          | +           |
| RAID 5                    | -           | -          | -           |
| RAID 6                    | -           | -          | -           |
| Hot Add/Remove            | -           | +          | +           |
| Device Replace            | -           | -          | -           |
| Seeding Devices           | -           | -          | -           |
| Compression               | -           | -          | +           |
| Big Metadata Blocks       | -           | +          | +           |
| Skinny Metadata           | -           | +          | +           |
| Send Without File Data    | -           | +          | +           |
| Send/Receive              | -           | -          | -           |
| Inode Cache               | -           | -          | -           |
| Fallocate with Hole Punch | -           | -          | -           |

11 Colophon

Thanks for using SUSE Linux Enterprise Server in your business.

The SUSE Linux Enterprise Server Team.
