Release Notes for SUSE Linux Enterprise Server 11 Service Pack 3 (SP3)

This document provides an overview of high-level features and updates for SUSE Linux Enterprise Server 11 Service Pack 3 (SP3), along with general guidance. Besides architecture- and product-specific information, it also describes the capabilities and limitations of SLES 11 SP3. General documentation can be found at: http://www.suse.com/documentation/sles11/.

Publication date: 06/21/2017, Version: 11.3.47 (2017-05-31)
1 How to Obtain Source Code
2 SUSE Linux Enterprise Server
3 Important Upgrade Information
4 Support Statement for SUSE Linux Enterprise Server
4.1 Erasing All Registration Data
4.2 General Support Statement
4.3 Software Requiring Specific Contracts
4.4 Technology Previews
5 Installation
5.1 Installing the open-fcoe Package Manually
5.2 Current Limitations in a UEFI Secure Boot Context
5.3 UEFI Secure Boot
5.4 Support for 4 KB/Sector Hard Disk Drives
5.5 Network Installation from HTTPS
5.6 UEFI 2.3.1 Support
5.7 Installation via USB
5.8 Mapping Network Interface Names to Names Written on the Chassis (biosdevname)
5.9 Amazon EC2 Availability
5.10 Deployment
5.11 CJK Languages Support in Text-mode Installation
5.12 Booting from Harddisks larger than 2 TiB in Non-UEFI Mode
5.13 Installation Using Persistent Device Names
5.14 iSCSI Booting with iBFT in UEFI Mode
5.15 Using iSCSI Disks when Installing
5.16 Using qla3xxx and qla4xxx Drivers at the Same Time
5.17 Using EDD Information for Storage Device Identification
5.18 Automatic Installation with AutoYaST in an LPAR (System z)
5.19 Adding DASD or zFCP Disks During Installation (System z)
5.20 Network Installation via eHEA on POWER
5.21 For More Information
6 Features and Versions
6.1 Linux Kernel and Toolchain
6.2 Server
6.3 Desktop
6.4 Security
6.5 Network
6.6 Resource Management
6.7 Systems Management
6.8 Other
7 Driver Updates
7.1 X.Org: fbdev Used in UEFI Secure Boot Mode (ASpeed Chipset)
7.2 X.Org Driver Used in UEFI Secure Boot Mode (Matrox)
7.3 Support for Intel 2nd Generation of Atom Microserver
7.4 Network Drivers
7.5 Storage Drivers
7.6 Other Drivers
8 Other Updates
8.1 Update of PostgreSQL to Version 9.4
8.2 Updating to Firefox 24 ESR
8.3 Package python-ethtool
8.4 Update Python to 2.6.8
8.5 List of Updated Packages
9 Software Development Kit
9.1 Optional GCC Compiler Suite on SDK
10 Update-Related Notes
10.1 General Notes
10.2 Update from SUSE Linux Enterprise Server 11
10.3 Update from SUSE Linux Enterprise Server 11 SP1
10.4 Update from SUSE Linux Enterprise Server 11 SP2
11 Deprecated Functionality
11.1 X.Org Driver Used in UEFI Secure Boot Mode (Matrox)
11.2 Support for the JFS File System
11.3 Support for Portmap to End with SLE 11 SP3
11.4 L3 Support for Openswan Is Scheduled to Expire
11.5 PHP 5.2 Is Deprecated
11.6 Packages Removed with SUSE Linux Enterprise Server 11 SP3
11.7 Packages Removed with SUSE Linux Enterprise Server 11 Service Pack 2
11.8 Packages Removed with SUSE Linux Enterprise Server 11 Service Pack 1
11.9 Packages Removed with SUSE Linux Enterprise Server 11
11.10 Packages and Features to Be Removed in the Future
12 Infrastructure, Package and Architecture Specific Information
12.1 16TB memory support for PPC64
12.2 Systems Management
12.3 Performance Related Information
12.4 Storage
12.5 Hyper-V
12.6 Architecture Independent Information
12.7 AMD64/Intel64 64-Bit (x86_64) and Intel/AMD 32-Bit (x86) Specific Information
12.8 Intel Itanium (ia64) Specific Information
12.9 POWER (ppc64) Specific Information
12.10 System z (s390x) Specific Information
13 Resolved Issues
14 Technical Information
14.1 Kernel Limits
14.2 KVM Limits
14.3 Xen Limits
14.4 File Systems
14.5 Kernel Modules
14.6 IPv6 Implementation and Compliance
14.7 Other Technical Information
15 Documentation and Other Information
15.1 Additional or Updated Documentation
15.2 Product and Source Code Information
16 Miscellaneous
17 Legal Notices

1 How to Obtain Source Code

This SUSE product includes materials licensed to SUSE under the GNU General Public License (GPL). The GPL requires SUSE to provide the source code that corresponds to the GPL-licensed material. The source code is available for download at http://www.suse.com/download-linux/source-code.html. Also, for up to three years after distribution of the SUSE product, upon request, Novell will mail a copy of the source code. Requests should be sent by e-mail to sle_source_request@novell.com or as otherwise instructed at http://www.suse.com/download-linux/source-code.html. Novell may charge a reasonable fee to recover distribution costs.

2 SUSE Linux Enterprise Server

SUSE Linux Enterprise Server is a highly reliable, scalable, and secure server operating system, built to power mission-critical workloads in both physical and virtual environments. It is an affordable, interoperable, and manageable open source foundation. With it, enterprises can cost-effectively deliver core business services, enable secure networks, and simplify the management of their heterogeneous IT infrastructure, maximizing efficiency and value.

The only enterprise Linux recommended by Microsoft and SAP, SUSE Linux Enterprise Server is optimized to deliver high-performance mission-critical services, as well as edge-of-network and web infrastructure workloads.

Designed for interoperability, SUSE Linux Enterprise Server integrates into classical Unix as well as Windows environments, supports open standard CIM interfaces for systems management, and has been certified for IPv6 compatibility.

This modular, general purpose operating system runs on five processor architectures and is available with optional extensions that provide advanced capabilities for tasks such as real time computing and high availability clustering.

SUSE Linux Enterprise Server is optimized to run as a high performing guest on leading hypervisors and supports an unlimited number of virtual machines per physical system with a single subscription, making it the perfect guest operating system for virtual computing.

SUSE Linux Enterprise Server is backed by award-winning support from SUSE, an established technology leader with a proven history of delivering enterprise-quality support services.

With the release of SUSE Linux Enterprise Server 11 Service Pack 3, the former SUSE Linux Enterprise Server 11 Service Pack 2 enters the six-month migration window, during which SUSE will continue to provide security updates and full support and maintenance. At the end of this six-month parallel support period, on 2014-01-31, general support for SUSE Linux Enterprise Server 11 Service Pack 2 will be discontinued. Long Term Service Pack Support (LTSS) for SUSE Linux Enterprise Server 11 Service Pack 2 is available as a separate add-on.

3 Important Upgrade Information

Users upgrading from a previous SUSE Linux Enterprise Server release are advised to review the following information.

Installation Quick Start and Deployment Guides can be found in the docu language directories on the media. Documentation (if installed) is available below the /usr/share/doc/ directory of an installed system.

These Release Notes are identical across all architectures, and the most recent version is always available online at http://www.suse.com/releasenotes/. Some entries are listed twice, if they are important and belong to more than one section.

4 Support Statement for SUSE Linux Enterprise Server

To receive support, customers need an appropriate subscription with SUSE; for more information, see http://www.suse.com/products/server/services-and-support/.

4.1 Erasing All Registration Data

Sometimes you may want to remove all data that was created during the registration of a SUSE Linux Enterprise system, so you can cleanly re-register it with different credentials.

This can now be accomplished with suse_register by using the new option "--erase-local-regdata". Note that this does not free the subscription that the system may have consumed in the Customer Center; this needs to be done from the Customer Center's Web UI.

4.2 General Support Statement

The following definitions apply:

L1

Problem determination, which means technical support designed to provide compatibility information, usage support, ongoing maintenance, information gathering, and basic troubleshooting using available documentation.

L2

Problem isolation, which means technical support designed to analyze data, duplicate customer problems, isolate the problem area, and provide resolution for problems not resolved by Level 1, or to prepare for Level 3.

L3

Problem resolution, which means technical support designed to resolve problems by engaging engineering to resolve product defects which have been identified by Level 2 Support.

For contracted customers and partners, SUSE Linux Enterprise Server 11 will be delivered with L3 support for all packages, except the following:

  • technology previews

  • sound, graphics, fonts and artwork

  • packages that require an additional customer contract

  • packages provided as part of the Software Development Kit (SDK)

SUSE will only support the usage of original (i.e., unchanged and not recompiled) packages.

4.2.1 lxc-attach Is Not Supported on SLES 11 (Any Service Pack)

lxc-attach is not functional and not supported under SLES 11 (any Service Pack).

lxc-attach looks for /proc/[0-9]+/ns/{pid|mnt}, which has only been available since Linux kernel 3.8. SLES 11, however, uses Linux kernel 3.0.
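A quick, generic way to check whether a given kernel exposes the namespace files lxc-attach depends on (this is a sketch, not an official SUSE diagnostic):

```shell
# lxc-attach needs per-process pid and mnt namespace files, which exist
# only on Linux 3.8 and later; the SLES 11 3.0 kernel does not provide them
if [ -e /proc/self/ns/pid ] && [ -e /proc/self/ns/mnt ]; then
    echo "kernel exposes pid and mnt namespaces"
else
    echo "pid/mnt namespace files missing; lxc-attach cannot work here"
fi
```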

4.2.2 Support for the btrfs File System

Btrfs is a copy-on-write (CoW) general-purpose file system. Based on the CoW functionality, btrfs provides snapshotting. Beyond that, data and metadata checksums improve the reliability of the file system. btrfs is highly scalable, but also supports online shrinking to adapt to real-life environments. On appropriate storage devices, btrfs also supports the TRIM command.

Support

With SUSE Linux Enterprise 11 SP2, the btrfs file system joins ext3, reiserfs, xfs and ocfs2 as commercially supported file systems. Each file system offers distinct advantages. While the installation default is ext3, we recommend xfs when maximizing data performance is desired, and btrfs as a root file system when snapshotting and rollback capabilities are required. Btrfs is supported as a root file system (i.e., the file system for the operating system) across all architectures of SUSE Linux Enterprise 11 SP2. Customers are advised to use the YaST partitioner (or AutoYaST) to build their systems: YaST will prepare the btrfs file system for use with subvolumes and snapshots. Snapshots will be automatically enabled for the root file system using SUSE's snapper infrastructure. For more information about snapper, its integration into ZYpp and YaST, and the YaST snapper module, see the SUSE Linux Enterprise documentation.

Migration from "ext" File Systems to btrfs

Migration from existing "ext" file systems (ext2, ext3, ext4) is supported "offline" and "in place". Calling "btrfs-convert [device]" will convert the file system. This is an offline process, which needs at least 15% free space on the device, but is applied in place. To roll back, call "btrfs-convert -r [device]". Caveat: when rolling back, all data added after the conversion to btrfs will be lost; in other words, the rollback is complete, not partial.

RAID

Btrfs is supported on top of MD (multiple devices) and DM (device mapper) configurations. Please use the YaST partitioner to achieve a proper setup. Multivolume/RAID with btrfs is not supported yet and will be enabled with a future maintenance update.

Future Plans

  • We are planning to announce support for btrfs' built-in multi volume handling and RAID in a later version of SUSE Linux Enterprise.

  • Starting with SUSE Linux Enterprise 12, we are planning to implement bootloader support for /boot on btrfs.

  • Compression and Encryption functionality for btrfs is currently under development and will be supported once the development has matured.

  • We are committed to actively working on the btrfs file system with the community, and we keep customers and partners informed about progress and experience in terms of scalability and performance. This may also apply to cloud and cloud storage infrastructures.

Online Check and Repair Functionality

Check and repair functionality ("scrub") is available as part of the btrfs command line tools. "Scrub" aims to verify data and metadata, assuming the tree structures are fine. "Scrub" can (and should) be run periodically on a mounted file system: it runs as a background process during normal operation.

The "fsck.btrfs" tool is available in the SUSE Linux Enterprise update repositories.

Capacity Planning

If you are planning to use btrfs with its snapshot capability, it is advisable to reserve twice as much disk space as the standard storage proposal. This is automatically done by the YaST2 partitioner for the root file system.

Hard Link Limitation

In order to provide a more robust file system, btrfs incorporates back references for all file names, eliminating the classic "lost+found" directory added during recovery. A temporary limitation of this approach affects the number of hard links in a single directory that link to the same file. The limitation is dynamic based on the length of the file names used. A realistic average is approximately 150 hard links. When using 255 character file names, the limit is 14 links. We intend to raise the limitation to a more usable limit of 65535 links in a future maintenance update.

Note

With SLE 11 SP3 you can now raise this limitation. The so-called extended inode refs are not turned on by default in the SUSE kernels. This is because enabling them involves turning on an incompat bit in the file system which would make it unmountable by old versions of SLE.

If you want to enable extended inode refs, use 'btrfstune' to turn them on. There is no way to turn them off again, so this is a one-way conversion. The command is (replace /PATH/TO/DEVICE with your device):

btrfstune -r /PATH/TO/DEVICE

Other Limitations

At the moment, btrfs is not supported as a seed device.

For More Information

For more information about btrfs, see the SUSE Linux Enterprise 11 documentation.

4.2.3 Tomcat6 and Related Packages

Tomcat6 and related packages are fully supported on the Intel/AMD x86 (32-bit), AMD64/Intel64, IBM POWER, and IBM System z architectures.

4.2.4 SELinux

The SELinux subsystem is supported. Arbitrary SELinux policies running on SLES are not supported, though. Customers and partners who have an interest in using SELinux in their solutions are encouraged to contact SUSE to evaluate the level of support that is needed, and how support and services for the specific SELinux policies will be granted.

4.3 Software Requiring Specific Contracts

The following packages require additional support contracts to be obtained by the customer in order to receive full support:

  • BEA Java (Itanium only)

  • MySQL Database

  • PostgreSQL Database

4.4 Technology Previews

Technology previews are packages, stacks, or features delivered by SUSE. These features are not supported. They may be functionally incomplete, unstable or in other ways not suitable for production use. They are mainly included for customer convenience and give customers a chance to test new technologies within an enterprise environment.

Whether a technology preview will be moved to a fully supported package later depends on customer and market feedback. A technology preview does not automatically result in support at a later point in time. Technology previews can be dropped at any time, and SUSE is not committed to providing a technology preview later in the product cycle.

Please give your SUSE representative feedback, including your experience and use case.

4.4.1 Technology Preview: QEMU: Include virtio-blk-data-plane

virtio-blk-data-plane is a new, experimental performance feature for KVM. It provides a streamlined block I/O path that favors performance over functionality.

4.4.2 Technology Preview: KVM Nested Virtualization with Intel VT

The KVM kernel module "kvm_intel" now has the nested parameter available, achieving parity with the "kvm_amd" kernel module with respect to nested virtualization capabilities.

4.4.3 Technology Preview: libguestfs

Libguestfs is a set of tools for accessing and modifying virtual machine disk images. It can be used for many virtual image management tasks, such as viewing and editing files inside guests (only Linux guests are enabled), scripting changes to VMs, monitoring disk used/free statistics, performing partial backups, and cloning VMs. See http://libguestfs.org/ for more information.

4.4.4 Technology Preview: KVM Support on s390x

KVM is now included on the s390x platform as a technology preview.

4.4.5 Technology Preview: Hot-Add Memory

Hot-add memory is currently only supported on the following hardware:

  • IBM x3800, x3850, single node x3950, x3850 M2, single node x3950 M2,

  • certified systems based on recent Intel Xeon Architecture,

  • certified systems based on recent Intel IPF Architecture,

  • all IBM servers and blades with POWER5, POWER6, POWER7, or POWER7+ processors and recent firmware. (This requires the Power Linux service and productivity tools available at http://www14.software.ibm.com/webapp/set2/sas/f/lopdiags/yum.html.)

If your specific machine is not listed, please call SUSE support to confirm whether or not your machine has been successfully tested. Also, regularly check our maintenance update information, which will explicitly mention the general availability of this feature.

Restriction on using IBM eHCA InfiniBand adapters in conjunction with hot-add memory on IBM Power:

The current eHCA Device Driver will prevent dynamic memory operations on a partition as long as the driver is loaded. If the driver is unloaded prior to the operation and then loaded again afterwards, adapter initialization may fail. A Partition Shutdown / Activate sequence on the HMC may be needed to recover from this situation.

4.4.6 Technology Preview: Internet Storage Naming Service (iSNS)

The Internet Storage Naming Service (iSNS) package is by design suitable for secure internal networks only. SUSE will continue to work with the community on improving security.

4.4.7 Technology Preview: Read-Only Root File System

It is possible to run SUSE Linux Enterprise Server 11 on a shared read-only root file system. A read-only root setup consists of the read-only root file system, a scratch and a state file system. The /etc/rwtab file defines which files and directories on the read-only root file system are replaced by which files on the state and scratch file systems for each system instance.

The readonlyroot kernel command line option enables read-only root mode; the state= and scratch= kernel command line options determine the devices on which the state and scratch file systems are located.

In order to set up a system with a read-only root file system:

  1. Set up a scratch file system.

  2. Set up a file system to use for storing persistent per-instance state.

  3. Adjust /etc/rwtab as needed.

  4. Add the appropriate kernel command line options to your boot loader configuration.

  5. Replace /etc/mtab with a symlink to /proc/mounts as described below.

  6. (Re)boot the system.

To replace /etc/mtab with the appropriate symlinks, call:

ln -sf /proc/mounts /etc/mtab

See the rwtab(5) manual page for further details and http://www.redbooks.ibm.com/abstracts/redp4322.html for limitations on System z.
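As an illustration only (the directive names and paths below are example values; rwtab(5) is the authoritative reference for the syntax supported by your release), an /etc/rwtab might contain entries like:

```
# Hypothetical /etc/rwtab entries: each line names a path on the read-only
# root that is backed by the scratch or state file system instead
empty /tmp
files /etc/resolv.conf
dirs  /var/log
```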

5 Installation

5.1 Installing the open-fcoe Package Manually

Manually installing the open-fcoe package on a system without FCoE cards can disrupt the boot process.

Install open-fcoe only if your system has FCoE cards (in which case the installer would install it for you automatically).

An upcoming maintenance update will address this issue in the open-fcoe package.

5.2 Current Limitations in a UEFI Secure Boot Context

When booting in Secure Boot mode, the following restrictions apply:

  • bootloader, kernel and kernel modules must be signed

  • kexec and kdump are disabled

  • hibernation (suspend on disk) is disabled

  • access to /dev/kmem and /dev/mem is not possible, even as root user

  • access to IO ports is not possible, even as root user. All X11 graphical drivers must use a kernel driver

  • PCI BAR access through sysfs is not possible

  • 'custom_method' in ACPI is not available

  • debugfs for "asus-wmi" module is not available

  • the 'acpi_rsdp' parameter does not have any effect on the kernel

5.3 UEFI Secure Boot

SLES 11 SP3 and SLED 11 SP3 implement UEFI Secure Boot. The installation media support Secure Boot. Secure Boot is only supported on new installations (not on systems upgraded from older SLES 11 installations), and only if the Secure Boot flag is enabled in the UEFI firmware at installation time.

For more information, see the Administration Guide, section Secure Boot.

5.4 Support for 4 KB/Sector Hard Disk Drives

Support for 4 KB/sector hard disk drives requires support from all code that directly accesses the hard disk drives.

SUSE Linux Enterprise fully supports 4 KB/sector drives in all conditions and architectures, with one exception: 4 KB/sector hard disk drives are not supported as boot drives on x86_64 systems booting with a legacy BIOS.

5.5 Network Installation from HTTPS

In some environments it is desirable not to have any unencrypted network connection to the installation server. Up to and including SLE 11 SP2, this was not possible.

Since SLE 11 SP3, the HTTPS URL scheme is also supported and the installation can be done completely via HTTPS.

When using HTTPS, if the installation server cannot be accessed because certificate checking fails, specify sslcerts=0 as a boot option to disable certificate checking.
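As a sketch (the server URL is hypothetical, and this assumes the linuxrc install= parameter is used to point at the installation source), the boot options for an HTTPS installation without certificate checking could look like:

```
install=https://example.com/install/sles11sp3 sslcerts=0
```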

5.6 UEFI 2.3.1 Support

SP3 supports booting systems that follow the UEFI specification up to version 2.3.1 errata C.

Note: Installing SLE 11 SP3 on Apple hardware is not supported.

5.7 Installation via USB

With SLE 11 SP3 it is possible to dump the DVD1 ISO file to a USB stick and install from that (given that your BIOS supports it). Without further preparation, this only works with DVD1 and not with UEFI.

On UEFI systems, the DVD1 ISO file must first be adapted using the "isohybrid" tool (from SLES 11 SP3 or later), by running "isohybrid --uefi file_name.iso" before dumping the modified ISO file to a USB stick.

5.8 Mapping Network Interface Names to Names Written on the Chassis (biosdevname)

This feature addresses the issue that eth0 does not map to em1 (as labeled on server chassis), when a server has multiple network adapters.

This issue is solved for Dell hardware, which has the corresponding BIOS support, by renaming onboard network interfaces to em[1234], which maps to Embedded NIC[1234] as labeled on the server chassis. ("em" stands for ethernet-on-motherboard.)

The renaming will be done by using the biosdevname utility.

biosdevname is automatically installed and used if YaST2 detects hardware suitable for use with biosdevname. biosdevname can be disabled during installation by using "biosdevname=0" on the kernel command line. The usage of biosdevname can be enforced on any hardware with "biosdevname=1". If the BIOS has no support, no network interface names are renamed.

5.9 Amazon EC2 Availability

SUSE Linux Enterprise Server 11 SP2 is available immediately for use on Amazon Web Services EC2. For more information about Amazon EC2 Running SUSE Linux Enterprise Server, please visit http://aws.amazon.com/suse

5.10 Deployment

SUSE Linux Enterprise Server can be deployed in three ways:

  • Physical Machine,

  • Virtual Host,

  • Virtual Machine in paravirtualized environments.

5.11 CJK Languages Support in Text-mode Installation

CJK (Chinese, Japanese, and Korean) languages do not work properly during text-mode installation if the framebuffer is not used (Text Mode selected in boot loader).

There are three alternatives to resolve this issue:

  1. Use English or some other non-CJK language for installation then switch to the CJK language later on a running system using YaST › System › Language.

  2. Use your CJK language during installation, but do not choose Text Mode in the boot loader: using F3 Video Mode, select one of the other VGA modes instead. Select the CJK language of your choice using F2 Language, add textmode=1 to the boot loader command line, and start the installation.

  3. Use graphical installation (or install remotely via SSH or VNC).

5.12 Booting from Harddisks larger than 2 TiB in Non-UEFI Mode

Booting from harddisks larger than 2 TiB in non-UEFI mode (but with GPT partition table) fails.

To successfully use harddisks larger than 2 TiB in non-UEFI mode, but with GPT partition table (i.e., grub bootloader), consider one of the following options:

  • Use a 4k sector harddisk in 4k mode (in this case, the 2 TiB limit will become a 16 TiB limit).

  • Use a separate /boot partition. This partition must be one of the first 3 partitions and end below the 2 TiB limit.

  • Switch from legacy mode to UEFI mode, if this is an option for you.

5.13 Installation Using Persistent Device Names

The installer uses persistent device names by default. If you plan to add storage devices to your system after the installation, we strongly recommend you use persistent device names for all storage devices.

To switch to persistent device names on a system that has already been installed, start the YaST2 partitioner. For each partition, select Edit and go to the Fstab Options dialog. Any mount option except Device name provides persistent device names. In addition, rerun the Boot Loader module in YaST and select Propose New Config to switch the boot loader to using persistent device names, or manually adjust all boot loader sections. Then select Finish to write the new proposed configuration to disk. Alternatively, edit /boot/grub/menu.lst and /boot/grub/device.map according to your needs.

This needs to be done before adding new storage devices.
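For illustration (the device ID below is invented; real by-id names depend on your hardware), a persistent /etc/fstab entry references the disk via /dev/disk/by-id instead of a non-persistent kernel name such as /dev/sda1:

```
# hypothetical example entry using a persistent by-id device name
/dev/disk/by-id/scsi-SATA_ST3500418AS_9VM0LW5K-part1  /  ext3  defaults  1 1
```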

For further information, see the Storage Administration Guide about "Device Name Persistence".

5.14 iSCSI Booting with iBFT in UEFI Mode

If booting over iSCSI, iBFT information cannot be parsed when booting via native UEFI. The system should be configured to boot in legacy mode if iSCSI booting using iBFT is required.

5.15 Using iSCSI Disks when Installing

To use iSCSI disks during installation, passing the withiscsi boot parameter is no longer needed.

During installation, an additional screen provides the option to attach iSCSI disks to the system and use them in the installation process.

Booting from an iSCSI server on i386, x86_64 and ppc64 is supported if iSCSI-enabled firmware is used.

5.16 Using qla3xxx and qla4xxx Drivers at the Same Time

QLogic iSCSI Expansion Card for IBM BladeCenter provides both Ethernet and iSCSI functions. Some parts on the card are shared by both functions. The current qla3xxx (Ethernet) and qla4xxx (iSCSI) drivers support the Ethernet and iSCSI functions individually. In contrast to previous SLES releases, using both functions at the same time is now supported.

If you happen to use brokenmodules=qla3xxx or brokenmodules=qla4xxx before upgrading to SLES 11 SP2, these options can be removed.

5.17 Using EDD Information for Storage Device Identification

EDD information (in /sys/firmware/edd/<device>) is used by default to identify your storage devices.

EDD Requirements:

  • BIOS provides full EDD information (found in /sys/firmware/edd/<device>)

  • Disks are signed with a unique MBR signature (found in /sys/firmware/edd/<device>/mbr_signature).

Add edd=off to the kernel parameters to disable EDD.

5.18 Automatic Installation with AutoYaST in an LPAR (System z)

For automatic installation with AutoYaST in an LPAR, the parmfile used for such an installation must have blank characters at the beginning and at the end of each line (the first line does not need to start with a blank). The number of characters in one line should not exceed 80.
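As a quick sanity check before an installation attempt (the sample parameters are hypothetical), the 80-character line limit can be verified with a short script:

```shell
# Write a hypothetical sample parmfile; lines after the first start and
# end with a blank, as required for AutoYaST installations in an LPAR
printf '%s\n' \
  'ramdisk_size=40000 root=/dev/ram1 ro init=/linuxrc TERM=dumb ' \
  ' HostIP=192.168.0.2 Gateway=192.168.0.1 ' \
  ' autoyast=ftp://192.168.0.1/autoinst.xml ' > /tmp/parmfile

# Flag any line exceeding the 80-character limit
awk 'length($0) > 80 { print "line " NR " exceeds 80 characters"; bad = 1 }
     END { exit bad }' /tmp/parmfile && echo "parmfile line lengths OK"
```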

5.19 Adding DASD or zFCP Disks During Installation (System z)

Adding DASD or zFCP disks is possible not only during the installation workflow, but also when the installation proposal is shown. To add disks at this stage, click the Expert tab and scroll down; the DASD and/or zFCP entries are shown there. Added disks are not displayed in the partitioner automatically. To make the disks visible in the partitioner, click Expert and select reread partition table. This may reset any previously entered information.

5.20 Network Installation via eHEA on POWER

If you want to carry out a network installation via the IBM eHEA Ethernet Adapter on POWER systems, no huge (16GB) pages may be assigned to the partition during installation.

5.21 For More Information

For more information, see Chapter 12, Infrastructure, Package and Architecture Specific Information.

6 Features and Versions

6.1 Linux Kernel and Toolchain

6.1.1 New Kernel Taint Flag - Unsigned Module

In the past, loading modules without a proper cryptographic signature into the kernel caused the kernel's "forced module load" flag to be set. As a result, tracing was disabled for such modules.

A flag has been introduced to indicate modules with an invalid signature. Its internal kernel name is TAINT_UNSIGNED_MODULE, and it is represented by the letter 'E' in kernel debugging output.
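The aggregate taint state can be inspected at run time; this sketch only reads the raw bitmask (decoding individual flag letters is left to kernel tooling):

```shell
# Read the kernel's taint bitmask; 0 means the kernel is not tainted.
# Individual flags (such as 'E' for an unsigned module) also show up
# in oops messages and 'dmesg' output.
taint=$(cat /proc/sys/kernel/tainted)
echo "kernel taint value: ${taint}"
```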

6.1.2 vDSO for getcpu and glibc vDSO functions

Previous implementations of vDSO for getcpu and gettimeofday were costly in terms of processor cycles.

The new vDSO functions for getcpu and gettimeofday mitigate these issues and allow applications to run with improved performance.

6.1.3 Lustre 2.1 Kernel Modules Preparation

Third-party builds of Lustre 2.1 kernel modules required modifications to the previously shipped SUSE kernel, thus breaking the support chain.

To allow third parties to build kernel modules for Lustre 2.1 without breaking the support chain for the SUSE kernel, the hooks needed for Lustre were added to the shipped kernel.

This change does not include Lustre modules or packages, nor support.

6.1.4 Kernel Dumps with LZO Compression

Dumping on large machines (i.e. with terabytes of RAM) takes unreasonably long.

The default and recommended settings for kdump have changed in SP3 to provide faster dumping on machines with a large amount of RAM. This includes changing the default dump level to 31 (filter out as much as possible) and using the LZO compression algorithm. LZO is much faster than GZIP while offering a similar compression ratio for typical dumps.

yast2-kdump and crash also support LZO compression as a new target format.
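These defaults correspond to settings in /etc/sysconfig/kdump; as a sketch (variable names as used by SUSE's kdump package, so verify against your installed configuration file):

```
## SP3 defaults for kdump (excerpt from /etc/sysconfig/kdump)
KDUMP_DUMPLEVEL="31"     # filter out as much as possible
KDUMP_DUMPFORMAT="lzo"   # LZO-compressed dump format
```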

6.1.5 Add Option to mpstat to Only Display Stats for Online CPUs

mpstat added the "-P ON" option to limit statistics displayed to only online CPUs.

6.1.6 Support for Failopen Mode When Using Netfilter's NFQUEUE Target

Adds support for a new failopen mode when using netfilter's NFQUEUE target. This mode allows users to temporarily disable packet inspection and maintain connectivity under heavy network traffic.

6.1.7 libvirt Support for QEMU seccomp Sandboxing

QEMU guests spawned by libvirt are exposed to a large number of system calls that go unused for the entire lifetime of the process.

libvirt's qemu.conf file is updated with a seccomp_sandbox option that can be used to enable use of QEMU's seccomp sandboxing support. This allows execution of QEMU guests with reduced exposure to kernel system calls.
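The option itself is a simple switch in /etc/libvirt/qemu.conf; a minimal sketch:

```
## /etc/libvirt/qemu.conf: run QEMU guests inside a seccomp sandbox
seccomp_sandbox = 1
```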

6.1.8 Makedumpfile: Enhanced Elimination of Sensitive Data from Dumps

Enhances the current makedumpfile filtering to eliminate complex data structures and cryptographic keys.

This is possible thanks to support for the eppic language. This feature makes it possible to specify rules for scrubbing data in a dump file with an eppic macro, instead of the current configuration file (makedumpfile.conf).

6.1.9 USB3 Power Savings Features

USB3 Link Power Management and Latency Tolerance Messaging have been implemented for improved power efficiency.

6.1.10 Support for Latest Intel Active Management Technology (AMT)

This Service Pack adds support for Intel AMT version 7 and later by providing the Intel MEI kernel driver.

In order to use Intel AMT, you must also download the Intel LMS and ACUConfig components from Intel's website.

For more information on AMT on Linux, follow the URLs in the "Additional Information" section of the above-mentioned Intel website.

6.1.11 General Version Information

  • GCC 4.3.4

  • glibc 2.11.3

  • Linux kernel 3.0

  • perl 5.10

  • php 5.3

  • python 2.6.8

  • ruby 1.8.7

6.1.12 SUSE Linux Enterprise Real Time Extension

To take advantage of the Real Time extension, the extension must be at the same version as the base SUSE Linux Enterprise Server. An updated version of the SUSE Linux Enterprise Real Time extension is provided separately after the release of SUSE Linux Enterprise Server.

6.2 Server

Note

Note: In the following text, version numbers do not necessarily indicate the final patch and security status of an application, as SUSE may have added additional patches to the specific version of an application.

6.2.1 SGI UV Platform Enablement

This SP adds basic support for upcoming SGI UV platform releases.

6.2.2 Upgrading MySQL to Version 5.5

Replacing an unmaintained version of MySQL.

SLE 11 SP3 upgrades the MySQL database to version 5.5. This upgrade involves a change of the database format, and the database must be converted before MySQL can run again. Therefore, MySQL does not run directly after the upgrade.

To migrate the MySQL database, run the following commands as root:

touch /var/lib/mysql/.force_upgrade
rcmysql restart

To check for failures during the server start, see the log files under /var/log/mysql/.

We strongly recommend backing up the database (mostly /var/lib/mysql) before migrating it.

6.3 Desktop

  • GNOME 2.28

    GNOME was updated with SP2 and uses PulseAudio for sound.

  • KDE 4.3.5

    KDE was updated with SP2.

  • X.org 7.4

6.4 Security

6.4.1 OpenSCAP Tools and Libraries Added

OpenSCAP is a set of open source libraries providing a path for integration of SCAP (Security Content Automation Protocol). SCAP is a collection of standards managed by NIST with the goal of providing a standard language for the expression of Computer Network Defense related information. For more information about SCAP, see http://nvd.nist.gov.

6.4.2 OpenSSH Update

SLE 11 SP3 now comes with OpenSSH version 6.2.

This version in SP3 (unlike the one in SP2) ignores .ssh/authorized_keys2 for new installations and allows using custom AuthorizedKeys file names through the sshd option AuthorizedKeysFile, as described in the sshd_config manual page.

6.4.3 cms feature in openssl

The cms command handles Secure/Multipurpose Internet Mail Extensions (S/MIME) v3.1 mail: it can encrypt, decrypt, sign, verify, compress, and uncompress S/MIME messages.

6.4.4 PAM Configuration

The common PAM configuration files (/etc/pam.d/common-*) are now created and managed with pam-config.

6.4.5 SELinux Enablement

In addition to AppArmor, SELinux capabilities have been added to SUSE Linux Enterprise Server. While these capabilities are not enabled by default, customers can run SELinux with SUSE Linux Enterprise Server if they choose to.

What does SELinux enablement mean?

  • The kernel ships with SELinux support.

  • We will apply SELinux patches to all “common” userland packages.

  • The libraries required for SELinux (libselinux, libsepol, libsemanage, etc.) have been added to openSUSE and SUSE Linux Enterprise.

  • Quality Assurance is performed with SELinux disabled—to make sure that SELinux patches do not break the default delivery and the majority of packages.

  • The SELinux specific tools are shipped as part of the default distribution delivery.

  • Arbitrary SELinux policies running on SLES are not supported, though, and we will not be shipping any SELinux policies in the distribution. Reference and minimal policies may be available from the repositories at some future point.

  • Customers and partners who have an interest in using SELinux in their solutions are encouraged to contact SUSE to evaluate the level of support that is needed, and how support and services for the specific SELinux policies will be granted.

By enabling SELinux in our codebase, we add community code to offer customers the option to use SELinux without replacing significant parts of the distribution.

6.4.6 Enablement for TPM/Trusted Computing

SUSE Linux Enterprise Server 11 comes with support for Trusted Computing technology. To enable your system's TPM chip, make sure that the "security chip" option in your BIOS is selected. TPM support is entirely passive, meaning that measurements are being performed, but no action is taken based on any TPM-related activity. TPM chips manufactured by Infineon, NSC and Atmel are supported, in addition to the virtual TPM device for Xen.

The corresponding kernel drivers are not loaded automatically. To do so, enter:

find /lib/modules -type f -name "tpm*.ko"

and load the kernel modules for your system manually or via MODULES_LOADED_ON_BOOT in /etc/sysconfig/kernel.
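As a sketch, assuming the find command above reported tpm_tis.ko as the matching driver (the module name depends on your hardware), the module can be loaded at every boot via /etc/sysconfig/kernel:

```
## /etc/sysconfig/kernel (excerpt, sketch)
# tpm_tis is an example; use the module matching your TPM chip
MODULES_LOADED_ON_BOOT="tpm_tis"
```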

If ownership of your TPM chip has been taken and the chip is configured in Linux and available for use, you may read PCRs from /sys/devices/*/*/pcrs.

The tpm-tools package contains utilities to administer your TPM chip, and the trousers package provides tcsd—the daemon that allows userland programs to communicate with the TPM driver in the Linux kernel. tcsd can be enabled as a service for the runlevels of your choice.

To implement a trusted ("measured") boot path, use the package trustedgrub instead of the grub package as your bootloader. The trustedgrub bootloader does not display any graphical representation of a boot menu for informational reasons.

6.4.7 Linux File System Capabilities

Our kernel is compiled with support for Linux File System Capabilities. This is disabled by default. The feature can be enabled by adding file_caps=1 as kernel boot option.
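As a sketch, the boot option can be added to the kernel line in /boot/grub/menu.lst; the kernel path and root= parameter below are illustrative and must match your installation:

```
## /boot/grub/menu.lst (excerpt, sketch)
title SUSE Linux Enterprise Server 11 SP3
    root (hd0,1)
    kernel /boot/vmlinuz root=/dev/sda2 file_caps=1
    initrd /boot/initrd
```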

6.5 Network

IPv6 Improvements

SUSE Linux Enterprise Server has successfully completed the USGv6 test program designated by NIST, which provides proof of compliance with the IPv6 specifications outlined in current industry standards for common network products.

As an IPv6 Consortium member and contributor, Novell/SUSE has worked successfully with the University of New Hampshire InterOperability Laboratory (UNH-IOL) to verify compliance with IPv6 specifications. The UNH-IOL offers ISO/IEC 17025 accredited testing designed specifically for the USGv6 test program. SUSE Linux Enterprise Server 11 SP2 successfully completed USGv6 testing at the UNH-IOL by December 2012. Testing for subsequent releases of SUSE Linux Enterprise Server is in progress; current and future results will be listed at http://www.iol.unh.edu/services/testing/ipv6/usgv6tested.php?company=105&type=#eqplist.

SUSE Linux Enterprise Server can be installed in an IPv6 environment and run IPv6 applications. When installing via network, do not forget to boot with "ipv6=1" (accept v4 and v6) or "ipv6only=1" (only v6) on the kernel command line. For more information, see the Deployment Guide and also Section 14.6, “IPv6 Implementation and Compliance”.
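For reference, the parameters described above are entered on the kernel command line (Boot Options field) at the installation boot prompt:

```
# Kernel command line / Boot Options at the installation boot prompt:
ipv6=1        # accept both IPv4 and IPv6
# or
ipv6only=1    # accept IPv6 only
```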

10G Networking Capabilities

OFED 1.5

traceroute 1.2

Support for traceroute over TCP.

FCoE

FCoE is an implementation of the Fibre Channel over Ethernet working draft. Fibre Channel over Ethernet is the encapsulation of Fibre Channel frames in Ethernet packets. It allows users with an FCF (Fibre Channel over Ethernet Forwarder) to access their existing Fibre Channel storage using an Ethernet adapter. When leveraging DCB's PFC technology to provide a lossless environment, FCoE can run SAN and LAN traffic over the same link.

Data Center Bridging (DCB)

Data Center Bridging (DCB) is a collection of Ethernet enhancements designed to allow network traffic with differing requirements (e.g., highly reliable, no drops vs. best effort vs. low latency) to operate and coexist on Ethernet. Current DCB features are:

  • Enhanced Transmission Selection (aka Priority Grouping) to provide a framework for assigning bandwidth guarantees to traffic classes.

  • Priority-based Flow Control (PFC) provides a flow control mechanism which can work independently for each 802.1p priority.

  • Congestion Notification provides a mechanism for end-to-end congestion control for protocols that do not have built-in congestion management.

6.5.1 Linux Virtual Server Load Balancer (ipvs) Extends Support for IPv6

The LVS/ipvs load balancing code did not fully support RFC2460 and fragmented IPv6 packets which could lead to lost packets and interrupted connections when IPv6 traffic was fragmented.

The load balancer has been enhanced to fully support IPv6 fragmented extension headers and is now RFC2460 compliant.

6.5.2 Mapping Network Interface Names to Names Written on the Chassis (biosdevname)

This feature addresses the issue that eth0 does not map to em1 (as labeled on server chassis), when a server has multiple network adapters.

This issue is solved for Dell hardware, which has the corresponding BIOS support, by renaming onboard network interfaces to em[1234], which maps to Embedded NIC[1234] as labeled on server chassis. (em stands for ethernet-on-motherboard.)

The renaming will be done by using the biosdevname utility.

biosdevname is automatically installed and used if YaST2 detects hardware suitable for use with biosdevname. biosdevname can be disabled during installation by using "biosdevname=0" on the kernel command line. The use of biosdevname can be enforced on any hardware with "biosdevname=1". If the BIOS has no support, no network interface names are renamed.

6.5.3 pNFS client support for scalable NFS access

SUSE Linux Enterprise Server now supports the use of parallel NFS (pNFS) as a client. This allows access to NFS in a scalable way.

6.6 Resource Management

6.6.1 libseccomp

Seccomp filters are expressed as a Berkeley Packet Filter (BPF) program, which is not a well understood interface for most developers.

The libseccomp library provides an easy to use interface to the Linux Kernel's syscall filtering mechanism, seccomp. The libseccomp API allows an application to specify which syscalls, and optionally which syscall arguments, the application is allowed to execute, all of which are enforced by the Linux Kernel.

6.6.2 XEN: Support for PCI Pass-through Bind and Unbind in libvirt Xen Driver

virt-manager is now able to set up PCI pass-through for Xen without having to switch to the command line to free the PCI device before assigning it to the VM.

6.6.3 Btrfs Quota for Subvolumes

Subvolumes of a btrfs root file system, such as /var/log, /var/crash, or /var/cache, may use up all available disk space during normal operation, causing a system malfunction.

To help prevent this situation, SUSE Linux Enterprise now offers btrfs quota support. See the btrfs(8) manual page for more information.

6.6.4 LXC Requires Correct Network Configuration

LXC now comes with support for network gateway detection. This feature prevents a container from starting if the container's network configuration is incorrect. For instance, you must make sure that the network address of the container is within the host IP range if the container is set up as bridged on the host. You might need to specify the netmask of the container network address (using the syntax "lxc.network.ipv4 = X.Y.Z.T/cidr") if the netmask is not the default netmask of the network class.

When using DHCP to assign a container network address, ensure "lxc.network.ipv4 = 0.0.0.0" is used in your configuration template.

Previously, such a container would have started, but its network would not have worked properly. Now the container refuses to start and prints an error message stating that the gateway could not be set up. For containers created before this update, we recommend running rcnetwork restart to reestablish the container network connection.
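Putting the rules above together, a container configuration could contain network lines like the following sketch. The addresses, bridge name, and the gateway key are illustrative; consult the lxc.conf documentation for the exact keys supported by your LXC version:

```
## Container config (sketch): static address with explicit CIDR netmask,
## bridged to br0 on the host
lxc.network.type = veth
lxc.network.link = br0
lxc.network.ipv4 = 192.168.1.10/24
lxc.network.ipv4.gateway = 192.168.1.1

## Alternatively, when the address is assigned via DHCP:
# lxc.network.ipv4 = 0.0.0.0
```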

Tip
Tip: LXC Maintenance Update

After installing the LXC maintenance update, we recommend clearing the LXC SLES template cache (stored by default in /var/cache/lxc/sles/rootfs-*) to ensure that changes in the SLES template are available in newly created containers.

For containers created before the update, we recommend installing the packages "supportconfig", "sysconfig", and "iputils" using zypper.

6.7 Systems Management

Improved Update Stack

SUSE Linux Enterprise Server 11 provides an improved update stack and the new command line tool zypper to manage the repositories and install or update packages.

Enhanced YaST Partitioner

Extended Built-in Management Infrastructure

SUSE Linux Enterprise Server provides CIM/WBEM enablement with the SFCB CIMOM.

The following CIM providers are available:

  • cmpi-pywbem-base

  • cmpi-pywbem-power-management (DSP1027)

  • cmpi-pywbem-software (DSP1023)

  • libvirt-cim (DSP1041, DSP1043, DSP1045, DSP1057, DSP1059, DSP1076, DSP1081)

  • sblim-cmpi-base

  • sblim-cmpi-dhcp

  • sblim-cmpi-ethport_profile (DSP1014)

  • sblim-cmpi-fsvol

  • sblim-cmpi-network

  • sblim-cmpi-nfsv3

  • sblim-cmpi-nfsv4

  • sblim-cmpi-sysfs

  • sblim-gather-provider

  • smis-providers

  • sblim-cmpi-dns

  • sblim-cmpi-samba

  • sblim-cmpi-smbios

Support for Web Services for Management (WS Management)

The WS-Management protocol is supported via Openwsman, providing client (package: openwsman-client) and server (package: openwsman-server) implementations.

This allows for interoperable management with the Windows 'winrm' stack.

WebYaST — Web-Based Remote Management

WebYaST is an easy to use, web-based administration tool targeted at casual Linux administrators.

WebYaST is an add-on product. To deploy it, download the WebYaST media from http://download.novell.com (strings search or direct link: https://download.suse.com/Download?buildid=uVIzILaPtzg~) and install the add-on product e.g., via the YaST add-on module. After installation, follow these steps:

  • Open firewall port (note port number change!):

    SuSEfirewall2 open EXT TCP 4984    
    SuSEfirewall2 restart
  • Start services:

    rccollectd start
    rcwebyast start

The last command will display the URL to connect to with a Web browser.

For information about migrating to SP3, see Section 10.4.9, “Migrating SUSE Linux Enterprise Server 11 SP2 with WebYaST Installed via wagon”.

6.8 Other

EVMS2 Replaced with LVM2

Default File System

With SUSE Linux Enterprise Server 11, the default file system in new installations has been changed from ReiserFS to ext3. A public statement can be found at http://www.suse.com/products/server/technical-information/#FileSystem .

UEFI Enablement on AMD64/Intel64

SWAP over NFS

Linux Foundation's Carrier Grade Linux (CGL)

SUSE supports the Linux Foundation's Carrier Grade Linux (CGL) specification. SUSE Linux Enterprise 11 meets the latest CGL 4.0 standard, and is CGL registered. For more information, see http://www.suse.com/products/server/cgl/.

Hot-Add Memory and CPU with vSphere 4.1 or Newer

Hot-add memory and CPU is supported and tested for both 32-bit and 64-bit systems when running vSphere 4.1 or newer. For more information, see the VMware Compatibility Guide at http://www.vmware.com/resources/compatibility/search.php?deviceCategory=software&partner=465&virtualHardware=23.

7 Driver Updates

7.1 X.Org: fbdev Used in UEFI Secure Boot Mode (ASpeed Chipset)

The unaccelerated fbdev driver is used as a fallback in UEFI secure boot mode with the AST KMS driver, EFI VGA, and other currently unknown framebuffer drivers.

7.2 X.Org Driver Used in UEFI Secure Boot Mode (Matrox)

The unaccelerated "mgag200"/"modesetting" (generic X.Org) driver combo is used instead of the "mga" X.Org driver if the machine is running in UEFI secure boot mode. In this case, the "mga" driver does not load, and a warning message is written to the kernel log.

7.3 Support for Intel 2nd Generation of Atom Microserver

This Service Pack adds support for Intel's 2nd Generation Microserver solution based on the Intel® Atom™ Processor C2xxx Series Product Family, codenamed Avoton SoC (System on Chip).

Please note that the network adapter of the 2nd Generation Microserver requires a driver update (igb), which SUSE will provide later in the year.

7.4 Network Drivers

  • Updated bnx driver to version 2.0.4

  • Updated bnx2x driver to version 1.52.1-7

  • Updated e100 driver to version 3.5.24-k2

  • Updated tg3 driver to version 3.106

  • Added bna driver for Brocade 10Gbit LAN card in version 2.1.2.1

  • Updated bfa driver to version 2.1.2.1

  • Updated qla3xxx driver to version 2.03.00-k5

  • Updated sky2 driver to version 1.25

7.4.1 Add Support for TIPC (Transparent Inter-Process Communication)

The Transparent Inter-Process Communication protocol (TIPC) allows applications in a cluster environment to communicate quickly and reliably with each other, regardless of their location within the cluster. TIPC includes a network topology service that lets applications track both functional and physical changes in the network, helping to synchronize startup of distributed applications and their responses to failure conditions. A socket API is used to interact with the topology service and other applications. Address assignment and bearer configuration is managed from a userspace application called tipc-config. More information about TIPC can be found at http://tipc.sourceforge.net/.

7.4.2 Emulex be2net Driver

Updating Ethernet Firmware

The Emulex Ethernet driver supports updating the firmware image in the UCNA flash through the request_firmware interface in Linux. You can perform this update when the UCNA is online and passing network/storage traffic.

To update the ethernet firmware image:

Copy the latest firmware image under the /lib/firmware directory:

cp be3flash.ufi /lib/firmware

Start the update process:

ethtool -f eth<X> be3flash.ufi 0

Firmware Update Needed for SR-IOV Functionality

Emulex 10Gb E Virtual Fabric adapter firmware version 4.6.166.7 or later is required for SR-IOV functionality.

Limitation in Bridging on Emulex 10Gb E Virtual Fabric Adapter with SR-IOV Enabled

PING does not work when attempting to bridge the ports of the Emulex 10Gb E Virtual Fabric adapter to the virtual machines while SR-IOV is enabled in the BIOS. This issue occurs due to limitations of the virtual Ethernet bridge in the adapter. Please refer to the Emulex release notes for further information before enabling SR-IOV.

7.4.3 Intel ixgbe driver update

Support for the following devices has been added to the SLES 11 SP3 ixgbe driver:

  • Intel(R) 82599 10 Gigabit Network Connection

  • Intel(R) Ethernet Converged Network Adapter X520-4

  • Intel® Ethernet Controller X540-AT1

Note: this includes the following PCI device IDs: 0x1557, 0x154A, and 0x1560.

The driver also includes the latest bug fixes to the driver available at the time of the driver update cutoff date.

7.4.4 Intel igb driver update

Support for the following devices has been added to the SLES 11 SP3 igb driver:

  • Intel® Ethernet Server Adapter I210-T1

  • Intel® I210 Gigabit Fiber Network Connection

  • Intel® I210 Gigabit Backplane Connection

  • Intel® I210 Gigabit Network Connection

  • Intel® I211 Gigabit Network Connection

Note: this includes the following PCI device IDs: 0x1533, 0x1534, 0x1536, 0x1537, 0x1538, and 0x1539.

The driver also includes the latest bug fixes to the driver available at the time of the driver update cutoff date.

7.4.5 Intel e1000e driver update

Support for the following devices has been added to the SLES 11 SP3 e1000e driver:

  • Intel® Ethernet Connection I217-LM

  • Intel® Ethernet Connection I217-V

  • Intel® Ethernet Connection I218-LM

  • Intel® Ethernet Connection I218-V

Note: this includes the following PCI device IDs: 0x153A, 0x153B, 0x155A, and 0x1559.

The driver also includes the latest bug fixes to the driver available at the time of the driver update cutoff date.

7.4.6 Updating Firmware for QLogic 82XX based CNA

For QLogic 82XX-based CNAs running a 4.7.x firmware version, update the firmware to the latest version from the QLogic website, or to the version recommended by the OEM.

7.5 Storage Drivers

  • Updated qla2xxx to version 8.04.00.13.11.3-k

  • Updated qla4xxx to version v5.03.00.06.11.3-k0

  • Updated megaraid_mbox driver to version 2.20.5.1

  • Updated megaraid_sas to version 4.27

  • Updated MPT Fusion to version 4.22.00.00

  • Updated mpt2sas driver to version 04.100.01.02

  • Updated lpfc driver to version 8.3.5.7

  • Added bnx2i driver for Broadcom NetXtreme II in version 2.1.1

  • Updated bfa driver to version 2.1.2.1

  • The enic driver was updated to version 1.4.2 to support newer Cisco UCS systems. This update also replaces LRO (Large Receive Offload) with GRO (Generic Receive Offload).

7.5.1 LIO Based FC Targets

The LIO target stack has been updated to support FC targets based on QLogic adapters.

7.5.2 PSM Library for the Intel OFED Driver Functionality

PSM is Performance Scaled Messaging and is required to enable the Intel IB adapter. The best messaging rates, latency, and bandwidth are obtained by using PSM with the Intel IB adapter.

To implement the Intel OFED solution, follow these steps to install OFED and the MPIs:

  1. Install SLES 11 SP3 with InfiniBand and Scientific support. InfiniBand support installs the OFED stack, which includes a driver for Intel HCAs. Scientific support installs all the MPIs and related tests.

  2. Verify the following package versions or greater are installed: infinipath-psm-3.1, mpitests-3.2, mpitests-mvapich2-3.2, mpitests-mvapich2-psm-3.2, mpitests-openmpi-3.2, mvapich2-1.5.1p1, mvapich2-psm-1.5.1p1, openmpi-1.4.4

  3. Install the libipathverbs package. It is part of the installation source but is not installed in the first step.

  4. Reboot.

  5. Load the following modules: ib_uverbs, ib_umad, and ib_ucm.

Verification:

ibv_devinfo will show the IB ports up.

7.5.4 SATA Device Sleep

Device Sleep is a feature as described in AHCI 1.3.1 Technical Proposal. This feature enables an HBA and SATA storage device to enter the DevSleep interface state, enabling lower power SATA-based systems.

7.5.5 Update of Intel SCU Driver

The Intel isci driver got updated to the latest upstream status.

7.5.6 iSCSI Update

The Linux-iSCSI.org (LIO) kernel subsystem and related utilities have been updated to the kernel v3.4 level.

7.5.7 Support for Micron Technology P320 PCIe SSD

This Service Pack adds support for Micron Technology P320 PCIe SSD solutions through the mtip32xx block driver.

7.5.8 Brocade FCoE Switch Does Not Accept Fabric Logins from Initiator

  1. Once the link is up, LLDP queries QoS to get the new PFC settings and immediately sends an "FCoE incapable" event, which is correct.

  2. After negotiating with the neighbor, an LLDP frame with an unrecognized IEEE DCBX version is received, so the link is declared CEE incapable and an "FCoE capable" event with PFC = 0 is sent to the fcoe kernel code.

  3. The neighbor then adjusts its version to match the local CEE version; now the correct DCBX TLV is found in the incoming LLDP frame and the link is declared CEE capable. At this time, no further "FCoE capable" event is sent, since one was already sent in step 2.

To solve this, upgrade the switch firmware to v6.4.3 or above.

7.6 Other Drivers

  • Updated CIFS to version 1.74

  • Updated intel-i810 driver

  • Added X11 driver for AMD Geode LX 2D (xorg-x11-driver-video-amd)

  • Updated X11 driver for Radeon cards

  • Updated XFS and DMAPI driver

  • Updated Wacom driver to version 1.46

7.6.1 'Click-on-touch' and 'click-on-release' for the xf86-input-evdev Input Driver

Kiosk-style systems (cash registers, vending machines, etc.) with touchscreens need to make sure that touchscreen 'clicks' are delivered accurately at the point where the touch screen was touched initially, i.e. where a widget element was 'pressed'. Virtually all UI toolkits wait for a button release before accepting a 'click' and record the location at that time to determine the widget element 'clicked'. This behavior is undesirable for kiosk-style systems. At the same time, kiosk-style systems use a fixed window placement: the user should not be able to 'drag' windows around.

To remedy this, the following approaches are possible:

  • Use of special UI toolkits developed for this purpose. However, this would limit the choice of toolkits.

  • Use of touchscreens which synthesize a 'button release' event right after a 'button press', regardless of whether the finger is still touching the screen. Such touch screens exist.

  • A customized input driver which is able to emulate this feature: This provides the greatest flexibility as it will work with all touch screens and toolkits.

For this purpose, the 'click-on-touch' and 'click-on-release' features have been added to the updated evdev input driver (xf86-input-evdev). The feature can be enabled either statically using a config file or on-the-fly at run time by changing the appropriate properties on the input device. Please refer to the evdev man page (man 4 evdev). Device properties can be set by an X application (for instance, cash register software) or generically using the tool 'xinput' (refer to 'man 1 xinput').

The following options can be set:

1. mode:
  0 - default press/release behavior,
  1 - click-on-touch: a button release is synthesized immediately after a
      'button press',
  2 - click-on-release: the button press event is delayed until the finger
      is lifted off the screen, at which point a release event is generated.
2. button:
  the button to which this behavior is applied (since touchscreens generate a
  button 1 event for screen touches, this usually does not need to be modified).
3. delay:
  To allow for visual feedback, a small delay may be introduced between press
  and release. Finger movements during the delay period neither generate
  position events nor are reflected in the position of the release event. The
  delay time is set in milliseconds. A value of 100 (0.1 sec) is sufficient to
  obtain visible feedback.
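As a sketch, the three values could also be set statically in an X.Org configuration snippet. The option names below are illustrative assumptions, not the shipped names; the authoritative option names are documented in the evdev man page (man 4 evdev) of the updated driver:

```
## /etc/X11/xorg.conf.d/99-touchscreen.conf (sketch; option names are
## illustrative, see "man 4 evdev" for the exact names)
Section "InputClass"
    Identifier "kiosk touchscreen"
    MatchIsTouchscreen "on"
    Driver "evdev"
    Option "Mode"   "1"    # 1 = click-on-touch
    Option "Button" "1"    # apply the behavior to button 1
    Option "Delay"  "100"  # delay in milliseconds
EndSection
```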

7.6.2 SaX2: Changing Video Resolution

With a SLE 11 SP3 maintenance update (or the general update to SLE 11 SP4), SaX2 no longer lets you select a video resolution when KMS is active. With KMS and the native or the modesetting driver, RandR > 1.1 is available, which lets you change the resolution on the fly. The GNOME desktop provides a tool to do this and save the settings persistently across sessions.

For any UMS (and RandR 1.1) drivers, you will still get the full list of video modes. If you select an unsupported mode, it will be ignored and the monitor's preferred default mode will be used instead.

7.6.3 SUSE Linux Enterprise Virtual Machine Driver Pack 2.1

SUSE Linux Enterprise Virtual Machine Driver Pack contains disk and network device drivers for Microsoft Windows operating systems that enable high-performance hosting of unmodified guests on top of SUSE Linux Enterprise Server. Supported host operating systems are SUSE Linux Enterprise Server 10 SP4 and SUSE Linux Enterprise Server 11 SP2 or later. SUSE Linux Enterprise Virtual Machine Driver Pack contains paravirtualized drivers for both the KVM and Xen hypervisors, available as part of our SUSE Linux Enterprise Server product. The key update in version 2.1 is support for SUSE Linux Enterprise Server 11 SP3 and the new Microsoft Windows releases Microsoft Windows Server 2012 and Microsoft Windows 8.

7.6.4 New Intel Platform and CPU Support

This SP adds support for the following new Intel CPUs:

  • 4th Generation Intel® Core™ Processor

  • Intel® Xeon® processor E5-2600 v2 product family

  • Intel® Xeon® processor E5-1600 v2 product family

  • Intel® Xeon® processor E5-2400 v2 product family

  • Intel® Xeon® processor E5-4600 v2 product family

  • Next generation Intel® Xeon® processor E7-8800/4800/2800 v2 product families (codenamed ‘Ivy Bridge-EX’)

This covers new support for the following platforms:

  • Brickland-EX

  • Romley

  • Next generation

7.6.5 Recent SSDs and SAS/SATA Interfaces in SMARTmon

Spinning disks and SSDs evolve, exposing new parameters or a refurbished SMART Attributes Data Structure.

To match vendor-specific data structures and avoid misleading interpretation, the SMARTmon tools now include the new data structures.

7.6.6 Updated Support for Intel Integrated Graphics

This Service Pack adds support for the 4th Generation Intel® Core™ Processor integrated graphics core.

7.6.7 Support for Intel Microserver

This Service Pack adds support for Intel's Microserver solution based on the Intel® Atom™ Processor S1200 Product Family, codenamed Centerton SoC (System on Chip).

8 Other Updates

8.1 Update of PostgreSQL to Version 9.4

The upstream end-of-life for version 9.1 is announced for September 2016. Customers need to switch to a newer supported version by then.

PostgreSQL was updated to version 9.4, prolonging the timeframe during which PostgreSQL is supported. Thus there is enough time for switching.

8.2 Updating to Firefox 24 ESR

Firefox was updated to version 24 ESR.

This update also brings updates of Mozilla NSPR and Mozilla NSS libraries. Mozilla NSS libraries contain cryptographic enhancements, including TLS 1.2 support.

It comes with PDF.js, which now replaces the Acroread PDF plugin.

8.3 Package python-ethtool

The Python bindings for ethtool were updated in SLE 11 SP3 to version 0.7. This update introduced several stability bugfixes and support for handling IPv6.

8.4 Update Python to 2.6.8

Python 2.6.7 and 2.6.8 are security-only updates to 2.6.6.

Python 2.6 helps with migrating to Python 3.0, which is a major redesign of the language. Whenever possible, Python 2.6 incorporates new features and syntax changes from 3.0 while remaining compatible with existing code. In case of conflict, Python 2.6 adds compatibility functions in a future_builtins module and a -3 switch to warn about usages that will become unsupported in 3.0.

Some significant new packages have been added to the standard library, such as the multiprocessing and json modules.

8.5 List of Updated Packages

  • Updated acct to version 6.5.5

  • Updated ant to version 1.7.1

  • Updated augeas to version 0.9.0

  • Updated autoyast2 to version 2.17.68

  • Updated binutils to version 2.23.1

  • Updated biosdevname to version 0.4.1

  • Updated bluez to version 4.99

  • Updated brocade-firmware to version 3.1.2.1

  • Updated btrfsprogs to version 0.20

  • Updated checkmedia to version 3.0

  • Updated dapl to version 2.0.34

  • Updated device-mapper to version 1.02.77

  • Updated fuse to version 2.8.7

  • Updated gdb to version 7.5.1

  • Updated gnu-efi to version 3.0q

  • Updated hwinfo to version 15.50

  • Updated hyper-v to version 5

  • Updated i4l-base to version 2013.5.13

  • Updated ibutils to version 1.5.7

  • Updated infiniband-diags to version 1.5.13

  • Updated installation-images to version 11.194

  • Updated ipmitool to version 1.8.12

  • Updated iprutils to version 2.3.13

  • Updated irqbalance to version 1.0.4

  • Updated IBM Java 1.6.0 (java-1_6_0-ibm) to SR13 FP1

  • Updated IBM Java 1.7.0 (java-1_7_0-ibm) to SR4 FP1

  • Updated kdump to version 0.8.4

  • Updated kernel-source to version 3.0.76

  • Updated kexec-tools to version 2.0.3

  • Updated kvm to version 1.4.1

  • Updated ledmon to version 0.76

  • Updated libdfp to version 1.0.8

  • Updated libdrm to version 2.4.41

  • Updated libHBAAPI2 to version 2.2.7

  • Updated libhbalinux2 to version 1.0.14

  • Updated libhugetlbfs to version 2.15

  • Updated libibmad to version 1.3.9

  • Updated libibumad to version 1.3.7

  • Updated libmlx4-rdmav2 to version 1.0.1

  • Updated libnes-rdmav2 to version 1.1.3

  • Updated librdmacm to version 1.0.15

  • Updated libsdp to version 1.1.108

  • Updated libservicelog to version 1.1.12

  • Updated libvirt to version 1.0.5.1

  • Updated libvpd2 to version 2.1.1 (and on ppc64 to version 2.2.0)

  • Updated libzypp to version 9.34.0

  • Updated linuxrc to version 3.3.91

  • Updated lldpad to version 0.9.45

  • Updated lsvpd to version 1.7.0

  • Updated lvm2 to version 2.02.98

  • Updated lxc to version 0.8.0

  • Updated makedumpfile to version 1.5.1

  • Updated mcelog to version 1.0.2013.01.18

  • Updated mdadm to version 3.2.6

  • Updated Mesa to version 9.0.3

  • Updated Mozilla Firefox to version 24 ESR

  • Updated mysql to version 5.5.31

  • Updated nagios-plugins to version 1.4.16

  • Updated nfsidmap to version 0.25

  • Updated nss-shared-helper to version 1.0.10

  • Updated ofed to version 1.5.4.1

  • Updated openCryptoki to version 2.4.2

  • Updated open-fcoe to version 1.0.26

  • Updated opensm to version 3.3.13

  • Updated openssh to version 6.2p2

  • Updated powerpc-utils to version 1.2.16

  • Updated pciutils-ids to version 2013.2.11

  • Updated perf to version 3.0.76

  • Updated perl-Bootloader to version 0.4.89.55

  • Updated php53 to version 5.3.17

  • Updated pixman to version 0.24.4

  • Updated postfix to version 2.9.4

  • Updated ppc64-diag to version 2.6.1

  • Updated python-lxml to version 2.3.6

  • Updated release-notes-sles to version 11.3.20

  • Updated rng-tools to version 4

  • Updated rsyslog to version 5.10.1

  • Updated servicelog to version 1.1.11

  • Updated sg3_utils to version 1.35

  • Updated sles-admin_de to version 11.3

  • Updated sles-admin_ja to version 11.3

  • Updated sles-admin_zh_CN to version 11.3

  • Updated sles-admin_zh_TW to version 11.3

  • Updated sles-deployment_de to version 11.3

  • Updated sles-deployment_ja to version 11.3

  • Updated sles-deployment_ko to version 11.3

  • Updated sles-deployment_zh_CN to version 11.3

  • Updated sles-deployment_zh_TW to version 11.3

  • Updated sles-installquick_ar to version 11.3

  • Updated sles-installquick_cs to version 12.3

  • Updated sles-installquick_de to version 12.3

  • Updated sles-installquick_es to version 12.3

  • Updated sles-installquick_fr to version 12.3

  • Updated sles-installquick_hu to version 12.3

  • Updated sles-installquick_it to version 12.3

  • Updated sles-installquick_ja to version 12.3

  • Updated sles-installquick_ko to version 12.3

  • Updated sles-installquick_pl to version 12.3

  • Updated sles-installquick_pt_BR to version 12.3

  • Updated sles-installquick_ru to version 12.3

  • Updated sles-installquick_zh_CN to version 12.3

  • Updated sles-installquick_zh_TW to version 12.3

  • Updated sles-manuals_en to version 11.3

  • Updated smartmontools to version 6.0

  • Updated snapper to version 0.1.2

  • Updated sssd to version 1.9.4

  • Updated stunnel to version 4.54

  • Updated sysconfig to version 0.71.61

  • Updated tpm-tools to version 1.3.8

  • Updated trousers to version 0.3.10

  • Updated virt-manager to version 0.9.4

  • Updated virt-utils to version 1.2.1

  • Updated virt-viewer to version 0.5.4

  • Updated vm-install to version 0.6.23

  • Updated x3270 to version 3.3.12

  • Updated xen to version 4.2.2_04

  • Updated yast2 to version 2.17.129

  • Updated yast2-add-on to version 2.17.31

  • Updated yast2-apparmor to version 2.17.14

  • Updated yast2-bootloader to version 2.17.96

  • Updated yast2-country to version 2.17.54

  • Updated yast2-dirinstall to version 2.17.5

  • Updated yast2-fcoe-client to version 2.17.25

  • Updated yast2-firewall to version 2.17.13

  • Updated yast2-firstboot to version 2.17.18

  • Updated yast2-installation to version 2.17.108

  • Updated yast2-iscsi-client to version 2.17.36

  • Updated yast2-kdump to version 2.17.26

  • Updated yast2-kerberos-client to version 2.17.16

  • Updated yast2-ldap to version 2.17.8

  • Updated yast2-ldap-client to version 2.17.37

  • Updated yast2-mail to version 2.17.7

  • Updated yast2-ncurses to version 2.17.22

  • Updated yast2-network to version 2.17.195

  • Updated yast2-nfs-client to version 2.17.17

  • Updated yast2-nis-server to version 2.17.3

  • Updated yast2-ntp-client to version 2.17.15

  • Updated yast2-online-update to version 2.17.23

  • Updated yast2-packager to version 2.17.107

  • Updated yast2-pkg-bindings to version 2.17.59

  • Updated yast2-registration to version 2.17.38

  • Updated yast2-repair to version 2.17.12

  • Updated yast2-samba-client to version 2.17.27

  • Updated yast2-samba-server to version 2.17.15

  • Updated yast2-security to version 2.17.16

  • Updated yast2-slp to version 2.17.0

  • Updated yast2-snapper to version 2.17.19

  • Updated yast2-squid to version 2.17.12

  • Updated yast2-storage to version 2.17.142

  • Updated yast2-trans-ar to version 2.17.37

  • Updated yast2-trans-cs to version 2.17.46

  • Updated yast2-trans-da to version 2.17.35

  • Updated yast2-trans-de to version 2.17.55

  • Updated yast2-trans-el to version 2.17.18

  • Updated yast2-trans-en_GB to version 2.17.29

  • Updated yast2-trans-en_US to version 2.17.34

  • Updated yast2-trans-es to version 2.17.53

  • Updated yast2-trans-fi to version 2.17.38

  • Updated yast2-trans-fr to version 2.17.55

  • Updated yast2-trans-hu to version 2.17.56

  • Updated yast2-trans-it to version 2.17.56

  • Updated yast2-trans-ja to version 2.17.47

  • Updated yast2-trans-ko to version 2.17.53

  • Updated yast2-trans-nb to version 2.17.32

  • Updated yast2-trans-nl to version 2.17.52

  • Updated yast2-trans-pl to version 2.17.49

  • Updated yast2-trans-pt to version 2.17.10

  • Updated yast2-trans-pt_BR to version 2.17.52

  • Updated yast2-trans-ru to version 2.17.47

  • Updated yast2-trans-sv to version 2.17.37

  • Updated yast2-trans-tr to version 2.17.16

  • Updated yast2-trans-uk to version 2.17.28

  • Updated yast2-trans-zh_CN to version 2.17.41

  • Updated yast2-trans-zh_TW to version 2.17.36

  • Updated yast2-update to version 2.17.24

  • Updated yast2-users to version 2.17.54

  • Updated yast2-vm to version 2.17.16

  • Updated yast2-wagon to version 2.17.38

  • Updated yast2-x11 to version 2.17.17

  • Updated zlib to version 1.2.7

  • Updated zypper to version 1.6.307

9 Software Development Kit

SUSE provides a Software Development Kit (SDK) for SUSE Linux Enterprise 11 Service Pack 3. This SDK contains libraries, development environments, and tools, organized in the following patterns:

  • C/C++ Development

  • Certification

  • Documentation Tools

  • GNOME Development

  • Java Development

  • KDE Development

  • Linux Kernel Development

  • Programming Libraries

  • .NET Development

  • Miscellaneous

  • Perl Development

  • Python Development

  • Qt 4 Development

  • Ruby on Rails Development

  • Ruby Development

  • Version Control Systems

  • Web Development

  • YaST Development

9.1 Optional GCC Compiler Suite on SDK

The optional compiler on the SDK has been updated to GCC 4.7.2. It brings better standards compliance (ISO C11, ISO C++11), improved optimizations, and makes it possible to benefit from new hardware instructions.

For details, see http://gcc.gnu.org/gcc-4.7/changes.html.

SUSE also added support for the IBM zEnterprise EC12 architecture (options -march=zEC12 and -mtune=zEC12) and for AMD's new low-power core processor (Family 16h) (options -march=btver2 and -mtune=btver2).

10 Update-Related Notes

This section includes update-related information for this release.

10.1 General Notes

10.1.1 Upgrading PostgreSQL Installations from 8.3 to 9.1

To upgrade a PostgreSQL server installation from version 8.3 to 9.1, the database files need to be converted to the new version.

Newer versions of PostgreSQL come with the pg_upgrade tool, which simplifies and speeds up the migration of a PostgreSQL installation to a new version. Formerly, a dump and restore was required, which was much slower.

pg_upgrade needs to have the server binaries of both versions available. To allow this, we had to change the way PostgreSQL is packaged as well as the naming of the packages, so that two or more versions of PostgreSQL can be installed in parallel.

Starting with version 9.1, PostgreSQL package names contain numbers indicating the major version. In PostgreSQL terms, the major version consists of the first two components of the version number, i.e., 8.3, 8.4, 9.0, or 9.1. So, the packages for PostgreSQL 9.1 are named postgresql91, postgresql91-server, etc. Inside the packages, the files were moved from their standard locations to a versioned location such as /usr/lib/postgresql83/bin or /usr/lib/postgresql91/bin to avoid file conflicts if packages are installed in parallel. The update-alternatives mechanism creates and maintains symbolic links that cause one version (by default the highest installed version) to re-appear in the standard locations. By default, database data is stored under /var/lib/pgsql/data on SUSE Linux.

The following preconditions have to be fulfilled before data migration can be started:

  1. If not already done, the packages of the old PostgreSQL version must be upgraded to the new packaging scheme through a maintenance update. For SLE 11 this means installing the patch that upgrades PostgreSQL from version 8.3.14 to 8.3.19 or higher.

  2. The packages of the new PostgreSQL major version need to be installed. For SLE 11 this means installing postgresql91-server and all the packages it depends on. As pg_upgrade is contained in postgresql91-contrib, that package has to be installed as well, at least until the migration is done.

  3. Unless pg_upgrade is used in link mode, the server must have enough free disk space to temporarily hold a copy of the database files. If the database instance was installed in the default location, the needed space can be determined by running the following command as root: "du -hs /var/lib/pgsql/data". If space is tight, it might help to run the "VACUUM FULL" SQL command on each database in the instance to be migrated, but be aware that it may take a very long time.
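
Once these preconditions are met, a migration run can look roughly like the following. This is an illustrative sketch only; the paths follow the versioned layout described above, and the exact directory and service names may differ on your system:

```sh
# Stop the running 8.3 server before migrating.
rcpostgresql stop

# Initialize a new 9.1 data directory next to the old one (path is illustrative).
su - postgres -c "/usr/lib/postgresql91/bin/initdb /var/lib/pgsql/data91"

# Run pg_upgrade with the binaries and data directories of both versions.
su - postgres -c "/usr/lib/postgresql91/bin/pg_upgrade \
    --old-bindir=/usr/lib/postgresql83/bin \
    --new-bindir=/usr/lib/postgresql91/bin \
    --old-datadir=/var/lib/pgsql/data \
    --new-datadir=/var/lib/pgsql/data91"
```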

Upstream documentation about pg_upgrade, including step-by-step instructions for performing a database migration, can be found under file:///usr/share/doc/packages/postgresql91/html/pgupgrade.html (if the postgresql91-docs package is installed), or online under http://www.postgresql.org/docs/9.1/static/pgupgrade.html. NOTE: The online documentation starts by explaining how to install PostgreSQL from the upstream sources (which is not necessary on SLE) and also uses different directory names (/usr/local instead of the update-alternatives based paths described above).

For background information about the inner workings of pg_upgrade and a performance comparison with the old dump-and-restore method, see http://momjian.us/main/writings/pgsql/pg_upgrade.pdf.

10.1.2 Lower Version Numbers in SUSE Linux Enterprise 11 SP3 than in SP2

When upgrading from SUSE Linux Enterprise Server or Desktop 11 SP2 to SP3, you may encounter a version downgrade of specific software packages, including the Linux Kernel.

SLE 11 SP3 has all its software packages and updates in the SP3 repositories. No packages from SP2 repositories are needed for installation or upgrade, not even from the SP2 update repositories.

Note

It is important to remember that the version number is not sufficient to determine which bugfixes are applied to a software package.

If you add SP2 update repositories, be aware of one characteristic of the repository concept: version numbers in the SP2 update repository can be higher than those in the SP3 repository. Thus, if you update with the SP2 repositories enabled, you may get the SP2 version of a package instead of the SP3 version. This is admittedly unfortunate.

It is recommended to avoid using versions from a lower SP, because using the SP2 package instead of the SP3 package can result in unexpected side effects. Thus we advise switching off all SP2 repositories if you do not really need them. Keep old repositories only if your system depends on a specific older package version. If you nevertheless need a package from a lower SP, and thus have SP2 repositories enabled, make sure that the packages you intended to upgrade have actually been upgraded.

In summary: if you have an SP2 installation with all patches and updates applied, and then migrate offline to SP3 GA, you will see a downgrade of some packages. This is expected behavior.

10.1.3 Upgrading from SLES 10 (GA and Service Packs) or SLES 11 GA

There are supported ways to upgrade from SLES 10 GA and SPx or SLES 11 GA and SP1 to SLES 11 SP3, which may require intermediate upgrade steps:

  • SLES 10 GA -> SLES 10 SP1 -> SLES 10 SP2 -> SLES 10 SP3 -> SLES 10 SP4 -> SLES 11 SP3, or

  • SLES 11 GA -> SLES 11 SP1 -> SLES 11 SP2 -> SLES 11 SP3

10.1.4 Online Migration from SP2 to SP3 via "YaST wagon"

The online migration from SP2 to SP3 is supported via the "YaST wagon" module.

10.1.5 Migrating to SLE 11 SP3 Using Zypper

To migrate the system to the Service Pack 3 level with zypper, proceed as follows:

  • Open a root shell.

  • Run zypper ref -s to refresh all services and repositories.

  • Run zypper patch to install package management updates.

  • Now it is possible to install all available updates for SLES/SLED 11 SP2; run zypper patch again.

  • Now the installed products contain information about distribution upgrades and which migration products should be installed to perform the migration. Read the migration product information from /etc/products.d/*.prod and install them.

  • Enter the following command:

    grep '<product' /etc/products.d/*.prod

    A sample output could be as follows:

    <product>sle-sdk-SP3-migration</product>
    <product>SUSE_SLES-SP3-migration</product>
  • Install these migration products (example):

    zypper in -t product sle-sdk-SP3-migration SUSE_SLES-SP3-migration
  • Run suse_register -d 2 -L /root/.suse_register.log to register the products in order to get the corresponding SP3 Update repositories.

  • Run zypper ref -s to refresh services and repositories.

  • Check the repositories using zypper lr. Disable SP1 and SP2 repositories after the registration and enable the new SP3 repositories (such as SP3-Pool, SP3-Updates):

    zypper mr --disable <repo-alias>
    zypper mr --enable <repo-alias>

    Also disable repositories you do not want to update from.

  • Then perform a distribution upgrade by entering the following command:

    zypper dup --from SLES11-SP3-Pool --from SLES11-SP3-Updates \
      --from SLE11-SP2-WebYaST-1.3-Pool --from SLE11-SP2-WebYaST-1.3-Updates

    Add more SP3 repositories here if needed, e.g., if add-on products are installed. For WebYaST, the repositories are actually named SLE11-SP2-*, because there is one WebYaST release that runs on two SP code bases.

    Note

    If you make sure that only the repositories you migrate from are enabled, you can omit the --from parameters.

  • zypper will report that it will delete the migration product and update the main products. Confirm the message to continue updating the RPM packages.

  • To do a full update, run zypper patch.

  • After the upgrade is finished, register the new products again:

    suse_register -d 2 -L /root/.suse_register.log
  • Run zypper patch after re-registering. Some products do not use the update repositories during the migration, and these repositories are not active at this point in time.

  • Reboot the system.

10.1.6 Migration from SUSE Linux Enterprise Server 10 SP4 via Bootable Media

Migration is supported from SUSE Linux Enterprise Server 10 SP4 via bootable media (incl. PXE boot).

10.1.7 Migrating Hosts Running SMT 11 SP2 to SMT 11 SP3

As part of the release of the SLE 11 SP3 product family, SUSE will also release Subscription Management Tool 11 SP3 (SMT 11 SP3). We expect to release SMT 11 SP3 within a month after the release of SLES 11 SP3.

Do not migrate hosts running SMT 11 SP2 to SLES 11 SP3 before SMT 11 SP3 is available.

You can update SLE 11 SP3 hosts via SMT 11 SP2 without any limitations until SMT 11 SP3 is released.

10.1.8 Online Migration with Debuginfo Packages Not Supported

Online migration from SP2 to SP3 is not supported if debuginfo packages are installed.

10.1.9 Upgrading to SLES 11 SP3 with Root File System on iSCSI

The upgrade or the automated migration from SLES 10 to SLES 11 SP3 may fail if the root file system of the machine is located on iSCSI because of missing boot options.

There are two approaches to solve this if you are using AutoYaST (adjust IP addresses and hostnames according to your environment):

With Manual Intervention:

Use the following boot options:

withiscsi=1 autoupgrade=1 autoyast=http://myserver/autoupgrade.xml

Then, in the dialog of the iSCSI initiator, configure the iSCSI device.

After successful configuration of the iSCSI device, YaST will find the installed system for the upgrade.

Fully Automated Upgrade:

Add or modify the <iscsi-client> section in your autoupgrade.xml as follows:

<iscsi-client>
  <initiatorname>iqn.2012-01.com.example:initiator-example</initiatorname>
  <targets config:type="list">
    <listentry>
      <authmethod>None</authmethod>
      <iface>default</iface>
      <portal>10.10.42.84:3260</portal>
      <startup>onboot</startup>
      <target>iqn.2000-05.com.example:disk01-example</target>
    </listentry>
  </targets>
  <version>1.0</version>
</iscsi-client>

Then, run the automated upgrade with these boot options:

autoupgrade=1 autoyast=http://myserver/autoupgrade.xml

10.1.10 Kernel Split in Different Packages

With SUSE Linux Enterprise Server 11, the kernel RPMs are split into several parts:

  • kernel-flavor-base

    Very reduced hardware support, intended to be used in virtual machine images.

  • kernel-flavor

    Extends the base package; contains all supported kernel modules.

  • kernel-flavor-extra

    All other kernel modules which may be useful but are not supported. This package will not be installed by default.

10.1.11 Tickless Idle

SUSE Linux Enterprise Server uses tickless timers. This can be disabled by adding nohz=off as a boot option.

10.1.12 Development Packages

SUSE Linux Enterprise Server will no longer contain any development packages, with the exception of some core development packages necessary to compile kernel modules. Development packages are available in the SUSE Linux Enterprise Software Development Kit.

10.1.13 Displaying Manual Pages with the Same Name

The man command now asks which manual page the user wants to see if manual pages with the same name exist in different sections. The user is expected to type the section number to display that manual page.

If you want to revert to the previously used behavior, set MAN_POSIXLY_CORRECT=1 in a shell initialization file such as ~/.bashrc.
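
For example, with several printf pages installed, the interaction looks roughly like this (section numbers depend on the pages present on your system):

```sh
man printf      # lists the matching sections and asks which page to display
man 1 printf    # bypasses the prompt by naming the section directly
man 3 printf    # the C library function instead of the shell utility

# To restore the old behavior permanently, e.g. in ~/.bashrc:
export MAN_POSIXLY_CORRECT=1
```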

10.1.14 YaST LDAP Server No Longer Uses /etc/openldap/slapd.conf

The YaST LDAP Server module no longer stores the configuration of the LDAP Server in the file /etc/openldap/slapd.conf. It uses OpenLDAP's dynamic configuration backend, which stores the configuration in an LDAP database itself. That database consists of a set of .ldif files in the directory /etc/openldap/slapd.d. You should usually not need to access those files directly. To access the configuration you can either use the yast2-ldap-server module or any capable LDAP client (e.g., ldapmodify, ldapsearch, etc.). For details on the dynamic configuration of OpenLDAP, refer to the OpenLDAP Administration Guide.
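
As an illustration, the dynamic configuration can be read and changed with the standard OpenLDAP clients. The sketch below assumes the server listens on the ldapi:/// socket and grants the root user SASL EXTERNAL access; adjust the authentication options to your setup:

```sh
# Inspect the dynamic configuration tree.
ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config

# Modify a setting, e.g. the server log level, via LDIF.
ldapmodify -Y EXTERNAL -H ldapi:/// <<EOF
dn: cn=config
changetype: modify
replace: olcLogLevel
olcLogLevel: stats
EOF
```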

10.1.15 AppArmor

This release of SUSE Linux Enterprise Server ships with AppArmor. The AppArmor intrusion prevention framework builds a firewall around your applications by limiting the access to files, directories, and POSIX capabilities to the minimum required for normal operation. AppArmor protection can be enabled via the AppArmor control panel, located in YaST under Security and Users. For detailed information about using AppArmor, see the documentation in /usr/share/doc/packages/apparmor-docs.

The AppArmor profiles included with SUSE Linux have been developed with our best efforts to reproduce how most users use their software. The profiles provided work unmodified for many users, but some users may find our profiles too restrictive for their environments.

If you discover that some of your applications do not function as you expected, you may need to use the AppArmor Update Profile Wizard in YaST (or the aa-logprof(8) command line utility) to update your AppArmor profiles. Place all your profiles into learning mode with the following command: aa-complain /etc/apparmor.d/*

When a program generates many complaints, the system's performance is degraded. To mitigate this, we recommend periodically running the Update Profile Wizard (or aa-logprof(8)) to update your profiles even if you choose to leave them in learning mode. This reduces the number of learning events logged to disk, which improves the performance of the system.
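
A typical profile-tuning cycle can thus look like the following sketch (the profile file name in the last line is only an example):

```sh
aa-complain /etc/apparmor.d/*              # switch all profiles to learning mode
# ... exercise the application, then fold the logged events into the profiles:
aa-logprof
aa-enforce /etc/apparmor.d/usr.sbin.nscd   # re-enable enforcement per profile
```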

10.1.16 Updating with Alternative Boot Loader (Non-Linux) or Multiple Boot Loader Programs

Note

Before updating, check the configuration of your boot loader to ensure that it is not configured to modify any system areas (MBR, setting the active partition, or similar). This will reduce the number of system areas that you need to restore after the update.

Updating a system where an alternative boot loader (not grub) or an additional boot loader is installed in the MBR (Master Boot Record) might overwrite the MBR and place grub as the primary boot loader on the system.

In this case, we recommend the following: First backup your data. Then either do a fresh installation and restore your data, or run the update nevertheless and restore the affected system areas (in particular, the MBR). It is always recommended to keep data separated from the system software. In other words, /home, /srv, and other volumes containing data should be on separate partitions, volume groups or logical volumes. The YaST partitioning module will propose doing this.

Other update strategies (except booting the installation media) are safe if the boot loader is configured properly. However, these other strategies are not available if you update from SUSE Linux Enterprise Server 10.

10.1.17 Upgrading MySQL to SUSE Linux Enterprise Server 11

During the upgrade to SUSE Linux Enterprise Server 11, MySQL is also upgraded to the latest version. To complete this migration, you may have to upgrade your data as described in the MySQL documentation.

10.1.18 Fine-Tuning Firewall Settings

SuSEfirewall2 is enabled by default, which means you cannot log in from remote systems. This also interferes with network browsing and multicast applications, such as SLP and Samba ("Network Neighborhood"). You can fine-tune the firewall settings using YaST.

10.1.19 Upgrading from SUSE Linux Enterprise Server 10 SP4 with the Xen Hypervisor May Have Incorrect Network Configuration

We have improved the network configuration: If you install SUSE Linux Enterprise Server 11 SP3 and configure Xen, you get a bridged setup through YaST.

However, if you upgrade from SUSE Linux Enterprise Server 10 SP4 to SUSE Linux Enterprise Server 11 SP3, the upgrade does not configure the bridged setup automatically.

To start the bridge proposal for networking, start the "YaST Control Center", choose "Virtualization", then "Install Hypervisor and Tools". Alternatively, call yast2 xen on the command line.

10.1.20 LILO Configuration Via YaST or AutoYaST

The configuration of the LILO boot loader on the x86 and x86_64 architectures via YaST or AutoYaST is deprecated and no longer supported. For more information, see Novell TID 7003226: http://www.novell.com/support/documentLink.do?externalID=7003226.

10.2 Update from SUSE Linux Enterprise Server 11

10.2.1 Changed Routing Behavior

SUSE Linux Enterprise Server 10 and SUSE Linux Enterprise Server 11 set net.ipv4.conf.all.rp_filter = 1 in /etc/sysctl.conf with the intention of enabling reverse path filtering. However, in these products the kernel fails to enable reverse path filtering by default as intended.

Since SLES 11 SP1, this bug is fixed. Most simple single-homed unicast server setups will not notice a change, but the fix may cause issues for setups that relied on reverse path filtering being disabled (e.g., multicast routing or multi-homed servers).

10.2.2 Kernel Devel Packages

Starting with SUSE Linux Enterprise Server 11 Service Pack 1 the configuration files for recompiling the kernel were moved into their own sub-package:

kernel-flavor-devel

This package contains only the configuration for one kernel type (flavor), such as default or desktop.

10.3 Update from SUSE Linux Enterprise Server 11 SP1

The direct update from SUSE Linux Enterprise Server 11 SP1 to SP3 is not supported. For more information, see Section 10.1.3, “Upgrading from SLES 10 (GA and Service Packs) or SLES 11 GA”.

10.4 Update from SUSE Linux Enterprise Server 11 SP2

10.4.1 Update of python-lxml to 2.3.x

python-lxml has been updated to version 2.3.6. It brings several new features and numerous bug fixes, as well as one API change:

Element.findtext() now returns an empty string instead of None for elements without text content; it still returns None when there is no element matching the request. This brings the lxml implementation of the ElementTree API in conformance with the ElementTree API specification (http://www.effbot.org/zone/pythondoc-elementtree-ElementTree.htm#elementtree.ElementTree._ElementInterface.findtext-method).
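
The standard library's xml.etree.ElementTree module implements the same ElementTree API contract, so the new behavior can be illustrated even without lxml installed:

```python
import xml.etree.ElementTree as ET  # same ElementTree API contract as lxml.etree

root = ET.fromstring("<doc><empty/></doc>")

# An element that exists but carries no text content yields an empty string ...
assert root.findtext("empty") == ""

# ... while a path that matches no element still yields None.
assert root.findtext("missing") is None
```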

10.4.2 Postfix: Incompatibility Issues and New Features

To benefit from enhancements and improvements developed in the upstream community, Postfix has been upgraded from version 2.5.13 to the current version 2.9.4.

Incompatibility Issues:

  • The default milter_protocol setting is increased from 2 to 6; this enables all available features up to and including Sendmail 8.14.0.

  • When a mailbox file is not owned by its recipient, the local and virtual delivery agents now log a warning and defer delivery. Specify "strict_mailbox_ownership = no" to ignore such ownership discrepancies.

  • The Postfix SMTP client(!) no longer tries to use the obsolete SSLv2 protocol by default, as this may prevent the use of modern SSL features. Lack of SSLv2 support should never be a problem, since SSLv3 was defined in 1996, and TLSv1 in 1999. You can undo the change by specifying empty main.cf values for smtp_tls_protocols and lmtp_tls_protocols.

  • Postfix SMTP server replies for address verification have changed. unverified_recipient_reject_code and unverified_sender_reject_code now handle "5XX" rejects only. The "4XX" rejects are now controlled with unverified_sender_defer_code and unverified_recipient_defer_code.

  • postfix-script, postfix-files and post-install are moved away from /etc/postfix to $daemon_directory.

  • Postfix now adds (Resent-) From:, Date:, Message-ID: or To: headers only when clients match $local_header_rewrite_clients. Specify "always_add_missing_headers = yes" for backwards compatibility.

  • The verify(8) service now uses a persistent cache by default (address_verify_map = btree:$data_directory/verify_cache). To disable it, specify "address_verify_map =".

  • The meaning of an empty filter next-hop destination has changed (for example, "content_filter = foo:" or "FILTER foo:"). Postfix now uses the recipient domain, instead of using $myhostname as in Postfix 2.6 and earlier. To restore the old behavior specify "default_filter_nexthop = $myhostname", or specify a non-empty next-hop content filter destination.

  • Postfix now requests default delivery status notifications when adding a recipient with the Milter smfi_addrcpt action, instead of "never notify" as with Postfix automatically-added recipients.

  • Postfix now reports a temporary delivery error when the result of virtual alias expansion would exceed the virtual_alias_recursion_limit or virtual_alias_expansion_limit.

  • To avoid repeated delivery to mailing lists with pathological nested alias configurations, the local(8) delivery agent now keeps the owner-alias attribute of a parent alias, when delivering mail to a child alias that does not have its own owner alias.

  • The Postfix SMTP client no longer appends the local domain when looking up a DNS name without ".". Specify "smtp_dns_resolver_options = res_defnames" to get the old behavior, which may produce unexpected results.

  • The format of the "postfix/smtpd[pid]: queueid: client=host[addr]" logfile record has changed. When available, the before-filter client information and the before-filter queue ID are now appended to the end of the record.

  • Postfix by default no longer adds a "To: undisclosed-recipients:;" header when no recipient is specified in the message header. For backwards compatibility, specify: "undisclosed_recipients_header = To: undisclosed-recipients:;"

  • The Postfix SMTP server now always re-computes the SASL mechanism list after successful completion of the STARTTLS command. Earlier versions only re-computed the mechanism list when the values of smtpd_sasl_tls_security_options and smtpd_sasl_security_options differed. This could produce incorrect results, because the Dovecot authentication server may change responses when the SMTP session is encrypted.

  • The smtpd_starttls_timeout default value is now stress-dependent. By default, TLS negotiations must now complete under overload in 10s instead of 300s.

  • Postfix no longer appends the system-supplied default CA certificates to the lists specified with *_tls_CAfile or with *_tls_CApath. This prevents third-party certificates from getting mail relay permission with the permit_tls_all_clientcerts feature. Unfortunately this change may cause compatibility problems when configurations rely on certificate verification for other purposes. Specify "tls_append_default_CA = yes" for backwards compatibility.

  • The VSTREAM error flags are now split into separate read and write error flags. As a result of this change, all programs that use Postfix VSTREAMs MUST be recompiled.

  • For consistency with the SMTP standard, the (client-side) smtp_line_length_limit default value was increased from 990 characters to 999 (i.e. 1000 characters including <CR><LF>). Specify "smtp_line_length_limit = 990" to restore historical Postfix behavior.

  • To simplify integration with third-party applications, the Postfix sendmail command now always transforms all input lines ending in <CR><LF> into UNIX format (lines ending in <LF>). Specify "sendmail_fix_line_endings = strict" to restore historical Postfix behavior.

  • To work around broken remote SMTP servers, the Postfix SMTP client by default no longer appends the "AUTH=<>" option to the MAIL FROM command. Specify "smtp_send_dummy_mail_auth = yes" to restore the old behavior.

  • Instead of terminating immediately with a "fatal" message when a database file can't be opened, a Postfix daemon program now logs an "error" message, and continues execution with reduced functionality. Logfile-based alerting systems may need to be updated to look for "error" messages in addition to "fatal" messages. Specify "daemon_table_open_error_is_fatal = yes" to get the historical behavior (immediate termination with "fatal" message).

  • Postfix now logs the result of successful TLS negotiation with TLS logging levels of 0.

  • The default inet_protocols value is now "all" instead of "ipv4", meaning use both IPv4 and IPv6. To avoid an unexpected loss of performance for sites without global IPv6 connectivity, the commands "make upgrade" and "postfix upgrade-configuration" now append "inet_protocols = ipv4" to main.cf when no explicit inet_protocols setting is already present.
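
Most of the incompatibilities above can be reverted with explicit main.cf settings. The following fragment collects the backwards-compatibility settings quoted in this list; enable only the ones your setup actually needs, then run "postfix reload" to apply:

```
# /etc/postfix/main.cf -- backwards-compatibility settings from the list above
always_add_missing_headers = yes
undisclosed_recipients_header = To: undisclosed-recipients:;
smtp_line_length_limit = 990
sendmail_fix_line_endings = strict
tls_append_default_CA = yes
strict_mailbox_ownership = no
inet_protocols = ipv4
```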

New Features:

  • Support for managing multiple Postfix instances. Multi-instance support allows you to do the following and more:

    - Simplify post-queue content filter configuration by using separate Postfix instances before and after the filter.
    - Implement per-user content filters (or no filter) via transport map lookups instead of content_filter settings.
    - Test new configuration settings (on a different server IP address or TCP port) without disturbing production instances.

  • check_reverse_client_hostname_access, to make access decisions based on the unverified client hostname.

  • With "reject_tempfail_action = defer", the Postfix SMTP server immediately replies with a 4xx status after some temporary error.

  • The Postfix SMTP server automatically hangs up after replying with "521". This makes overload handling more effective. See also RFC 1846 for prior art on this topic.

  • Stress-dependent behavior is enabled by default. Under conditions of overload, smtpd_timeout is reduced from 300s to 10s, smtpd_hard_error_limit is reduced from 20 to 1, and smtpd_junk_command_limit is reduced from 100 to 1.

  • Specify "tcp_windowsize = 65535" (or less) to work around routers with broken TCP window scaling implementations.

  • New "lmtp_assume_final = yes" flag to send correct DSN "success" notifications when LMTP delivery is "final" as opposed to delivery into a content filter.

  • The Postfix SMTP server's SASL authentication was re-structured. With "smtpd_tls_auth_only = yes", SASL support is now activated only after a successful TLS handshake. Earlier Postfix SMTP server versions could complain about unavailable SASL mechanisms during the plaintext phase of the SMTP protocol.

  • Improved before-queue filter performance. With "smtpd_proxy_options = speed_adjust", the Postfix SMTP server receives the entire message before it connects to a before-queue content filter. This means you can run more SMTP server processes with the same number of running content filter processes, and thus, handle more mail. This feature is off by default until it is proven to create no new problems.

  • sender_dependent_default_transport_maps, a per-sender override for default_transport.

  • milter_header_checks: Support for header checks on Milter-generated message headers. This can be used, for example, to control mail flow with Milter-generated headers that carry indicators for badness or goodness. Currently, all header_checks features are implemented except PREPEND.

  • Support to turn off the TLSv1.1 and TLSv1.2 protocols. Introduced with OpenSSL version 1.0.1, these are known to cause interoperability problems with, for example, Hotmail. The radical workaround is to temporarily turn off the problematic protocols globally:

    smtp_tls_protocols = !SSLv2, !TLSv1.1, !TLSv1.2
    smtp_tls_mandatory_protocols = !SSLv2, !TLSv1.1, !TLSv1.2

  • Prototype postscreen(8) server that runs a number of time-consuming checks in parallel for all incoming SMTP connections, before clients are allowed to talk to a real Postfix SMTP server. It detects clients that start talking too soon, or clients that appear on DNS blocklists, or clients that hang up without sending any command.

  • Support for address patterns in DNS blacklist and whitelist lookup results.

  • The Postfix SMTP server now supports DNS-based whitelisting with several safety features: permit_dnswl_client whitelists a client by IP address, and permit_rhswl_client whitelists a client by its hostname. These features use the same syntax as reject_rbl_client and reject_rhsbl_client, respectively. The main difference is that they return PERMIT instead of REJECT.

  • The SMTP server now supports contact information that is appended to "reject" responses. This includes SMTP server responses that aren't logged to the maillog file, such as responses to syntax errors, or unsupported commands.

  • tls_disable_workarounds parameter specifies a list or bit-mask of OpenSSL bug work-arounds to disable.

  • The lower-level code in the TLS engine was simplified by removing an unnecessary layer of data copying. OpenSSL now writes directly to the network.

  • enable_long_queue_ids: introduces support for non-repeating queue IDs (also used as queue file names). These names are encoded in a mix of upper-case, lower-case, and decimal digit characters. Long queue IDs are disabled by default to avoid breaking tools that parse logfiles and expect queue IDs with the smaller [A-F0-9] character set.

  • memcache lookup and update support. This provides a way to share postscreen(8) or verify(8) caches between Postfix instances.

  • Support for TLS public key fingerprint matching in the Postfix SMTP client (in smtp_tls_policy_maps) and server (in check_ccert access maps).

  • Support for external SASL authentication via the XCLIENT command. This is used to accept SASL authentication from an SMTP proxy such as NGINX. This support works even without having to specify "smtpd_sasl_auth_enable = yes".

10.4.3 Binutils Update

Binutils was updated to version 2.23.1 to support newer hardware instructions.

SUSE also added support for IBM zEnterprise EC12 and AMD btver2 processors.

This allows generated code to take full advantage of the new instructions in current Intel x86, AMD x86, and IBM zEnterprise processors. For IBM POWER systems, handling of very large binary objects has been improved.

10.4.4 unixODBC Updated to Version 2.3.1

unixODBC 2.3.1 provides the most recent upstream fixes; this enables seamless population of DB2 data using automated tools and improves interoperability with MS SQL Server.

10.4.5 stunnel Update to Version 4.54

The "stunnel" package update adds new service options for sni and tcp socket handling, improves handling in a FIPS setup and contains some performance improvements

10.4.6 Systems with HP Smart Array Controller Fail to Boot After the Update

Systems with the root file system located on a disk that is connected to an HP Smart Array Controller fail to reboot after the update to SLES 11 SP3. The kernel waits for the root file system to appear, but times out and fails.

Boot the system with the kernel command line option hpsa.hpsa_allow_any=1. After the system has booted successfully, add the option to the corresponding boot loader section.
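As an illustration only (device names, paths, and the kernel version are placeholders, not your actual configuration), the option can be appended to the kernel line of the matching section in /boot/grub/menu.lst:

```
# /boot/grub/menu.lst -- example section; adjust devices and file names to your system
title SUSE Linux Enterprise Server 11 SP3
    root (hd0,0)
    kernel /boot/vmlinuz root=/dev/sda1 hpsa.hpsa_allow_any=1
    initrd /boot/initrd
```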

For more information, see Novell TID 7014067 (http://www.novell.com/support/documentLink.do?externalID=7014067).

10.4.7 IBM Java 1.4.2 End of Life

As announced with SUSE Linux Enterprise Server 11 SP2, IBM Java 1.4.2 has reached its End of Life; support for this specific Java version is therefore removed with SUSE Linux Enterprise Server 11 SP3. We recommend upgrading your environments.

10.4.8 Update from SUSE Linux Enterprise Server 11 SP2

Updating from SUSE Linux Enterprise Server 11 SP2 with AutoYaST is supported.

10.4.9 Migrating SUSE Linux Enterprise Server 11 SP2 with WebYaST Installed via wagon

For migrating SLES 11 SP2 with WebYaST installed to SP3 via wagon, the WebYaST product metadata must be installed before starting the migration. To do so, make sure the packages "sle-11-SP2-WebYaST-release" and "sle-11-SP2-WebYaST-release-cd" are installed. You can ignore it if wagon reports an unknown registration status of WebYaST at the beginning of the migration.

Note

Without the WebYaST product metadata installed, WebYaST will not be migrated.

The product metadata is not needed when upgrading SLES by booting the installation media.

11 Deprecated Functionality

11.1 X.Org Driver Used in UEFI Secure Boot Mode (Matrox)

If the machine is running in UEFI Secure Boot mode, the unaccelerated "mgag200"/"modesetting" (generic X.Org) driver combination is used instead of the "mga" X.Org driver. In this case, the "mga" driver does not load, and a warning message is written to the kernel log.

11.2 Support for the JFS File System

In connection with the change in the JFS support status, the corresponding kernel module has been moved to the extra kernel RPM (kernel-flavor-extra).

11.3 Support for Portmap to End with SLE 11 SP3

In SUSE Linux Enterprise (up to version 11 SP2) we provided "rpcbind", which is compatible with portmap. "rpcbind" now provides full IPv6 support. Thus support for portmap ended with the release of SLE 11 SP3.

11.4 L3 Support for Openswan Is Scheduled to Expire

L3 support for Openswan is scheduled to expire. This decision is driven by the fact that Openswan development has stalled substantially and there are no tangible signs that this will change in the future.

In contrast, the strongSwan project is active and delivers a complete implementation of current standards. Compared to Openswan, all relevant features are available in the strongSwan package. In addition, strongSwan is the only complete open source implementation of the RFC 5996 IKEv2 standard, whereas Openswan implements only a small mandatory subset. For now and the foreseeable future, only strongSwan qualifies as an enterprise-ready solution for encrypted TCP/IP connectivity.

11.5 PHP 5.2 Is Deprecated

Based on significant customer demand, we shipped PHP 5.3 in parallel with PHP 5.2 in SUSE Linux Enterprise 11 SP2.

PHP 5.2 is deprecated, however, and has been removed with SLE 11 SP3.

11.6 Packages Removed with SUSE Linux Enterprise Server 11 SP3

The following packages were removed with the release of SUSE Linux Enterprise Server 11 SP3:

11.6.1 Websphere AS CE Has Been Removed

Websphere AS CE is now unsupported and has been removed from SUSE Linux Enterprise Server 11 with SP3.

11.7 Packages Removed with SUSE Linux Enterprise Server 11 Service Pack 2

The following packages were removed with the release of SUSE Linux Enterprise Server 11 Service Pack 2:

hyper-v-kmp

hyper-v-kmp has been removed.

32-bit Xen Hypervisor as a Virtualization Host

The 32-bit Xen hypervisor as a virtualization host is not supported anymore. 32-bit virtual guests are not affected and fully supported with the provided 64-bit hypervisor.

11.8 Packages Removed with SUSE Linux Enterprise Server 11 Service Pack 1

The following packages were removed with the release of SUSE Linux Enterprise Server 11 Service Pack 1:

brocade-bfa

The brocade-bfa kernel module is now part of the main kernel package.

enic-kmp

The enic kernel module is now part of the main kernel package.

fnic-kmp

The fnic kernel module is now part of the main kernel package.

kvm-kmp

The KVM kernel modules are now part of the main kernel package.

java-1_6_0-ibm-x86

11.9 Packages Removed with SUSE Linux Enterprise Server 11

The following packages were removed with the major release of SUSE Linux Enterprise Server 11:

dante

JFS

The JFS file system is no longer supported and the utilities have been removed from the distribution.

EVMS

Replaced with LVM2.

ippl

powertweak

SUN Java

uw-imapd

mapped-base Functionality

The mapped-base functionality, which is used by 32-bit applications that need a larger dynamic data space (such as database management systems), has been replaced with flexmap.

zmd

11.10 Packages and Features to Be Removed in the Future

The following packages and features are deprecated and will be removed with the next Service Pack or major release of SUSE Linux Enterprise Server:

  • The reiserfs file system is fully supported for the lifetime of SUSE Linux Enterprise Server 11 specifically for migration purposes. We will however remove support for creating new reiserfs file systems starting with SUSE Linux Enterprise Server 12.

  • The sendmail package is deprecated and might be discontinued with SUSE Linux Enterprise Server 12.

  • The lprng package is deprecated and will be discontinued with SUSE Linux Enterprise Server 12.

  • The dhcpv6 package is deprecated and will be discontinued with SUSE Linux Enterprise Server 12.

  • The qt3 package is deprecated and will be discontinued with SUSE Linux Enterprise Server 12.

  • syslog-ng will be replaced with rsyslog.

  • The smpppd package is deprecated and will be discontinued with one of the next Service Packs or SUSE Linux Enterprise Server 12.

  • The raw block devices (major 162) are deprecated and will be discontinued with one of the next Service Packs or SUSE Linux Enterprise Server 12.

12 Infrastructure, Package and Architecture Specific Information

12.1 16 TB Memory Support for PPC64

We now support up to 16 TB of memory on PPC64.

12.2 Systems Management

12.2.1 Samba: Recursiveness for smbcacls

Usability has been improved by allowing a single execution of smbcacls to propagate an ACL recursively (as appropriate) to each node in a directory tree according to its inheritance flags.

Support was added for a new smbcacls option '--propagate-inheritance', to be used with the existing --set, --modify, --add, or --delete arguments. For a single invocation of the smbcacls command with '--propagate-inheritance', the --set, --modify, --add, or --delete operations are applied first to the specified directory; any inheritable ACEs are then automatically propagated recursively down the directory structure.

For more information, see the updated man page for smbcacls (in particular the INHERITANCE section).

12.2.2 Providing the URL of an Add-on Media at the Command Line during Installation

Add-on media like the Software Development Kit or third-party driver media can be added to SUSE Linux Enterprise during installation or later in the running system. Sometimes it is advisable to have an add-on medium available from the very beginning, for example to make drivers for new hardware available.

It is now possible to provide one or more URLs pointing to the location of add-on media at the installer's command line via an "addon=url" parameter. Multiple add-ons must be provided as a comma-separated list ("addon=url1,url2,...").
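For example, a hypothetical boot prompt entry adding an SDK medium and a driver medium might look like this (the URLs are placeholders, not real repositories):

```
addon=http://example.com/sdk/,http://example.com/driver-update/
```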

12.2.3 Snapper Enhancements

Snapper, which was introduced in the previous Service Pack, has received the following enhancements:

  • Snapshots can now also be managed by non-root users.

  • The performance of snapshot comparison has been improved.

  • Snapper provides a D-Bus interface for better integration into other applications.

  • Support for LVM Thin Provisioning has been added.

For more information, see the Administration Guide.

12.2.4 Modified Operation against Novell Customer Center

Effective on 2009-01-13, provisional registrations have been disabled in the Novell Customer Center. Registering an instance of SUSE Linux Enterprise Server or Open Enterprise Server (OES) products now requires a valid, entitled activation code. Evaluation codes for reviews or proofs of concept can be obtained from the product pages and from the download pages on novell.com.

If a device is registered without a code at setup time, a provisional code is assigned to it by Novell Customer Center (NCC), and it will be entered in your NCC list of devices. No update repositories are assigned to the device at this time.

Once you are ready to assign a code to the device, start the YaST Novell Customer Center registration module and replace the un-entitled provisional code that NCC generated with the appropriate one to fully entitle the device and activate the related update repositories.

12.2.5 Operation against Subscription Management Tool

Operation under the Subscription Management Tool (SMT) package and registration proxy is not affected. Registration against SMT will assign codes automatically from your default pool in NCC until all entitlements have been assigned. Registering additional devices once the pool is depleted will result in the new device being assigned a provisional code (with local access to updates). The SMT server will notify the administrator that these new devices need to be entitled.

12.2.6 Minimal Pattern

The minimal pattern provided in YaST's Software Selection dialog targets experienced customers and should be used as a base for your own specific software selections.

Do not expect a minimal pattern to provide a useful basis for your business needs without installing additional software.

This pattern does not include any dump or logging tools. To fully support your configuration, Novell Technical Services (NTS) will request installation of all tools needed for further analysis in case of a support request.

12.2.7 SPident

SPident is a tool to identify the Service Pack level of the current installation. On SUSE Linux Enterprise Server 11 GA, this tool has been replaced by the new SAM tool (package "suse-sam").

12.3 Performance Related Information

12.3.1 Oracle and XFS File System

Oracle operates using direct I/O on preallocated files. There is no page cache writeback, no block allocation, and no file size changes. When using the XFS file system you need to tune the system with kernel parameters to get good performance.

For more information, see http://docs.oracle.com/cd/E11882_01/install.112/e24326/toc.htm#BHCCADGD (http://docs.oracle.com/cd/E11882_01/install.112/e24326/toc.htm#BHCCADGD).

12.3.2 Linux Completely Fair Scheduler Affects Java Performance

Problem (Abstract)

Java applications that use synchronization extensively might perform poorly on Linux systems that include the Completely Fair Scheduler. If you encounter this problem, there are two possible workarounds.

Symptom

You may observe extremely high CPU usage by your Java application and very slow progress through synchronized blocks. The application may appear to hang due to the slow progress.

Cause

The Completely Fair Scheduler (CFS) was adopted into the mainline Linux kernel as of release 2.6.23. The CFS algorithm is different from previous Linux releases. It might change the performance properties of some applications. In particular, CFS implements sched_yield() differently, making it more likely that a thread that yields will be given CPU time regardless.

The new behavior of sched_yield() might adversely affect the performance of synchronization in the IBM JVM.

Environment

This problem may affect IBM JDK 5.0 and 6.0 (all versions) running on Linux kernels that include the Completely Fair Scheduler, including Linux kernel 2.6.27 in SUSE Linux Enterprise Server 11.

Resolving the Problem

If you observe poor performance of your Java application, there are two possible workarounds:

  • Either invoke the JVM with the additional argument "-Xthr:minimizeUserCPU".

  • Or configure the Linux kernel to use the more backward-compatible heuristic for sched_yield() by setting the sched_compat_yield tunable kernel property to 1. For example:

    echo "1" > /proc/sys/kernel/sched_compat_yield

You should not use these workarounds unless you are experiencing poor performance.
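To make the second workaround persistent across reboots, the tunable can also be set in /etc/sysctl.conf (a sketch; as noted above, only apply it if you actually observe the problem):

```
# /etc/sysctl.conf -- use the backward-compatible sched_yield() heuristic
kernel.sched_compat_yield = 1
```

Run "sysctl -p" (or reboot) to apply the setting.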

12.3.3 Tuning Performance of Simple Database Engines

Simple database engines like Berkeley DB use memory mappings (mmap(2)) to manipulate database files. When the mapped memory is modified, those changes need to be written back to disk. In SUSE Linux Enterprise 11, the kernel includes modified mapped memory in its calculations for deciding when to start background writeback and when to throttle processes which modify additional memory. (In previous versions, mapped dirty pages were not accounted for and the amount of modified memory could exceed the overall limit defined.) This can lead to a decrease in performance; the fix is to increase the overall limit.

The maximum amount of dirty memory is 40% in SUSE Linux Enterprise 11 by default. This value is chosen for average workloads, so that enough memory remains available for other uses. The following settings may be relevant when tuning for database workloads:

  • vm.dirty_ratio

    Maximum percentage of dirty system memory (default 40).

  • vm.dirty_background_ratio

    Percentage of dirty system memory at which background writeback will start (default 10).

  • vm.dirty_expire_centisecs

    Duration after which dirty system memory is considered old enough to be eligible for background writeback (in centiseconds).

These limits can be observed or modified with the sysctl utility (see sysctl(1) and sysctl.conf(5)).
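As a sketch, a dedicated database host might raise these limits in /etc/sysctl.conf. The values below are illustrative examples, not tuning recommendations; suitable values depend on the workload and the amount of RAM:

```
# /etc/sysctl.conf -- example values for a dedicated database workload
vm.dirty_ratio = 60
vm.dirty_background_ratio = 20
```

Apply the settings with "sysctl -p" and verify them with "sysctl vm.dirty_ratio".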

12.4 Storage

12.4.1 SUSE Enterprise Storage (Powered by Ceph) Client

SUSE Linux Enterprise Server 11 SP3 and SP4 now provide the functionality to act as a client for SUSE Enterprise Storage. qemu can now use storage provided by a SUSE Enterprise Storage Ceph cluster via the RADOS Block Device (rbd) backend. Applications can be enhanced to directly incorporate object or block storage backed by the SUSE Enterprise Storage cluster by linking with the librados and librbd client libraries.

Also included is the rbd tool to manage RADOS block devices mapped via the rbd kernel module, for use as a standard generic block device.

12.4.2 Improved Support for Intel RSTe

This Service Pack adds improved support for Intel Rapid Storage Technology Enterprise (RSTe). It now supports RAID levels 0, 1, 4, 5, 6, and 10.

12.4.3 Defining the Disk Order for MD RAID with YaST

This makes it possible to specify the disk order when a RAID device is created. Thus you can influence which data of the RAID is written to which disk.

12.4.4 Multipath Configuration Change

With the update to version 0.4.9 in SLES 11 SP2, rr_min_io was replaced by rr_min_io_rq in multipath.conf. The old option is now ignored. Check this setting if you encounter performance issues.

For more information, see the Storage Administration Guide shipped with SLES 11 SP3.

12.4.5 Capturing kdump on a Target using Devicemapper (Incl. Multipath)

If the root device does not use devicemapper (multipath), capturing a kdump on a target that does use devicemapper (multipath) requires a workaround: add the following additional parameters to KDUMP_COMMANDLINE_APPEND in /etc/sysconfig/kdump:

KDUMP_COMMANDLINE_APPEND="root_no_dm=1 root_no_mpath=1"

Then start the kdump service.

If you use multipath for both root and kdump, these options must not be added.

An example use case with System z could be a kdump target on multipath zfcp-attached SCSI devices and a root file system on DASD.

12.4.6 Multipathing: SCSI Hardware Handler

Some storage devices, e.g., IBM DS4K, require special handling for path failover and failback. In SUSE Linux Enterprise Server 10 SP2, the dm layer served as the hardware handler.

One drawback of this implementation was that the underlying SCSI layer did not know about the existence of the hardware handler. Hence, during device probing, SCSI would send I/O on the passive path, which would fail after a timeout and also print extraneous error messages in the console.

In SUSE Linux Enterprise Server 11, this problem is resolved by moving the hardware handler to the SCSI layer, hence the term SCSI Hardware Handler. These handlers are modules created under the SCSI directory in the Linux Kernel.

In SUSE Linux Enterprise Server 11, there are four SCSI Hardware Handlers: scsi_dh_alua, scsi_dh_rdac, scsi_dh_hp_sw, scsi_dh_emc.

These modules need to be included in the initrd image so that SCSI knows about the special handling during probe time itself.

To do so, carry out the following steps:

  • Add the device handler modules to the INITRD_MODULES variable in /etc/sysconfig/kernel.

  • Create a new initrd with:

    mkinitrd -k /boot/vmlinux-<flavour> \
    -i /boot/initrd-<flavour>-scsi_dh \
    -M /boot/System.map-<flavour>
  • Update the grub.conf/lilo.conf/yaboot.conf file with the newly built initrd.

  • Reboot.
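For the first step above, the resulting line in /etc/sysconfig/kernel could look like this (the existing modules "ata_piix ext3" are placeholders for whatever your installation already lists; append the handlers your storage actually needs):

```
# /etc/sysconfig/kernel -- append the SCSI device handler modules
INITRD_MODULES="ata_piix ext3 scsi_dh_alua scsi_dh_rdac scsi_dh_hp_sw scsi_dh_emc"
```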

12.4.7 Local Mounts of iSCSI Shares

An iSCSI shared device should never be mounted directly on the local machine. In an OCFS2 environment, doing so causes all hardware to hard hang.

12.5 Hyper-V

12.5.1 Hyper-V: Driver to Support Host Initiated Backup

This driver supports a host initiated backup of the guest. On Windows guests, the host can generate application consistent backups using the Windows VSS framework. On Linux, we ensure that the backup will be file system consistent. This driver allows the host to initiate a "Freeze" operation on all the mounted file systems in the guest. Once the mounted file systems in the guest are frozen, the host snapshots the guest's file systems. Once this is done, the guest's file systems are "thawed".

12.5.2 Hyper-V: Framebuffer Driver

Previously, the guest window size was limited to standard VGA resolutions. To select a resolution, the guest had to be booted with the "vga=number" kernel command line option.

There is now a framebuffer driver for Hyper-V guests. It allows for screen resolutions up to Full HD 1920x1080 on a Windows Server 2012 host, and 1600x1200 on Windows Server 2008 R2 or earlier.

When upgrading from earlier releases, the "vga=number" option has to be replaced with the "video=hyperv_fb:resolution" option to specify the desired guest window size. Example: To force the guest window size to 800x600, add "video=hyperv_fb:800x600" to the kernel command line options.

12.5.3 Hyper-V: Vmbus Protocol Update

This feature brings our driver to the Win8 (Windows Server 2012) level. The code dynamically negotiates the most efficient protocol that the host can support; the same code can be deployed on all supported hosts (WS2008, WS2008R2, and WS2012). The following are some of the key features implemented in this patch set:

  • More efficient signaling protocol between the host and the guest

  • Distribution of interrupt load across available CPUs in the guest

  • Per-channel interrupt binding (as part of the interrupt load distribution above)

  • More efficient demultiplexing of incoming interrupts

  • Per-channel signaling mechanism for host to guest communication

12.5.4 Hyper-V: Memory Ballooning Support

Windows hosts dynamically manage the guest memory allocation via a combination of memory hot add and ballooning. Memory hot add is used to grow the guest memory up to the maximum memory that can be allocated to the guest. Ballooning is used both to shrink the guest memory and to expand it up to the maximum.

12.5.5 Hyper-V: KVP IP Injection

Hyper-V now supports the KVP (Key Value Pair) functionality to implement the mechanism to GET/SET IP addresses in the guest. This functionality is used in Windows Server 2012 to implement VM replication functionality.

12.5.6 Xen Support for Booting the Hypervisor to UEFI X64

The hypervisor is now able to boot to UEFI.

12.5.7 Hyper-V: Time Synchronization

The system time of a guest can drift by several seconds per day.

To maintain an accurate system time it is recommended to run ntpd in a guest. The ntpd daemon can be configured with the YaST "NTP Client" module. In addition to such a configuration, the following two variables must be set manually to "yes" in /etc/sysconfig/ntp:

NTPD_FORCE_SYNC_ON_STARTUP="yes"
NTPD_FORCE_SYNC_HWCLOCK_ON_STARTUP="yes"

12.5.8 Change of Kernel Device Names in Hyper-V Guests

Starting with SP2, SLES 11 has a newer block device driver, which presents all configured virtual disks as SCSI devices. Disks that used to appear as /dev/hda in SLES 11 SP1 now appear as /dev/sda.

12.5.9 Using the "Virtual Machine Snapshot" Feature

The Windows Server Manager GUI makes it possible to take snapshots of a Hyper-V guest. After a snapshot is taken, the guest will fail to reboot. By default, the guest's root file system is referenced by the serial number of the virtual disk. This serial number changes with each snapshot. Since the guest expects the initial serial number, booting fails.

The solution is to either delete all snapshots using the Windows GUI, or configure the guest to mount partitions by file system UUID. This change can be made with the YaST partitioner and boot loader configurator.
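A hypothetical /etc/fstab entry that mounts the root partition by file system UUID instead of by device could look like this (the UUID and file system type are placeholders; use the actual UUID of your partition, e.g. as shown by "blkid"):

```
# /etc/fstab -- reference the root partition by UUID, not by device path
UUID=0a1b2c3d-4e5f-6789-abcd-ef0123456789  /  ext3  defaults  1 1
```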

12.6 Architecture Independent Information

12.6.1 Change of libzypp History

The libzypp history in /var/log/zypp/history now contains a transaction ID added to each record. Any scripts that parse the history file and rely on the order of data fields need to be checked to ensure they still parse the file properly.

12.6.2 Changes in Packaging and Delivery

12.6.2.1 ntp 4.2.8

ntp was updated to version 4.2.8.

  • The ntp server ntpd no longer synchronizes with its peers if the peers are specified by their host names in /etc/ntp.conf.

  • The output of ntpq --peers lists the IP addresses of the remote servers instead of their host names.

Name resolution for the affected hosts otherwise works correctly.

Parameter changes

The meaning of some parameters of the sntp command line tool has changed, or parameters have been dropped; for example, sntp -s is now sntp -S. Review any sntp usage in your own scripts for required changes.

After having been deprecated for several years, ntpdc is now disabled by default for security reasons. It can be re-enabled by adding the line enable mode7 to /etc/ntp.conf, but preferably ntpq should be used instead.

12.6.2.2 Updating tcsh

tcsh 6.15 has a locking issue when used concurrently.

On SLES 11 SP3, SUSE updated tcsh to version 6.18 to solve this issue.

12.6.2.3 Place New Windows Always on Top

On the default GNOME desktop with the default window manager Metacity, new windows can now always be placed on top.

This can be configured with the new /apps/metacity/general/new_windows_always_on_top preference. When set, new windows are always placed on top, even if they are denied focus.

This is useful on large screens and multihead setups where the tasklist can be hard to notice and difficult to access with the mouse, so the normal behavior of flashing in the tasklist is less effective.

12.6.2.4 Updating to Firefox 24 ESR

Firefox was updated to version 24 ESR.

This update also brings updates of Mozilla NSPR and Mozilla NSS libraries. Mozilla NSS libraries contain cryptographic enhancements, including TLS 1.2 support.

It comes with PDF.js, which now replaces the Acroread PDF plugin.

12.6.2.5 Support for 46-bit Memory Addressing in makedumpfile and crash

Starting with SP3, the makedumpfile and crash utilities can analyze memory dumps taken on systems with 46-bit addresses.

12.6.2.6 Video and Stream Processing

To support video and stream processing, the v4l tools and gstreamer plugins were added.

12.6.2.7 New or Removed Packages

New Packages (Compared with SLES 11 SP2 GA):

  • apache2-mod_auth_kerb

  • apache2-mod_security2

  • cachefilesd

  • cgdcbxd

  • createrepo

  • dapl-debug

  • grub2-x86_64-efi

  • gstreamer-0_10-plugins-v4l

  • libguestfs

  • ipset

  • IBM Java 1.7

  • kernelshark

  • libapr-util1-dbd-sqlite3

  • libboost_thread1_36_0

  • libbtrfs0

  • libconfig9

  • libecpg6

  • libgcc_s1

  • libgcc_s1-32bit

  • libgcc_s1-x86

  • libgnutls-extra26

  • libgomp1

  • libgomp1-32bit

  • libipset2

  • libmnl0

  • libnetfilter_queue1

  • libnfnetlink0

  • libopenscap1

  • libossp-uuid16

  • libpq5

  • libpq5-32bit

  • libsanlock1

  • libseccomp1

  • libsnapper2

  • libsoftokn3

  • libsoftokn3-32bit

  • libsoftokn3-x86

  • libsss_idmap0

  • libstdc++6

  • libstdc++6-32bit

  • libstdc++6-x86

  • libv4l

  • libv4l1-0

  • libv4l1-0-32bit

  • libv4l2-0

  • libv4l2-0-32bit

  • libv4lconvert0

  • libv4lconvert0-32bit

  • libvirt-lock-sanlock

  • mokutil (x86_64 only)

  • nut-drivers-net

  • OpenIPMI-python

  • openscap

  • openscap-content

  • openscap-utils

  • perl-Module-Build

  • perl-String-ShellQuote

  • perl-Sys-Virt

  • perl-Test-Simple

  • pesign

  • pesign-obs-integration

  • postgresql91

  • postgresql91-contrib

  • postgresql91-docs

  • postgresql91-server

  • postgresql-init

  • python-configobj

  • python-configshell

  • python-configshell-doc

  • python-deltarpm

  • python-ipaddr

  • python-netifaces

  • python-ordereddict

  • python-pyasn1

  • python-rtslib

  • python-sanlock

  • python-simpleparse

  • python-urwid

  • sanlock

  • sces-client

  • shim (x86_64 only)

  • targetcli

  • tipcutils

  • trace-cmd

  • unixODBC_23

  • yast2-iscsi-lio-server

  • yast2-lxc

  • yum-common

  • yum-metadata-parser

Removed Packages (Compared with SLES 11 SP2 GA):

  • php-5.2

  • IBM Java 1.4.2

  • openswan

  • portmap

  • tvflash

  • websphere-as_ce

12.6.2.8 Python Updated to Version 2.6.8 with "collections.OrderedDict" Functionality

The "OrderedDict" functionality ensures that Python dictionaries emitted for conversion into strings maintain their original order. This functionality is important for data analytics applications.

12.6.2.9 Postfix: Incompatibility Issues and New Features

To benefit from enhancements and improvements developed in the upstream community, Postfix was upgraded from version 2.5.13 to the current version 2.9.4.

Incompatibility Issues:

  • The default milter_protocol setting is increased from 2 to 6; this enables all available features up to and including Sendmail 8.14.0.

  • When a mailbox file is not owned by its recipient, the local and virtual delivery agents now log a warning and defer delivery. Specify "strict_mailbox_ownership = no" to ignore such ownership discrepancies.

  • The Postfix SMTP client(!) no longer tries to use the obsolete SSLv2 protocol by default, as this may prevent the use of modern SSL features. Lack of SSLv2 support should never be a problem, since SSLv3 was defined in 1996, and TLSv1 in 1999. You can undo the change by specifying empty main.cf values for smtp_tls_protocols and lmtp_tls_protocols.

  • Postfix SMTP server replies for address verification have changed. unverified_recipient_reject_code and unverified_sender_reject_code now handle "5XX" rejects only. The "4XX" rejects are now controlled with unverified_sender_defer_code and unverified_recipient_defer_code.

  • postfix-script, postfix-files and post-install are moved away from /etc/postfix to $daemon_directory.

  • Postfix now adds (Resent-) From:, Date:, Message-ID: or To: headers only when clients match $local_header_rewrite_clients. Specify "always_add_missing_headers = yes" for backwards compatibility.

  • The verify(8) service now uses a persistent cache by default (address_verify_map = btree:$data_directory/verify_cache). To disable the cache, specify "address_verify_map =" (an empty value).

  • The meaning of an empty filter next-hop destination has changed (for example, "content_filter = foo:" or "FILTER foo:"). Postfix now uses the recipient domain, instead of using $myhostname as in Postfix 2.6 and earlier. To restore the old behavior specify "default_filter_nexthop = $myhostname", or specify a non-empty next-hop content filter destination.

  • Postfix now requests default delivery status notifications when adding a recipient with the Milter smfi_addrcpt action, instead of "never notify" as with Postfix automatically-added recipients.

  • Postfix now reports a temporary delivery error when the result of virtual alias expansion would exceed the virtual_alias_recursion_limit or virtual_alias_expansion_limit.

  • To avoid repeated delivery to mailing lists with pathological nested alias configurations, the local(8) delivery agent now keeps the owner-alias attribute of a parent alias, when delivering mail to a child alias that does not have its own owner alias.

  • The Postfix SMTP client no longer appends the local domain when looking up a DNS name without ".". Specify "smtp_dns_resolver_options = res_defnames" to get the old behavior, which may produce unexpected results.

  • The format of the "postfix/smtpd[pid]: queueid: client=host[addr]" logfile record has changed. When available, the before-filter client information and the before-filter queue ID are now appended to the end of the record.

  • Postfix by default no longer adds a "To: undisclosed-recipients:;" header when no recipient is specified in the message header. For backwards compatibility, specify: "undisclosed_recipients_header = To: undisclosed-recipients:;"

  • The Postfix SMTP server now always re-computes the SASL mechanism list after successful completion of the STARTTLS command. Earlier versions only re-computed the mechanism list when the values of smtp_sasl_tls_security_options and smtp_sasl_security_options differed. This could produce incorrect results, because the Dovecot authentication server may change responses when the SMTP session is encrypted.

  • The smtpd_starttls_timeout default value is now stress-dependent. By default, TLS negotiations must now complete under overload in 10s instead of 300s.

  • Postfix no longer appends the system-supplied default CA certificates to the lists specified with *_tls_CAfile or with *_tls_CApath. This prevents third-party certificates from getting mail relay permission with the permit_tls_all_clientcerts feature. Unfortunately this change may cause compatibility problems when configurations rely on certificate verification for other purposes. Specify "tls_append_default_CA = yes" for backwards compatibility.

  • The VSTREAM error flags are now split into separate read and write error flags. As a result of this change, all programs that use Postfix VSTREAMs MUST be recompiled.

  • For consistency with the SMTP standard, the (client-side) smtp_line_length_limit default value was increased from 990 characters to 999 (i.e. 1000 characters including <CR><LF>). Specify "smtp_line_length_limit = 990" to restore historical Postfix behavior.

  • To simplify integration with third-party applications, the Postfix sendmail command now always transforms all input lines ending in <CR><LF> into UNIX format (lines ending in <LF>). Specify "sendmail_fix_line_endings = strict" to restore historical Postfix behavior.

  • To work around broken remote SMTP servers, the Postfix SMTP client by default no longer appends the "AUTH=<>" option to the MAIL FROM command. Specify "smtp_send_dummy_mail_auth = yes" to restore the old behavior.

  • Instead of terminating immediately with a "fatal" message when a database file can't be opened, a Postfix daemon program now logs an "error" message, and continues execution with reduced functionality. Logfile-based alerting systems may need to be updated to look for "error" messages in addition to "fatal" messages. Specify "daemon_table_open_error_is_fatal = yes" to get the historical behavior (immediate termination with "fatal" message).

  • Postfix now logs the result of successful TLS negotiation with TLS logging levels of 0.

  • The default inet_protocols value is now "all" instead of "ipv4", meaning use both IPv4 and IPv6. To avoid an unexpected loss of performance for sites without global IPv6 connectivity, the commands "make upgrade" and "postfix upgrade-configuration" now append "inet_protocols = ipv4" to main.cf when no explicit inet_protocols setting is already present.
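
Several of the backwards-compatibility settings listed above can be collected in main.cf. The following sketch restores a few of the historical behaviors; apply only the settings you actually need:

```
# /etc/postfix/main.cf (sketch): restore selected pre-2.9 defaults
always_add_missing_headers = yes
undisclosed_recipients_header = To: undisclosed-recipients:;
smtp_line_length_limit = 990
sendmail_fix_line_endings = strict
inet_protocols = ipv4
```

After editing main.cf, reload Postfix (postfix reload).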

New Features:

  • Support for managing multiple Postfix instances. Multi-instance support allows you to do the following and more:

    • Simplify post-queue content filter configuration by using separate Postfix instances before and after the filter.

    • Implement per-user content filters (or no filter) via transport map lookups instead of content_filter settings.

    • Test new configuration settings (on a different server IP address or TCP port) without disturbing production instances.

  • check_reverse_client_hostname_access, to make access decisions based on the unverified client hostname.

  • With "reject_tempfail_action = defer", the Postfix SMTP server immediately replies with a 4xx status after some temporary error.

  • The Postfix SMTP server automatically hangs up after replying with "521". This makes overload handling more effective. See also RFC 1846 for prior art on this topic.

  • Stress-dependent behavior is enabled by default. Under conditions of overload, smtpd_timeout is reduced from 300s to 10s, smtpd_hard_error_limit is reduced from 20 to 1, and smtpd_junk_command_limit is reduced from 100 to 1.

  • Specify "tcp_windowsize = 65535" (or less) to work around routers with broken TCP window scaling implementations.

  • New "lmtp_assume_final = yes" flag to send correct DSN "success" notifications when LMTP delivery is "final" as opposed to delivery into a content filter.

  • The Postfix SMTP server's SASL authentication was re-structured. With "smtpd_tls_auth_only = yes", SASL support is now activated only after a successful TLS handshake. Earlier Postfix SMTP server versions could complain about unavailable SASL mechanisms during the plaintext phase of the SMTP protocol.

  • Improved before-queue filter performance. With "smtpd_proxy_options = speed_adjust", the Postfix SMTP server receives the entire message before it connects to a before-queue content filter. This means you can run more SMTP server processes with the same number of running content filter processes, and thus, handle more mail. This feature is off by default until it is proven to create no new problems.

  • sender_dependent_default_transport_maps, a per-sender override for default_transport.

  • milter_header_checks: Support for header checks on Milter-generated message headers. This can be used, for example, to control mail flow with Milter-generated headers that carry indicators for badness or goodness. Currently, all header_checks features are implemented except PREPEND.

  • Support to turn off the TLSv1.1 and TLSv1.2 protocols. Introduced with OpenSSL version 1.0.1, these are known to cause interoperability problems with, for example, Hotmail. The radical workaround is to temporarily turn off the problematic protocols globally:

    smtp_tls_protocols = !SSLv2, !TLSv1.1, !TLSv1.2
    smtp_tls_mandatory_protocols = !SSLv2, !TLSv1.1, !TLSv1.2

  • Prototype postscreen(8) server that runs a number of time-consuming checks in parallel for all incoming SMTP connections, before clients are allowed to talk to a real Postfix SMTP server. It detects clients that start talking too soon, or clients that appear on DNS blocklists, or clients that hang up without sending any command.

  • Support for address patterns in DNS blacklist and whitelist lookup results.

  • The Postfix SMTP server now supports DNS-based whitelisting with several safety features: permit_dnswl_client whitelists a client by IP address, and permit_rhswl_client whitelists a client by its hostname. These features use the same syntax as reject_rbl_client and reject_rhsbl_client, respectively. The main difference is that they return PERMIT instead of REJECT.

  • The SMTP server now supports contact information that is appended to "reject" responses. This includes SMTP server responses that aren't logged to the maillog file, such as responses to syntax errors, or unsupported commands.

  • tls_disable_workarounds parameter specifies a list or bit-mask of OpenSSL bug work-arounds to disable.

  • The lower-level code in the TLS engine was simplified by removing an unnecessary layer of data copying. OpenSSL now writes directly to the network.

  • enable_long_queue_ids introduces support for non-repeating queue IDs (also used as queue file names). These names are encoded in a mix of upper-case, lower-case, and decimal digit characters. Long queue IDs are disabled by default to avoid breaking tools that parse logfiles and expect queue IDs with the smaller [A-F0-9] character set.

  • memcache lookup and update support. This provides a way to share postscreen(8) or verify(8) caches between Postfix instances.

  • Support for TLS public key fingerprint matching in the Postfix SMTP client (in smtp_tls_policy_maps) and server (in check_ccert access maps).

  • Support for external SASL authentication via the XCLIENT command. This is used to accept SASL authentication from an SMTP proxy such as NGINX. This support works even without having to specify "smtpd_sasl_auth_enable = yes".
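
As a minimal sketch of enabling the new postscreen(8) feature (the DNS blocklist site below is an example; master.cf additionally needs the smtp service pointed at postscreen, see the postscreen(8) documentation):

```
# /etc/postfix/main.cf (sketch)
postscreen_greet_action = enforce
postscreen_dnsbl_sites = zen.spamhaus.org
postscreen_dnsbl_action = enforce
```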

12.6.2.10 Postfix Banner Less Verbose

The SMTP MTA banner sent to the client upon connection was too verbose and could help attackers to more easily exploit security vulnerabilities.

The SMTP MTA banner sent to the client upon connection is less verbose now. It does not print the services name and version number anymore.

12.6.2.11 IBM Java 1.4.2 End of Life

As announced with SUSE Linux Enterprise Server 11 SP2, IBM Java 1.4.2 reached End of Life, and thus we remove support for this specific Java version with SUSE Linux Enterprise Server 11 SP3. We recommend upgrading your environments.

12.6.2.12 Ftrace Linux Kernel Internal Tracer Enablement

trace-cmd is now provided to make the ftrace kernel facility accessible to SLE users. See the trace-cmd(1) manual page and /usr/src/linux/Documentation/trace/ftrace.txt for more details.

12.6.2.13 SUSE Linux Enterprise High Availability Extension 11

With the SUSE Linux Enterprise High Availability Extension 11, SUSE offers the most modern open source High Availability Stack for Mission Critical environments.

12.6.2.14 Kernel Has Memory Cgroup Support Enabled By Default

While this functionality is welcomed in most environments, it requires about 1% of memory. The allocation is done at boot time, using 40 bytes of bookkeeping per 4 KiB page, which amounts to roughly 1% of memory.

In virtualized environments, specifically but not exclusively on s390x systems, this may lead to a higher basic memory consumption: e.g., a 20GiB host with 200 x 1GiB guests consumes 10% of the real memory.
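
The quoted numbers can be reproduced with simple shell arithmetic; this is only an illustration of the 40-bytes-per-page accounting cost:

```shell
# Accounting cost of the memory cgroup: 40 bytes per 4 KiB page.
mem_gib=20                                        # host memory in GiB
pages=$(( mem_gib * 1024 * 1024 * 1024 / 4096 ))  # number of 4 KiB pages
overhead_mib=$(( pages * 40 / 1024 / 1024 ))      # bookkeeping memory in MiB
echo "${mem_gib} GiB RAM -> ${overhead_mib} MiB overhead"
```

200 MiB is about 1% of 20 GiB; with 200 x 1 GiB guests each paying the same 1% inside its own memory, roughly 2 GiB (10%) of the host's real memory is consumed, matching the example above.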

This memory is not swappable by Linux itself, but the guest cgroup memory is pageable by a z/VM host on an s390x system and might be swappable on other hypervisors as well.

Cgroup memory support is activated by default, but it can be deactivated by adding the kernel parameter cgroup_disable=memory.

A reboot is required to deactivate or activate this setting.

12.6.2.15 Kernel Development Files Moved to Individual kernel-$flavor-devel Packages

Up to SLE 11 GA, the kernel development files (.config, Module.symvers, etc.) for all flavors were packaged in a single kernel-syms package. Starting with SLE 11 SP1, these files are packaged in individual kernel-$flavor-devel packages, allowing to build KMPs for only the required kernel flavors. For compatibility with existing spec files, the kernel-syms package still exists and depends on the individual kernel-$flavor-devel packages.

12.6.2.16 Live Migration of KVM Guest with Device Hot-Plugging

Hot-plugging a device (network, disk) into a KVM guest has worked on a SLES 11 host since SP1. However, migrating the guest together with the hot-plugged device (even if the device is available on the destination host) is not supported and is expected to fail.

12.6.3 Security

12.6.3.1 openldap2-client 2.4: New Options

These new options are especially noteworthy:

  1. Specify the handshake protocol and the strength of minimally acceptable SSL/TLS ciphers for the operation of the OpenLDAP server.

  2. Specify the handshake protocol and the strength of proposed SSL/TLS ciphers for the operation of the OpenLDAP client.

General information:

The parameter (referred to generically here as "TlsParameterMin") addresses both use cases. Its value controls both the handshake protocol and the cipher strength. Server and client interpret the value identically; however, the parameter name differs between the server's and the client's configuration files (TLSProtocolMin versus TLS_PROTOCOL_MIN).

The value format is "X.Y" where X and Y are single digits:

  • If X is 2, handshake is SSLv2, the usable ciphers are SSLv2 and up.

  • If X is 3, handshake is TLSv1.0 (SLES 11) or TLSv1.2 (SLES 12), the usable ciphers are TLSv1.(Y-1) and up.

Examples:

  • 2.0 - Handshake is SSLv2, usable ciphers are SSLv2, SSLv3, and TLSv1.x

  • 2.1 - Same as above

  • 3.1 - Handshake is TLSv1.0 (SLES 11), usable ciphers are SSLv3 and up.

  • 3.2 - Handshake is TLSv1.0 (SLES 11), usable ciphers are TLSv1.1 and up.

Important: OpenSSL identifies TLSv1.0 ciphers as "SSLv3". If the parameter value prohibits SSLv3 operation, TLSv1.0 ciphers will be rejected too, and vice versa.

Use case 1:

Supported by SLES 12 only; SLES 11 is too old to support this use case. Add the parameter TLSProtocolMin to slapd.conf and restart the server.

Example - reject SSLv2 handshake, accept TLSv1.0 handshake and TLSv1.x ciphers:

TLSProtocolMin 3.1

Use case 2:

Supported by both SLE 12 and SLE 11 server and desktop products. Add the parameter TLS_PROTOCOL_MIN to either /etc/openldap/ldap.conf or ~/.ldaprc.

Example - do not use SSLv2 handshake, use TLSv1.0 handshake, and propose SSLv3 and TLSv1.x ciphers:

TLS_PROTOCOL_MIN 3.1

Debug tips for Client operation:

Running LDAP client programs with debug level 5 (-d 5) traces TLS operations. Be aware that OpenSSL will misleadingly print this message:

SSL_connect:SSLv2/v3 write client hello A

which seems to suggest the use of SSLv2, but in fact OpenSSL has not yet decided on the handshake protocol at that point.

References:

  • Original feature commit by OpenLDAP developers: http://www.openldap.org/its/index.cgi/Software%20Enhancements?id=5655

  • OpenLDAP client configuration manual: http://man7.org/linux/man-pages/man5/ldap.conf.5.html

  • OpenLDAP server configuration manual (note the lack of TlsProtocolMin usage instruction): http://www.openldap.org/doc/admin24/tls.html

12.6.3.2 TLS 1.2 for OpenVPN

openvpn as shipped in SUSE Linux Enterprise 11 offers neither GCM ciphers nor TLS 1.2 support. This is because the old OpenSSL 0.9.8j simply does not provide them.

There is now an additional openvpn-openssl1 package that is linked against openssl1 in the SLE 11 Security Module. This openvpn-openssl1 package is meant as a drop-in replacement for the regular openvpn package and uses the same configuration files. This way TLS 1.2 is available for OpenVPN.

12.6.3.3 OpenSSL Version 1 Enabled OpenSSH

The SUSE Linux Enterprise 11 version of OpenSSH does not support AES-GCM ciphers.

We now provide an OpenSSH version built against OpenSSL 1, which supports AES-GCM, a modern, commonly used, and often required cipher.

The package is called "openssh-openssl1" and is contained in the SLE 11 Security module, which needs to be enabled separately.
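
Whether an OpenSSH client offers AES-GCM can be checked as follows (note that the -Q option is itself only available in newer OpenSSH versions, not in the stock SLE 11 client):

```shell
# List the ciphers the ssh client supports and filter for GCM modes.
ssh -Q cipher | grep gcm
```

On a client with AES-GCM support this prints aes128-gcm@openssh.com and aes256-gcm@openssh.com.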

12.6.3.4 Removable Media

To allow a specific user (joe) to mount removable media, run the following command as root:

polkit-auth --user joe \
--grant org.freedesktop.hal.storage.mount-removable
   

To allow all locally logged in users on the active console to mount removable media, run the following commands as root:

echo 'org.freedesktop.hal.storage.mount-removable no:no:yes' \
  >> /etc/polkit-default-privs.local
/sbin/set_polkit_default_privs

12.6.3.5 Verbose Audit Records for System User Management Tools

Install the package "pwdutils-plugin-audit". To enable this plugin, add "audit" to /etc/pwdutils/logging. See the Security Guide for more information.

12.6.4 Networking

12.6.4.1 openssl1 Enablement

Customers require TLS 1.2 support in the openssl1 library, partially for their own programs, but also for selected SUSE ones.

We provide openssl1 enablement packages in a separate repository.

12.6.4.2 Providing TLS 1.2 Support for Apache2 Via mod_nss

The Apache Web server offers HTTPS protocol support via mod_ssl, which in turn uses the openssl shared libraries. SUSE Linux Enterprise Server 11 SP2 and SP3 come with openssl version 0.9.8j. This openssl version supports TLS version up to and including TLSv1.0, support for newer TLS versions like 1.1 or 1.2 is missing.

Recent recommendations encourage the use of TLSv1.2, specifically to support Perfect Forward Secrecy. To overcome this limitation, the SUSE Linux Enterprise Server 11 SP2, SP3, and SP4 are supplied with upgrades to recent versions of the mozilla-nss package and with the package apache2-mod_nss, which makes use of mozilla-nss for TLSv1.2 support for the Apache Web server.

An additional mod_nss module is supplied for apache2, which can coexist with all existing libraries and apache2 modules. This module uses the mozilla netscape security services library, which supports TLS 1.1 and TLS 1.2 protocols. It is not a drop-in replacement; configuration and certificate storages are different. It can coexist with mod_ssl if necessary.

The package includes a sample configuration and a README-SUSE.txt for setup guidance.
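
As an illustration of the different configuration scheme, a sketch using standard mod_nss directives (the certificate nickname and database path are examples; consult the shipped sample configuration for the authoritative layout):

```
# /etc/apache2/conf.d/nss.conf (sketch)
NSSEngine on
NSSProtocol TLSv1.1,TLSv1.2
NSSNickname Server-Cert
NSSCertificateDatabase /etc/apache2/mod_nss.d
```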

12.6.4.3 Kerberos User to System User mapping for NFSv4

Up to SP2, SLES did not support mapping Kerberos users to system users for NFSv4 with Kerberos authentication (nsswitch method). This functionality is similar to the standard NFSv4 user mapping functionality, but it is meant specifically for Kerberos users.

nfsidmap was upgraded to provide this functionality.

12.6.4.4 Bind Update to Version 9.9

The DNS server Bind has been updated to the long-term supported version 9.9. In version 9.9, the commands 'dnssec-makekeyset' and 'dnssec-signkey' are no longer available.

DNSSEC tools provided by Bind 9.2.4 are not compatible with Bind 9.9 and later and have been replaced where applicable. Specifically, DNSSEC-bis functionality removes the need for dnssec-signkey(1M) and dnssec-makekeyset(1M); dnssec-keygen(1M) and dnssec-signzone(1M) now provide alternative functionality.

For more information, see TID 7012684 (https://www.suse.com/support/kb/doc.php?id=7012684).

12.6.4.5 Enabling NFS 4.1 for nfsd

Support for NFS 4.1 is now available.

The parameter NFS4_SERVER_MINOR_VERSION in /etc/sysconfig/nfs sets the supported minor version of NFS 4.
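
A configuration sketch (the NFS4_SUPPORT line is shown for context and assumed to be enabled already):

```
# /etc/sysconfig/nfs (sketch): advertise NFSv4 minor version 1
NFS4_SUPPORT="yes"
NFS4_SERVER_MINOR_VERSION="1"
```

Restart the NFS server afterwards (rcnfsserver restart).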

12.6.4.6 Mounting NFS Volumes Locally on the Exporting Server

Mounting NFS volumes locally on the exporting server is not supported on SUSE Linux Enterprise systems, as is the case on all enterprise-class Linux systems.

12.6.4.7 Loading the mlx4_en Adapter Driver with the Mellanox ConnectX2 Ethernet Adapter

There is a reported problem where the Mellanox ConnectX2 Ethernet adapter does not trigger automatic loading of the mlx4_en adapter driver. If the mlx4_en driver does not load automatically although a Mellanox ConnectX2 interface is available, create the file mlx4.conf in the directory /etc/modprobe.d with the following content:

install mlx4_core /sbin/modprobe --ignore-install mlx4_core \
  && /sbin/modprobe mlx4_en

12.6.4.8 Using the System as a Router

As long as the firewall is active, the ip_forwarding option will be reset by the firewall module. To use the system as a router, the variable FW_ROUTE must also be set. This can be done via yast2 firewall or manually.
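
A sketch of the two settings involved (file locations as on SLES 11):

```
# /etc/sysconfig/SuSEfirewall2 (sketch)
FW_ROUTE="yes"

# /etc/sysconfig/sysctl (sketch)
IP_FORWARD="yes"
```

Afterwards, restart the firewall with rcSuSEfirewall2 restart.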

12.6.5 Cross Architecture Information

12.6.5.1 Myricom 10-Gigabit Ethernet Driver and Firmware

SUSE Linux Enterprise 11 (x86, x86_64, and IA64) uses the Myri10GE driver from the mainline Linux kernel. The driver requires a firmware file, which is not delivered with SUSE Linux Enterprise 11.

Download the required firmware at http://www.myricom.com.

12.7 AMD64/Intel64 64-Bit (x86_64) and Intel/AMD 32-Bit (x86) Specific Information

12.7.1 System and Vendor Specific Information

12.7.1.1 Current Limitations in a UEFI Secure Boot Context

When booting in Secure Boot mode, the following restrictions apply:

  • bootloader, kernel and kernel modules must be signed

  • kexec and kdump are disabled

  • hibernation (suspend on disk) is disabled

  • access to /dev/kmem and /dev/mem is not possible, even as root user

  • access to I/O ports is not possible, even as root user. All X11 graphical drivers must use a kernel driver

  • PCI BAR access through sysfs is not possible

  • 'custom_method' in ACPI is not available

  • debugfs for "asus-wmi" module is not available

  • the 'acpi_rsdp' parameter has no effect on the kernel

12.7.1.2 Installation on 4KB Sector Drives Not Supported

Legacy installations are not supported on 4 KB sector drives installed in x86/x86_64 servers. (UEFI installations and the use of 4 KB sector disks as non-boot disks are supported.)

12.7.1.3 Insecurity with XEN on Some AMD Processors

This hardware flaw ("AMD Erratum #121") is described in "Revision Guide for AMD Athlon 64 and AMD Opteron Processors" (http://support.amd.com/TechDocs/25759.pdf):

The following 130nm and 90nm (DDR1-only) AMD processors are subject to this erratum:

  • First-generation AMD-Opteron(tm) single and dual core processors in either 939 or 940 packages:

    • AMD Opteron(tm) 100-Series Processors

    • AMD Opteron(tm) 200-Series Processors

    • AMD Opteron(tm) 800-Series Processors

  • AMD Athlon(tm) processors in either 754, 939 or 940 packages

  • AMD Sempron(tm) processor in either 754 or 939 packages

  • AMD Turion(tm) Mobile Technology in 754 package

  • This issue does not affect Intel processors.

(End quoted text.)

As this is a hardware flaw, it is not fixable except by upgrading your hardware to a newer revision, by not allowing untrusted 64-bit guest systems, or by accepting that someone may stop your machine. The impact of this flaw is that a malicious PV guest user can halt the host system.

The SUSE Xen updates fix it by disabling the boot of Xen guest systems: the host will still boot, it just does not start guests. In other words: if the update is installed on the AMD64 hardware listed above, guests will no longer boot by default.

To reenable booting, the "allow_unsafe" option needs to be added to XEN_APPEND in /etc/sysconfig/bootloader as follows:

XEN_APPEND="allow_unsafe"

12.7.1.4 Boot Device Larger than 2 TiB

Due to limitations in the legacy x86/x86_64 BIOS implementations, booting from devices larger than 2 TiB is technically not possible using legacy partition tables (DOS MBR).

Since SUSE Linux Enterprise Server 11 Service Pack 1, we support installation and booting using UEFI on the x86_64 architecture on certified hardware.

12.7.1.5 i586 and i686 Machines with More than 16 GB of Memory

Depending on the workload, i586 and i686 machines with 16 GB to 48 GB of memory can run into instabilities. Machines with more than 48 GB of memory are not supported at all. Lower the usable memory with the mem= kernel boot option.
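
For example, to cap the usable memory at 16 GB, append mem= to the kernel line in the boot loader configuration (the kernel path and root partition below are examples):

```
# /boot/grub/menu.lst (sketch)
kernel /boot/vmlinuz root=/dev/sda2 mem=16G
```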

In such memory scenarios, we strongly recommend using an x86_64 system with the 64-bit SUSE Linux Enterprise Server and running the (32-bit) x86 applications on it.

12.7.1.6 Directly Addressable Memory on x86 Machines

When running SLES on an x86 machine, the kernel can only address 896MB of memory directly. In some cases, the pressure on this memory zone increases linearly according to hardware resources such as number of CPUs, amount of physical memory, number of LUNs and disks, use of multipath, etc.

To work around this issue, we recommend running an x86_64 kernel on such large server machines.

12.7.1.7 NetXen 10G Ethernet Expansion Card on IBM BladeCenter HS12 System

When installing SUSE Linux Enterprise Server 11 on an HS12 system with a "NetXen Incorporated BladeCenter-H 10 Gigabit Ethernet High Speed Daughter Card", the boot parameter pcie_aspm=off should be added.

12.7.1.8 NIC Enumeration

Ethernet interfaces on some hardware do not get enumerated in a way that matches the marking on the chassis.

12.7.1.9 Service Pack for HP Linux ProLiant

The hpilo driver is included in SUSE Linux Enterprise Server 11. Therefore, no hp-ilo package will be provided in the Linux ProLiant Service Pack for SUSE Linux Enterprise Server 11.

For more details, see Novell TID 7002735 http://www.novell.com/support/documentLink.do?externalID=7002735.

12.7.1.10 HP High Performance Mouse for iLO Remote Console

The desktop in SUSE Linux Enterprise Server 11 now recognizes the HP High Performance Mouse for iLO Remote Console and is configured to accept and process events from it. For the desktop mouse and the HP High Performance Mouse to stay synchronized, it is necessary to turn off mouse acceleration. As a result, the HP iLO2 High-Performance mouse (hpmouse) package is no longer needed with SUSE Linux Enterprise Server 11 once one of the following three options is implemented.

  1. In a terminal run xset m 1 — this setting will not survive a reset of the desktop.

  2. (Gnome) In a terminal run gconf-editor and go to desktop->gnome->peripherals->mouse. Edit the "motion acceleration" field to be 1.

    (KDE) Open "Personal Settings (Configure Desktop)" in the menu and go to "Computer Administration->Keyboard&Mouse->Mouse->Advanced" and change "Pointer Acceleration" to 1.

  3. (Gnome) In a terminal run "gnome-mouse-properties" and adjust the "Pointer Speed" slide scale until the HP High Performance Mouse and the desktop mouse run at the same speed across the screen. The recommended adjustment is close to the middle, slightly on the "Slow" side.

After acceleration is turned off, sync the desktop mouse and the ILO mouse by moving to the edges and top of the desktop to line them up in the vertical and horizontal directions. Also if the HP High Performance Mouse is disabled, pressing the <Ctrl> key will stop the desktop mouse and allow easier synching of the two pointers.

For more details, see Novell TID 7002735 http://www.novell.com/support/documentLink.do?externalID=7002735.

12.7.1.11 Missing 32-Bit Compatibility Libraries for libstdc++ and libg++ on 64-Bit Systems (x86_64)

32-bit (x86) compatibility libraries like "libstdc++-libc6.2-2.so.3" have been available on x86_64 in the package "compat-32-bit" with SUSE Linux Enterprise Server 9 and SUSE Linux Enterprise Server 10, and are also available on the SUSE Linux Enterprise Desktop 11 medium (compat-32-bit-2009.1.19), but are not included in SUSE Linux Enterprise Server 11.

Background

The respective libraries have been deprecated back in 2001 and shipped in the compatibility package with the release of SUSE Linux Enterprise Server 9 in 2004. The package was still shipped with SUSE Linux Enterprise Server 10 to provide a longer transition period for applications requiring the package.

With the release of SUSE Linux Enterprise Server 11 the compatibility package is no longer supported.

Solution

In an effort to enable a longer transition period for applications still requiring this package, it has been moved to the unsupported "Extras" channel. This channel is visible on every SUSE Linux Enterprise Server 11 system, which has been registered with the Novell Customer Center. It is also mirrored via SMT alongside the supported and maintained SUSE Linux Enterprise Server 11 channels.

Packages in the "Extras" channel are not supported or maintained.

The compatibility package is part of SUSE Linux Enterprise Desktop 11 due to a policy difference with respect to deprecation and deprecated packages as compared to SUSE Linux Enterprise Server 11.

We encourage customers to work with SUSE and SUSE's partners to resolve dependencies on these old libraries.

12.7.1.12 32-Bit Devel-Packages Missing from the Software Development Kit (x86_64)

Example: The libpcap0-devel-32-bit package was available in the Software Development Kit 10, but is missing from the Software Development Kit 11.

Background

SUSE supports running 32-bit applications on 64-bit architectures; respective runtime libraries are provided with SUSE Linux Enterprise Server 11 and fully supported. With SUSE Linux Enterprise 10 we also provided 32-bit devel packages on the 64-bit Software Development Kit. Having 32-bit devel packages and 64-bit devel packages installed in parallel may lead to side-effects during the build process. Thus with SUSE Linux Enterprise 11 we started to remove some (but not yet all) of the 32-bit devel packages from the 64-bit Software Development Kit.

Solution

With the development tools provided in the Software Development Kit 11, customers and partners have two options to build 32-bit packages in a 64-bit environment (see below). Beyond that, SUSE's appliance offerings provide powerful environments for software building, packaging and delivery.

  • Use the "build" tool, which creates a chroot environment for building packages.

  • The Software Development Kit contains the software used for the Open Build Service. Here the abstraction is provided by virtualization.

12.7.2 Virtualization

12.7.2.1 XEN: Watchdog Usage

Multiple XEN watchdog instances are not supported. Enabling more than one instance can cause system crashes.

12.7.2.2 Xen: Kernel Dom0 and Raw Hardware Characteristics

Because the dom0 kernel is running virtualized, tools such as irqbalance or lscpu will not reflect the raw hardware characteristics.

12.7.2.3 Amazon EC2 Availability

SUSE Linux Enterprise Server 11 SP2 is available immediately for use on Amazon Web Services EC2. For more information about Amazon EC2 Running SUSE Linux Enterprise Server, please visit http://aws.amazon.com/suse

12.7.2.4 KVM

Since SUSE Linux Enterprise Server 11 SP1, KVM is fully supported on the x86_64 architecture. KVM is designed around the hardware virtualization features included in both AMD (AMD-V) and Intel (VT-x) CPUs produced within the past few years, as well as other virtualization features in even more recent PC chipsets and PCI devices, for example device assignment using IOMMU and SR-IOV.

The following website identifies processors that support hardware virtualization:

The KVM kernel modules will not load if the basic hardware virtualization features are not present and enabled in the BIOS. If KVM does not start, please check the BIOS settings.

KVM allows for memory overcommit and disk space overcommit. It is up to the user to understand the impact of doing so. Hard errors resulting from exceeding available resources will result in guest failures. CPU overcommit is supported but carries performance implications.

KVM supports a number of storage caching strategies which may be employed when configuring a guest VM. There are important data integrity and performance implications when choosing a caching mode. As an example, cache=writeback is not as safe as cache=none. See the online "SUSE Linux Enterprise Server Virtualization with KVM" documentation for details.

The following guest operating systems are supported:

  • Starting with SLES 11 SP2, Windows guest operating systems are fully supported on the KVM hypervisor, in addition to Xen. For the best experience, we recommend using WHQL-certified virtio drivers, which are part of SLE VMDP.

    SUSE Linux Enterprise Server 11 SP2 and SP3 as fully virtualized. The following virtualization aware drivers are available: kvm-clock, virtio-net, virtio-block, virtio-balloon

  • SUSE Linux Enterprise Server 10 SP3 and SP4 as fully virtualized. The following virtualization aware drivers are available: kvm-clock, virtio-net, virtio-block, virtio-balloon

  • SUSE Linux Enterprise Server 9 SP4 as fully virtualized. For 32-bit kernel, specify clock=pmtmr on the Linux boot line; for 64-bit kernel, specify ignore_lost_ticks on the Linux boot line.

For more information, see /usr/share/doc/packages/kvm/kvm-supported.txt.

12.7.2.5 VMI Kernel (x86, 32-bit only)

VMware, SUSE and the community improved the kernel infrastructure in a way that VMI is no longer necessary. Starting with SUSE Linux Enterprise Server 11 SP1, the separate VMI kernel flavor is obsolete and therefore has been dropped from the media. When upgrading the system, it will be automatically replaced by the PAE kernel flavor. The PAE kernel provides all features that were included in the separate VMI kernel flavor.

12.7.2.6 CPU Overcommit and Fully Virtualized Guest

Unless the hardware supports Pause Loop Exiting (Intel) or Pause Intercept Filter (AMD), fully virtualized guests with CPU overcommit in place may become unresponsive or hang under heavy load.

Paravirtualized guests work flawlessly with CPU overcommit under heavy load.

This issue is currently being worked on.

12.7.2.7 IBM System x x3850/x3950 with ATI Radeon 7000/VE Video Cards and Xen Hypervisor

When installing SUSE Linux Enterprise Server 11 on IBM System x x3850/x3950 with ATI Radeon 7000/VE video cards, the boot parameter 'vga=0x317' needs to be added to avoid video corruption during the installation process.

Graphical environment (X11) in Xen is not supported on IBM System x x3850/x3950 with ATI Radeon 7000/VE video cards.

12.7.2.8 Video Mode Selection for Xen Kernels

In a few cases, following the installation of Xen, the hypervisor does not boot into the graphical environment. To work around this issue, modify /boot/grub/menu.lst and replace vga=<number> with vga=mode-<number>. For example, if the setting for your native kernel is vga=0x317, then for Xen you will need to use vga=mode-0x317.
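The substitution described above can also be scripted; a minimal sketch (the kernel line below is illustrative, not taken from a real menu.lst):

```shell
# Rewrite vga=<number> to vga=mode-<number>, as described above
LINE="kernel /boot/xen.gz vga=0x317"
XEN_LINE=$(printf '%s' "$LINE" | sed 's/vga=\(0x[0-9a-fA-F]*\)/vga=mode-\1/')
echo "$XEN_LINE"   # kernel /boot/xen.gz vga=mode-0x317
```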

12.7.2.9 Time Synchronization in virtualized Domains with NTP

Paravirtualized (PV) DomUs usually receive the time from the hypervisor. If you want to run "ntp" in PV DomUs, the DomU must be decoupled from the Dom0's time. At runtime, this is done with:

echo 1 > /proc/sys/xen/independent_wallclock

To set this at boot time:

  1. either append "independent_wallclock=1" to kernel cmd line in DomU's grub configuration file

  2. or append "xen.independent_wallclock = 1" to /etc/sysctl.conf in the DomU.

If you encounter time synchronization issues with Paravirtualized Domains, we encourage you to use NTP.

12.7.3 RAS

12.7.3.1 Update to mcelog for current and next generation Intel CPUs

The mcelog tool and subsystem have been updated to support current and upcoming Intel CPU generations. This also includes the Predictive Failure Analysis feature.

12.8 Intel Itanium (ia64) Specific Information

12.8.1 Installation on Systems with Many LUNs (Storage)

While the number of LUNs for a running system is virtually unlimited, we suggest having no more than 64 LUNs online while installing the system. This reduces the time needed to initialize and scan the devices, and thus the overall installation time.

12.9 POWER (ppc64) Specific Information

12.9.1 vDSO for getcpu and glibc vDSO functions

Previous implementations of vDSO for getcpu and gettimeofday are costly in terms of processor cycles.

The new vDSO functions for getcpu and gettimeofday mitigate these issues and allow applications to run with improved performance.

12.9.2 Support for the IBM POWER7+ Accelerated Encryption and Random Number Generation

For more information on making use of the IBM POWER7+ crypto and RNG accelerators, please see: https://www.ibm.com/developerworks/mydeveloperworks/files/form/anonymous/api/library/f57fde24-5f30-4295-91fb-e612c6a7a75a/document/4a8d6ce4-6e1f-4203-b9b9-1d7747cec644/media/power7%2B-accelerated-encryption-for-linux-v3.pdf

12.9.3 POWER7+ Random Number Generator

Support for the POWER7+ on-chip Random Number Generator has been added.

12.9.4 Add Per-process Data Stream Control Register (DSCR) Support

The kernel supports setting a system-wide DSCR (Data Stream Control Register) value via the sysfs interface (/sys/devices/system/cpu/dscr_default). This system-wide value is inherited by new processes until the user changes it again; it does not allow modifying or retrieving the DSCR value of individual processes.

The powerpc-utils package shipped in this release provides a modified ppc64_cpu command, which allows users to set and read the DSCR value on a per-process basis.

12.9.5 Check Sample Instruction Address Register (SIAR) Valid Bit before Saving Contents of SIAR

The POWER7 processor has a register, referred to as Sample Instruction Address Register. This register is loaded with the contents of instruction address when a sample of a performance monitoring event is taken. If an instruction that was executed speculatively is rolled back, the event is also rolled back but the contents of SIAR are not cleared and thus invalid. The kernel has no way of detecting that the contents of SIAR are invalid. This can result in a few profiling samples with incorrect instruction addresses.

The POWER7+ processor adds a new bit, referred to as SIAR-Valid bit and sets this bit to indicate when the contents of the SIAR are valid. The new SLES 11 SP3 kernel checks this bit before saving the contents of the SIAR in a sample. This ensures that the instruction addresses saved in profiling samples are correct.

12.9.6 LightPath Diagnostics Framework for IBM Power

IBM Power systems have service indicators (LEDs) that help identify components (Guiding Light) and indicate a component in error (LightPath). Currently, Linux only has a couple of commands that cater to LightPath services.

This release delivers a LightPath framework that helps customers identify a hardware component in error on IBM Power Systems.

12.9.7 PRRN Event Handling

The latest firmware versions for IBM Power Systems allow the affinity of the resources on a system to be updated dynamically. This occurs via a Platform Resource Reassignment Notification (PRRN) event.

The updates to the ppc64-diag, powerpc-utils, and librtas packages allow Linux systems to handle these PRRN events and update the affinity of system CPU and memory resources.

12.9.8 Increase Number of Partitions per Core on IBM POWER7+

Support for 20 partitions per core is enabled on IBM POWER7+.

12.9.9 Enable Firmware Assisted Dump for IBM Power Systems

Starting with IBM POWER6, the Power firmware is capable of preserving the partition memory dump during a system crash and booting into a fresh copy of the kernel with a fully reset system. This feature adds support for exploiting the dump capture capability provided by the Power firmware.

For more information about the configuration of fadump, see http://www.novell.com/support/kb/doc.php?id=7012786.

12.9.10 Kernel cpuidle Framework for POWER7

Enable POWER systems to leverage the generic cpuidle framework by taking advantage of advanced heuristics, tunables and features provided by the cpuidle framework. This enables better power management on the systems and helps tune the system and applications accordingly.

12.9.11 Supported Hardware and Systems

All POWER3, POWER4, PPC970 and RS64-based models that were supported by SUSE Linux Enterprise Server 9 are no longer supported.

12.9.12 Using btrfs as the Root File System on IBM Power Systems

Configure a minimum of 32 MB for the PReP partition when using btrfs as the root (/) file system.

12.9.13 Loading the Installation Kernel via Network on POWER

With SUSE Linux Enterprise Server 11, the boot file DVD1/suseboot/inst64 cannot be booted directly via network anymore, because its size exceeds 12 MB. To load the installation kernel via network, copy the files yaboot.ibm, yaboot.cnf, and inst64 from the DVD1/suseboot directory to the TFTP server, and rename the yaboot.cnf file to yaboot.conf. yaboot can also load configuration files for specific Ethernet MAC addresses; use a name like yaboot.conf-01-23-45-ab-cd-ef to match a MAC address. An example yaboot.conf for TFTP booting looks like this:

default=sles11
timeout=100
image[64-bit]=inst64
    label=sles11
    append="quiet install=nfs://hostname/exported/sles11dir"
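The copy-and-rename steps above can be sketched as shell commands; the demo directories below stand in for the DVD's suseboot directory and the TFTP root:

```shell
# Simulate the TFTP setup: copy the boot files, then rename yaboot.cnf to yaboot.conf
mkdir -p suseboot.demo tftproot.demo
touch suseboot.demo/yaboot.ibm suseboot.demo/yaboot.cnf suseboot.demo/inst64
cp suseboot.demo/yaboot.ibm suseboot.demo/yaboot.cnf suseboot.demo/inst64 tftproot.demo/
mv tftproot.demo/yaboot.cnf tftproot.demo/yaboot.conf
ls tftproot.demo
```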

12.9.14 Huge Page Memory Support on POWER

Huge Page Memory (16 GB pages, enabled via HMC) is supported by the Linux kernel, but special kernel parameters must be used to enable this support. Boot with the parameters "hugepagesz=16G hugepages=N" to use the 16 GB huge pages, where N is the number of 16 GB pages assigned to the partition via the HMC. The number of 16 GB huge pages available cannot be changed once the partition is booted. Also, there are some restrictions if huge pages are assigned to a partition in combination with eHEA / eHCA adapters:

IBM eHEA Ethernet Adapter:

The eHEA module will fail to initialize any eHEA ports if huge pages are assigned to the partition and Huge Page kernel parameters are missing. Thus, no huge pages should be assigned to the partition during a network installation. To support huge pages after installation, the huge page kernel parameters need to be added to the boot loader configuration before huge pages are assigned to the partition.

IBM eHCA InfiniBand Adapter:

The current eHCA device driver is not compatible with huge pages. If huge pages are assigned to a partition, the device driver will fail to initialize any eHCA adapters assigned to the partition.
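After booting with the huge page parameters described above, the kernel's huge page pool can be inspected; a sketch that reads standard /proc fields (this works on any Linux system, though the page sizes differ outside POWER):

```shell
# Show the kernel's huge page accounting; HugePages_Total reflects the
# hugepages=N boot parameter
grep -i '^HugePages' /proc/meminfo
```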

12.9.15 Installation on POWER onto IBM VSCSI Target

The installation on a vscsi client will fail with old versions of the AIX VIO server.

Solution: Upgrade the AIX VIO server to version 1.5.2.1-FP-11.1 or later.

12.9.16 IBM Linux VSCSI Server Support in SUSE Linux Enterprise Server 11

Customers using SLES 9 or SLES 10 to serve Virtual SCSI to other LPARs, using the ibmvscsis driver, who wish to migrate from these releases, should consider migrating to the IBM Virtual I/O server. The IBM Virtual I/O server supports all the IBM PowerVM virtual I/O features and also provides integration with the Virtual I/O management capabilities of the HMC. It can be downloaded from: http://www14.software.ibm.com/webapp/set2/sas/f/vios/download/home.html

12.9.17 Virtual Fibre Channel Devices

When using IBM Power Virtual Fibre Channel devices utilizing N-Port ID Virtualization, the Virtual I/O Server may need to be updated in order to function correctly. Linux requires VIOS 2.1, Fixpack 20.1, and the LinuxNPIV I-Fix for this feature to work properly. These updates can be downloaded from: http://www14.software.ibm.com/webapp/set2/sas/f/vios/home.html

12.9.18 Virtual Tape Devices

When using virtual tape devices served by an AIX VIO server, the Virtual I/O Server may need to be updated in order to function correctly. The latest updates can be downloaded from: http://www14.software.ibm.com/webapp/set2/sas/f/vios/home.html

For more information about IBM Virtual I/O Server, see http://www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/home.html.

12.9.19 Chelsio cxgb3 iSCSI Offload Engine

The Chelsio hardware supports a packet size of approximately 16 KB (the exact value depends on the system configuration). For the cxgb3i driver to work properly, the parameter MaxRecvDataSegmentLength in /etc/iscsid.conf needs to be set to 8192.

To use the cxgb3i offload engine, the cxgb3i module needs to be loaded manually after open-iscsi has been started.

For additional information, refer to /usr/src/linux/Documentation/scsi/cxgb3i.txt in the kernel source tree.
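A sketch of applying the MaxRecvDataSegmentLength setting described above; the parameter key follows open-iscsi's iscsid.conf conventions and should be checked against your file, and a demo file stands in for /etc/iscsid.conf:

```shell
# Force MaxRecvDataSegmentLength to 8192 in a demo copy of iscsid.conf
CONF=iscsid.conf.demo
echo 'node.conn[0].iscsi.MaxRecvDataSegmentLength = 65536' > "$CONF"
sed -i 's/\(MaxRecvDataSegmentLength = \).*/\18192/' "$CONF"
cat "$CONF"   # node.conn[0].iscsi.MaxRecvDataSegmentLength = 8192
```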

12.9.20 Known TFTP Issues with Yaboot

When attempting to netboot yaboot, users may see the following error message:

Can't claim memory for TFTP download (01800000 @ 01800000-04200000)

and the netboot will stop and immediately display the yaboot "boot:" prompt. Use the following steps to work around the problem.

  • Reboot the system and at the IBM splash screen select '8' to get to an Open Firmware prompt "0>"

  • At the Open Firmware prompt, type the following commands:

    setenv load-base 4000
    setenv real-base c00000
    dev /packages/gui obe
    
  • The last command will take the system back to the IBM splash screen, and the netboot can be attempted again.

12.9.21 Graphical Administration of Remotely Installed Hardware

If you do a remote installation in text mode, but want to connect to the machine later in graphical mode, be sure to set the default runlevel to 5 via YaST. Otherwise xdm/kdm/gdm might not be started.

12.9.22 InfiniBand - SDP Protocol Not Supported on IBM Hardware

To disable SDP on IBM hardware, set SDP=no in openib.conf so that SDP is not loaded by default. After changing this setting, run "openibd restart" or reboot the system for it to take effect.

12.9.23 RDMA NFS Server May Hang During Shutdown (OFED)

If your system is configured as an NFS over RDMA server, the system may hang during a shutdown if a remote system has an active NFS over RDMA mount. To avoid this problem, prior to shutting down the system, run "openibd stop"; run it in the background, because the command will hang and otherwise block the console:

/etc/init.d/openibd stop &

A shutdown can now be run cleanly.

The steps to configure and start NFS over RDMA are as follows:

  • On the server system:

    1. Add an entry to the file /etc/exports, for example:

      /home   192.168.0.34/255.255.255.0(fsid=0,rw,async,insecure,no_root_squash)
    2. As the root user run the commands:

      /etc/init.d/nfsserver start
      echo rdma 20049 > /proc/fs/nfsd/portlist
  • On the client system:

    1. Run the command: modprobe xprtrdma.

    2. Mount the remote file system using the command /sbin/mount.nfs. Specify the IP address of the IP-over-InfiniBand network interface (ib0, ib1, ...) of the server and the options proto=rdma,port=20049, for example:

      /sbin/mount.nfs 192.168.0.64:/home /mnt \
      -o proto=rdma,port=20049,nolock

12.9.24 XFS Stack Overflow

Under heavy I/O load on a fragmented file system, XFS can overflow the stack on the ppc64 architecture, leading to a system crash.

This problem is fixed with the first SLES 11 SP3 maintenance update. The released kernel version is 3.0.82-0.7.9.

12.10 System z (s390x) Specific Information

See http://www.ibm.com/developerworks/linux/linux390/documentation_novell_suse.html for more information.

IBM zEnterprise 196 (z196) and IBM zEnterprise 114 (z114) are referred to as z196 and z114 in the following.

12.10.1 Supported Memory Size for SLES on System z

With SLES 11 SP3, we support up to 4 TiB of main memory. This limit may, however, be reduced by limitations of the underlying hardware.

For more information, see https://www.suse.com/products/server/technical-information/.

12.10.2 Hardware

12.10.2.1 Support of SHA-256 Hash Algorithm in openCryptoki ICA Token

The openCryptoki 2.4.3.1 IBM Cryptographic Architecture (ICA) token now supports RSA with SHA-2 hashes with the new mechanisms CKM_SHA256_RSA_PKCS, CKM_SHA384_RSA_PKCS, and CKM_SHA512_RSA_PKCS.

12.10.2.2 Leverage Cross Memory Attach Functionality for System z

Cross memory attach reduces the number of data copies needed for intra-node interprocess communication. In particular, MPI libraries engaged in intra-node communication can now perform a single copy of the message to shared memory rather than performing a double copy.

12.10.2.3 CryptoExpress4 - Device Driver Exploitation

With SLES 11 SP3 the z90crypt device driver supports the Crypto Express 4 (CEX4) adapter card.

12.10.2.4 Implement lscpu and chcpu

This feature improves handling of CPU hotplug. The lscpu command now displays detailed information about available CPUs. Using a new command, chcpu, you can change the CPU state, disable and enable CPUs, and configure specified CPUs.

12.10.2.5 CPACF Exploitation (libica Part 2)

This feature extends the libica library with new modes of operation for DES, 3DES and AES. These modes of operation (CBC-CS, CCM, GCM, CMAC) are supported by Message Security Assist (CPACF) extension 4, which can be used with z196 and later System z mainframes.

12.10.2.6 Exploitation of Data Routing for FCP

This feature supports the enhanced mode of the System z FCP adapter card. In this mode, the adapter passes data directly from memory to the SAN when no memory is free on the adapter card due to large or slow I/O requests.

12.10.3 Virtualization

12.10.3.1 VEPA Mode Support

VEPA mode routes traffic between virtual machines on the same mainframe through an external switch. The switch then becomes a single point of control for security, filtering, and management.

12.10.3.2 Technology preview: KVM support on s390x

KVM is now included on the s390x platform as a technology preview.

12.10.3.3 Support of Live Guest Relocation (LGR) with z/VM 6.2

Live guest relocation (LGR) with z/VM 6.2 requires z/VM service to be applied, especially with Collaborative Memory Management (CMMA) active (cmma=on).

Apply z/VM APAR VM65134.

12.10.3.4 Linux Guests Running on z/VM 5.4 and 6.1 Require z/VM Service Applied

Linux guests using dedicated devices may experience a loop if an available path to the device goes offline prior to the IPL of Linux.

Apply recommended z/VM service APARs VM65017 and VM64847.

12.10.4 Storage

12.10.4.1 Safe Offline Interface for DASD Devices

Instead of setting a DASD device offline and returning all outstanding I/O requests as failed, this interface lets you write all outstanding data to the device before setting it offline.

12.10.4.2 Flash Express Support for IBM System z

Flash Express memory is accessed as storage-class memory increments. Storage-class memory for IBM System z is a class of data storage devices that combine properties of both storage and memory. This feature improves the paging rate and access performance for temporary storage, for example, for data warehousing.

12.10.4.3 Detect DASD Path Connection Error

This feature enables the Linux DASD device driver to detect path configuration errors that cannot be detected by hardware or microcode. The device driver then does not use such paths. For example, with this feature, the DASD device driver detects paths that are assigned to a specific subchannel, but lead to different storage servers.

12.10.4.4 SAN Utilities for zFCP, hbaapi Completion

Improves systems manageability by supporting pass-through for generic services and retrieving events in the SAN. Improves SAN setup by retrieving information about the SAN fabric including all involved interconnect elements, such as switches.

12.10.4.5 Enhanced DASD Statistics for PAV and HPF

This feature improves DASD I/O diagnosis, especially for Parallel Access Volume (PAV) and High Performance FICON (HPF) environments, to analyze and tune DASD performance.

12.10.4.6 New Partition Types Added to the fdasd Command

In SLES 11 SP2, new partition types were added to the fdasd command in the s390-tools package. YaST in SP3 does not make use of these partition types when creating partitions. When fdasd is used from the command line, it works as documented.

12.10.5 Network

12.10.5.1 YaST May Fail to Activate HiperSockets Devices in Layer 2 Mode

On rare occasions, HiperSockets devices in layer 2 mode may remain in the softsetup state when configured via YaST.

Run ifup manually.

12.10.5.2 YaST Sets an Invalid Default MAC Address for OSA Devices in Layer 2 Mode

OSA devices in layer 2 mode remain in the softsetup state when "Set default MAC address" is used in YaST.

Do not select "Set default MAC address" in YaST. If the default MAC address was selected, remove the line LLADR='00:00:00:00:00:00' from the corresponding ifcfg file in /etc/sysconfig/network.
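Removing the line can be scripted; a sketch using a demo file in place of the real ifcfg file under /etc/sysconfig/network:

```shell
# Delete the invalid LLADR line written by YaST's "Set default MAC address"
IFCFG=ifcfg-eth0.demo
printf "BOOTPROTO='static'\nLLADR='00:00:00:00:00:00'\n" > "$IFCFG"
sed -i "/^LLADR='00:00:00:00:00:00'/d" "$IFCFG"
cat "$IFCFG"   # BOOTPROTO='static'
```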

12.10.5.3 Limitations with the "qetharp" Utility

qetharp -d

Deleting: An ARP entry that is part of a shared OSA should not be deleted from the ARP cache.

Current behavior: An ARP entry that is part of a shared OSA is deleted from the ARP cache.

qetharp -p

Purging: This should remove all remote entries that are not part of a shared OSA.

Current behavior: It only flushes the remote entries that are not part of a shared OSA the first time. If the user then pings one of the purged IP addresses, the entry is added back to the ARP cache, and a subsequent purge does not remove it.

12.10.6 Security

12.10.6.1 Support of SHA-256 Hash Algorithm in openCryptoki ICA Token

The openCryptoki 2.4.3.1 IBM Cryptographic Architecture (ICA) token now supports RSA with SHA-2 hashes with the new mechanisms CKM_SHA256_RSA_PKCS, CKM_SHA384_RSA_PKCS, and CKM_SHA512_RSA_PKCS.

12.10.6.2 CryptoExpress4 - Device Driver Exploitation

With SLES 11 SP3 the z90crypt device driver supports the Crypto Express 4 (CEX4) adapter card.

12.10.6.3 CPACF Exploitation (libica Part 2)

This feature extends the libica library with new modes of operation for DES, 3DES and AES. These modes of operation (CBC-CS, CCM, GCM, CMAC) are supported by Message Security Assist (CPACF) extension 4, which can be used with z196 and later System z mainframes.

12.10.6.4 Existing Data Execution Protection Removed for System z

The existing data execution protection for Linux on System z relies on the System z hardware to distinguish instructions and data through the secondary memory space mode. As of System z10, new load-relative-long instructions do not make this distinction. As a consequence, applications that have been compiled for System z10 or later fail when running with the existing data execution protection.

Therefore, data execution protection for Linux on System z has been removed.

12.10.7 RAS

12.10.7.1 Crypto Adapter Resiliency

This feature provides System z typical RAS for cryptographic adapters through comprehensive failure recovery. For example, this feature handles unexpected failures or changes caused by Linux guest relocation, suspend and resume activities or configuration changes.

12.10.7.2 Fuzzy Live Dump for System z

With this feature kernel dumps from running Linux systems can be created, to allow problem analysis without taking down systems. Because the Linux system continues running while the dump is written, and kernel data structures are changing during the dump process, the resulting dump contains inconsistencies.

12.10.7.3 kdump Support for System z

kdump can be used to create system dumps for instances of SUSE Linux Enterprise Server. kdump reduces both dump time and dump size and facilitates dump disk storage sharing. A setup GUI is provided by YaST. When performing an upgrade to SLES 11 SP3 and enabling kdump, please note that kdump reserves approximately 128 MB by default and sufficient disk space must be available for storing the dump.

Depending on the number of devices that are used in the system, the memory reserved for kdump needs to be adjusted. If fewer than forty devices are configured for the respective system, no action is required. If more than forty devices are configured, add one megabyte of system main storage for each additional 25 devices.
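The sizing rule can be expressed as simple arithmetic; the device count below and the rounding up are assumptions for illustration:

```shell
# Reserve 128 MB base, plus 1 MB per 25 devices beyond the first 40
DEVICES=140
EXTRA=$(( (DEVICES - 40 + 24) / 25 ))   # round up: 100 extra devices -> 4 MB
TOTAL=$(( 128 + EXTRA ))
echo "${TOTAL} MB"   # 132 MB
```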

If too many devices are used in the system, the setup of kdump may fail, because too many devices are written to the kernel command line, which must not exceed 896 characters. One way to shorten the line is to specify ranges of device numbers instead of listing each device individually (!0800,!0801,!0802 becomes !0800-0802).

This shortened device number list needs to be added to the kdump command line. To configure kdump, go to the Expert Settings and insert the shortened device list into the field "Kdump Command Line Append".
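The shortening is a plain text substitution; written out literally for the device numbers used in the example above:

```shell
# Collapse !0800,!0801,!0802 into the range form !0800-0802
LIST='!0800,!0801,!0802'
SHORT=$(printf '%s' "$LIST" | sed 's/!0800,!0801,!0802/!0800-0802/')
echo "$SHORT"   # !0800-0802
```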

12.10.7.4 Distinguish Dump System and Boot System

A dump system is not necessarily identical to the system that was booted. Linux guest relocation or suspend and resume activities might introduce problems. To help analyze such problems, a system dump now provides location information about the original Linux system.

12.10.7.5 Support for zPXE Boot

zPXE provides a function similar to PXE boot on x86/x86-64: a parameter-driven executable retrieves the installation source and instance-specific parameters from a specified network location, automatically downloads the respective kernel, initrd, and parameter files for that instance, and starts an automated (or manual) installation.

12.10.8 Performance

12.10.8.1 Leverage Cross Memory Attach Functionality for System z

Cross memory attach reduces the number of data copies needed for intra-node interprocess communication. In particular, MPI libraries engaged in intra-node communication can now perform a single copy of the message to shared memory rather than performing a double copy.

12.10.8.2 Support of the Transactional Execution Facility and Runtime Instrumentation

With this facility, the Linux kernel supports hardware runtime instrumentation, an advanced mechanism that improves analysis and optimization of the code generated by the new IBM JVM. Software locking overhead is minimized, and scalability and parallelism are increased.

12.10.8.3 System z Performance Counters in the Linux perf Tool

This feature provides simplified performance analysis for software on Linux on System z. It uses the perf tool to access the hardware performance counters.

12.10.8.4 Optimized Compression Library zlib

This feature provides optimization of and support for the general purpose data compression library zlib. This library improves compression performance on System z.

12.10.8.5 Libhugetlbfs support for System z

Enables the transparent exploitation of large pages in C/C++ programs. Applications and middleware programs can profit from the performance benefits of large pages without changes or recompilation.

12.10.9 Miscellaneous

12.10.9.1 IBM System z Architecture Level Set (ALS) Preparation

To exploit new IBM System z architecture capabilities during the lifecycle of SUSE Linux Enterprise Server 11, support for machines of the types z900, z990, z800, and z890 is deprecated in this release. SUSE plans to introduce an ALS at the earliest with SUSE Linux Enterprise Server 11 Service Pack 1 (SP1) and at the latest with SP2. After the ALS, SUSE Linux Enterprise Server 11 will only execute on z9 or newer processors.

With SUSE Linux Enterprise Server 11 GA, only machines of type z9 or newer are supported.

When developing software, we recommend switching gcc to z9/z10 optimization:

  • install gcc

  • install the gcc-z9 package (changes the gcc options to -march=z9-109 -mtune=z10)

12.10.9.2 Minimum Storage Firmware Level for LUN Scanning

For LUN Scanning to work properly, the minimum storage firmware level should be:

  • DS8000 Code Bundle Level 64.0.175.0

  • DS6000 Code Bundle Level 6.2.2.108

12.10.9.3 Large Page Support in IBM System z

Large Page support allows processes to allocate process memory in chunks of 1 MiB instead of 4 KiB. This works through the hugetlbfs.

12.10.9.4 Collaborative Memory Management Stage II (CMM2) Lite

SLES 11 SP2 supports CMM2 Lite for optimized memory usage and to handle memory overcommitment via memory page state transitions based on "stable" and "unused" memory pages of z/VM guests using the existing arch_alloc_page and arch_free_page callbacks.

12.10.9.5 Issue with SLES 11 and NSS under z/VM

Starting SLES 11 under z/VM with NSS sometimes causes a guest to logoff by itself.

Solution: IBM addresses this issue with APAR VM64578.

13 Resolved Issues

  • Bugfixes

    This Service Pack contains all the latest bugfixes for each package released via the maintenance Web since the GA version.

  • Security Fixes

    This Service Pack contains all the latest security fixes for each package released via the maintenance Web since the GA version.

  • Program Temporary Fixes

    This Service Pack contains all the PTFs (Program Temporary Fix) for each package released via the maintenance Web since the GA version which were suitable for integration into the maintained common codebase.

14 Technical Information

This section contains information about system limits, a number of technical changes and enhancements for the experienced user.

When talking about CPUs we are following this terminology:

CPU Socket

The visible physical entity, as it is typically mounted to a motherboard or an equivalent.

CPU Core

The (usually not visible) physical entity as reported by the CPU vendor.

On System z this is equivalent to an IFL.

Logical CPU

This is what the Linux Kernel recognizes as a "CPU".

We avoid the word "thread" (which is sometimes used), as it would become ambiguous later on.

Virtual CPU

A logical CPU as seen from within a Virtual Machine.

14.1 Kernel Limits

http://www.suse.com/products/server/technical-information/#Kernel

This table summarizes the various limits which exist in our recent kernels and utilities (if related) for SUSE Linux Enterprise Server 11.

SLES 11 (3.0)                      x86        ia64          x86_64           s390x          ppc64

CPU bits                           32         64            64               64             64
max. # Logical CPUs                32         4096          4096             64             1024
max. RAM (theoretical/certified)   64/16 GiB  1 PiB/8+ TiB  64 TiB/16 TiB    4 TiB/256 GiB  1 PiB/512 GiB
max. user-/kernelspace             3/1 GiB    2 EiB/φ       128 TiB/128 TiB  φ/φ            2 TiB/2 EiB

Limits that apply to all architectures:

max. swap space: up to 29 * 64 GB (i386 and x86_64) or 30 * 64 GB (other architectures)
max. # processes: 1048576
max. # threads per process: tested with more than 120000; the maximum limit depends on memory and other parameters
max. size per block device: up to 16 TiB, and up to 8 EiB on all 64-bit architectures
FD_SETSIZE: 1024

14.2 KVM Limits

Guest RAM size: 2 TB

Virtual CPUs per guest: 160

Maximum number of NICs per guest: 8

Block devices per guest: 4 emulated, 20 para-virtual

Maximum number of guests: the total number of vCPUs in all guests must be no greater than 8 times the number of CPU cores in the host.
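The guest limit reads as a simple formula; the host core count below is just an example value:

```shell
# Total vCPUs across all guests must not exceed 8 x host CPU cores
HOST_CORES=16
MAX_TOTAL_VCPUS=$((8 * HOST_CORES))
echo "$MAX_TOTAL_VCPUS"   # 128
```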

14.2.1 TLS Support for QEMU Websockets

Since SLE 11 SP3 we ship QEMU with TLS encryption support for QEMU Websockets. This feature allows every modern browser to create a secure VNC connection to QEMU without any additional plugins or configuration on the user side.

14.2.2 QEMU: Version 1.4

SLES 11 SP3 ships with QEMU version 1.4.1. More information about this version is available at http://wiki.qemu.org/ChangeLog/1.4. For a list of supported features in SLES 11 SP3, refer to the /usr/share/doc/kvm/kvm-supported.txt file in the "KVM" package.

14.2.3 XEN/KVM: virt-manager Can Configure PCI Pass-through Devices at VM Creation

virt-manager now allows configuring PCI pass-through devices at VM creation time, in both Xen and KVM.

14.2.4 libseccomp

Seccomp filters are expressed as a Berkeley Packet Filter (BPF) program, which is not a well understood interface for most developers.

The libseccomp library provides an easy to use interface to the Linux Kernel's syscall filtering mechanism, seccomp. The libseccomp API allows an application to specify which syscalls, and optionally which syscall arguments, the application is allowed to execute, all of which are enforced by the Linux Kernel.

14.2.5 libvirt Support for QEMU seccomp Sandboxing

QEMU guests spawned by libvirt are exposed to a large number of system calls that go unused for the entire lifetime of the process.

libvirt's qemu.conf file is updated with a seccomp_sandbox option that can be used to enable use of QEMU's seccomp sandboxing support. This allows execution of QEMU guests with reduced exposure to kernel system calls.
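
Assuming the option name described above, enabling the sandbox is a one-line change in /etc/libvirt/qemu.conf (a sketch; restart libvirtd afterwards):

```
# /etc/libvirt/qemu.conf
# Run QEMU guests with seccomp system call filtering enabled.
seccomp_sandbox = 1
```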

14.2.6 libvirt Bridged Networking for Unprivileged Users

libvirt can already spawn QEMU guests with bridged networking support when running under a privileged user ID, however it cannot do the same when run under an unprivileged user ID.

libvirt is updated to enable QEMU guests to be spawned with bridged networking when libvirt is run under an unprivileged user ID. This benefits installations that connect to the libvirtd instance with the qemu:///session URI. This was achieved by using the new QEMU network helper support when libvirt is running under an unprivileged user ID.

14.2.7 libvirt DAC Isolation

libvirt spawns all QEMU guests created through the qemu:///system URI under the user ID and group ID defined in /etc/libvirt/qemu.conf. This means all guests are run under the same user ID and group ID, removing all Discretionary Access Control (DAC). While Mandatory Access Control (MAC) may already be isolating guests, it would be nice to also have DAC isolation for an added layer of security.

libvirt has been updated to allow spawning of guests under unique user and group IDs. The libvirt domain XML's <seclabel> tag is updated with model='dac' to provide this support, and libvirt APIs are updated to allow applications to inspect the full list of security labels of a domain.
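
A domain XML snippet using this model might look as follows; the UID/GID pair 107:107 is an arbitrary example, not a value mandated by libvirt:

```xml
<!-- Run this guest under its own user and group ID for DAC isolation.
     The +107:+107 pair is illustrative. -->
<seclabel type='static' model='dac' relabel='yes'>
  <label>+107:+107</label>
</seclabel>
```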

14.2.8 QEMU Network Helper for Unprivileged Users

QEMU guests previously could not be started with bridged networking support when run under an unprivileged user ID.

Infrastructure is introduced to enable a network helper to be executed by QEMU. This also allows third parties to implement user-visible network backends without having to introduce them into QEMU itself. A default network helper is introduced that implements the same bridged networking functionality as the common qemu-ifup script. It creates a tap file descriptor, attaches it to a bridge, and passes it back to QEMU. This helper runs with higher privileges, allowing QEMU to be invoked with bridged networking support under an unprivileged user.
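
Assuming the helper is installed as /usr/lib/qemu-bridge-helper and the bridge br0 is whitelisted in the helper's ACL file, an unprivileged invocation might look like this sketch:

```
# Start a guest with bridged networking as an unprivileged user; the
# setuid helper creates the tap device and attaches it to the bridge.
qemu-kvm -m 1024 -hda guest.img \
    -netdev bridge,id=net0,br=br0,helper=/usr/lib/qemu-bridge-helper \
    -device virtio-net-pci,netdev=net0
```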

14.2.9 QEMU: Sandboxing with seccomp

New seccomp kernel functionality is intended to be used to declare the whitelisted syscalls and syscall parameters. This will limit QEMU's syscall footprint, and therefore the potential kernel attack surface. The idea is that if an attacker were to execute arbitrary code, they would only be able to use the whitelisted syscalls.

QEMU has been updated with the '-sandbox' option. When set to 'on', the '-sandbox' option will enable seccomp system call filtering for QEMU, allowing only a subset of system calls to be used.
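
On the command line this looks like the following sketch (the disk image name is illustrative):

```
# Enable seccomp system call filtering for this QEMU instance.
qemu-kvm -sandbox on -m 512 -hda guest.img
```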

14.2.10 KVM: Export Platform Power Management Capability through libvirt Framework

Libvirt can now discover and update tags in the capabilities XML field based on power management features supported by the platform.

14.2.11 KVM: Support INVPCID's Haswell Instructions

KVM now supports the INVPCID instruction introduced with the Intel Haswell microarchitecture. Process-context identifiers (PCIDs) are a facility by which a logical processor may cache information for multiple linear-address spaces, so that the processor can retain cached information when software switches to a different linear address space. The INVPCID instruction performs fine-grained TLB flushes, which benefits the kernel. This feature is now exposed to guests, and modern guest kernels can use the new instruction to improve efficiency under KVM. qemu-kvm must select PCID via the "-cpu" option.

14.2.12 KVM: TSC Deadline Timer Support

The TSC deadline timer is a new LAPIC timer mode that generates a one-shot timer interrupt when the TSC reaches a programmed deadline, instead of counting down an APIC clock interval. It provides more precise timer interrupts (less than one tick of deviation), benefiting the OS scheduler and other timing-sensitive code.

14.2.13 KVM: Support for APIC Virtualization

This Service Pack adds support for APIC Virtualization provided by more current Intel CPUs. This improves VMM interrupt handling efficiency.

14.2.14 KVM: Haswell New Instructions Support

KVM now supports the new instructions of the Intel Haswell microarchitecture (e.g., FP fused multiply-add, 256-bit integer vectors, MOVBE). Using these new instructions can improve the efficiency of KVM guests.

14.2.15 KVM: support for Supervisor Mode Execution Protection (SMEP)

KVM now supports Supervisor Mode Execution Protection (SMEP), which prevents execution of user-mode pages while in supervisor mode and addresses a class of exploits that hijack kernel execution.

14.2.16 XEN/KVM/libvirt: Virtual Machine Lock Manager

The virtual machine lock manager is a daemon which will ensure that a virtual machine's disk image cannot be written to by two QEMU/KVM processes at the same time. It provides protection against starting the same virtual machine twice, or adding the same disk to two different virtual machines.

14.3 Xen Limits

SLES 11 SP3 x86

CPU bits: 64

Logical CPUs (Xen Hypervisor): 256

Virtual CPUs per VM: 64

Maximum supported memory (Xen Hypervisor): 2 TB

Maximum supported memory (Dom0): 500 GiB

Virtual memory per VM: 16 GB (32-bit), 511 GB (64-bit)

Total virtual devices per host: 2048

Maximum number of NICs per host: 8

Maximum number of vNICs per guest: 8

Maximum number of guests per host: 64

In Xen 4.2, the hypervisor bundled with SUSE Linux Enterprise Server 11 SP3, Dom0 can see and handle a maximum of 512 logical CPUs. The hypervisor itself, however, can access up to 256 logical CPUs and schedule those for the VMs.

With SUSE Linux Enterprise Server 11 SP2, we removed the 32-bit hypervisor as a virtualization host. 32-bit virtual guests are not affected and are fully supported with the provided 64-bit hypervisor.

14.3.1 XEN: Secure Boot

The Xen hypervisor is shipped as a signed EFI application. It negotiates with the shim loader to validate the Dom0 kernel signature before booting it. Enabling the alternative kernel image format requires bumping the backward compatibility level from 3.2 to 4.x; consequently, a SLE 11 SP3 PV guest cannot be booted on SLE 10 SP4, even if Secure Boot is not enabled.

14.3.2 AMD IOMMU: Enable ATS Devices

Xen now supports enabling ATS (Address Translation Services) devices with the AMD IOMMU.

14.3.3 Add support for AMD's OSVW feature in guests

This feature enables the AMD OSVW (OS Visible Workaround) capability for Xen and KVM. New AMD errata will be assigned an OSVW ID in the future; the operating system is expected to check the OSVW status MSRs to find out whether the CPU is affected by a specific erratum.

14.3.4 Do not intercept RDTSC(P) when TSC scaling is supported by hardware

Orochi-C AMD CPUs now support TSC scaling. This feature enables the TSC scaling ratio for SVM: guest VMs no longer need to take a #VMEXIT to calculate a translated TSC value when running in TSC emulation mode, which can substantially reduce rdtsc overhead.

14.3.5 XEN/KVM: virt-manager Can Configure PCI Pass-through Devices at VM Creation

virt-manager now allows configuring PCI pass-through devices at VM creation time, in both Xen and KVM.

14.3.6 XEN: Netconsole Support to Netfront Device

Xen now supports netconsole on its netfront device.

14.3.7 XEN: TSC Deadline Timer Support

The TSC deadline timer is a new LAPIC timer mode that generates a one-shot timer interrupt when the TSC reaches a programmed deadline, instead of counting down an APIC clock interval. It provides more precise timer interrupts (less than one tick of deviation), benefiting the OS scheduler and other timing-sensitive code.

14.3.8 XEN: JKT Core Error Recovery

Xen now supports the new MCA type to handle errors in the core (such as L1/L2 cache errors). Previously, only uncore errors (such as L3 cache errors) were handled.

14.3.9 XEN: Haswell New Instructions Support

Xen now supports the new instructions of the Intel Haswell microarchitecture (e.g., FP fused multiply-add, 256-bit integer vectors, MOVBE). Using these new instructions can improve the efficiency of Xen guests.

14.3.10 APIC Virtualization in Xen and KVM

This Service Pack adds support for the APIC virtualization feature for Intel's IvyBridge and later CPUs. Both hypervisors - Xen and KVM - support APICv.

14.3.11 XEN: Large VT-d Pages

This is an IOMMU performance enhancement to reduce IOMMU page table and IOTLB footprint.

14.3.12 XEN/KVM/libvirt: Virtual Machine Lock Manager

The virtual machine lock manager is a daemon which will ensure that a virtual machine's disk image cannot be written to by two QEMU/KVM processes at the same time. It provides protection against starting the same virtual machine twice, or adding the same disk to two different virtual machines.

14.3.13 XEN: Bios Information to XEN HVM Guest

BIOS information of the physical server can now be passed to Xen HVM guest systems.

14.3.14 XEN: Support for PCI Pass-through Bind and Unbind in libvirt Xen Driver

virt-manager is now able to set up PCI pass-through for Xen without having to switch to the command line to free the PCI device before assigning it to the VM.

14.3.15 XEN: xenstore-chmod Command Now Supports 256 Permissions

To be able to manage permissions with the xenstore-chmod command on more than 16 domUs at the same time, xenstore-chmod now supports 256 permission entries.

14.3.16 XEN VNC implementation to correctly map keyboard layouts

The Xen VNC implementation now supports the ExtendedKeyEvent client message, which allows a client to send raw scan codes directly to the server. If the client and server use the same keymap, the '-k' option to QEMU is unnecessary when this extension is supported. The extension is currently only implemented by gtk-vnc based clients (gvncviewer, virt-manager, vinagre, etc.).

14.4 File Systems

https://www.suse.com/products/server/technical-information/#FileSystem

SUSE Linux Enterprise was the first enterprise Linux distribution to support journaling file systems and logical volume managers, back in 2000. Today, we have customers running XFS and ReiserFS with more than 8 TiB in one file system, and our own SUSE Linux Enterprise engineering team uses all three major Linux journaling file systems for its servers.

We are excited to add the OCFS2 cluster file system to the range of supported file systems in SUSE Linux Enterprise.

We recommend using XFS for large-scale file systems, on systems with heavy load and multiple parallel read and write operations (e.g., for file serving with Samba, NFS, etc.). XFS has been developed for such conditions, while typical desktop use (single write or read) will not necessarily benefit from its capabilities.

Due to technical limitations (of the bootloader), we do not support XFS to be used for /boot.

Feature                                    Ext3     ReiserFS 3.6   XFS      Btrfs *    OCFS2 **
-----------------------------------------------------------------------------------------------
Data/Metadata Journaling                   •/•      ○/•            ○/•      n/a *      ○/•
Journal internal/external                  •/•      •/•            •/•      n/a *      •/○
Offline extend/shrink                      •/•      •/•            ○/○      ○/○        •/○
Online extend/shrink                       •/○      •/○            •/○      •/•        •/○
Sparse Files
Tail Packing
Defrag
Extended Attributes/Access Control Lists   •/•      •/•            •/•      •/•        •/•
Quotas ^
Dump/Restore
Blocksize default                          4 KiB    4 KiB          4 KiB    4/64 KiB   4 KiB
max. File System Size                      16 TiB   16 TiB         8 EiB    16 EiB     16 TiB
max. Filesize                              2 TiB    1 EiB          8 EiB    16 EiB     1 EiB

 

* Btrfs is supported in SUSE Linux Enterprise Server 11 Service Pack 3. Btrfs is a copy-on-write, logging-style file system: rather than journaling changes before writing them in place, it writes them to a new location and then links the new location in. Until the last write, the new changes are not "committed". Due to the nature of the file system, quotas will be implemented based on subvolumes in a future release. The default block size varies with the host architecture: 64 KiB is used on ppc64 and IA64, 4 KiB on most other systems. The actual size used can be checked with the command "getconf PAGE_SIZE".

 

** OCFS2 is fully supported as part of the SUSE Linux Enterprise High Availability Extension.

The maximum file size above can be larger than the file system's actual size due to usage of sparse blocks. Note that unless a file system comes with large file support (LFS), the maximum file size on a 32-bit system is 2 GB (2^31 bytes). Currently all of our standard file systems (including ext3 and ReiserFS) have LFS, which gives a maximum file size of 2^63 bytes in theory. The numbers in the above tables assume that the file systems are using 4 KiB block size. When using different block sizes, the results are different, but 4 KiB reflects the most common standard.

In this document: 1024 Bytes = 1 KiB; 1024 KiB = 1 MiB; 1024 MiB = 1 GiB; 1024 GiB = 1 TiB; 1024 TiB = 1 PiB; 1024 PiB = 1 EiB. See also http://physics.nist.gov/cuu/Units/binary.html.

NFSv4 with IPv6 is only supported for the client side. A NFSv4 server with IPv6 is not supported.

This version of Samba delivers integration with Windows 7 Active Directory Domains. In addition we provide the clustered version of Samba as part of SUSE Linux Enterprise High Availability 11 SP3.

14.4.1 XFS Realtime Volumes

XFS Realtime Volumes is an experimental feature, available for testing and experimentation. If you encounter any issues, SUSE is interested in feedback; please submit a support request through the usual access methods.

14.4.2 ext4: Runtime Switch for Write Support

The SUSE Linux Enterprise 11 kernel contains a fully supported ext4 file system module, which provides read-only access to the file system. A separate package is not required.

Read-write access to an ext4 file system can be enabled by using the rw=1 module parameter. The parameter can be passed while loading the ext4 module manually, by adding it for automatic use by creating /etc/modprobe.d/ext4 with the contents options ext4 rw=1, or after loading the module by writing 1 to /sys/module/ext4/parameters/rw. Note that read-write ext4 file systems are still officially unsupported by SUSE Technical Services.
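
The persistent variant described above is a one-line configuration file; remember that read-write ext4 remains officially unsupported:

```
# /etc/modprobe.d/ext4
# Enable (unsupported) write access when the ext4 module is loaded.
options ext4 rw=1
```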

ext4 is not supported for the installation of the SUSE Linux Enterprise operating system.

Since SLE 11 SP2 we support offline migration from ext4 to the supported btrfs file system.

The ext4-writeable package is still available for compatibility with systems with kernels from both the SLE 11 SP2 and SLE 11 SP3 releases installed.

14.5 Kernel Modules

An important requirement for every enterprise operating system is the level of support customers receive for their environment. Kernel modules are the most relevant connector between hardware ("controllers") and the operating system. Every kernel module in SUSE Linux Enterprise Server 11 has a 'supported' flag with three possible values: "yes", "external", "" (empty, not set, "unsupported").

The following rules apply:

  • All modules of a self-recompiled kernel are by default marked as unsupported.

  • Kernel Modules supported by SUSE partners and delivered using SUSE's Partner Linux Driver process are marked "external".

  • If the "supported" flag is not set, loading this module will taint the kernel. Tainted kernels are not supported. To avoid this, unsupported kernel modules are included in an extra RPM (kernel-<flavor>-extra) and are not loaded by default ("flavor" = default|smp|xen|...). In addition, these unsupported modules are not available in the installer, and the kernel-<flavor>-extra package is not on the SUSE Linux Enterprise Server media.

  • Kernel Modules not provided under a license compatible to the license of the Linux kernel will also taint the kernel; see /usr/src/linux/Documentation/sysctl/kernel.txt and the state of /proc/sys/kernel/tainted.

Technical Background

  • Linux Kernel

    The value of /proc/sys/kernel/unsupported defaults to 2 on SUSE Linux Enterprise Server 11 ("do not warn in syslog when loading unsupported modules"). This is the default used in the installer as well as in the installed system. See /usr/src/linux/Documentation/sysctl/kernel.txt for more information.

  • modprobe

    The modprobe utility for checking module dependencies and loading modules appropriately checks the value of the "supported" flag. If the value is "yes" or "external", the module will be loaded; otherwise it will not. See below for information on how to override this behavior.

    Note: SUSE does not generally support removing of storage modules via modprobe -r.

Working with Unsupported Modules

While the general supportability is important, situations may occur where loading an unsupported module is required (e.g., for testing or debugging purposes, or if your hardware vendor provides a hotfix):

  • You can override the default by setting the variable allow_unsupported_modules in /etc/modprobe.d/unsupported-modules to "1".

    If you only want to try loading a module once, the --allow-unsupported-modules command-line switch can be used with modprobe. (For more information, see man modprobe).

  • During installation, unsupported modules may be added through driver update disks, and they will be loaded.

    To enforce loading of unsupported modules during boot and afterwards, please use the kernel command line option oem-modules.

    While installing and initializing the module-init-tools package, the kernel flag "TAINT_NO_SUPPORT" (/proc/sys/kernel/tainted) will be evaluated. If the kernel is already tainted, allow_unsupported_modules will be enabled. This will prevent unsupported modules from failing in the system being installed. (If no unsupported modules are present during installation and the other special kernel command line option (oem-modules=1) is not used, the default will still be to disallow unsupported modules.)

  • If you install unsupported modules after the initial installation and want to enable those modules to be loaded during system boot, please do not forget to run depmod and mkinitrd.
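
The override described in the list above amounts to a single setting in the configuration file (use with care; loading unsupported modules makes the system unsupported):

```
# /etc/modprobe.d/unsupported-modules
# Allow modprobe to load modules that lack the 'supported' flag.
allow_unsupported_modules 1
```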

Remember that loading and running unsupported modules will make the kernel and the whole system unsupported by SUSE.

14.6 IPv6 Implementation and Compliance

SUSE Linux Enterprise Server 11 is compliant with IPv6 Logo Phase 2. However, when running the respective tests, you may see some tests fail. For various reasons, we cannot enable by default all the configuration options that are necessary to pass all the tests. For details, see below.

  • Section 3: RFC 4862 - IPv6 Stateless Address Autoconfiguration

    Some tests fail because of the default DAD handling in Linux. Disabling the complete interface is possible, but it is not the default behavior, because security-wise this might open a DoS attack vector: a malicious node on a network could shut down the complete segment. This still conforms to RFC 4862: the shutdown of the interface is a "should", not a mandatory ("must") rule.

    The Linux kernel allows you to change the default behavior with a sysctl parameter. To do this on SUSE Linux Enterprise Server 11, you need to make the following changes in configuration:

    • Add ipv6 to the modules load early on boot

      Edit /etc/sysconfig/kernel and add ipv6 to MODULES_LOADED_ON_BOOT, e.g., MODULES_LOADED_ON_BOOT="ipv6". This is needed for the second change to work: if ipv6 is not loaded early enough, setting the sysctl fails.

    • Add the following lines to /etc/sysctl.conf

      ## shutdown IPV6 on MAC based duplicate address detection
      net.ipv6.conf.default.accept_dad = 2
      net.ipv6.conf.all.accept_dad = 2
      net.ipv6.conf.eth0.accept_dad = 2
      net.ipv6.conf.eth1.accept_dad = 2
            

      Note: if you use other interfaces (e.g., eth2), modify the lines. With these changes, all tests for RFC 4862 should pass.

  • Section 4: RFC 1981 - Path MTU Discovery for IPv6

    • Test v6LC.4.1.10: Multicast Destination - One Router

    • Test v6LC.4.1.11: Multicast Destination - Two Routers

    On these two tests ping6 needs to be told to allow defragmentation of multicast packets. Newer ping6 versions have this disabled by default. Use: ping6 -M want <other parameters>. See man ping6 for more information.

  • Enable IPv6 in YaST for SCTP Support

    SCTP is dependent on IPv6, so in order to successfully insert the SCTP module, IPv6 must be enabled in YaST. This allows for the IPv6 module to be automatically inserted when modprobe sctp is called.

14.6.1 IPv6 Support for NFSv3

Kernel configuration and NFS userland utilities have been updated to fully support NFSv3 over the IPv6 protocol. The same functionality for NFSv4 has already been enabled since SUSE Linux Enterprise 11 SP2.

14.6.2 IPv6 Support to AutoFS

AutoFS now mounts NFS volumes over IPv6.

14.6.3 Linux Virtual Server Load Balancer (ipvs) Extends Support for IPv6

The LVS/ipvs load balancing code did not fully support RFC 2460 and fragmented IPv6 packets, which could lead to lost packets and interrupted connections when IPv6 traffic was fragmented.

The load balancer has been enhanced to fully support IPv6 fragmented extension headers and is now RFC 2460 compliant.

14.6.4 IP Set Support

Large firewall configurations that match against a multitude of IP addresses, ports, and network interfaces can take a long time to load and consume CPU cycles during evaluation.

IP set is an optimized extension module for iptables that allows matching large volumes of IP, network, or MAC addresses. Typical use of the module is creating an IP set and employing the iptables '-m set' option.
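
For illustration, creating a set and matching it from iptables might look like the following sketch (the set name and addresses are made up; syntax follows the ipset 6.x command set):

```
# Create a hash-based set of IPv4 addresses, fill it, and drop matching
# traffic via the iptables set match.
ipset create blocked hash:ip
ipset add blocked 192.0.2.10
ipset add blocked 198.51.100.23
iptables -A INPUT -m set --match-set blocked src -j DROP
```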

14.7 Other Technical Information

14.7.1 libica 2.1.0 Available since SLES 11 SP2 for s390x

The libica package contains the interface library routines used by IBM modules to interface with IBM Cryptographic Hardware (ICA). Starting with SLES 11 SP1, libica is provided in the s390x distribution in three flavors of packages: libica-1_3_9, libica-2_0_2, and libica-2_1_0 providing libica versions 1.3.9, 2.0.2, and 2.1.0 respectively.

libica 1.3.9 is provided for compatibility with legacy hardware present, e.g., in the ppc64 architecture. s390x users are recommended to always use the new libica 2.1.0 library, since it supports all newer s390x hardware and larger key sizes, and is backward compatible with any ICA device driver in the s390x architecture.

You may choose to continue using libica 1.3.9 or 2.0.2 if you do not have newer cryptographic hardware to exploit, or wish to continue using custom applications that do not yet support the libica 2.1.0 library. Both openCryptoki and openssl-ibmca, the two main consumers of the libica interface, support the newer libica 2.1.0 library starting with SLES 11 SP2.

14.7.2 YaST Support for Layer 2 Devices

YaST writes the MAC address for layer 2 devices only if they are of one of the following card types:

  1. OSD_100

  2. OSD_1000

  3. OSD_10GIG

  4. OSD_FE_LANE

  5. OSD_GbE_LANE

  6. OSD_Express

By design, YaST does not write the MAC address for devices of the following types:

  1. HiperSockets

  2. GuestLAN/VSWITCH QDIO

  3. OSM

  4. OSX

14.7.3 Changes to Network Setup

The script modify_resolvconf is removed in favor of a more versatile script called netconfig. This new script handles specific network settings from multiple sources more flexibly and transparently. See the documentation and man-page of netconfig for more information.

14.7.4 Memory cgroups

Memory cgroups are now disabled on machines where they cause memory exhaustion and crashes, namely x86 32-bit systems with PAE support and more than 8 GB in any memory node.

14.7.5 MCELog

The mcelog package logs and parses/translates Machine Check Exceptions (MCE) resulting from hardware errors (including memory errors). Formerly this was done by a cron job executed hourly. Now hardware errors are immediately processed by an mcelog daemon.

However, the mcelog service is not enabled by default, which means memory and CPU errors are not logged by default either. In addition, mcelog has a new feature to handle predictive bad page offlining and automatic core offlining when cache errors happen.

The service can either be enabled via the YaST runlevel editor or via commandline with:

chkconfig mcelog on
rcmcelog start

14.7.6 Locale Settings in ~/.i18n

If you are not satisfied with locale system defaults, change the settings in ~/.i18n. Entries in ~/.i18n override system defaults from /etc/sysconfig/language. Use the same variable names but without the RC_ namespace prefixes; for example, use LANG instead of RC_LANG. For more information about locales in general, see "Language and Country-Specific Settings" in the Administration Guide.
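
For example, a ~/.i18n that overrides the system-wide locale might contain (the values are illustrative):

```
# ~/.i18n -- per-user locale overrides; note: no RC_ prefix here.
LANG=de_DE.UTF-8
LC_TIME=en_US.UTF-8
```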

14.7.7 Configuration of kdump

kdump is useful, if the kernel is crashing or otherwise misbehaving and a kernel core dump needs to be captured for analysis.

Use YaST (System › Kernel Kdump) to configure your environment.

14.7.8 Configuring Authentication for kdump through YaST with ssh/scp as Target

When kdump is configured through YaST with ssh/scp as target and the target system is SUSE Linux Enterprise, enable authentication in one of the following ways:

  1. Copy the public keys to the target system:

    ssh-copy-id -i ~/.ssh/id_*.pub  <username>@<target system IP>

    or

  2. Change the PasswordAuthentication setting in /etc/ssh/sshd_config of the target system from:

    PasswordAuthentication no

    to:

    PasswordAuthentication yes
  3. After changing PasswordAuthentication in /etc/ssh/sshd_config restart the sshd service on the target system with:

    rcsshd restart

14.7.9 JPackage Standard for Java Packages

Java packages are changed to follow the JPackage Standard (http://www.jpackage.org/). For more information, see the documentation in /usr/share/doc/packages/jpackage-utils/.

14.7.10 Stopping Cron Status Messages

To avoid the mail flood caused by cron status messages, the default value of SEND_MAIL_ON_NO_ERROR in /etc/sysconfig/cron is set to "no" for new installations. Even with this setting at "no", cron data output will still be sent to the MAILTO address, as documented in the cron manpage.

When updating, it is recommended to adjust this value according to your needs.

15 Documentation and Other Information

  • Read the READMEs on the DVDs.

  • Get the detailed changelog information about a particular package from the RPM (with filename <FILENAME>):

    rpm --changelog -qp <FILENAME>.rpm
        
  • Check the ChangeLog file in the top level of DVD1 for a chronological log of all changes made to the updated packages.

  • Find more information in the docu directory of DVD1 of the SUSE Linux Enterprise Server 11 Service Pack 3 DVDs. This directory includes PDF versions of the SUSE Linux Enterprise Server 11 Installation Quick Start and Deployment Guides.

  • These Release Notes are identical across all architectures, and are available online at http://www.suse.com/releasenotes/.

15.1 Additional or Updated Documentation

15.2 Product and Source Code Information

Visit http://www.suse.com/products/ for the latest product news from SUSE and http://www.suse.com/download-linux/source-code.html for additional information on the source code of SUSE Linux Enterprise products.

16 Miscellaneous

Colophon

Thanks for using SUSE Linux Enterprise Server in your business.

The SUSE Linux Enterprise Server Team.
