SUSE Enterprise Storage 4

Release Notes

SUSE Enterprise Storage is an extension to SUSE Linux Enterprise. It combines the capabilities from the Ceph storage project (http://ceph.com/) with the enterprise engineering and support of SUSE. SUSE Enterprise Storage provides IT organizations with the ability to deploy a distributed storage architecture that can support a number of use cases using commodity hardware platforms.

Manuals can be found in the docu directory of the installation media for SUSE Enterprise Storage. If installed, the documentation is also available in the /usr/share/doc/ directory of the installed system.

Publication Date: 2017-08-30, Version: 4.0.20170829

1 Support Statement for SUSE Enterprise Storage

Support requires an appropriate subscription from SUSE. For more information, see http://www.suse.com/products/server/.

General Support Statement

The following definitions apply:

  • L1: Installation and problem determination - technical support designed to provide compatibility information, installation and configuration assistance, usage support, ongoing maintenance, and basic troubleshooting. Level 1 Support is not intended to correct product defects.

  • L2: Problem reproduction and isolation - technical support designed to duplicate customer problems, isolate problem areas and potential issues, and provide resolution for problems not resolved by Level 1 Support.

  • L3: Code debugging and problem resolution - technical support designed to resolve complex problems by engaging engineering in the resolution of product defects identified by Level 2 Support, including the provision of patches.

SUSE will only support the usage of original (unchanged and not recompiled) packages.

2 Technology Previews

Technology previews are packages, stacks, or features delivered by SUSE. These features are not supported. They may be functionally incomplete, unstable or in other ways not suitable for production use. They are mainly included for customer convenience and give customers a chance to test new technologies within an enterprise environment.

Whether a technology preview will be moved to a fully supported package later depends on customer and market feedback. A technology preview does not automatically result in support at a later point in time. Technology previews can be dropped at any time and SUSE is not committed to providing a technology preview later in the product cycle.

Give your SUSE representative feedback, including your experience and use case.

2.1 Support for NFS Gateway for CephFS

As a technology preview, SES 4 supports the NFS gateway for CephFS.

2.2 Support for NFS Access to S3 Buckets

As a technology preview, SES 4 supports NFS access to S3 buckets.

The following are recommendations to anyone wishing to try this feature:

  • Mount the S3-backed NFS share with synchronized I/O enabled (-o sync); example mount commands are shown after this list.

  • If write access to the NFS share is not required, mount the share read-only (-o ro) to prevent accidental modification.

  • Appending to or modifying a file directly on the NFS share is not supported. Edit files locally, and then copy or move them to the NFS share.

  • When Ganesha has exported a set of S3 buckets, any new buckets/objects created via S3 will not be seen on the NFS share until Ganesha is restarted. The same holds true for any buckets/objects deleted via S3.

  • Removing directories on the NFS share is not supported.
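The commands below are a minimal sketch of these recommendations; the gateway host name nfs-gw.example.com, the export path /s3buckets, and the file names are placeholders for your Ganesha configuration:

    # Mount the S3-backed share with synchronized I/O:
    mount -t nfs -o sync nfs-gw.example.com:/s3buckets /mnt/s3

    # Or mount read-only if write access is not required:
    mount -t nfs -o ro nfs-gw.example.com:/s3buckets /mnt/s3

    # Edit files locally, then copy the finished file to the share:
    cp /tmp/report.csv /mnt/s3/mybucket/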

3 New Features and Known Issues

3.1 ceph-deploy Is Deprecated and Will Be Replaced by DeepSea

In this product version, ceph-deploy is still supported, but it will not be part of any later product version.

With SES 4, start using DeepSea, the new default Ceph deployment tool.
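As a rough sketch of the DeepSea workflow, deployment is driven by Salt orchestration runs on the Salt master; the stage names below follow the upstream DeepSea documentation, and your policy configuration may require adjustments between stages:

    # Run as root on the Salt master:
    salt-run state.orch ceph.stage.0   # preparation: update systems
    salt-run state.orch ceph.stage.1   # discovery: collect hardware profiles
    salt-run state.orch ceph.stage.2   # configuration: generate the cluster config
    salt-run state.orch ceph.stage.3   # deployment: create monitors and OSDs
    salt-run state.orch ceph.stage.4   # services: iSCSI gateway, RGW, CephFS, etc.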

3.2 Calamari/Romana Are Deprecated and Will Be Replaced by openATTIC

Calamari/Romana are still supported in this release but will not be part of any further SES releases.

In SES 4, openATTIC replaces Calamari/Romana.

3.3 Supported CephFS Scenarios and Guidance

With SUSE Enterprise Storage 4, SUSE introduces official support for many scenarios in which the scale-out and distributed component CephFS is used. This entry describes hard limits and provides guidance for the suggested use cases.

A supported CephFS deployment must meet these requirements:

  • All requirements of the generic Ceph cluster as described in our documentation.

  • A single active Ceph Metadata Server (MDS), plus a minimum of one, preferably two, standby MDS instances.

  • CephFS snapshots are disabled (default) and not supported in this version.

  • Clients are based on SUSE Linux Enterprise Server 12 SP2 and use the cephfs kernel module driver. The FUSE module is not supported.

  • No directory may have more than 100,000 entries (files, subdirectories, or links).

  • CephFS's metadata and data pools must not be erasure-coded. Cache tiering is not supported. Only replicated pools are supported.

  • CephFS supports file layout changes. However, while the file system is mounted by any client, new data pools may not be added to an existing CephFS file system (ceph mds add_data_pool). They may only be added while the file system is unmounted; see the example after this list.
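For illustration, a kernel-driver mount on a SLES 12 SP2 client and a data pool addition might look as follows; the monitor address, secret file path, and pool name are placeholders:

    # Mount CephFS with the kernel module driver:
    mount -t ceph mon1.example.com:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret

    # Add a data pool only while no client has the file system mounted:
    umount /mnt/cephfs
    ceph osd pool create cephfs_data2 64
    ceph mds add_data_pool cephfs_data2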

3.4 Support for radosgw Multi-site Replication

Earlier versions of SUSE Enterprise Storage only provided the multi-site functionality of radosgw as a technology preview. In SUSE Enterprise Storage 4 and later, this functionality is officially supported.
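As a hedged illustration of the upstream multi-site setup, configuration of the master zone starts with a realm, a zone group, and a zone; the names and endpoints below are placeholders, and the full procedure is described in the upstream Ceph documentation:

    radosgw-admin realm create --rgw-realm=gold --default
    radosgw-admin zonegroup create --rgw-zonegroup=us \
        --endpoints=http://rgw1.example.com:80 --master --default
    radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east \
        --endpoints=http://rgw1.example.com:80 --master --default
    radosgw-admin period update --commit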

3.5 Support for AArch64

As of SES 4, the AArch64 architecture is fully supported.

4 Changes in Packaging and Delivery

4.1 CephFS Command-Line Tools

As of the Jewel release, the upstream Ceph project provides four command-line tools: cephfs and the newer tools cephfs-data-scan, cephfs-journal-tool, and cephfs-table-tool. The latter three constitute the equivalent of an "fsck" command (filesystem check) for CephFS.

The upstream Ceph project distributes the "fsck"-like tools as part of the package ceph-common. Previous versions of SES did the same. However, since using these tools requires special authorization on the server side, it is expected that they will only be run on cluster nodes.

In SES 4, the "fsck"-like tools (cephfs-data-scan, cephfs-journal-tool, and cephfs-table-tool) have been moved to the ceph-base package. cephfs is also shipped within this package but is deprecated and should not be used.
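To check which package ships a given tool on a cluster node, standard RPM queries suffice; the binary path below is an assumption about the install location:

    # Which package provides the tool?
    rpm -qf /usr/bin/cephfs-journal-tool

    # Install ceph-base on a cluster node if the tools are missing:
    zypper install ceph-base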

5 Miscellaneous

5.1 NVMe Only for Journals

NVMe disks are not supported as OSD data devices; they can only be used as journal devices.
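For example, with the ceph-disk tool an OSD can be prepared with its data on a rotational disk and its journal on an NVMe device; the device names below are placeholders:

    # Data device first, journal device second; ceph-disk creates a
    # journal partition on the NVMe device:
    ceph-disk prepare /dev/sdb /dev/nvme0n1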

6 How to Obtain Source Code

This SUSE product includes materials licensed to SUSE under the GNU General Public License (GPL). The GPL requires SUSE to provide the source code that corresponds to the GPL-licensed material. The source code is available for download at http://www.suse.com/download-linux/source-code.html. Also, for up to three years after distribution of the SUSE product, upon request, SUSE will mail a copy of the source code. Requests should be sent by e-mail to sle_source_request@suse.com or as otherwise instructed at http://www.suse.com/download-linux/source-code.html. SUSE may charge a reasonable fee to recover distribution costs.

7 More Information and Feedback

  • Read the READMEs on the media.

  • Get detailed changelog information about a particular package from the RPM:

    rpm --changelog -qp <FILENAME>.rpm

    <FILENAME> is the name of the RPM.

  • Check the ChangeLog file in the top level of the first medium for a chronological log of all changes made to the updated packages.

  • Find more information in the docu directory of the first medium of the SUSE Enterprise Storage media. This directory includes a PDF version of the SUSE Enterprise Storage Administration Guide.

  • http://www.suse.com/documentation/ses/ contains additional or updated documentation for SUSE Enterprise Storage.

  • Visit http://www.suse.com/products/ for the latest product news from SUSE and http://www.suse.com/download-linux/source-code.html for additional information on the source code of SUSE Linux Enterprise products.

Copyright © 2016 SUSE LLC.

Thanks for using SUSE Enterprise Storage in your business.

The SUSE Enterprise Storage Team.
