Deeper Look at SUSE Enterprise Storage 7

We just announced SUSE Enterprise Storage 7 (SES 7), and we'd like to dig a bit deeper into it with a series of articles (links will be added once they are live):

  1. Deeper Look at SUSE Enterprise Storage 7 (this blog)
  2. Upgrade your cluster before the end of life of SUSE Enterprise Storage 5.5
  3. Ceph native Windows client driver
  4. Kubernetes and SUSE Enterprise Storage 7

Major changes

The main changes in SES 7 are:

  1. Two deployment options.
  2. A native Windows client driver.
  3. Updates to the Ceph Dashboard.
  4. Ceph Octopus core updates. Ceph Octopus is the latest major release by the Ceph community and forms the foundation for SES 7.
  5. A switch of the underlying OS to SUSE Linux Enterprise Server 15 SP2, which brings in more hardware support and optimizations.

Let’s look at these separately.

Deployment options

SES 7 now comes with two mutually exclusive deployment options:

  • If you want to set up a standalone cluster, the Salt-based DeepSea deployment tool has been replaced with a new framework built around cephadm and ceph-salt. Cephadm is a community-developed deployment tool for which the SUSE engineering team took the development lead; it will replace the three upstream stacks (DeepSea, ceph-deploy, ceph-ansible). This new framework is the base for additional easy-to-use deployment and Day 2 operations.
  • If you are running Kubernetes and want to use the Kubernetes nodes as storage nodes, you can use SES 7 and deploy it using Rook on a Kubernetes cluster. Learn more about this in the follow-up article.

Both deployment options use the same Ceph code base; they even share the same container images. So check your use case and decide which one is better for you. Everybody upgrading from SES 6 will use the first option (standalone cluster); we only support new installations when deploying with Rook.
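To give a feel for the Rook path, here is a minimal sketch using the example manifests shipped in the upstream Rook repository; the exact file names and the namespace depend on your Rook release:

```
# Deploy the Rook operator from the example manifests (file names vary by release)
kubectl create -f crds.yaml -f common.yaml -f operator.yaml

# Declare a CephCluster resource; Rook then deploys the Ceph daemons as pods
kubectl create -f cluster.yaml

# Watch the MONs, MGR and OSDs come up in the rook-ceph namespace
kubectl -n rook-ceph get pods
```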

Standalone cluster with cephadm and ceph-salt

Let me introduce two new components:

  • ceph-salt is responsible for host (OS) management, such as setting up the time server and installing needed packages, and it bootstraps cephadm.
  • cephadm is responsible for managing Ceph itself. 

ceph-salt is used for the initial bootstrap; it adds nodes to an existing storage cluster, patches the OS on these nodes, and can reboot the cluster when necessary. The “ceph-salt config” command allows an easy initial configuration of your cluster, as in the sketch below.
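For illustration, a minimal configuration session might look like the following; the hostnames are hypothetical, and the exact configuration-tree paths can differ between ceph-salt versions:

```
# Add cluster nodes (Salt minions) and assign roles (hypothetical hostnames)
ceph-salt config /ceph_cluster/minions add node1.example.com
ceph-salt config /ceph_cluster/roles/admin add node1.example.com
ceph-salt config /ceph_cluster/roles/bootstrap set node1.example.com

# Point the cluster at a time server
ceph-salt config /time_server/server_hostname set ntp.example.com

# Review the configuration tree, then apply it to bootstrap the cluster
ceph-salt config ls
ceph-salt apply
```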

Once ceph-salt has done the OS setup, cephadm configures Ceph and launches all components. While ceph-salt updates the OS, cephadm updates the Ceph cluster by rolling out new container images on all nodes. Cephadm also integrates with the Ceph orchestrator to install additional daemons as needed. You can interact with the Ceph orchestrator via the “ceph” command-line tool and the Ceph Dashboard.
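A few common orchestrator calls from Ceph Octopus look like this (the host and device names are hypothetical):

```
# List the hosts the orchestrator knows about
ceph orch host ls

# Scale the monitor deployment to three daemons
ceph orch apply mon 3

# Create an OSD on a raw disk of a given host
ceph orch daemon add osd node2:/dev/sdb

# Show all daemons the orchestrator runs as containers, per host
ceph orch ps
```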

Updated Ceph Dashboard

The Ceph Dashboard now integrates with the Ceph orchestrator, and thus with cephadm, to deploy services such as OSDs and RADOS or iSCSI gateways. You can now add an OSD disk from the dashboard. We plan to continue this integration so the Ceph Dashboard can handle even more Day 2 operations.

User account handling has seen security improvements. An administrator can now configure password policies, require a password change at first login, disable user accounts, and easily clone user roles.
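Much of this can also be driven from the command line. As a rough sketch (the exact subcommands and flags vary between dashboard releases, and the user name is hypothetical):

```
# Enforce the dashboard password policy
ceph dashboard set-pwd-policy-enabled true

# Create a user whose password is read from a file, then disable the account
ceph dashboard ac-user-create -i password.txt alice
ceph dashboard ac-user-disable alice
```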

The overall UI has also seen enhancements: not only SUSE’s new branding, but especially better navigation with a vertical navigation menu and multi-select on tables to perform bulk operations.

A more detailed description of what’s new in the Ceph Dashboard is given by Lenz Grimmer, the upstream technical lead and SUSE engineering lead for this component, in the articles “New in Octopus: Dashboard Enhancements” and “New in Octopus: Dashboard Features“.

Updates to Ceph Core

Of the many improvements to Ceph itself, I’d like to point out three areas: performance, RADOS Block Device (RBD), and health alerting.

Listing of RADOS Gateway (RGW) buckets has been improved significantly. Recovery of objects is now much faster, since only the modified portion of an object needs to be copied; this decreases latencies during recovery.

Mirroring of block devices now supports a snapshot-based mode, and cloning a disk now preserves the sparseness of the underlying objects, greatly reducing the storage needed.
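For example, enabling snapshot-based mirroring for a single image looks like this (the pool and image names are hypothetical):

```
# Enable per-image mirroring on the pool
rbd mirror pool enable mypool image

# Switch this image to the new snapshot-based mirroring mode
rbd mirror image enable mypool/myimage snapshot

# Trigger a mirror snapshot so the peer cluster can sync the deltas
rbd mirror image snapshot mypool/myimage
```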

Health alerts have been improved: they can now be muted, either temporarily or permanently. If a daemon crashes, administrators can get notified about it. A newly introduced ‘simple’ alert module allows sending emails without using any external monitoring infrastructure.
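Muting alerts and inspecting crash reports works like this (the health code is just an example):

```
# Mute a specific health alert for four hours, then unmute it
ceph health mute OSD_DOWN 4h
ceph health unmute OSD_DOWN

# List recorded daemon crashes and acknowledge them all
ceph crash ls
ceph crash archive-all
```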

A nearly complete list of changes is in the Ceph upstream release announcement by my colleague Abhishek Lekshmanan, who’s in charge of releasing Ceph upstream.
