
On the face of it, getting your head around SUSE Enterprise Storage (SES) is fairly easy.

I mean, if you want large quantities of data storage to hold things like backups, surveillance video, archived material, analytical datasets, research data, or anything on your users' PCs, and you want that storage to be almost infinitely scalable, self-healing, self-managing, highly available, and very cost effective, then SES is your answer. It is open source software that installs on industry standard servers and disks to form a highly available cluster with no single point of failure for the data. There is no proprietary hardware or software, and no high-cost licenses or capacity charges. There is only a support subscription, and that is it.

There is still a place for storage vendor solutions. Low-latency, high-I/O workloads such as high performance databases may require the level of performance a proprietary storage solution can guarantee to deliver. This Tier 1 data requirement is typically only 20% of an organisation's data.

Even with hierarchical storage options, a single storage product can be extraordinarily expensive to purchase and maintain, and scaling it to enterprise levels can be an issue.

So a lower-cost, highly scalable solution (from hundreds of terabytes to exabytes) that runs on commodity servers could be just the ticket.

Easy right?

Trying to explain it in more detail becomes somewhat trickier. Drill down to the technical layers and very little is written in plain language; instead it is liberally interspersed with weird … yet somehow logical … open source development community jargon.

It all starts off well. SES offers the choice of Block (storage for servers), Object (storage for applications), and File (network shares). The data is stored in multiple chunks on multiple disks across multiple servers, so the loss of any storage component does not mean data is lost. SES automatically manages data placement and protection.
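To make that replication idea concrete, here is a toy sketch (not Ceph's actual code — the names `split_into_chunks` and `place_replicas` are hypothetical): data is split into chunks, each chunk is copied onto disks on *different* servers, and the loss of one whole server leaves every chunk with surviving copies.

```python
# Illustrative sketch only -- not how Ceph is implemented.
REPLICAS = 3

def split_into_chunks(data: bytes, chunk_size: int = 4) -> list[bytes]:
    """Split a blob into fixed-size chunks (real clusters use much larger objects)."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def place_replicas(chunk_id: int, servers: list[str], replicas: int = REPLICAS) -> list[str]:
    """Pick `replicas` distinct servers for a chunk, round-robin style."""
    return [servers[(chunk_id + r) % len(servers)] for r in range(replicas)]

servers = ["node1", "node2", "node3", "node4"]
data = b"backups, videos, archives..."
placement = {i: place_replicas(i, servers)
             for i, _ in enumerate(split_into_chunks(data))}

# Simulate losing an entire server: every chunk still has copies elsewhere.
failed = "node2"
for chunk_id, homes in placement.items():
    survivors = [s for s in homes if s != failed]
    assert survivors, f"chunk {chunk_id} lost!"  # never triggers with 3 replicas
```

With three replicas spread over four servers, any single failure (disk or whole node) can be absorbed, which is the self-healing property the paragraph above describes.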

But then it starts getting a little freaky. Coming from a maritime nation I can, to some degree, identify with what follows, but the engineering and development guys are way ahead:

SUSE Enterprise Storage is based on an open source project called Ceph. The cephalopod references continue in the names of the various releases. It all started with a Jewel, then came Luminous, which morphed into a Mimic, which has now become a Nautilus, and it looks like the next Ceph release will be an Octopus. SUSE does not release a SES version for every Ceph release. Pity, because the mimic octopus is pretty cool (google it); in fact, all cephalopods are cool. We have also used functionality from Calamari, which has an alternative named Kraken. More recently we have been using Salt to deploy the cluster nodes, which first gave us Pillars of Salt and Grains of Salt, and at SES 5 we enhanced the product with DeepSea Salt. Maybe the next open sauce project could be Tartare.
From a physical perspective it is much easier: objects are deployed in Buckets. I thought that was it for aquatic references, until I was learning about the data placement algorithm CRUSH … seen Nemo? … Dude?
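The clever bit behind CRUSH (Controlled Replication Under Scalable Hashing) is that placement is *computed*, not looked up in a central table: any client can hash an object name and independently arrive at the same list of storage devices. A heavily simplified sketch of that idea, assuming a flat list of OSDs and a "highest hash wins" draw (the real algorithm walks a weighted hierarchy of hosts and racks):

```python
# Toy CRUSH-like placement -- illustrative only, not the real algorithm.
import hashlib

def crush_like_placement(obj_name: str, osds: list[str], replicas: int = 3) -> list[str]:
    """Deterministically pick `replicas` devices: each OSD draws a hash
    'straw' for this object, and the longest straws win."""
    draws = {osd: hashlib.sha256(f"{obj_name}:{osd}".encode()).hexdigest()
             for osd in osds}
    return sorted(osds, key=lambda o: draws[o], reverse=True)[:replicas]

osds = [f"osd.{i}" for i in range(6)]
print(crush_like_placement("surveillance-video-0001", osds))
# Every client computing this gets the same answer, with no lookup table
# to keep consistent -- which is what lets the cluster scale out.
```

Because the result depends only on the object name and the device list, clients and storage nodes never need to consult a central directory, which removes a classic bottleneck and single point of failure.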



Category: Ceph, Popular Topics, Products, Software-defined Infrastructure, Software-defined Storage, SUSE Enterprise Storage
This entry was posted Tuesday, 26 March, 2019 at 4:21 pm
