Here at SUSE, we are proud to continue our rapid pace of innovation in bringing the best to our customers with the release of SUSE Enterprise Storage 2. This release does far more than just update the code base to the Ceph Hammer release, and I will attempt to outline some of the great new functionality here.
First and most visible is the addition of horizontally scaling iSCSI support. You read that right: SUSE is bringing a highly available, scale-out iSCSI solution to Ceph. As always, it is open source and either already upstreamed or in the process of being upstreamed. The implementation leverages LIO and Ceph’s well-established RBD infrastructure and binds them together with lrbd, a new utility (public source can be found here: https://github.com/swiftgist/lrbd/wiki). lrbd provides a way to build, distribute and update the iSCSI configuration across multiple gateway nodes without having to touch individual configuration files on each. This is a big win for storage administrators looking to adopt this rapidly maturing technology.
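To give a feel for what lrbd manages, here is a minimal sketch of the kind of JSON configuration it distributes. The layout is based on my reading of the lrbd wiki and may differ between versions; treat the IQN, host name and image name as placeholders.

```json
{
  "targets": [
    { "host": "igw1", "target": "iqn.2015-09.com.example:ses" }
  ],
  "pools": [
    {
      "pool": "rbd",
      "gateways": [
        { "host": "igw1", "tpg": [ { "image": "archive" } ] }
      ]
    }
  ]
}
```

A single file like this describes the whole gateway layer; lrbd then applies the relevant pieces on each node, which is what makes the scale-out story manageable.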
The second important feature is the addition of on-disk encryption. The basics of the implementation are quite simple: a key server is included with the installation, and every OSD node will contact the key server at startup to get the necessary key to unlock the drives. This provides a new layer of security that allows a storage administrator to sleep soundly knowing that if a drive does “walk off,” the data on it will be unusable.
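The startup flow is easy to picture in code. The sketch below is purely illustrative (the key-server lookup, device names and UUIDs are invented, and the real SES implementation differs): it shows the pattern of fetching a per-drive key and preparing the dm-crypt unlock step an OSD host would perform at boot.

```python
# Illustration of the pattern only, not SES's actual implementation:
# each OSD node asks a central key server for the passphrase of each
# encrypted drive, then unlocks it via dm-crypt before the OSD starts.

KEY_SERVER = {  # stand-in for a real networked key service
    "uuid-osd0": "s3cret-passphrase-0",
    "uuid-osd1": "s3cret-passphrase-1",
}

def fetch_key(device_uuid):
    """Retrieve the unlock key for one encrypted drive (hypothetical lookup)."""
    key = KEY_SERVER.get(device_uuid)
    if key is None:
        raise KeyError(f"no key registered for {device_uuid}")
    return key

def unlock_command(device, device_uuid):
    """Build the dm-crypt unlock command an OSD host would run at boot."""
    key = fetch_key(device_uuid)
    # The passphrase would be piped on stdin, never placed on the command line.
    return ["cryptsetup", "luksOpen", device, f"osd-{device_uuid}"], key

cmd, key = unlock_command("/dev/sdb", "uuid-osd0")
print(" ".join(cmd))  # cryptsetup luksOpen /dev/sdb osd-uuid-osd0
```

The security property follows directly: a drive removed from the data center can no longer reach the key server, so its contents stay locked.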
The third major feature is a Crowbar installation appliance for low-touch implementations. This feature is all about improving the ability to rapidly consume a basic SUSE Enterprise Storage deployment. The Crowbar appliance has been part of our SUSE OpenStack Cloud provisioning tool for quite some time and enables simplified node monitoring and deployment with very few knobs that need twisting. Don’t expect a deployed configuration where the admin can make tons of adjustments; rather, understand that it is intended to serve those with a small number of fixed use cases.
A fourth item in our announcement is collaboration with multiple partners to bring Ceph solutions on ARM hardware to enterprise customers. This is a work-in-progress that I am fairly involved with and find exciting. Choice is what customers are interested in here, and ARM-based solutions in the scale-out storage space seem to make a lot of sense.
I hear you saying, “Okay. Wow. These features are great, but how well do they work?” I can answer that for three of the four so far, and that answer is “Quite well.” The three I have hands-on experience with to date are iSCSI, the Crowbar installer and the collaboration around 64-bit ARM.
The Crowbar appliance is something I have worked with many times and found simple to use and easy to demonstrate to new Ceph users who aren’t yet comfortable deploying via the CLI. It offers a few options in the proposal (such as whether to use SSDs for journals), but overall it simply makes deployment easy, especially if your goal is an object storage infrastructure or storage for our SUSE OpenStack Cloud environment.
The iSCSI deployment likewise reflects careful thought in how the solution is architected and implemented. I maintain a copy of the configuration file and upload it with lrbd -w; the file is fairly easy to follow and maintain. Testing with Windows and Linux clients, I have found the iSCSI layer to offer fairly strong performance, especially when the cluster sits on high-speed networks (faster than 1GbE).
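Because the whole gateway layer hangs off one JSON file, a quick pre-flight check before handing it to lrbd can save a round trip. The sketch below is a hypothetical helper, not part of lrbd; the required keys it checks for ("pools", "gateways") reflect my understanding of the config layout and may need adjusting for your lrbd version.

```python
# Pre-flight sanity check for an lrbd-style JSON config (hypothetical helper).
import json

def check_iscsi_config(text):
    """Parse an lrbd-style JSON config and flag obvious mistakes."""
    cfg = json.loads(text)  # raises ValueError on malformed JSON
    problems = []
    if "pools" not in cfg:
        problems.append("missing top-level 'pools' section")
    for pool in cfg.get("pools", []):
        if "pool" not in pool:
            problems.append("pool entry without a 'pool' name")
        for gw in pool.get("gateways", []):
            if "host" not in gw and "target" not in gw:
                problems.append("gateway without 'host' or 'target'")
    return problems

sample = '{"pools": [{"pool": "rbd", "gateways": [{"host": "igw1"}]}]}'
print(check_iscsi_config(sample))  # → []
```

An empty list means the file at least has the expected shape; anything else is worth fixing before uploading.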
As for the 64-bit ARM work, I won’t say too much other than that our build service (look at build.opensuse.org to get an idea) has made it fairly easy to get this work underway. Beyond that, keep an eye out for more information in the coming weeks and months.