
In the IT industry we collectively suffer from a severe case of bipolar disorder. On the one hand, “we want stability and no changes”; on the other, “we want new features and innovation now.” These extremes provide ample ground for debate, for models of distribution life cycles, and, last but not least, for opportunities to come up with new ideas.

One of these new ideas is the introduction of modules into the SUSE Linux Enterprise Server 12 distribution. The idea is simple enough: bridge the gap between the two extremes of change vs. stability by providing different life cycles for different parts of the distribution, all with Enterprise-level support.

SUSE Linux Enterprise Server 12 comes along with five modules: the Advanced Systems Management Module, Legacy Module, Public Cloud Module, Toolchain Module, and Web & Scripting Module. Each has its own life cycle policy.

Update 2017-01-23: In addition to the finalization of the Toolchain Module, which had been announced when SUSE Linux Enterprise Server 12 was released, the High Performance Computing (HPC) Module was added to SUSE Linux Enterprise Server 12 shortly after the release of SUSE Linux Enterprise Server 12 Service Pack 2 (SLES 12 SP2).

Without further ado, let's dive right in and see how this works in practice. The modules work in conjunction with the base SUSE Linux Enterprise Server 12. In the “core”, “base”, or “insert another overloaded term here”, you find what you expect: the kernel, glibc, other runtime environments, and so on, to the tune of a fully functional distribution with 3374 packages wrapped up neatly in the ISO image. This also includes the Python, Perl, and Ruby interpreters; more on the scripting stuff later. For these 3374 packages the SUSE Linux Enterprise 12 support and life cycle promise is 10 + 3 years of support with no incompatible version changes when service packs are released. This provides the stability end of the spectrum, no ifs or buts about it.

From a developer perspective we all know how annoying it can be when a new project is started and the toolchain for the project is held back by a stability promise that, to be frank, makes no sense to a developer whatsoever. However, even within the developer community there is a very wide gap between the fast-moving crowd and the other end of the spectrum. There are many applications, and developers behind those applications, that want the stability promised by a 10 + 3 year distribution life cycle. Having worked in that area I can tell you that porting an application with tens of millions of lines of code to a new compiler is not something one does overnight. Inevitably the compiler developers have “figured out” how to optimize something new, and thus the application now runs faster, straight into a crash it never had before. Those issues are of course not always compiler issues, but nevertheless it is a lot of work to determine the root cause and fix it.

Sticking with the developer point of view for a bit, we can pull the modules into the discussion. The Legacy Module contains packages that developers really should not depend on anymore; you can find an older version of SSL and libstdc++ there. These packages are provided to help applications support SUSE Linux Enterprise 12, but they have a limited lifetime. The Legacy Module has a life span of 3 years, which should give ISVs plenty of time to migrate their applications away from dependencies on old stuff. There is of course old stuff that will never go away, such as Motif. For SUSE Linux Enterprise Server 12, Motif lives in the Workstation Extension (WE) add-on product. Note that WE is not a module, it is a product; therefore WE follows different rules and is out of scope of my current ramblings. I mention WE, which also has shiny new things, to avoid any confusion and keep people from thinking that Motif (which lands on the not-so-new side of the spectrum) ended up in a module that will go away.

The Legacy Module is intended to help developers of applications that are not necessarily easily ported to move forward in time by providing a 3 year bridge to the past. The packages you find here basically comprise dependencies one can get rid of by recompiling on a different system with newer libraries. Yes, it is never quite that easy, and some code changes are inevitable.

Taking a look at the faster-moving crowd of web developers, the Web & Scripting Module is intended to meet those needs. While the interpreters are in the base distribution, and might give us trouble moving forward with respect to the “no incompatible version changes” promise over 10 + 3 years, many of the needed tools and modules for the dynamic languages are in the Web & Scripting Module. The Web & Scripting Module has a 3 year life cycle with an 18 month overlap. Thus, roughly every 18 months you will see version upgrades in the packages contained in the Web & Scripting Module. Overlay the Service Pack release cycle on top of this, and it turns out that version bumps arrive in the Web & Scripting Module with just about every service pack release, while with every other service pack release the older versions go out of support. For those starting new projects roughly every 18 months, this means you get to start with an updated scripting toolchain every time, running on top of an Enterprise distribution with 10 + 3 years of support.

Update 2017-01-23: The Toolchain Module is still coalescing. However, the idea is pretty straightforward: roughly every year the toolchain for compiled languages gets a version bump. Support details, such as the overlap period, are still being worked out. The Toolchain Module is expected to hit the ground running sometime in the middle of this year, 2015.

The Toolchain Module provides the toolchain for compiled code, C/C++ etc. It is a one-way train: once you are on, you cannot get off if you want support. While this sounds a bit scary, let me explain how it works and take the scary part away. The Toolchain Module provides new tools that can be installed in parallel to the toolchain that was used to build the distribution. Thus, when you install the packages from the Toolchain Module, you can choose to build your application with either the system toolchain or the newer toolchain. Using the new toolchain can be advantageous if you have an application that will gain improvements from new hardware instructions that the system toolchain may not be able to generate. Once you decide to use the new toolchain you get full support, in the same way you get support for the system toolchain, until the next version is released. Once the next version is available, you are expected to move to it. In practice this means that you can of course continue to use and support your application with whatever level of the toolchain you like. However, unless you use the latest version, you do not have the option to call in and ask for a fix in a toolchain that is two releases old because you just found out there is a bug in an optimization switch. This is where the “you cannot get off the train” part comes into play. The Toolchain Module roughly follows the release cycle of the upstream GCC project, with a delay for packaging and testing.
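To make the parallel-install idea concrete, here is a minimal sketch of what building with the newer toolchain could look like. The package name gcc5 and the gcc-5 binary are illustrative assumptions on my part, not a confirmed listing of the module's contents:

```shell
# Sketch only: package and binary names (gcc5, gcc-5) are assumed for
# illustration; the system compiler remains the default "gcc".
zypper install gcc5 gcc5-c++   # newer compiler from the Toolchain Module

gcc --version                  # system toolchain, used to build the distro
gcc-5 --version                # newer toolchain, installed in parallel

# Build a project with the newer toolchain by overriding CC/CXX:
make CC=gcc-5 CXX=g++-5
```

Because the newer compiler is installed under a versioned name, nothing on the system that was built and tested against the system toolchain is disturbed.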

The Legacy Module, Toolchain Module, and Web & Scripting Module should hit the mark for a large number of developers, open source or proprietary applications. For the systems administrator they provide the necessary backing to support running those applications within an organization on top of SUSE Linux Enterprise 12 with the assurance that there is someone at the end of the phone line when there is a bad day and things happen to go awry.

This nicely brings us to the point of wearing a systems administrator hat for a bit. For systems administrators the Public Cloud is becoming ever more important. Organizations are moving workloads into the Public Cloud, and images and instances have to be managed across a number of frameworks. The tools to handle this are delivered in the Public Cloud Module. In addition to making images available, both on-demand and bring-your-own-subscription, in the predominant Public Cloud implementations, SUSE delivers the tools to build your own images in the Public Cloud Module. Each cloud framework treats instance initialization differently, and the packages in the Public Cloud Module deliver what you need. The Public Cloud Module also delivers the tools for OpenStack clouds, although none of the current top three public cloud providers uses OpenStack as their framework. The public cloud infrastructure is very dynamic, with new services and features being introduced almost constantly. In an effort to accommodate this breakneck speed the Public Cloud Module has no set life cycle; rather, the repository operates on the basis of continuous integration. Tools are updated as needed to ensure users can take advantage of the latest features announced by the public cloud providers.

The Advanced Systems Management Module delivers the DevOps tools and the Machinery toolchain. The Machinery toolchain in particular is moving rather quickly, and thus, just like the Public Cloud Module, the Advanced Systems Management Module follows a continuous integration life cycle. Machinery brings together image building, system inspection, system migration, and other aspects of systems management and administration.

Update 2017-01-23: The HPC Module coalesces around the openHPC initiative. Now, of course, we all know that HPC is a terribly overloaded concept, and people consider anything from high frequency trading to crash simulations to be HPC. But let me tell you, the computing requirements for these different HPC workloads are very different. Anyway, the point being: if you have anything that you think is HPC and is missing from the module, work with the openHPC community or directly with us to see whether we cannot meet your HPC needs, if they are not already met by the HPC Module. The packages in the module will see releases roughly aligned with the upstream openHPC project, with support covering the latest version. The goal is to make it possible to install multiple versions of HPC tools in parallel.
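Parallel versions of HPC tools are typically selected with an environment-modules system, and the openHPC project uses Lmod for this. As a hedged sketch, with tool and version names purely illustrative rather than a listing of what the HPC Module ships, day-to-day use could look like:

```shell
# Sketch only: tool/version names are illustrative. With Lmod-style
# environment modules, several versions coexist on disk and you pick
# the one you want per shell session:
module avail                    # list everything installed in parallel
module load gnu/5.4.0           # put one compiler version on PATH
module load openmpi/1.10        # load a matching MPI stack
module list                     # show what is currently loaded
module swap gnu/5.4.0 gnu/7.1   # switch versions without uninstalling
```

The key point is that loading a version only adjusts the environment of the current shell, so different users, and even different jobs, can run against different tool versions on the same machine.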

This was a quick rundown of the functionality encapsulated in each module and the life cycle of each. In combination with the 10 + 3 year stable base of SUSE Linux Enterprise Server 12, the modules should provide the needed flexibility for developers and systems administrators in a reasonably easy-to-consume, supported package (as in box). At least that is the idea.

Speaking of reasonably easy to consume, one of course has to ask how to get a hold of these wonderful modules. Well, the modules are fully integrated into YaST. During installation from the .iso image one has the opportunity to add the modules on the spot. The modules are included in the SUSE Linux Enterprise subscription, and thus no extra activation codes are required. The module repositories are also available to SUSE Manager and SMT (Subscription Management Tool). Adding the repositories to SUSE Manager or SMT makes them available to all the connected clients. Last but not least, there is always the command line, one of my favorites:

zypper ar URL alias

will do the trick, where the URL can point to a local SMT server or to the SUSE Customer Center. At a bit higher level you can use SUSEConnect to let it deal with the repository setup.

SUSEConnect --url URL --product THE_MODULE

The URL simply points to SCC or the SMT server top level and you do not have to know the exact path to the repository definition, unlike with the zypper command.
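Putting the two approaches together, here is a hedged sketch of what module activation could look like on a registered system. The product identifier sle-module-web-scripting/12/x86_64 and the host names are illustrative assumptions; SUSEConnect --list-extensions shows the identifiers that are actually valid for your system:

```shell
# Sketch only: product identifier and host names are illustrative.
SUSEConnect --list-extensions    # discover available modules and their IDs

# Activate a module against the SUSE Customer Center:
SUSEConnect --url https://scc.suse.com \
            --product sle-module-web-scripting/12/x86_64

# Or, with the zypper route, add the repository by hand from a local
# SMT server (repository path left as a placeholder):
zypper ar https://smt.example.com/repo/... sle-module-web-scripting
zypper refresh
```

After activation, the module's packages show up in zypper and YaST like any other repository content.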

In summary, modules are, in my opinion, a great way to bridge the gap in the tug of war we find ourselves in on a daily basis. Parts of the distribution move very slowly, if at all, while other parts move rather quickly.

Category: Enterprise Linux, Server, SUSE Linux Enterprise, SUSE Linux Enterprise Server, Technical Solutions
This entry was posted Monday, 9 February, 2015 at 12:11 pm