Revolutionizing Linux Distributions with an Adaptable Linux Platform

A better lifecycle for Linux distributions

It is time to challenge the status quo of how Linux distributions are built and to create a new kind of distribution that better addresses the industry's current challenges. The solution is a container-based Linux distribution built on SUSE's Adaptable Linux Platform (ALP) technology and designed for cloud native workloads.

Debian and Slackware were created 30 years ago, and SUSE Linux followed shortly after, in 1994. For all these years, several companies have offered supported versions of Linux to enterprises that need to know their Linux is safe to use in production, while other distributions were made available for free for any purpose. In most cases, if you are a company that uses Linux (and there was a time when that statement wouldn't apply to most companies out there), you benefit greatly from, or outright need, having somebody to call when you have a problem. To ease your mind further, certifications give you reasonable confidence that the hardware and software you use will run your application correctly, with no major problems. And it works amazingly well.

Applications have run for decades on those systems, and the few problems that appeared were solved quickly by engineers who were paid to fix them. Much of today's innovation would not be possible without that foundation. Those engineers provided a very stable environment, in many cases even looking at the code of newer versions and porting fixes back to the version you were running.

Innovation now moves at a different speed. Kubernetes, for instance, produces a new release roughly every four months and updates each one for about a year. Very complex support matrices, covering components and versions of the underlying OS, are required to make everything work together.

But what is an OS?

We can stand on the shoulders of giants and borrow a definition from Tanenbaum et al.:

An OS is the component that manages the connected hardware and offers a clean abstraction of those resources to application programs.

But Linux distributions are more than that. Of course, a lot of work goes into enabling hardware and making it compatible with the latest versions of the kernel and glibc (the API offered to users, which on Linux and other systems follows the POSIX standard). However, distributions also include tools that make that API accessible; for instance, utilities that configure network cards without calling the API directly.
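As a tiny illustration of the difference between the API and the tooling around it, here is a sketch in Python that lists network interfaces both ways. It assumes a Unix-like system with Python 3 and the iproute2 `ip` tool installed, which is true on most distributions but is an assumption nonetheless:

```python
# Two views of the same kernel resource: the network interface table.
import socket
import subprocess

# 1) The OS API directly: if_nameindex() asks the kernel for its
#    interface table (available on Unix-like systems).
for index, name in socket.if_nameindex():
    print(f"interface {index}: {name}")

# 2) A distribution-provided tool: 'ip link' wraps the same kernel
#    data in a friendlier command (assumes iproute2 is installed).
result = subprocess.run(["ip", "link"], capture_output=True, text=True)
print(result.stdout)
```

Both paths end at the same kernel; the distribution's value lies in packaging, testing, and supporting the layers in between.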

On top of that, because Linux distributions were created at a time when the Internet was not as fast as it is today (try connecting to any modern page using a 64K modem), many applications were included, from email clients to photo editors. That is great when downloading something can take days, but of course it doesn't provide the latest updates and versions when you reinstall a few months later, so your first action after an installation is normally a full update (if you are connected to the Internet). Enterprise distributions used the same model and offered many of those applications too, even if nobody actually used them (hard to know without telemetry, but likely).

What is the problem?

Installing a large application (CRM, ERP, data warehouse, etc.) requires careful planning as well as customizations and integrations. You need to be sure that every piece being deployed is certified and compatible. That means vendors test their applications against different versions of the operating system, databases, and so on.

You don't change those applications often (for instance, I know telcos with ten different billing systems, because the risk of migrating to a modern billing system is too high for them, and some systems run for years on the same stack on obsolete hardware in the datacenter). In other cases, industries are heavily regulated and require a support contract for any software running. For those reasons, many vendors offer long-term support after a few years of standard support. Long-term support basically allows you to read the documentation and ask for advice, and sometimes provides fixes for high-impact vulnerabilities, but it heavily limits the investment in the product, reducing the cases where you can open a ticket and get a fix or a backport.

Why backporting? Well, in open source, patches are normally developed for the latest version first, along with refactoring and updates to the code. If you can't upgrade to the latest version, you can't get the needed patch. Vendors will try to find a solution to the bug in the current version by looking at the latest one. If there is a fix there, they will see whether the patch can be adapted to work with the old code base. Even with a suitable patch, the process requires careful examination and testing, which is costly, and sometimes it does not work (e.g., when the bug disappeared after a reengineering of the code).
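To see why a backport can simply stop applying, here is a minimal sketch. The code snippets, the patch, and the `apply_patch` helper are all hypothetical; real backporting works on diffs across whole source trees, but the failure mode is the same:

```python
# A toy model of a backport: the fix targets code that only exists
# in the refactored upstream version, so it cannot apply to the old one.
new_code = "total = sum(values) / max(len(values), 1)"  # current upstream
old_code = "total = compute_average(values)"            # version you run

# Hypothetical patch: replace a fragment of the new code with a fix.
fix = ("max(len(values), 1)", "max(len(values) - skipped, 1)")

def apply_patch(source, patch):
    """Apply a (old, new) text substitution, failing if 'old' is absent."""
    old, new = patch
    if old not in source:
        raise ValueError("patch does not apply: the code bases have diverged")
    return source.replace(old, new)

print(apply_patch(new_code, fix))  # applies cleanly upstream
try:
    apply_patch(old_code, fix)     # the old code never had that fragment
except ValueError as err:
    print(err)
```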

One of the main values of a Linux distribution is providing the stability these applications require, but that in turn requires (when done properly) plenty of resources to maintain different versions of the same package. On top of all that, some distributions also add security certifications, third-party validation that the code is secure, so you know you can run your workloads when you need Common Criteria or FIPS certification (e.g., for PCI-DSS and credit card transactions).

For several decades a Linux distribution support contract would include:

Hardware support and certification: An operating system (OS) primarily serves two critical functions: abstracting the hardware so that components work seamlessly with software, and managing system resources efficiently. Upon its release, an OS undergoes extensive testing and certification with a wide range of hardware configurations. The compatibility list is influenced by the relationships between hardware manufacturers and OS vendors, and compatibility also depends significantly on factors such as the Linux kernel version and the libraries included in the OS. Over time, ensuring that a combination of hardware and software remains compatible becomes increasingly challenging, especially as new hardware components are introduced.

Defect resolution: All code has bugs, and new versions of applications are constantly released to add features and fix them. In most cases, the patches are later submitted upstream, too, so they become part of the standard code.

Updates: Do you want to be sure that your software is downloaded from a secure location? Distributions include a selection of packages created and published with a secure workflow and tested to work well together.

Security fixes: There are always bugs in code, but security problems put your machine at risk. A flaw in security-relevant code can allow unauthorized access to your system, letting someone read or change information that should be protected.

Technical support: You may not have all the knowledge in-house to identify a problem, find a solution, and install and configure the patch required to fix it; a support contract gives you access to people who do.

Documentation: Sometimes the problem is finding the documentation required to install and upgrade the software; having it available in a single place greatly simplifies management.

That doesn’t look like a problem…

Well, it was not a problem, and it wasn't one for years. It was a well-defined process that required months of testing and careful planning. At the time, customer application development took years and required full stability anyway, so it was fine. But Scrum and agile practices have since opened the opportunity to create new versions several times a day and deploy them to production as often as possible, with automated build, test, and deployment. Technology has kept pace: instead of taking weeks or months to create an environment, an automated system can create a VM and have it ready to work in minutes, or deploy updated code automatically as soon as it is published to a git repository.

So what is the problem, then? It is so big that it has a name: “dependency hell”.

  1. Sometimes you need a version of a package that has not been tested or updated and thus is not part of the core set. You could wait until the new version of the OS is released, but that can take months, and rewriting feature X so you can use an older version creates tech debt, so it is not an option.
  2. In many cases, there is no realistic way to update packages securely at the speed of the upstream projects, especially if the requirement includes being able to create patches for them.
  3. You are likely to find conflicting dependencies: one dependency requires version 1 of a library or package, another requires version 2, and they can't be installed together, so you are forced to decide which version you need and fix up the dependency tree.
  4. In some cases, the projects that created the dependencies are abandoned or poorly maintained. The applications using those dependencies break, or introduce security problems that are not easy to fix when the code is not yours.

Can we solve the problem easily? Well, no. We can mitigate it as has been done in the past: numbering packages with “semantic versioning” and then using a smart package management system that can resolve dependencies appropriately. A lot of effort goes into making the packages that are part of a distribution compatible with all the others, even backporting patches from one version to another.
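To make the conflict concrete, here is a minimal sketch of a deliberately naive system-wide resolver, written in Python. The package names, the version constraints, and the resolver itself are illustrative assumptions, not a real package manager:

```python
# A toy model of "dependency hell": two applications on one system,
# each pinning a different major version of the same library.
APP_REQUIREMENTS = {
    "billing-app": [("libcrypto-compat", 1)],    # legacy app, needs v1.x
    "reporting-app": [("libcrypto-compat", 2)],  # modern app, needs v2.x
}

def resolve_shared(requirements):
    """Naive system-wide resolver: only one version of each package
    can be installed at a time, as in a classic Linux distribution."""
    installed = {}
    for app, deps in requirements.items():
        for pkg, major in deps:
            if pkg in installed and installed[pkg] != major:
                raise RuntimeError(
                    f"dependency hell: {app} needs {pkg} v{major}, "
                    f"but v{installed[pkg]} is already installed"
                )
            installed[pkg] = major
    return installed

try:
    print(resolve_shared(APP_REQUIREMENTS))
except RuntimeError as err:
    print(err)
```

Real resolvers are far smarter than this, but they face the same fundamental constraint: one shared tree, one installed version per package.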

A better way

We could reduce the dependency problem by reducing the number of packages included in a distribution, requiring applications to be downloaded from somewhere else and making the developer responsible for updating them (as macOS does). But then, in a few years, if the developer is no longer active, you are out of luck. And that also reduces the value of the distribution.

With containers, a new solution has appeared. When we use containers, dependencies can be packaged with the application itself. If the libraries and dependencies no longer need to be compatible with every other application, it is far easier to reach a suitable dependency tree, because each application can include its own versions of those dependencies.
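Reusing the toy model from the earlier sketch, per-container resolution looks like this; the names and versions remain illustrative:

```python
# Per-container resolution: each application ships its own dependency
# tree, so the conflict from the shared-tree sketch never arises.
APP_REQUIREMENTS = {
    "billing-app": [("libcrypto-compat", 1)],    # bundled in container A
    "reporting-app": [("libcrypto-compat", 2)],  # bundled in container B
}

def resolve_per_container(requirements):
    """Resolve each application in isolation, as a container image does."""
    return {
        app: {pkg: major for pkg, major in deps}
        for app, deps in requirements.items()
    }

print(resolve_per_container(APP_REQUIREMENTS))
# {'billing-app': {'libcrypto-compat': 1},
#  'reporting-app': {'libcrypto-compat': 2}}
```

Both applications get exactly the version they need, because nothing forces their dependency trees to agree.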

However, using containers at scale is not free; most companies today use Kubernetes, which introduces new abstractions for deploying applications. Kubernetes is complex, and for many applications that are not built as scalable microservices, it can be overkill. But what if we could package and deliver applications in containers instead of the way it has been done in traditional Linux?

  1. We could reduce the hardware abstraction layer to a minimum, maintaining the certifications and the stability of the core layer and making sure that your hardware keeps working.
  2. Then, we could deliver the applications and libraries as containers. Because each container carries its own dependencies, different versions of those dependencies can be installed without conflicting requirements, reducing the problem to a manageable size.
  3. We could then offer different versions of the applications, testing and supporting each only against the versions of the dependencies it requires. No more dependency hell and a very limited need for backporting, making everything easier to deliver.
  4. Through automation and self-healing, we could make the OS so easy to manage that you don't need to think about it while working on your application.

A new lifecycle

The new OS would feature distinct lifecycles for the core and the applications, opening opportunities for optimization:

Enhanced Core Component Agility:

  • Freeing core OS components from dependencies on applications permits frequent updates to the hardware-facing components and libraries. Striking a balance between certification timelines, which can extend up to 18 months, and the need to integrate new hardware functionality from various architectures (ARM, Intel, AMD, IBM, etc.) remains crucial. Still, it becomes easier to provide a solution at a reasonable cost.
  • It is imperative for the OS to ensure seamless transitions with each update by preserving API compatibility for the workloads. This becomes easier and faster, as workloads no longer have hard dependencies on OS components.

Flexible Application Support:

  • Decoupling the content from the OS enables different support terms for different components, and more than one version can be available simultaneously. You can use the latest version, even a beta with limited support, when you need it for development, while keeping older versions of your application on a more stable version for production.

Extended Long-Term Support Options:

  • Beyond community-offered long-term support, the OS provider can extend support for critical components for as long as needed. This ensures that applications requiring prolonged stability have a dependable platform, reinforcing the stability of crucial systems without forcing it on all components and libraries at the same time.

Compartmentalized Solutions:

  • The modular nature of the OS enables the creation of solutions tailored to specific use cases. Whether introducing a new hypervisor or integrating cutting-edge components for confidential AI workloads on a novel chipset, this compartmentalization allows precise customization to meet each use case's unique requirements.
  • Addressing specific use cases becomes more agile and efficient, with the OS serving as a versatile foundation adaptable to evolving technological demands.

So, is this available?

Absolutely. The solution is the Adaptable Linux Platform developed by SUSE, a new platform and code base that has been in development for years and is getting ready for production as we speak.

Some required parts are available today and have been in production for years (such as SLE Micro), but SUSE has gone a step further, building a comprehensive solution for the data center. This all-encompassing solution includes the OS and the essential tools for building and running certified containers within the OS framework.

Engineered to function as an inconspicuous OS, it is designed to be effortlessly manageable, user-friendly, and deployable. The new platform seamlessly integrates the familiar Linux server components you expect to have available, such as the dhcpd daemon or the Apache HTTP server. Our mission is to offer a platform that embodies both support and agility simultaneously: we aim to deliver the components you need, allowing you to customize your version, so you can concentrate on revenue generation while entrusting SUSE with the security and support of the underlying OS.

Feel free to download the solution today and embark on your journey with it. We encourage you to provide feedback and engage with us (I am the product manager). Let’s collaboratively explore how we can enhance this platform even further — the open-source way. Your insights and experiences are invaluable in shaping a product that truly aligns with your needs.

 

ALP: Adaptable Linux Platform


https://adaptablelinuxplatform.io/

References

Modern Operating Systems, Andrew S. Tanenbaum et al.
https://www.pearson.com/en-us/subject-catalog/p/modern-operating-systems/P200000003311/9780133591620?tab=title-overview

SUSE Product Lifecycle Support Policies
www.suse.com

TSANet
tsanet.org

 
