Rancher Wrap: Another Year of Innovation and Growth

Monday, 12 December, 2022

2022 was another year of innovation and growth for SUSE’s Enterprise Container Management business. We introduced significant upgrades to our Rancher and NeuVector products, launched new open source projects and matured others. Exiting 2022, Rancher remains the industry’s most widely adopted container management platform and SUSE remains the preferred vendor for enabling enterprise cloud native transformation. Here’s a quick look at a few key themes from 2022.  

Security Takes Center Stage 

As the container management market matured in 2022, container security took center stage. Customers and the open source community alike voiced concerns about the risks posed by their increasing reliance on hybrid-cloud, multi-cloud, and edge infrastructure. Beginning with the open sourcing of NeuVector, which we acquired in Q4 2021, we continued throughout 2022 to meet our customers’ most stringent security and assurance requirements, making strategic investments across our portfolio, including:  

  • Kubewarden – In June, we donated Kubewarden to the CNCF. Now a CNCF sandbox project, Kubewarden is an open source policy engine for Kubernetes that automates the management and governance of policies across Kubernetes clusters thereby reducing risk.  It also simplifies the management of policies by enabling users to integrate policy management into their CI/CD engines and existing infrastructure.  
  • SUSE NeuVector 5.1 – In November, we released SUSE NeuVector 5.1, further strengthening our already industry-leading container security platform. 
  • Rancher Prime – Most recently, we introduced Rancher Prime, our new commercial offering, replacing SUSE Rancher. Supporting our focus on security assurances, Rancher Prime offers customers the option of accessing their Rancher Prime software directly from a trusted private registry. Additionally, Rancher Prime’s FIPS 140-3 and SLSA Level 2 and 3 certifications will be finalized in 2023.
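To make the policy-as-code idea concrete, Kubewarden policies are declared as Kubernetes custom resources that bind a WebAssembly policy module to specific resources and operations. The following is a minimal sketch; the module reference and version tag are illustrative, not a recommendation:

```yaml
apiVersion: policies.kubewarden.io/v1
kind: ClusterAdmissionPolicy
metadata:
  name: no-privileged-pods
spec:
  # WebAssembly policy module pulled from an OCI registry (illustrative tag)
  module: registry://ghcr.io/kubewarden/policies/pod-privileged:v0.2.5
  mutating: false
  rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      resources: ["pods"]
      operations: ["CREATE", "UPDATE"]
```

Because the policy is just another Kubernetes resource, it can be versioned in Git and rolled out through the same CI/CD pipeline as the applications it governs.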

Open Source Continues to Fuel Innovation 

 Our innovation did not stop at security. In 2022, we also introduced new projects and matured others, including:  

  • Elemental – Fit for edge deployments, Elemental is an open source project that enables centralized management and operations of RKE2 and K3s clusters when deployed with Rancher. 
  • Harvester – SUSE’s open source, cloud-native hyperconverged infrastructure (HCI) alternative to proprietary HCI is now utilized across more than 710 active clusters. 
  • Longhorn – now a CNCF incubator project, Longhorn is deployed across more than 72,000 nodes. 
  • K3s – SUSE’s lightweight Kubernetes distribution designed for the edge which we donated to the CNCF, has surpassed 4 million downloads. 
  • Rancher Desktop – SUSE’s desktop container development environment for Windows, macOS, and Linux has surpassed 520,000 downloads and 4,000 GitHub stars since its January release. 
  • Epinio – SUSE’s Kubernetes-powered application development platform-as-a-service (PaaS) solution, which lets users deploy apps without setting up infrastructure themselves, has surpassed 4,000 downloads and 300 stars on GitHub since its introduction in September. 
  • Opni – SUSE’s multi-cluster observability tool (including logging, monitoring and alerting) with AIOps has seen steady growth, with over 75 active deployments this year.  

As we head into 2023, Gartner research indicates the container management market will grow at a ~25% CAGR to $1.4B in 2025. In that same time period, 85% of large enterprises will have adopted container management solutions, up from 30% in 2022. SUSE’s 30-year heritage in delivering enterprise infrastructure solutions, combined with our market-leading container management solutions, uniquely positions SUSE as the vendor of choice for helping organizations on their cloud native transformation journeys. I can’t wait to see what 2023 holds in store! 

Understanding Hyperconverged Infrastructure at the Edge from Adoption to Acceleration

Thursday, 29 September, 2022

You may be tired of the regular three-tiered infrastructure and the management issues it can bring in distributed systems and maintenance. Or perhaps you’ve looked at your infrastructure and realized that you need to move away from its current configuration. If that’s the case, hyperconverged infrastructure (HCI) may be a good solution because it removes a lot of management overhead, acting like a hypervisor that can handle networking and storage.

There are some key principles behind HCI that bring to light the advantages it has. Particularly, it can help simplify the deployment of new nodes and new applications. Because everything inside your infrastructure runs on normal x86 servers, adding nodes is as simple as spinning up a server and joining it to your HCI cluster. From here, applications can easily move around on the nodes as needed to optimize performance.

Once you’ve gotten your nodes deployed and added to your cluster, everything inside an HCI can be managed by policies, making it possible for you to strictly define the behavior of your infrastructure. This is one of the key benefits of HCI — it uses a single management interface. You don’t need to configure your networking in one place, your storage in another, and your compute in a third place; everything can be managed cohesively.

This cohesive management is possible because an HCI relies heavily on virtualization, making it feasible to converge the typical three tiers (compute, networking and storage) into a single plane, offering you flexibility.

While HCI might be overkill for simple projects, it’s becoming a best practice for various enterprise use cases. In this article, you’ll see some of the main use cases for implementing HCI in your organization. We’ll also introduce Harvester as a modern way to get started more easily.

While reading through these use cases, remember that the use of HCI is not limited to them. To benefit most from this article, think about what principles of HCI make the use cases possible, and perhaps, you’ll be able to come up with additional use cases for yourself.

Why you need a hyperconverged infrastructure

There are many use cases for HCI, and most of them stem from the fact that HCI is highly scalable and, more importantly, easy to scale. The concept started gaining momentum back in 2009, but it wasn’t until 2014 that it gained traction in the community at large. HCI is a proven and mature technology that, in its essence, has worked the same way for many years.

The past few decades have seen virtualization become the preferred method for users to optimize their resource usage and manage their infrastructure costs. However, introducing new technology, such as containers, has required operators to shift their existing virtualization-focused infrastructure to integrate with these modern cloud-based solutions, bringing new challenges for IT operators to tackle.

Managing virtualized resources (and specifically VMs) can be quite challenging. This is where HCI can help. By automating and simplifying the management of virtual resources, HCI makes it easy for developers and team leads to leverage virtualization to the fullest and reduce the time to market their product, a crucial factor in determining the success of a project.

Following are some of the most popular ways to use HCI currently:

Edge computing

Edge computing is the principle of running workloads outside the primary data centers of a company. While there’s no single reason for wanting to use edge computing, the most popular reason is to decrease customer latency.

In edge computing, you don’t always need an extensive fleet of servers, and the amount of capacity you need will likely change by location. You’ll need more servers to serve New York City, with a population of 8.3 million, than to serve the entire country of Denmark, with a population of 5.8 million. One of the most significant benefits of HCI is that it scales incredibly well in both directions. You’d typically want multiple nodes for reasons like backup, redundancy and high availability, but theoretically, it’s possible to scale down to a single node.

Given that HCI runs on normal hardware, it’s also possible for you to optimize your nodes for the workload you need. If your edge computing use case is to provide a cache for users, then you’d likely need more storage. However, if you’re implementing edge workers that need to execute small scripts, you’re more likely to need processing power and memory. With HCI, you can adapt the implementation to your needs.

Migrating to a Hybrid Cloud Model

Over the past decade, the cloud has gotten more and more popular. Many companies move to the cloud and later realize their applications are better suited to run on-premises. You will also find companies that no longer want to run things in their data centers and instead want to move them to the cloud. In both these cases, HCI can be helpful.

If you want to leverage the cloud, HCI can provide a similar user experience on-premises. HCI is sometimes described as a “cloud in a box” because it can offer services similar to those one would expect in a public cloud. Examples include a consistent API for allocating compute resources dynamically, load balancers and storage services. Having a similar platform is a good foundation for moving applications between the public cloud and on-premises environments. You can even take advantage of tools like Rancher that can manage cloud infrastructure and on-prem HCI from a single pane of glass.

Modernization strategy

Many organizations view HCI as an enabler in their modernization processes. However, modernization is quite different from migration.

Modernization focuses on redesigning existing systems and architecture to make the most efficient use of the new environment and its offerings. With its particular focus on simplifying the complex management of data, orchestration and workflows, HCI is perfect for modernization.

HCI enables you to consolidate your complex server architecture with all its storage, compute and network resources into smaller, easy-to-manage nodes. You can easily transform a node from a storage-first resource to a compute-first resource, allowing you to design your infrastructure how you want it while retaining simplicity.

Modern HCI solutions like Harvester can help you to run your virtualized and containerized workloads side by side, simplifying the operational and management components of infrastructure management while also providing the capabilities to manage workloads across distributed environments. Regarding automation, Harvester provides a unique approach by using cloud native APIs. This allows the user to automate using the same tools they would use to manage cloud native applications. Not switching between two “toolboxes” can increase product development velocity and decrease the overhead of managing complex systems. That means users of this approach get their product to market sooner and with less cost.

Virtual Desktop Infrastructure (VDI)

Many organizations maintain fleets of virtual desktops that enable their employees to work remotely while maintaining standards of security and performance. Virtual desktops are desktop environments that are not tied to the hardware they run on; they can be accessed remotely via software. Organizations prefer them over dedicated hardware since they’re easy to provision, scale, and destroy on demand.

Since compute and storage are two tightly coupled and important resources in virtual desktops, HCI is well suited to managing them. HCI’s enhanced reliability provides VDI with increased fault tolerance and efficient capacity consumption. HCI also helps cut VDI costs, as there is no need for separate storage arrays, dedicated storage networks, and related hardware.

Remote office/Branch office

A remote office/branch office (ROBO) is one of the best reasons for using HCI. In case you’re not familiar, it’s typical for big enterprises to have a headquarters where they host their data and internal applications. The ROBOs then either have a direct connection to the headquarters to access the data and applications or host a replica in their own location. In both cases, you introduce more management and maintenance overhead, along with other factors such as latency.

With HCI, you can spin up a few servers in the ROBOs and add them to an HCI cluster. Now, you’re managing all your infrastructure, even the infrastructure in remote locations, through a single interface. Not only can this result in a better experience for the employees, but depending on how much customer interaction they have, it can result in a better customer experience.

In addition, with HCI, you’re likely to lower your total cost of ownership. While you would typically have to put up an entire rack of hardware in a ROBO, you can now accomplish the same with just a few servers.

Conclusion

After reading this article, you now know more about how HCI can be used to support a variety of use cases, and hopefully, you’ve come up with a few use cases yourself. This is just the beginning of how HCI can be used. Over the next decade or two, HCI will continue to play an important role in any infrastructure strategy, as it can be used in both on-premises data centers and the public cloud. The fact that it uses commodity x86 systems to run makes it suitable for many different use cases.

If you’re ready to start using HCI for yourself, take a look at Harvester. Harvester is a solution developed by SUSE, built for bare metal servers. It uses enterprise-grade technologies, such as Kubernetes, KubeVirt and Longhorn.

What’s Next:

Want to learn more about how Harvester and Rancher are helping enterprises modernize their stack? Sign up here to join our Global Online Meetup: Harvester on October 26th, 2022, at 11 AM EST.

SUSE Linux Enterprise Server ‘Leader’ in Virtualization Software

Thursday, 14 July, 2022

Markus Noga, General Manager, Business-critical Linux at SUSE 

 

I’m excited to share that SLES was recognized by G2, the world’s largest and most trusted tech marketplace, as the Leader in its Server Virtualization Software category and the High Performer in the Infrastructure-as-a-Service category. 

Linux with virtualization and IaaS is the foundation of cloud computing. I am proud of our engineering and that SLES is the perfect guest, optimized for and supporting all leading hypervisor technologies and cloud platforms. 

According to Gartner, the worldwide IaaS public cloud services market grew 41.4% in 2021. As our customers continue their cloud native and digital transformation journeys, they require solutions that allow them to effectively manage at scale and drive growth across their business. They have found that SUSE solutions offer the highest levels of security, availability and performance.

 

I am pleased to share a few testimonials from our customers:

“VMware and SUSE Linux Enterprise Server work very well together. When we originally went down the path of virtualization, SUSE Linux Enterprise Server was the first distribution to include drivers for our chosen virtualization technology as standard, which made things much easier. In general terms, we find that SUSE is often ahead of the other Linux vendors in introducing support for new technologies such as new file systems, for example. This means we can get the benefits of cutting-edge technology without the usual risks of being an early adopter.” 

Steven Mertens, Global Service Lead In­frastructure, NGA Human Resources 

 

“Another thing that attracted us to SLES for SAP Applications is the fact that it has been specifically optimized to run on Microsoft Azure. There are ready-to-use cloud images of SLES for SAP Applications available on Microsoft Azure, which accelerates installation and deployment.” 

Hinrich Mielke, SAP Director, Alegri International Group 

 

While SLES is leading the way, it doesn’t stop there, nor does our engineering work. For customers containerizing their workloads, SUSE Rancher is the most interoperable Kubernetes management solution across public cloud, edge and on-premises environments. It works well with SLES and leading third-party Linux distributions.  

Separately, Harvester allows customers to unify their virtual machine and container workloads alongside Kubernetes clusters, working seamlessly with SUSE Rancher.  

Together, SUSE Linux Enterprise Server, SUSE Rancher, and Harvester help customers achieve unparalleled security, scalability, resilience and efficiency for all their infrastructure operations. 

 

Watch out for more exciting news to come and see how you can innovate with SUSE. 

 

At SUSECON Digital 2022 in June, we launched the latest release of our Linux code base, SUSE Linux Enterprise 15 Service Pack 4 (SLE 15 SP4), which provides you with the advantages of using one of the world’s most secure enterprise Linux platforms. You can re-visit my SUSECON keynote, which provides a comprehensive overview, including several interviews with our partners and customers. 

Harvesting the Benefits of Cloud-Native Hyperconvergence

Wednesday, 13 July, 2022

The logical progression from the virtualization of servers and storage in VSANs was hyperconvergence. By abstracting the three elements of storage, compute, and networking, data centers were promised limitless infrastructure control. That promise was in keeping with the aims of hyperscale operators that needed to grow to meet increased demand and modernize their infrastructure to stay agile. Hyperconverged infrastructure (HCI) offered elasticity and scalability on a per-use basis for multiple clients, each of whom could deploy multiple applications and services.

There are clear caveats in the HCI world: limitless control is all well and good, but infrastructure details like a lack of local storage and slow networking hardware restricting I/O will always define the hard limits on what is possible. Furthermore, some restrictions imposed by HCI vendors limit the choice of hypervisor or constrain hardware choices to approved kit. Worries around vendor lock-in surround the black-box nature of HCI-in-a-box appliances, too.

The elephant in the room for hyperconverged infrastructures is indubitably cloud. It’s something of a cliché in the technology landscape to mention the speed at which tech develops, but cloud-native technologies like Kubernetes are showing their capabilities and future potential in the cloud, the data center, and at the edge. The concept of HCI was presented first and foremost as a data center technology. It was clearly the sole remit, at the time, of the very large organization with its own facilities. Those facilities are effectively closed loops with limits created by physical resources.

Today, cloud facilities are available from hyperscalers at attractive prices to a much broader market. The market for HCI solutions is forecast to grow significantly over the next few years, with year-on-year growth of just under 30%. Vendors are selling cheaper appliances and lower license tiers to try to mop up the midmarket, and hyperconvergence technologies are beginning to work with hybrid and multi-cloud topologies. The latter trend is demand-led. After all, if an IT team wants to consolidate its stack for efficiency and easy management, any consolidation must be all-encompassing and include local hardware, containers, multiple clouds, and edge installations. That ability also implies inherent elasticity and, by proxy, a degree of future-proofing baked in.

The cloud-native technologies around containers are well beyond flash-in-the-pan status. The CNCF (Cloud Native Computing Foundation) Annual Survey for 2021 shows that containers and Kubernetes have gone mainstream: 96% of organizations are either using or evaluating Kubernetes, and 93% of respondents are currently using, or planning to use, containers in production. Portable, scalable and platform-agnostic, containers are the natural next evolution in virtualization. CI/CD workflows increasingly have microservices at their core.

So, what of hyperconvergence in these evolving computing environments? How can HCI solutions handle modern cloud-native workloads alongside full-blown virtual machines (VMs) across a distributed infrastructure? It can be done with “traditional” hyperconvergence, but the solution will be proprietary and incur steep costs.

Last year, SUSE launched Harvester, a 100% free-to-use, open source, modern hyperconverged infrastructure solution built on a foundation of cloud native technologies including Kubernetes, Longhorn and KubeVirt. Built on top of Kubernetes, Harvester bridges the gap between traditional HCI software and the modern cloud-native ecosystem. It unifies your VMs with cloud-native workloads and gives organizations a single point of creation, monitoring, and control for an entire compute-storage-network stack. Since containers can run anywhere, from SoC ARM boards up to supercomputing clusters, Harvester is perfect for organizations with workloads spread over data centers, public clouds, and edge locations. Its small footprint makes it a perfect fit for edge scenarios, and when you combine it with SUSE Rancher, you can centrally manage all your VMs and container workloads across all your edge locations.

VMs, containers, and HCI are critical technologies for extending IT service to new locations. Harvester represents how organizations can unify them and deploy HCI without proprietary closed solutions, using enterprise-grade open-source software that slots right into a modern cloud-native CI/CD pipeline.

To learn more about Harvester, we’ve provided the comprehensive report for you here.

 

About the Author

 

Vishal Ghariwala is the Chief Technology Officer for the APJ and Greater China regions for SUSE, a global leader in true open source solutions. In this capacity, he engages with customer and partner executives across the region, and is responsible for growing SUSE’s mindshare by being the executive technical voice to the market, press, and analysts. He also has a global charter with the SUSE Office of the CTO to assess relevant industry, market and technology trends and identify opportunities aligned with the company’s strategy.

Prior to joining SUSE, Vishal was the Director for Cloud Native Applications at Red Hat where he led a team of senior technologists responsible for driving the growth and adoption of the Red Hat OpenShift, API Management, Integration and Business Automation portfolios across the Asia Pacific region.

Vishal has over 20 years of experience in the Software industry and holds a Bachelor’s Degree in Electrical and Electronic Engineering from the Nanyang Technological University in Singapore.

Vishal is here on LinkedIn: https://www.linkedin.com/in/vishalghariwala/

Deploying Multicluster Day 2 Operations with SUSE Rancher, Fleet, and Kasten K10

Monday, 11 July, 2022

You’ve probably heard of Veeam, which IDC just named as tied for first in data replication and protection market share, and part of that momentum and market share is coming from the Veeam cloud native, Kubernetes-focused offering, Veeam Kasten K10.

The Veeam team minding Kasten has been working closely with the SUSE team minding all things cloud native here (think SUSE Rancher, Longhorn, K3s, Harvester, etc.). Recently, they sat down with Bastian Hofmann, Field Engineer for Kubernetes at SUSE, for a walk-through of some of the challenges around multi-cluster deployment and app management, and of how “Fleet” from SUSE (which comes preinstalled with SUSE Rancher) can maintain an application’s deployment and lifecycle across hundreds or even many thousands of clusters.

Check out this Veeam blog by Adam Bergh, Cloud Native Technical Partnerships, which features an excellent demo from Bastian showing the deployment of Kasten K10 seamlessly and consistently across a large set of Kubernetes clusters — with ease, with Fleet.

This is a great introduction to how Fleet enables large, distributed organizations to deploy a world-class tool, like Kasten K10, across an expansive, diverse and dynamic Kubernetes environment. Check it out and get your scale on!
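To give a flavor of how this works, Fleet drives deployments from Git through a `GitRepo` custom resource: you point it at a repository, and Fleet rolls the contents out to every matching downstream cluster. The following is a minimal sketch; the repository URL and path are illustrative placeholders:

```yaml
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: kasten-k10
  namespace: fleet-default
spec:
  # Illustrative repository holding the Kasten K10 deployment definitions
  repo: https://github.com/example/fleet-examples
  branch: main
  paths:
    - kasten-k10
  targets:
    - name: all-downstream
      clusterSelector: {}   # empty selector matches every registered cluster
```

Once applied to the Rancher management cluster, Fleet continuously reconciles every targeted cluster against what is in Git, which is what makes deployment at the scale of thousands of clusters tractable.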

Veeam blog: Deploying Multicluster Day 2 Operations with SUSE Rancher, Fleet, and Kasten K10

A Path to Legacy Application Modernization Through Kubernetes

Wednesday, 6 July, 2022

Legacy applications may have multiple services bundled into the same deployment unit without a logical grouping. They’re challenging to maintain since changes to one part of the application require changing other tightly coupled parts, making it harder to add or modify features. Scaling such applications is also tricky, because doing so requires adding more hardware instances connected to load balancers. This takes a lot of manual effort and is prone to errors.

Modernizing a legacy application requires you to visualize the architecture from a brand-new perspective, redesigning it to support horizontal scaling, high availability and code maintainability. This article explains how to modernize legacy applications using Kubernetes as the foundation and suggests three tools to make the process easier.

Using Kubernetes to modernize legacy applications

A legacy application can only meet a modern-day application’s scalability and availability requirements if it’s redesigned as a collection of lightweight, independent services.

Another critical part of modern application architecture is the infrastructure. Adding more server resources to scale individual services can lead to a large overhead that you can’t automate, which is where containers can help. Containers are self-contained, lightweight packages that include everything needed for a service to run. Combine this with a cluster of hardware instances, and you have an infrastructure platform where you can deploy and scale the application runtime environment independently.

Kubernetes can create a scalable and highly available infrastructure platform using container clusters. Moving legacy applications from physical or virtual machines to Kubernetes-hosted containers offers many advantages, including the flexibility to use on-premises and multi-cloud environments, automated container scheduling and load balancing, self-healing capability, and easy scalability.
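As a concrete illustration, once a service is containerized, the horizontal scaling and self-healing described above come down to a short declarative manifest. The image name, port and health endpoint below are hypothetical placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-web
spec:
  replicas: 3                # horizontal scaling: change this number or autoscale
  selector:
    matchLabels:
      app: legacy-web
  template:
    metadata:
      labels:
        app: legacy-web
    spec:
      containers:
        - name: web
          image: registry.example.com/legacy-web:1.0   # illustrative image
          ports:
            - containerPort: 8080
          livenessProbe:     # self-healing: Kubernetes restarts unhealthy containers
            httpGet:
              path: /healthz
              port: 8080
```

Kubernetes keeps the declared number of replicas running across the cluster, rescheduling them automatically if a node fails, with no manual intervention.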

Organizations generally adopt one of two approaches to deploy legacy applications on Kubernetes: using virtual machines and redesigning the application.

Using virtual machines

A monolith application’s code and dependencies are embedded in a virtual machine (VM) so that images of the VM can run on Kubernetes. Frameworks like Rancher provide a one-click solution to run applications this way. The disadvantage is that the monolith remains unchanged, which doesn’t achieve the fundamental principle of using lightweight container images. It is also possible to run parts of the application in VMs and containerize the less complex ones. This hybrid approach helps break down the monolith to a smaller extent without a huge refactoring effort. Tools like Harvester can help manage the integration in this hybrid approach.

Redesigning the application

Redesigning a monolithic application to support container-based deployment is a challenging task that involves separating the application’s modules and recreating them as stateless and stateful services. Containers, by nature, are stateless and require additional mechanisms to handle the storage of state information. It’s common to use the distributed storage of the container orchestration cluster or third-party services for such persistence.

Organizations are more likely to adopt the first approach when the legacy application needs to move to a Kubernetes-based solution as soon as possible. This way, they can have a Kubernetes-based solution running quickly with less business impact and then slowly move to a completely redesigned application. Although Kubernetes migration has its challenges, some tools can simplify this process. The following are three such solutions.

Rancher

Rancher provides a complete container management platform for Kubernetes, giving you the tools to successfully run Kubernetes anywhere. It’s designed to simplify the operational challenges of running multiple Kubernetes clusters across different infrastructure environments. Rancher provides developers with a complete Kubernetes environment, irrespective of the backend, including centralized authentication, access control and observability features:

  • Unified UI: Most organizations have multiple Kubernetes clusters. DevOps engineers can sometimes face challenges when manually provisioning, managing, monitoring and securing thousands of cluster nodes while establishing compliance. Rancher lets engineers manage all these clusters from a single dashboard.
  • Multi-environment deployment: Rancher helps you create Kubernetes clusters across multiple infrastructure environments like on-premises data centers, public clouds and edge locations without needing to know the nuances of each environment.
  • App catalog: The Rancher app catalog offers different application templates. You can easily roll out complex application stacks on top of Kubernetes with the click of a button. One example is Longhorn, a distributed storage mechanism to help store state information.
  • Security policies and role-based access control: Rancher provides a centralized authentication mechanism and role-based access control (RBAC) for all managed clusters. You can also create pod-level security policies.
  • Monitoring and alerts: Rancher offers cluster monitoring facilities and the ability to generate alerts based on specific conditions. It can help transport Kubernetes logs to external aggregators.

Harvester

Harvester is an open source, hyperconverged infrastructure solution. It combines KubeVirt, a virtual machine add-on, and Longhorn, a cloud native distributed block storage add-on, along with many other cloud native open source frameworks. Additionally, Harvester is built on Kubernetes itself.

Harvester offers the following benefits to your Kubernetes cluster:

  • Support for VM workloads: Harvester enables you to run VM workloads on Kubernetes. Running monolithic applications this way helps you quickly migrate your legacy applications without the need for complex cluster configurations.
  • Cost-effective storage: Harvester uses directly connected storage drives instead of external SANs or cloud-based block storage. This helps significantly reduce costs.
  • Monitoring features: Harvester comes with Prometheus, an open source monitoring solution supporting time series data. Additionally, Grafana, an interactive visualization platform, is integrated into Harvester by default, so users can see VM or Kubernetes cluster metrics from the Harvester UI.
  • Rancher integration: Harvester comes integrated with Rancher by default, so you can manage multiple Harvester clusters from the Rancher management UI. It also integrates with Rancher’s centralized authentication and RBAC.
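Because Harvester builds on KubeVirt, a VM is itself declared as a Kubernetes resource, so migrated monoliths are managed with the same tooling as containers. The following is a minimal KubeVirt-style sketch; the VM name and disk image are illustrative, and in practice Harvester’s UI generates such manifests for you:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: legacy-app-vm        # illustrative name for a migrated monolith
spec:
  running: true
  template:
    spec:
      domain:
        resources:
          requests:
            memory: 2Gi      # resource requests work like any other pod
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest   # illustrative image
```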

Longhorn

Longhorn is a distributed cloud storage solution for Kubernetes. It’s an open source, cloud native project originally developed by Rancher Labs, and it integrates with the Kubernetes persistent volume API. It helps organizations use a low-cost persistent storage mechanism for saving container state information without relying on cloud-based object storage or expensive storage arrays. Since it’s deployed on Kubernetes, Longhorn can be used with any storage infrastructure.
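In practice, a workload requests Longhorn-backed storage through a standard PersistentVolumeClaim; `longhorn` is the StorageClass name the project installs by default, though the claim name and size here are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data             # illustrative claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn # Longhorn's default StorageClass
  resources:
    requests:
      storage: 10Gi
```

A pod that mounts this claim gets a replicated Longhorn volume underneath, with no application-level awareness of the storage backend.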

Longhorn offers the following advantages:

  • High availability: Longhorn’s microservice-based architecture and lightweight nature make it a highly available service. Its storage engine only needs to manage a single volume, dramatically simplifying the design of storage controllers. If there’s a crash, only the volume served by that engine is affected. The Longhorn engine is lightweight enough to support as many as 10,000 instances.
  • Incremental snapshots and backups: Longhorn’s UI allows engineers to create scheduled jobs for automatic snapshots and backups. It’s possible to execute these jobs even when a volume is detached. It also includes safeguards to prevent existing backup data from being overwritten by new data.
  • Ease of use: Longhorn comes with an intuitive dashboard that provides information about volume status, available storage and node status. The UI also helps configure nodes, set up backups and change operational settings.
  • Ease of deployment: Setting up and deploying Longhorn requires just a single click from the Rancher marketplace. It’s also simple from the command-line interface, requiring only a few commands. Longhorn is implemented as a container storage interface (CSI) plug-in.
  • Disaster recovery: Longhorn supports creating disaster recovery (DR) volumes in separate Kubernetes clusters. When the primary cluster fails, it can fail over to the DR volume. Engineers can configure recovery time and point objectives when setting up that volume.
  • Security: Longhorn supports data encryption at rest and in motion. It uses Kubernetes secret storage for storing the encryption keys. By default, backups of encrypted volumes are also encrypted.
  • Cost-effectiveness: Being open source and easily maintainable, Longhorn provides a cost-effective alternative to the cloud or other proprietary services.
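To illustrate the persistent volume API integration mentioned above, an application can request Longhorn-backed storage with an ordinary PersistentVolumeClaim. The claim name and size below are illustrative, and the `longhorn` StorageClass name assumes a default installation.

```yaml
# PVC requesting storage from Longhorn via its CSI driver.
# Claim name and size are illustrative assumptions.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce            # mounted read-write by a single node
  storageClassName: longhorn   # StorageClass installed by Longhorn
  resources:
    requests:
      storage: 10Gi
```

A pod then mounts `app-data` like any other volume; Longhorn handles replication, snapshots and backups behind the scenes.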

Conclusion

Modernizing legacy applications often involves converting them to containerized microservice-based architecture. Kubernetes provides an excellent solution for such scenarios, with its highly scalable and available container clusters.

The journey to Kubernetes-hosted, microservice-based architecture has its challenges. As you saw in this article, solutions are available to make this journey simpler.

SUSE is a pioneer in value-added tools for the Kubernetes ecosystem. SUSE Rancher is a powerful Kubernetes cluster management solution. Longhorn provides a storage add-on for Kubernetes and Harvester is the next generation of open source hyperconverged infrastructure solutions designed for modern cloud native environments.

Innovation without Disruption: Introducing SUSE Linux Enterprise 15 SP4 and Agility

Monday, 20 June, 2022

In a production environment, where applications must be flexible at deployment, run and rollout time, agility should be one of the main considerations when building or evolving your platform.

SUSE Linux Enterprise Server is a modern, modular operating system for both multimodal and traditional IT. In this article, I’ll provide a high-level overview of the features, capabilities and limitations of SUSE Linux Enterprise Server 15 SP4 and highlight important product updates. SUSE Linux Enterprise Server brings security, agility and resiliency to your workloads and ecosystem; in this article, I am going to focus on agility. SUSE Linux Enterprise Server also now supports KubeVirt.

Regarding agility, some relevant offerings from SUSE include:

  • Base Container Images (BCI): BCI brings the full SLES (SUSE Linux Enterprise Server) experience to container workloads, letting you build your applications in a secure, performant, multi-stage build environment.
  • Harvester HCI (HyperConverged Infrastructure): Harvester is a modern HCI solution that bridges the gap between HCI software and the cloud native ecosystem, using technologies like Longhorn and KubeVirt to provide storage and virtualization capabilities. It connects multiple interfaces to the virtual machines and provides isolation capabilities to the architecture. With Harvester and Kubernetes, you no longer need to manage traditional HCI infrastructure and cloud native infrastructure separately.
  • SUSE Manager HUB: Scale your infrastructure and manage thousands of servers through a hub implementation of SUSE Manager.

Why SLE BCI?

While Alpine is the most used base image, when it comes to an enterprise use case, you should consider more variables before making a choice. Here are some of the reasons why SLE BCI (which I will shorten to simply BCI for now) is potentially a great fit.

  • Maximum security: When it comes to developing applications, the world is moving and working in a cloud native ecosystem because of its emphasis on flexibility, agility and cost effectiveness. However, application security is often an afterthought in the initial stages of developing a new app. If developers do not choose their base image wisely, their application could be affected by security vulnerabilities, or it simply will not pass the required security certifications. When developing the SLE family of products, SUSE worked to ensure they meet the highest levels of security and compliance, including FIPS (Federal Information Processing Standard), EAL4+, FSTEC, USG, CIS (Center for Internet Security) and DISA/STIG. All this work flows downstream to SLE BCI, making it one of the industry’s most secure base images for enterprise developers or independent software vendors to leverage.
  • Available images: SUSE provides two sets of images through its registry: the base images (bci-base, bci-minimal, bci-micro, bci-init) and the language-specific ones (Go, Rust, OpenJDK, Python, Ruby and more). Check out the registry!
  • Supportability: One of the key factors that made me give BCI a try is the supportability matrix. If I am testing my application locally or for a proof of concept, I can use Alpine or a specific language/runtime image. But when it comes to creating an enterprise-grade application, sooner or later I will need to migrate to a supported image. SUSE fully supports bci-base: customers with an active subscription agreement can open support cases or request new features through the official channels. Something else that captured my attention: the BCI supportability matrix is not tied to the underlying host where the application runs, which allows more flexibility and mixed ecosystems while keeping your application covered by the SUSE support umbrella.
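As a sketch of how the base and language-specific images fit together, a multi-stage Dockerfile can compile with a BCI language image and ship on a minimal BCI runtime image. The image tags and paths below are illustrative assumptions; check the SUSE registry for current names and versions.

```dockerfile
# Multi-stage build: compile with the BCI Go image, ship on bci-micro.
# Tags shown are illustrative; consult registry.suse.com for current ones.
FROM registry.suse.com/bci/golang:1.19 AS build
WORKDIR /src
COPY . .
RUN go build -o /out/app .

# Minimal runtime image keeps the final container small and reduces
# the attack surface compared to a full base image.
FROM registry.suse.com/bci/bci-micro:15.4
COPY --from=build /out/app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]
```

Only the compiled binary ends up in the final image, while the entire toolchain stays in the (discarded) build stage.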

SUSE Manager hub

Ecosystems need to scale as required. Managing servers in a lab is not comparable to managing different production environments, where you must not only manage servers but also comply with security standards and maintain system health. When it comes to managing an environment, whether it is purely SUSE or mixed, there are some aspects to take into consideration:

  • Compliance: Through templates and the automation of new deployments, every new element or operating system follows the compliance definitions for your ecosystem and its environments.
  • Security: An agile environment requires new features to be tested and newly discovered vulnerabilities to be patched. Your ecosystem is only as secure as the weakest element you have deployed. With centralized patch, configuration and package management, you stay aware of the vulnerabilities affecting your entire ecosystem and can design your update or deployment strategy accordingly.
  • Health: As part of day 2 operations, SUSE Manager centralizes the management of business disruption risk and monitors downtime.
  • Scalability: As new elements come into the environment, it is important to manage the infrastructure in a supported, feasible and performant manner. SUSE provides scalability up to 1 million clients in a hub-based architecture: multiple SUSE Manager servers can be managed from a single hub node, aggregating clients and attaching them to a specific proxy server that is, in turn, managed by its own manager. This gives you a centralized reporting database, so you do not have to query each server to monitor a specific environment or subset of clients; everything is managed from a central hub. This architecture also adds features for complex environments or specific compliance requirements. For example, for multi-tenancy you can use different managers to isolate server configurations. Check out the SUSE Manager product page for more information.
  • Monitoring: Whether SUSE Manager is installed as a hub or standalone, each environment needs reporting that lets you see the relevant information at a single glance. Ecosystems need to be agile and adaptable: deploying new servers, decommissioning the ones you no longer need and staying aware of new elements added from various sources. SUSE Manager can deploy multiple probes that you configure to watch the most critical elements or the events most relevant to you. SUSE Manager uses Prometheus to monitor elements and Grafana for dashboards. You are not restricted to what ships with the product; you can create customized dashboards that organize and show the information in the way most relevant to you. In scenarios where monitoring comes from third-party software, SUSE Manager Monitoring can pull data from one or more external sources and use it. However you evolve your ecosystem, whether through the deployment templates or external deployers, SUSE Manager’s Service Discovery features can look for potential monitoring targets and add dynamic definitions to a living environment.

Trento

SAP environments are complex systems designed to tackle complex challenges. They consist of several pieces, including databases, high availability systems, application servers and workloads. No matter where you deploy, on premises or in the cloud, all those pieces need to integrate with each other, each with its own setup processes and configurations. This makes SAP environments hard to deploy, configure and manage. Usually, the initial deployment and configuration of SAP requires enterprise admins and third-party integrators to reference SAP Notes, a time- and resource-consuming task.

The SAP setup process consists of several manual steps and configurations to deploy and maintain the software successfully. With so many elements to configure and handle, misconfigurations and human error can lead to unexpected downtime. SUSE and SAP have been working together for the last 20 years to build a stable integration between SAP and SUSE Linux Enterprise Server for SAP Applications, creating an operating system designed and certified for running SAP systems, databases and workloads.

Deploying and maintaining SAP environments is not a “fire and forget” exercise. It requires ongoing maintenance and monitoring of the hosts, systems, databases and high availability components, which usually means finding someone who can handle an extremely specific system. This is where Trento comes to the table. Trento is a containerized solution that provides a single console to discover and manage all SAP system components (hosts, databases, including HANA, and high availability setups); it is the way to safeguard SAP ecosystems. Users are notified when a bad configuration or a missing setup step is detected on any system, and they receive recommendations that reduce time-consuming tasks, such as performing daily manual reviews of the systems or digging through the SAP documentation for a specific item. Trento is the centralized piece of the SAP infrastructure where users can see the status of the ecosystem in a single dashboard, get recommendations on the best configuration for a specific environment and ensure that the SAP ecosystem is deployed and running according to best practices. It lets you leverage SUSE’s expertise with SAP: within SUSE Linux Enterprise Server for SAP Applications, Trento is a first-class citizen that takes advantage of how well the operating system and the SAP ecosystem work together.

Conclusion

SUSE provides a stack to manage your infrastructure components, with a focus on agility without renouncing stability or security. This stack includes SUSE Manager, BCI images, Trento and Harvester. SUSE can manage multi-vendor ecosystems in which SUSE systems and other operating systems are managed, patched and analyzed, keeping your entire environment in compliance with the highest security standards. To learn more, go to Business Critical Linux, SUSE Security, SUSE Linux Enterprise Base Container Images, SUSE Manager, and/or SUSE Linux Enterprise Server.

Thanks for reading!

 

SUSECON is Back! (BYOB)

Friday, 29 April, 2022

SUSECON is back! And once again (hopefully for the final time!) it will be a virtual conference. While many of us would love to be back together in person, there are some real benefits to hosting the conference virtually. One of these benefits is that there are no artificial limits on content, such as hotel room space, break times, etc. In a virtual conference, we can offer virtually unlimited learning possibilities!

SUSECON Digital 2022 Sessions

We just announced our SUSECON Digital 2022 Session Catalog, and it will blow you away! Last year had a lot of amazing content, but this year we have really outdone ourselves. Most of the content is listed now, but more will be added in the next couple of weeks. In total, we will have more than 200 sessions and demos in this year’s digital conference! That means 20% more Linux-related sessions, 20% more Edge sessions, 20% more demos and nearly 40% more Kubernetes sessions than last year!

In the mix with our extensive list of Technology breakout sessions this year you’ll find a couple of new arrivals for SUSECON Digital:

  • Return of the Hands-On Labs – virtually!

    • Hands-On Labs are a key staple of in-person SUSECON events. We had to take a hiatus for the last couple of years due to Covid, but this year we will bring back a limited number of opportunities to have a hands-on experience with the product software! Expert instructors will walk you through the following topics in a virtual classroom, with personal instruction and attention at your pace. Attendance in these Labs is limited, so make sure you reserve your place on May 10 when the SUSECON Digital Session Registration goes live!
      • Introduction to Harvester HCI
      • NeuVector Basic Deployment
      • Reduce downtime with SUSE Linux Enterprise High Availability
  • Increased focus on business-level content.

    • While SUSECON is well known for outstanding technical content, we consistently try to provide content that helps business decision makers understand the value of our open source solutions. This year we go beyond simplified product overviews to discuss the real challenges business leaders face every day, including cost optimization, digital sustainability, Green IT, secure software supply chains and digital transformation.

And there’s more… as always!

One of our core commitments at SUSE is to always surprise and delight our customers, and at SUSECON we continue to do just that. As with our in-person events, we invite you to come for the content, then stay for the experience! Besides the amazing session content, see what else SUSECON Digital 2022 will have to offer:

  • Inspiring Keynotes

    • As always, the conference will be headlined by our executive team delivering inspiring messages and laying out our company direction for the future. Our CEO, Melissa Di Donato, will lead off the keynote series this year, followed by the General Managers of our Business-critical Linux, Enterprise Container Management and Edge Solutions businesses. Be sure to read Melissa’s blog for more information about these.
    • This year we are also excited to announce the return of the Technology Demo Keynote (remember Demopalooza?!?) with our CTO, Dr. Thomas Di Giacomo. Dr.T and his team of experts will demo new solutions that underscore both SUSE’s commitment to full stack security as well as our dedication to innovation.
  • Networking with Experts

    • Have you ever had burning questions but didn’t know who to ask, or how to reach them if you did? SUSECON Digital 2022 will feature a robust networking environment where attendees can meet and mingle with hundreds of SUSE employees. Meet the engineers working on your favorite project. Talk to our Product Managers and give them your input on new solutions. Our presenters will be available to chat during most session presentation times and will then be available for small group discussions at scheduled times throughout the event. I encourage you to take full advantage of the networking tools to get to know us better!
  • Leisure time

    • Speaking of getting to know us better – be sure to check out something else new this year: SUSEDoes. In this virtual arena, a few brave SUSE employees will invite you to find out more about unconventional topics at a personal level. Ranging from cake baking to cold water swimming to building off-the-grid cabins, this will be a fun way to get to know us better!
    • When your brain just can’t absorb any more information – take a break and have some fun! The SUSECON Digital 2022 portal will have some retro games and light-hearted entertainment features as well to provide some much-needed downtime.

There are still a lot of great reasons to attend a virtual conference. So if you only do one this year, SUSECON Digital 2022 should be that one! Register today and we’ll see you June 7-9!

ELEVEN REASONS CUSTOMERS LOVE RANCHER

Wednesday, 9 February, 2022

We interviewed a number of existing SUSE Rancher customers and asked them what they loved about it. The findings are nothing short of astonishing! We are posting them here since they may help you with your container/cloud native journey.


1.  Customers love our free training – Rancher Academy. It was mentioned by almost all participants. Kubernetes management has some intricacies, so starting with training is always a great idea. We have you covered.

2.  SUSE Rancher 2.6 came out with many new features – the new web interface is beautiful and intuitive. Customers love it for its looks and ease of use, and the point-and-click UI lowers the learning curve considerably.

3.  SUSE Rancher plugs in painlessly with your existing Kubernetes clusters, Active Directory services for authentication etc., existing contracts with cloud providers and so on. Rancher is a team player!

4. SUSE Rancher is truly heterogeneous in terms of your choice of operating system – it can be deployed on SUSE Linux Enterprise Server, Ubuntu, CentOS, Red Hat etc. Here is the full support matrix.

5.  SUSE Rancher handles Kubernetes management complexity better than other competing solutions. We spoke to a tech admin who was struggling with one of our competitors. They then tested with Rancher and things just fell into place.

6. Our per node pricing is about as simple as it gets. No vCore/vCPU metrics, no extra terms, no added confusion. Price is per managed node – that’s it.

7. Customers have loved Rancher from the beginning. After the merger with SUSE, a banking customer we spoke to felt even better about it, since Rancher became part of a larger support organization. Support, after all, is the bread and butter of any open source business.

8. What’s more, after the merger with SUSE, Rancher got a huge boost in engineering resources. Projects like Harvester became products, and new ones are being launched and proving their value to enterprises. Customers are genuinely pleased to see that we are relentlessly continuing along the path of innovation.

9. Customers chose SUSE Rancher because there already was some Rancher usage/experience within their organization. It is simply a cool product and dev teams love it!

10. Customers love the openness of Rancher and its closeness to pure Kubernetes. They shared with us that other vendors have their own Kubernetes version which, in their words, is a drawback!

11. One of our customers shared that Rancher was chosen for being an all-in-one solution: it solved a number of connection issues that had been causing downtime and made troubleshooting much easier for them.

If you are still on the fence with your choice of Kubernetes management – start with the Rancher Academy. Get a few folks trained and seek their feedback later. We’ll be there to help you along the way.