Container Management – Decoding Kubernetes Management Platforms Part 2

Friday, 12 May, 2023

Non-Hosted KMPs

This article is the second in a series covering Kubernetes Management Platforms (KMPs). In the first article, we analyzed hosted KMPs, exploring their potential benefits and customer base. This blog will examine non-hosted KMPs and the organizational customer profiles that can benefit the most from this solution.

After the first article, you may think that hosted KMPs are the way to go, but there are many things to consider before deciding. In this blog post, we want to help you choose the best option for your use case and needs, so let’s start analyzing the pros and cons of each one.

Before jumping into the pros and cons of non-hosted KMPs, let’s give some context about the market and why non-hosted KMPs are the preferred option for many of the most prominent organizations worldwide. Some of the most widely used KMPs in the market include Rancher Prime and Red Hat Advanced Cluster Management. These platforms are known for simplifying the deployment, scaling, and management of Kubernetes clusters, offering a centralized control plane for managing clusters at scale and easy integration with other technologies. Additionally, these platforms provide security features and automatic updates to ensure that clusters are highly available and secure.

However, the main reason for their popularity among organizations is their level of control and adaptability. Despite their differences, these platforms give organizations full control over their clusters, security, configuration, applications, and every other Kubernetes-related matter, and they adapt to any architecture used within the organization. This means you have both the power and the responsibility to manage the platform, with all that implies.

If you are eager to know more about the differences between these solutions and others, you can consult the Rancher by SUSE buyer’s guide.

Advantages of non-hosted KMPs:

  • Greater flexibility:
    • Non-hosted platforms offer more flexibility in terms of customization and configuration options, which can benefit complex environments.
  • Hybrid cloud or multi-cloud:
    • Non-hosted KMPs have an on-premises focus without limiting your ability to use and expand your environments with public cloud providers and managed services.
  • EDGE architectures:
    • Solutions like Rancher Prime are developed to integrate EDGE deployments into your management layer without disrupting your tools and processes.
  • More control and security:
    • In a non-hosted Kubernetes management platform, your operators control what’s happening and decide which security measures and tools are best for your applications and your specific requirements. It’s the way to go for industries that require strict compliance or are highly regulated.
  • Cost-effective:
    • Non-hosted platforms are more cost-effective than hosted platforms, especially for large-scale deployments.
  • Community:
    • Kubernetes management platforms like Rancher are open source and have built a community over the years. Open source communities have proven crucial in driving innovation and helping projects become global solutions, like Kubernetes.

Disadvantages of non-hosted KMPs:

  • More complex:
    • Non-hosted platforms may be more challenging to set up and manage than hosted platforms, which can require more technical expertise.
  • Responsibility:
    • Users are responsible for the security, configuration, maintenance, data security, and updates of the Kubernetes cluster, which can be time-consuming and require high expertise and more resources.

The user profiles

The advantages of non-hosted KMPs require, in most cases, a team of operators and SREs. Not all organizations have the resources to manage Kubernetes, even with a KMP to simplify the job and ease operations.

  • Large enterprises:
    • These organizations typically have a dedicated IT infrastructure and IT staff and may prefer to manage their KMPs in-house to maintain full control and visibility over their cloud infrastructure.
  • Companies with compliance requirements:
    • Some companies may have specific regulatory or data privacy requirements that cannot be met by hosted KMPs, making non-hosted KMPs a more suitable option.
  • DevOps teams:
    • DevOps teams highly skilled in cloud infrastructure and Kubernetes may prefer the added control and customization options offered by non-hosted KMPs.
  • Organizations with multiple cloud deployments:
    • Companies with numerous cloud deployments may find it more cost-effective to manage their KMPs in-house instead of paying for multiple hosted KMPs from different providers.

 

Conclusion

Non-hosted platforms require higher expertise, but they also offer greater flexibility in terms of use cases, such as hybrid cloud, EDGE, and on-premises deployments. They can also accommodate multi-cloud use cases without a problem. Non-hosted solutions are widely used in the market because, through automation, they provide almost all the convenience of a hosted solution while retaining the control that only non-hosted solutions offer.

Choosing the right platform is fundamental to helping your organization adapt and grow quickly to meet your business needs. If you need to scale rapidly and want the support of a highly skilled team, Rancher Prime Hosted may be the solution for you. It includes all the features of Rancher Prime but eliminates the burden of administrative tasks for your operations team.

Enterprises adopting Kubernetes and utilizing Rancher Prime have seen substantial economic benefits, which you can learn more about in Forrester’s ‘Total Economic Impact’ Report on Rancher Prime. 

Container Management – Decoding Kubernetes Management Platforms Part 1

Friday, 12 May, 2023

Hosted KMPs

This is the first article of a series of two covering the advantages and disadvantages of hosted and non-hosted Kubernetes management platforms. First, let’s introduce what a hosted Kubernetes management platform (KMP) is and provide a broader view of hosted KMPs.

A hosted Kubernetes management platform is a service provided by a third-party vendor that manages the deployment and operation of Kubernetes clusters for you or helps you to do so. It abstracts away the underlying infrastructure and provides a convenient, user-friendly interface for managing your applications and services running on the cluster. The vendor typically takes care of tasks such as cluster provisioning, scaling, monitoring, and maintenance, freeing you to focus on developing and deploying applications. While the idea may seem appealing, it’s important to carefully assess various factors before making a decision. For instance, we should evaluate the specific environment and applications we’ll be working with, consider the platform’s costs, and explore its capabilities and integrations. It’s worth noting that many hosted KMPs heavily prioritize Kubernetes services on public clouds, which may result in limited capabilities and integrations in on-premises or edge environments.

Organizations may choose hosted Kubernetes management platforms for various reasons, including simplifying the management of complex underlying infrastructure, automatic scaling to meet business needs without additional investment in infrastructure and staff, and access to expert technical support. These benefits make hosted solutions particularly well-suited for startups or growing organizations that may not have the resources to invest in infrastructure and Kubernetes professionals at a given moment.

In this blog post series, I want to provide information and perspective to help you choose the best option for your use case and needs, so let’s start analyzing the pros and cons of hosted KMPs.

Hosted KMPs have multiple advantages, such as:

  • Ease of use: Hosted platforms typically provide a user-friendly interface and are SaaS-based tools, making it easy for users to deploy and manage their Kubernetes clusters.
  • Automatic updates and upgrades: Hosted platforms handle the updates and upgrades of the Kubernetes cluster, which can save operators time and effort.
  • Expertise: Vendors that provide hosted Kubernetes management platforms have expertise in deploying and operating Kubernetes clusters and can provide support and troubleshooting assistance to their customers.
  • Scalability: Hosted platforms can automatically scale the underlying infrastructure, making it easier to accommodate growth in the number of applications and users.
  • Simplified security: Hosted platforms typically provide out-of-the-box basic security features such as built-in authentication and authorization, network segmentation, CVE scanning, and automatic backups.
  • Focus on application development: With the operational overhead of managing a Kubernetes cluster handled by a third party, you can focus on developing and deploying your applications on the cluster without worrying about infrastructure management.

 

Disadvantages of hosted Kubernetes management platforms:

  • Cost: Hosted platforms are more expensive than non-hosted platforms, especially for large-scale deployments. They are SaaS tools running on hyperscalers. While there are different licensing or subscription models available, in the end, hosted platform providers charge for both their costs and the service they provide. These costs include the cloud provider bill, which can make the overall price of these services more expensive. The pricing for hosted solutions is usually complex to understand, making cost analysis difficult.
  • Limited flexibility: Hosted platforms may have limitations in terms of customization and configuration options compared to non-hosted platforms. Additionally, they may not be well-suited for on-premises environments. As an organization’s resource and capacity needs grow, they may reach the maximum capacity offered by the hosted services provider, potentially limiting further growth.
  • Lack of Community: Hosted Kubernetes platforms and Kubernetes management platforms are usually not open source, and even when part of their code is open source, they rarely have a community behind them.
  • Dependence on the provider: Users may depend on the provider to ensure the platform is available and running smoothly, which can be an issue if the provider experiences an outage or other problems. As these platforms usually run on the public cloud, there are two sources of uncertainty: the public cloud provider’s infrastructure and the software company providing the service.
  • EDGE Architecture: As stated before, the best option depends on the user’s specific use case and circumstances. However, you may want smaller deployments (including management) to implement a more distributed architecture across different locations. In that case, hosted platforms won’t be the best option, though they can be a good fit if you plan a centralized management architecture and they have the capacity.
  • Data Security: Data and who has access to it are always a concern for any organization. When you give a third-party company access to your clusters, you retain responsibility for the data your company manages, but you also gain a new source of potential trouble. Many companies have been hacked through third-party companies providing software or services.

 

The user profiles

Once we have reviewed the pros and cons and introduced the potential benefits of this type of solution, it’s a good moment to elaborate on the different user profiles that would benefit from a hosted KMP service. Here are some of them:

  • Startups: Hosted platforms can provide a cost-effective and scalable solution for startups looking to deploy and manage applications on a Kubernetes cluster quickly.
  • Small to medium-sized businesses (SMBs): SMBs can benefit from the expertise and support a hosted platform provides while outsourcing infrastructure management.
  • Developer teams: Hosted platforms can help DevOps teams focus on developing and deploying applications rather than spending time managing the underlying infrastructure and the platform.
  • Heavy public cloud users: Most hosted KMPs focus on managed Kubernetes services like AKS, EKS or GKE. Organizations that have invested in the public cloud find that managed services fit very well with their strategy.

 

Conclusion

Hosted Kubernetes management platforms are a good option if you are starting with Kubernetes and do not need to manage a large number of clusters and applications. They can also be a good choice when the cost is not a significant concern and you want your operations team to focus on innovation instead of maintenance tasks. However, when security is a high priority, or when EDGE or on-premises deployments are the focus of your IT strategy, there may be better options than hosted services.

At SUSE, we offer Rancher Prime Hosted, which has the same features as Rancher but with a different approach. With Rancher Prime Hosted, you can easily create and manage Kubernetes clusters, streamline your deployment workflows, and monitor the performance of your applications. It also includes built-in security features to help protect your applications from potential threats. In addition, Rancher Prime Hosted provides a user-friendly interface that simplifies the management of your containerized applications and allows you to scale your infrastructure when your business demands it. Whether using a multi-cloud, EDGE, on-premises, or hybrid-cloud strategy, Rancher Prime Hosted can support your needs. By removing the burden of operating your Kubernetes management platform, your teams can focus on getting the most value out of your cloud native investment with a hosted Kubernetes management platform like Rancher Prime Hosted.

SUSE Awarded 16 Badges in G2 Spring 2023 Report

Thursday, 11 May, 2023

Spring is here, and so are the latest G2 Badges! I’m happy to share that G2 has awarded 15 badges to SUSE in its 2023 spring report, plus the overarching ‘Users Love Us’ badge (again). G2, the world’s largest and most trusted tech marketplace, recognized Rancher, SLE Desktop, SLE Real Time, SLES and SUSE Manager as High Performers and Momentum Leaders. G2 also awarded a badge to the openSUSE Tumbleweed Linux distribution.

Building off the momentum from our latest badge report, here’s a rundown of all of them, including a newly recognized APJ badge for SLED.

  • Rancher was recognized as an overall High Performer and Easiest Admin for Mid-Market companies
  • SLE Desktop was recognized as a High Performer in the following categories: Small Business, Mid-Market, Enterprise and High Performer Asia Pacific
  • SLE Real Time was recognized as an overall High Performer
  • SLES was recognized as a Momentum Leader, a Leader and a High Performer (overall and Mid-Market)
  • SUSE Manager was recognized as Best Meets Requirements
  • Tumbleweed was recognized as High Performer

Customer testimonials:

Why users love Rancher

“It was pretty simple to set up and very easy to deploy. Very different from other container solutions. When we needed technical support, they solved our problems very quickly in a very short time. It was quite successful in our automation problems.”

“Their web GUI simplifies many daunting tasks for users new to Kubernetes.”

“We have been able to introduce a modern application delivery and automate their testing and deployment. Rancher has also allowed us to offer applications to end users that otherwise would be pushed to the “cloud.””

Why users love SLE Real Time

“Although all flavors of Linux are perfect for enterprise-grade DB hosting, SUSE comes on top in terms of flexibility and ease of management. Especially if you are running SAP.”

Why users love SLES (SUSE Linux Enterprise Server)

“It is simple to deploy, configure, and maintain since it has a comprehensive set of system administration, monitoring, and automation tools.”

Why users love SUMA (SUSE Manager)

“Orchestration and management of multiple distributions in a physical datacenter. Eliminating the need to access different OS and install the patches and software updates separately.”

“With SUSE Manager, I can easily manage all operating systems with linux distribution. This leaves me a lot of time. It is very successful on the automation side. Our patch management works never stop. If we have a problem, the suse technical support team can produce a solution immediately.”

Project Snow Cow: A hat-tip to Apple’s MacOS Snow Leopard release that drove the inspiration for Stability, Reliability & Extensibility in Rancher  

Tuesday, 18 April, 2023

Kubernetes has reached an interesting point in its lifecycle where it is now the default choice to run business-critical applications across varied infrastructures, from virtual machines to bare metal and in the cloud. This, combined with the evolving need for a single pane of glass to centralize and manage infrastructure and application deployments, has required IT teams to focus on a stable, reliable and extensible platform that can scale on demand.   

At SUSE, our product direction and strategy are driven by deepening our understanding of our users’ and customers’ needs. In a post-Covid-19 world, achieving Kubernetes nirvana became the primary goal for IT teams, with stability, reliability and extensibility driving usage and purchase behavior. To understand how we could solve this problem most effectively, we got back to the basics.

With the support of our users and our customers, we kicked off ‘Project Snow Cow’ – our hat-tip to the mythological status of Snow Leopard in the Apple community as the catch-all reference to stable software from “the good old days” of Mac – and moved forward with building a prioritized delivery plan to make Rancher more stable, reliable, extensible and scalable on demand.

Project Snow Cow became the bedrock of our v2.7.x releases. Starting with Rancher v2.7.0, we fixed 132 bugs and made more than 40 product changes over the week of Thanksgiving 2022. Rancher v2.7.1 came next in January 2023 with dedicated security fixes to improve our overall security posture.

And now Rancher v2.7.2, released in April 2023, took the crown with 204 total resolved issues involving 140+ bugs and 40+ product enhancements, including production-grade GA support for Kubernetes 1.25, AKS 1.25 & GKE 1.25. To facilitate effective usage of K8s 1.25, Rancher v2.7.2 also adds a new custom resource definition (CRD): PSA configuration templates. These templates are pre-defined security configurations that you can apply to RKE and RKE2/K3s clusters out of the box. A lot of goodness is packed into one!  
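To make the idea concrete, PSA configuration ultimately takes the form of standard Pod Security Admission labels applied to namespaces. The following is a minimal sketch of such labels (the namespace name and chosen levels are hypothetical examples, not Rancher’s exact template format):

```yaml
# Illustrative namespace showing the standard Pod Security Admission (PSA)
# labels that a PSA configuration template would manage; names are examples.
apiVersion: v1
kind: Namespace
metadata:
  name: demo-app
  labels:
    pod-security.kubernetes.io/enforce: baseline   # reject pods violating "baseline"
    pod-security.kubernetes.io/enforce-version: v1.25
    pod-security.kubernetes.io/warn: restricted    # warn on pods violating "restricted"
```

Applying a template across clusters saves you from hand-labeling every namespace with these settings.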

Building on the success of ‘Project Snow Cow,’ which focused on stability and reliability, our feature teams started adding the desired levels of extensibility with the introduction of UI extensions that can layer independently on top and allow you to scale up and have a single pane of glass management view across all your cloud native tools, from container application development to container security and deployment.   

These UI extensions were first introduced in v2.7.0 and now allow for a true plug-and-play model into the Rancher platform to accommodate policy management, security and audit compliance use cases, among other things. Alongside v2.7.2, we now offer a Kubewarden extension for Rancher that makes it easy to install Kubewarden into a downstream cluster and manage Kubewarden and its OPA-based policies right from within the Rancher Cluster Explorer user interface. You can see how we build extensions in the upcoming Global Online Meetup on May 3, 2023, at 11 am EST. 

Evolving the Rancher Prime Subscription 

In line with Rancher v2.7.2, I am excited to also announce the next iteration of the Rancher Prime Subscription: Rancher Prime 1.1. The subscription allows customers to extend the benefits of the Rancher Platform with a trusted, private registry download mechanism for their entire Kubernetes management stack. This trusted delivery mechanism, combined with our SLA-backed support model, insulates production environments from upstream changes and disruptions, dramatically minimizing the impact of changes like the recent deprecation of PSPs in Kubernetes 1.25 and the move to PSA. It also allows customers to extend their SLA-backed support confidence to cover ancillary cloud native tools like the Kubewarden (OPA Policy Management) and Elemental (OS Management) UI extensions in Rancher.

Customers also now get access to the Rancher Prime Knowledgebase, a curated, contextually relevant set of self-service material through the SUSE Collective that gives you direct access to Kubernetes cheat sheets, scalability documentation and white-glove onboarding guidance. Through the Collective, you can also request Product Roadmaps and engage in peer discussions to help accelerate your cloud native journey.   

What’s next?  

If you haven’t already, we encourage you to join the party and test-drive Rancher v2.7.2. Project Snow Cow has a few more releases coming up that will ensure Rancher is performant at scale. On the Rancher Prime side, customers can expect to see more supported extensions and LTSS options in the next iteration. If you are seeking to get more value from your Rancher deployment, get in touch with our team to learn more.

Remember to stay tuned for updates via our Slack and our GitHub page.  

G2 Ranks SUSE in Top 25 German Companies

Wednesday, 8 February, 2023

I am thrilled to announce that SUSE has been recognized by G2, the world’s largest and most trusted software marketplace, as one of the Top 25 German Companies in their “Best Software Awards” for 2023.

At SUSE, we have always been dedicated to providing our customers with the best possible software solutions and services. This award by G2 is a testament to the hard work and dedication of our entire team. It is also a recognition of the trust and confidence that our customers have placed in us.

This is not the first time G2 has recognized SUSE for delivering excellence to our customers. G2 recently awarded SUSE 15 badges across its product portfolio.

 

Here’s what some of our German customers say about how SUSE’s products have impacted their business:

“To exploit the great potential for innovation in agriculture, our IT must be able to operate with agility. SUSE solutions help us deliver new digital services quickly — without compromising stability and availability.”
Jan Ove Steppat
Open Source Infrastructure Architect
CLAAS KGaA mbH 

“Rancher Prime brings all the functionality we need to deploy, manage and monitor Kubernetes clusters from a central interface, and it’s completely automated. Using OKD, on the other hand, would have required an entire ecosystem of additional solutions, adding further cost and complexity.”
Ronny Becker
Product Owner Platforms
R+V 

“In the last 12 months, we have achieved an availability of exactly 99.99878% for the SAP HANA environment with our platform and have thus been able to support our global business very reliably even in this challenging year. In terms of availability, we thus far exceed the service level agreements that an external service provider could assure us.”
David Kaiser
SAP Manager
REHAU Industries SE & Co 

“From our point of view, Rancher Prime is clearly the most advanced and comprehensive management tool for managing multiple Kubernetes clusters, especially in an environment with high security requirements.”
Frank Bayer
Senior Architect for Operating Systems and Container Services
IT System House, Federal Employment Agency (Bundesagentur für Arbeit)

 

We are grateful to the open source communities and to our employees who work tirelessly every day to make our company a success. A big thank you to our customers, who provided us with valuable feedback and reviews to help us continually improve our product solutions.

I’m excited about the future, and at SUSE we look forward to cooperating with you for many years to come. Thank you again, and here’s to another successful year.

Using Hyperconverged Infrastructure for Kubernetes

Tuesday, 7 February, 2023

Companies face multiple challenges when migrating their applications and services to the cloud, and one of them is infrastructure management.

The ideal scenario would be that all workloads could be containerized. In that case, the organization could use a managed Kubernetes service from a cloud provider like Amazon Web Services (AWS), Google Cloud or Azure to deploy and manage applications, services and storage in a cloud native environment.

Unfortunately, this scenario isn’t always possible. Some legacy applications are either very difficult or very expensive to migrate to a microservices architecture, so running them on virtual machines (VMs) is often the best solution.

Considering the current trend of adopting multicloud and hybrid environments, managing additional infrastructure just for VMs is not optimal. This is where a hyperconverged infrastructure (HCI) can help. Simply put, HCI enables organizations to quickly deploy, manage and scale their workloads by virtualizing all the components that make up the on-premises infrastructure.

That being said, not all HCI solutions are created equal. In this article, you’ll learn more about what an HCI is and then explore Harvester, an enterprise-grade HCI software that offers you unique flexibility and convenience when managing your infrastructure.

What is HCI?

Hyperconverged infrastructure (HCI) is a type of data center infrastructure that virtualizes computing, storage and networking elements in a single system through a hypervisor.

Since virtualized abstractions managed by a hypervisor replace all physical hardware components (computing, storage and networking), an HCI offers benefits, including the following:

  • Easier configuration, deployment and management of workloads.
  • Convenience since software-defined data centers (SDDCs) can also be easily deployed.
  • Greater scalability with the integration of more nodes to the HCI.
  • Tight integration of virtualized components, resulting in fewer inefficiencies and lower total cost of ownership (TCO).

However, the ease of management and the lower TCO of an HCI approach come with some drawbacks, including the following:

  • Risk of vendor lock-in when using closed-source HCI platforms.
  • Most HCI solutions force all resources to be increased in order to increase any single resource. That is, new nodes add more computing, storage and networking resources to the infrastructure.
  • You can’t combine HCI nodes from different vendors, which aggravates the risk of vendor lock-in described previously.

Now that you know what HCI is, it’s time to learn more about Harvester and how it can alleviate the limitations of HCI.

What is Harvester?

According to the Harvester website, “Harvester is a modern hyperconverged infrastructure (HCI) solution built for bare metal servers using enterprise-grade open-source technologies including Kubernetes, KubeVirt and Longhorn.” Harvester is an ideal solution for those seeking a cloud native HCI offering — one that is both cost-effective and able to place VM workloads on the edge, driving IoT integration into cloud infrastructure.

Because Harvester is open source, this automatically means you don’t have to worry about vendor lock-in. Furthermore, since it’s built on top of Kubernetes, Harvester offers incredible scalability, flexibility and reliability.

Additionally, Harvester provides a comprehensive set of features and capabilities that make it the ideal solution for deploying and managing enterprise applications and services. Among these characteristics, the following stand out:

  • Built on top of Kubernetes.
  • Full VM lifecycle management, thanks to KubeVirt.
  • Support for VM cloud-init templates.
  • VM live migration support.
  • VM backup, snapshot and restore capabilities.
  • Distributed block storage and storage tiering, thanks to Longhorn.
  • Powerful monitoring and logging since Harvester uses Grafana and Prometheus as its observability backend.
  • Seamless integration with Rancher, facilitating multicluster deployments as well as deploying and managing VMs and Kubernetes workloads from a centralized dashboard.

Harvester architectural diagram courtesy of Damaso Sanoja

Now that you know about some of Harvester’s basic features, let’s take a more in-depth look at some of the more prominent features.

How Rancher and Harvester can help with Kubernetes deployments on HCI

Managing multicluster and hybrid-cloud environments can be intimidating when you consider how complex it can be to monitor infrastructure, manage user permissions and avoid vendor lock-in, just to name a few challenges. In the following sections, you’ll see how Harvester, or more specifically, the synergy between Harvester and Rancher, can make life easier for ITOps and DevOps teams.

Straightforward installation

There is no one-size-fits-all approach to deploying an HCI solution. Some vendors sacrifice features in favor of ease of installation, while others require a complex installation process that includes setting up each HCI layer separately.

However, with Harvester, this is not the case. From the beginning, Harvester was built with ease of installation in mind without making any compromises in terms of scalability, reliability, features or manageability.

To do this, Harvester treats each node as an HCI appliance. This means that when you install Harvester on a bare-metal server, behind the scenes, what actually happens is that a simplified version of SLE Linux is installed, on top of which Kubernetes, KubeVirt, Longhorn, Multus and the other components that make up Harvester are installed and configured with minimal effort on your part. In fact, the manual installation process is no different from that of a modern Linux distribution, save for a few notable exceptions:

  • Installation mode: Early on in the installation process, you will need to choose between creating a new cluster (in which case the current node becomes the management node) or joining an existing Harvester cluster. This makes sense since you’re actually setting up a Kubernetes cluster.
  • Virtual IP: During the installation, you will also need to set an IP address from which you can access the main node of the cluster (or join other nodes to the cluster).
  • Cluster token: Finally, you should choose a cluster token that will be used to add new nodes to the cluster.

When it comes to installation media, you have two options for deploying Harvester: an ISO image that you boot from a USB drive or via virtual media, or a network (PXE) boot for automated, hands-free installations.

It should be noted that, regardless of the deployment method, you can use a Harvester configuration file to provide various settings. This makes it even easier to automate the installation process and apply the infrastructure as code (IaC) philosophy, which you’ll learn more about later on.

For your reference, the following is what a typical configuration file looks like (taken from the official documentation):

scheme_version: 1
server_url: https://cluster-VIP:443
token: TOKEN_VALUE
os:
  ssh_authorized_keys:
    - ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB...
    - github:username
  write_files:
  - encoding: ""
    content: test content
    owner: root
    path: /etc/test.txt
    permissions: '0755'
  hostname: myhost
  modules:
    - kvm
    - nvme
  sysctls:
    kernel.printk: "4 4 1 7"
    kernel.kptr_restrict: "1"
  dns_nameservers:
    - 8.8.8.8
    - 1.1.1.1
  ntp_servers:
    - 0.suse.pool.ntp.org
    - 1.suse.pool.ntp.org
  password: rancher
  environment:
    http_proxy: http://myserver
    https_proxy: http://myserver
  labels:
    topology.kubernetes.io/zone: zone1
    foo: bar
    mylabel: myvalue
install:
  mode: create
  management_interface:
    interfaces:
    - name: ens5
      hwAddr: "B8:CA:3A:6A:64:7C"
    method: dhcp
  force_efi: true
  device: /dev/vda
  silent: true
  iso_url: http://myserver/test.iso
  poweroff: true
  no_format: true
  debug: true
  tty: ttyS0
  vip: 10.10.0.19
  vip_hw_addr: 52:54:00:ec:0e:0b
  vip_mode: dhcp
  force_mbr: false
system_settings:
  auto-disk-provision-paths: ""

All in all, Harvester offers a straightforward installation on bare-metal servers. What’s more, out of the box, Harvester offers powerful capabilities, including a convenient host management dashboard (more on that later).

Host management

Nodes, or hosts, as they are called in Harvester, are the heart of any HCI infrastructure. As discussed, each host provides the computing, storage and networking resources used by the HCI cluster. In this sense, Harvester provides a modern UI that gives your team a quick overview of each host’s status, name, IP address, CPU usage, memory, disks and more. Additionally, your team can perform all kinds of routine operations intuitively just by clicking each host’s hamburger menu:

  • Node maintenance: This is handy when your team needs to take a node out of the cluster for extended maintenance or replacement. Once the node enters maintenance mode, all of its VMs are automatically distributed across the rest of the active nodes. This eliminates the need to live migrate each VM individually.
  • Cordoning a node: When you cordon a node, it’s marked as “unschedulable,” which is useful for quick tasks like reboots and OS upgrades.
  • Deleting a node: This permanently removes the node from the cluster.
  • Multi-disk management: This allows adding additional disks to a node as well as assigning storage tags. The latter is useful to allow only certain nodes or disks to be used for storing Longhorn volume data.
  • KSMtuned mode management: In addition to the features described earlier, Harvester allows your team to tune the use of kernel same-page merging (KSM) as it deploys the KSM Tuning Service ksmtuned on each node as a DaemonSet.

To learn more about how to manage the run strategy and threshold coefficient of ksmtuned, as well as more details on the other host management features described, check out this documentation.

As you can see, managing nodes through the Harvester UI is really simple. However, your ops team will spend most of their time managing VMs, which you’ll learn more about next.

VM management

Harvester was designed with great emphasis on simplifying the management of VMs’ lifecycles. Thanks to this, IT teams can save valuable time when deploying, accessing and monitoring VMs. Following are some of the main features that your team can access from the Harvester Virtual Machines page.

Harvester basic VM management features

As you would expect, the Harvester UI facilitates basic operations, such as creating a VM (including creating Windows VMs), editing VMs and accessing VMs. It’s worth noting that in addition to the usual configuration parameters, such as VM name, disks, networks, CPU and memory, Harvester introduces the concept of the namespace. As you might guess, this additional level of abstraction is made possible by Harvester running on top of Kubernetes. In practical terms, this allows your Ops team to create isolated virtual environments (for example, development and production), which facilitate resource management and security.

Furthermore, Harvester also supports injecting custom cloud-init startup scripts into a VM, which speeds up the deployment of multiple VMs.
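As a rough sketch of what that looks like, a cloud-init user data snippet such as the following could be supplied when creating a VM (the hostname, user name and packages here are purely illustrative, not Harvester defaults):

```yaml
#cloud-config
# Illustrative cloud-init user data for a new VM; all names are examples.
hostname: web-01
users:
  - name: devops
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB...
packages:
  - nginx
runcmd:
  # Start the service on first boot
  - systemctl enable --now nginx
```

Because the same snippet can be injected into any number of VMs, a fleet of identically configured machines can be brought up in minutes.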

Harvester advanced VM management features

Today, any virtualization tool allows the basic management of VMs. In that sense, where enterprise-grade platforms like Harvester stand out from the rest is in their advanced features. These include performing VM backup, snapshot and restore; doing VM live migration; adding hot-plug volumes to running VMs; cloning VMs with volume data; and overcommitting CPU, memory and storage.

While all these features are important, Harvester’s ability to ensure the high availability (HA) of VMs is hands down the most crucial to any modern data center. This feature is available on Harvester clusters with three or more nodes and allows your team to live migrate VMs from one node to another when necessary.

Furthermore, not only is live VM migration useful for maintaining HA, but it is also a handy feature when performing node maintenance when a hardware failure occurs or your team detects a performance drop on one or more nodes. Regarding the latter, performance monitoring, Harvester provides out-of-the-box integration with Grafana and Prometheus.

Built-in monitoring

Prometheus and Grafana are two of the most popular open source observability tools today. They’re highly customizable, powerful and easy to use, making them ideal for monitoring key VMs and host metrics.

Grafana is a data-focused visualization tool that makes it easy to monitor your VM’s performance and health. It can provide near real-time performance metrics, such as CPU and memory usage and disk I/O. It also offers comprehensive dashboards and alerts that are highly configurable. This allows you to customize Grafana to your specific needs and create useful visualizations that can help you quickly identify issues.

Meanwhile, Prometheus is a monitoring and alerting toolkit designed for large-scale, distributed systems. It collects time series data from your VMs and hosts, allowing you to quickly and accurately track different performance metrics. Prometheus also provides alerts when certain conditions have been met, such as when a VM is running low on memory or disk space.
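For instance, assuming the Prometheus Operator CRDs that ship with Rancher monitoring are available, a low disk space alert could be sketched roughly like this (the rule names, threshold and namespace are assumptions for illustration):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: vm-disk-alerts
  namespace: cattle-monitoring-system   # assumed monitoring namespace
spec:
  groups:
    - name: vm.rules
      rules:
        - alert: FilesystemAlmostFull
          # Fire when less than 10% of a filesystem has been free for 10 minutes
          expr: (node_filesystem_avail_bytes / node_filesystem_size_bytes) < 0.10
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: "Filesystem on {{ $labels.instance }} is over 90% full"
```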

All in all, using Grafana and Prometheus together provides your team with comprehensive observability capabilities by means of detailed graphs and dashboards that can help them identify why an issue is occurring. This can help you take corrective action more quickly and reduce the impact of any potential issues.

Infrastructure as Code

Infrastructure as code (IaC) has become increasingly important in many organizations because it allows for the automation of IT infrastructure, making it easier to manage and scale. By defining IT infrastructure as code, organizations can manage their VMs, disks and networks more efficiently while also making sure that their infrastructure remains in compliance with the organization’s policies.

With Harvester, users can define their VMs, disks and networks in YAML format, making it easier to manage and version control virtual infrastructure. Furthermore, thanks to the Harvester Terraform provider, DevOps teams can also deploy entire HCI clusters from scratch using IaC best practices.
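Since Harvester builds on KubeVirt, a VM ends up being an ordinary Kubernetes resource. As a sketch (the name, sizes and the referenced PVC are invented for this example), a declarative VM definition might look like:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm          # example name
  namespace: default
spec:
  running: true
  template:
    spec:
      domain:
        cpu:
          cores: 2
        resources:
          requests:
            memory: 4Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          persistentVolumeClaim:
            claimName: demo-vm-rootdisk   # assumed pre-provisioned PVC
```

Because the definition is plain YAML, it can be stored in Git and reviewed, versioned and rolled back like any other code.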

This allows users to define the infrastructure declaratively, allowing operations teams to work with developer tools and methodologies, helping them become more agile and effective. In turn, this saves time and cost and also enables DevOps teams to deploy new environments or make changes to existing ones more efficiently.

Finally, since Harvester enforces IaC principles, organizations can make sure that their infrastructure remains compliant with security, regulatory and governance policies.

Rancher integration

Up to this point, you’ve learned about key aspects of Harvester, such as its ease of installation, its intuitive UI, its powerful built-in monitoring capabilities and its convenient automation, thanks to IaC support. However, the feature that takes Harvester to the next level is its integration with Rancher, the leading container management tool.

Harvester integration with Rancher allows DevOps teams to manage VMs and Kubernetes workloads from a single control panel. Simply put, Rancher integration enables your organization to combine conventional and Cloud native infrastructure use cases, making it easier to deploy and manage multi-cloud and hybrid environments.

Furthermore, Harvester’s tight integration with Rancher allows your organization to streamline user and system management, allowing for more efficient infrastructure operations. Additionally, user access control can be centralized in order to ensure that the system and its components are protected.

Rancher integration also allows for faster deployment times for applications and services, as well as more efficient monitoring and logging of system activities from a single control plane. This allows DevOps teams to quickly identify and address issues related to system performance, as well as easily detect any security risks.

Overall, Harvester integration with Rancher provides DevOps teams with a comprehensive, centralized system for managing both VMs and containerized workloads. In addition, this approach provides teams with improved convenience, observability and security, making it an ideal solution for DevOps teams looking to optimize their infrastructure operations.

Conclusion

One of the biggest challenges facing companies today is migrating their applications and services to the cloud. In this article, you’ve learned how you can manage Kubernetes and VM-based environments with the aid of Harvester and Rancher, thus facilitating your application modernization journey from monolithic apps to microservices.

Both Rancher and Harvester are part of the rich SUSE ecosystem that helps your business deploy multi-cloud and hybrid-cloud environments easily across any infrastructure. Harvester is an open source HCI solution. Try it for free today.


How To Simplify Your Kubernetes Adoption Using Rancher

Wednesday, 1 February, 2023

Kubernetes has firmly established itself as the leading choice for container orchestration thanks to its robust ecosystem and flexibility, allowing users to scale their workloads easily. However, the complexity of Kubernetes can make it challenging to set up and may pose a significant barrier for organizations looking to adopt cloud native technology and containers as part of their modernization efforts.

In this blog post, we’ll look at how Rancher can help infrastructure operators simplify the process of adopting Kubernetes into their ecosystem. We’ll explore how Rancher provides a range of features and tools that make it easier to deploy, manage, and secure containerized applications and Kubernetes clusters.

Let’s start analyzing the main challenges for Kubernetes adoption and how Rancher tackles them.   

Challenge #1: Kubernetes is Complex 

One of the main challenges of adopting Kubernetes is the learning curve required to understand the orchestration platform and its implementation. Kubernetes has a large and complex codebase with many moving parts and a rapidly growing ecosystem. This can make it difficult for organizations to get up and running confidently, since it is hard to determine which resources they actually need. Kubernetes talent also remains difficult to source, so organizations that prefer in-house, dedicated support may struggle to fill roles and scale at the speed the business demands.

Utilizing a Kubernetes Management Platform (KMP) like Rancher can help alleviate some of these resourcing roadblocks by simplifying Kubernetes management and operations. Rancher provides a user-friendly web interface for managing Kubernetes clusters and applications, which can be used by developers and operations teams alike, and encourages domain specialists to upskill and transfer knowledge across teams.

Rancher also includes graphical cluster management, application templates, and one-click deployments, making it easier to deploy and manage applications hosted on Kubernetes and encouraging teams to utilize templatized processes to avoid over-complicating deployments. Rancher also has several built-in tools and integrations, such as monitoring, logging, and alerting, which can help teams get insights into their Kubernetes deployments faster.   

Challenge #2: Lack of Integration with Existing Tools and Workflows   

Another challenge of adopting Kubernetes is integrating an organization’s existing tools and workflows. Many teams already have various tools and processes to manage their applications and infrastructure, and introducing a new platform like Kubernetes can often disrupt these established processes.  

However, choosing a KMP like Rancher, which out-of-the-box integrates with multiple tools and platforms, from cloud providers to container registries, and continuous integration/continuous deployment (CI/CD) tools, enables organizations to adopt and implement Kubernetes alongside their existing stack. 

Challenge #3: Security is Now Top of Mind   

As more enterprises transition their stack to cloud native, security across Kubernetes environments has become top of mind for them. Kubernetes includes built-in basic security features, such as role-based access control (RBAC) and Pod Security Admission. However, learning to configure these features in addition to your stack’s existing security levels can be a maze at best and potentially expose weaknesses in your environment. Given Kubernetes’ dynamic nature, identifying, analyzing, and mitigating security incidents without the proper tools is a big challenge. 

Rancher includes several protective features and integrations with security solutions to help organizations fortify their Kubernetes clusters and deployments. These include out-of-the-box support for RBAC, Authentication Proxy, CIS and vulnerability scanning, amongst others.

Rancher also provides integration with security-focused solutions, including SUSE NeuVector and Kubewarden.

SUSE NeuVector provides comprehensive container security throughout the entire lifecycle, from development to production. It scans container registries and images and uses behavior-based zero-trust security policies and advanced Deep Packet Inspection technology to prevent attacks from spreading or reaching the applications at the network level. This enables teams to implement zero-trust practices across their container environments easily.

Kubewarden is a CNCF incubating project that delivers policy as code. Leveraging the power of WebAssembly (WASM), Kubewarden allows writing security policies in your language of choice (Rego, Rust, Go, Swift and more) and enforces them not only at deployment time but also for mutations and runtime modifications.
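To give a flavor of the policy-as-code approach, a ClusterAdmissionPolicy that rejects privileged pods might be declared along these lines (the module URL and version tag are illustrative; check the Kubewarden policy hub for current artifacts):

```yaml
apiVersion: policies.kubewarden.io/v1
kind: ClusterAdmissionPolicy
metadata:
  name: no-privileged-pods
spec:
  # WebAssembly module implementing the policy; the tag is an example
  module: registry://ghcr.io/kubewarden/policies/pod-privileged:v0.2.5
  rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      resources: ["pods"]
      operations: ["CREATE", "UPDATE"]
  mutating: false
```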

Both solutions help users build a better-fortified Kubernetes environment whilst minimizing the operational overhead needed to maintain a productive environment.   

Rancher’s out-of-the-box monitoring and auditing capabilities for Kubernetes clusters and applications help organizations get real-time data to identify and address any potential security issues quickly, reducing operational downtime and preventing substantial impact on an organization’s bottom line.  

In addition to all these products and features, it is crucial to properly secure and harden your environments. Rancher has undergone the DISA certification process for its multi-cluster management solution and the RKE2 Kubernetes distribution, making them the only solutions currently certified in this space. As a result, you can use the DISA-approved STIG guides for Rancher and RKE2 to implement a customized hardening approach for your specific use case.

Challenge #4: Management and Automation   

As the number of clusters and containerized applications grows, the complexity of automating, configuring, and securing the environments skyrockets. As more organizations choose to modernize with Kubernetes, the reliance on automation, compliance and security of deployments is becoming more critical. Teams need solutions that can help their organization scale safely.

Rancher includes Fleet, a continuous delivery tool that helps your organization implement GitOps practices. The benefits of using GitOps in Kubernetes include the following:  

  1. Version Control: Git provides a way to track and manage changes to the cluster’s desired state, making it easy to roll back or revert changes.  
  2. Encourages Collaboration: Git makes it easy for multiple team members to work on the same cluster configuration and review and approve changes before deployment.  
  3. Utilize Automation: By using Git as the source of truth, changes can be automatically propagated to the cluster, reducing the risk of human error.  
  4. Improve Visibility: Git provides an auditable history of changes to the cluster, making it easy to see who made changes, when, and why.   
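In Fleet, the unit of GitOps is a GitRepo resource pointing at the repository that holds your manifests. A minimal sketch, assuming a hypothetical repository and cluster label, looks like this:

```yaml
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: sample-app
  namespace: fleet-default
spec:
  repo: https://github.com/example/sample-app   # hypothetical repository
  branch: main
  paths:
    - manifests        # directory containing the Kubernetes manifests
  targets:
    - clusterSelector:
        matchLabels:
          env: dev     # deploy only to clusters labeled env=dev
```

Once applied, Fleet keeps the targeted clusters in sync with whatever is merged to the main branch.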

Conclusion: 

Adopting Kubernetes doesn’t have to be hard. Reliable solutions like Rancher can help teams better manage their clusters and applications on Kubernetes. KMPs reduce the barrier to entry for adopting Kubernetes and help ease the transition from traditional IT to cloud native architectures.

For Kubernetes users who need additional support and services, there is Rancher Prime – the complete product and support subscription package of Rancher. Enterprises adopting Kubernetes and utilizing Rancher Prime have seen substantial economic benefits, which you can learn more about in Forrester’s ‘Total Economic Impact’ Report on Rancher Prime. 


Challenges and Solutions with Cloud Native Persistent Storage

Wednesday, 18 January, 2023

Persistent storage is essential for any account-driven website. However, in Kubernetes, most resources are ephemeral and unsuitable for keeping data long-term. Regular storage is tied to the container and has a finite life span. Persistent storage has to be separately provisioned and managed.

Making permanent storage work with temporary resources brings challenges that you need to solve if you want to get the most out of your Kubernetes deployments.

In this article, you’ll learn about what’s involved in setting up persistent storage in a cloud native environment. You’ll also see how tools like Longhorn and Rancher can enhance your capabilities, letting you take full control of your resources.

Persistent storage in Kubernetes: challenges and solutions

Kubernetes has become the go-to solution for containers, allowing you to easily deploy scalable sites with a high degree of fault tolerance. In addition, there are many tools to help enhance Kubernetes, including Longhorn and Rancher.

Longhorn is a lightweight block storage system that you can use to provide persistent storage to Kubernetes clusters. Rancher is a container management tool that helps you with the challenges that come with running multiple containers.

You can use Rancher and Longhorn together with Kubernetes to take advantage of both of their feature sets. This gives you reliable persistent storage and better container management tools.

How Kubernetes handles persistent storage

In Kubernetes, files only last as long as the container, and they’re lost if the container crashes. That’s a problem when you need to store data long-term. You can’t afford to lose everything when the container disappears.

Persistent Volumes are the solution to these issues. You can provision them separately from the containers they use and then attach them to containers using a PersistentVolumeClaim, which allows applications to access the storage:

Diagram showing the relationship between container application, its own storage and persistent storage courtesy of James Konik
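A minimal sketch of this pattern, with invented names, is a PersistentVolumeClaim plus a pod that mounts it:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /var/lib/app   # data written here outlives the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data
```

If the pod crashes and is recreated, the replacement mounts the same volume and finds the data intact.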

However, managing how these volumes interact with containers and setting them up to provide the combination of security, performance and scalability you need bring further issues.

Next, you’ll take a look at those issues and how you can solve them.

Security

With storage, security is always a key concern. It’s especially important with persistent storage, which is used for user data and other critical information. You need to make sure the data is only available to those that need to see it and that there’s no other way to access it.

There are a few things you can do to improve security:

Use RBAC to limit access to storage resources

Role-based access control (RBAC) lets you manage permissions easily, granting users permissions according to their role. With it, you can specify exactly who can access storage resources.

Kubernetes provides RBAC management and allows you to assign both Roles, which apply to a specific namespace, and ClusterRoles, which are not namespaced and can be used to give permissions on a cluster-wide basis.

Tools like Rancher also include RBAC support. Rancher’s system is built on top of Kubernetes RBAC, which it uses for enforcement.

With RBAC in place, not only can you control who accesses what, but you can change it easily, too. That’s particularly useful for enterprise software managers who need to manage hundreds of accounts at once. RBAC allows them to control access to your storage layer, defining what is allowed and changing those rules quickly on a role-by-role level.
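As a sketch (the namespace, user and permissions are invented for illustration), a Role granting read-only access to storage claims, together with its binding, looks like this:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pvc-reader
  namespace: team-a
rules:
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch"]   # read-only access
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pvc-reader-binding
  namespace: team-a
subjects:
  - kind: User
    name: jane                        # example user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pvc-reader
  apiGroup: rbac.authorization.k8s.io
```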

Use namespaces

Namespaces in Kubernetes allow you to create groups of resources. You can then set up different access control rules and apply them independently to each namespace, giving you extra security.

If you have multiple teams, it’s a good way to stop them from getting in each other’s way. It also keeps each team’s resources private to its own namespace.

Namespaces do provide a layer of basic security, compartmentalizing teams and preventing users from accessing what you don’t want them to.

However, from a security perspective, namespaces do have limitations. For example, they don’t actually isolate all the shared resources that the namespaced resources use. That means if an attacker gets escalated privileges, they can access resources on other namespaces served by the same node.
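Creating a namespace is a one-line manifest, and pairing it with a ResourceQuota caps how much storage its team can claim (the numbers below are arbitrary examples):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-storage
  namespace: team-a
spec:
  hard:
    persistentvolumeclaims: "10"   # at most 10 PVCs in this namespace
    requests.storage: 50Gi         # at most 50Gi of total requested storage
```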

Scalability and performance

Delivering your content quickly provides a better user experience, and maintaining that quality as your traffic increases and decreases adds an additional challenge. There are several techniques to help your apps cope:

Use storage classes for added control

Kubernetes storage classes let you define how your storage is used, and there are various settings you can change. For example, you can choose to make classes expandable. That way, you can get more space if you run out without having to provision a new volume.

Longhorn has its own storage classes to help you control when Persistent Volumes and their containers are created and matched.

Storage classes let you define the relationship between your storage and other resources, and they are an essential way to control your architecture.

Dynamically provision new persistent storage for workloads

It isn’t always clear how much storage a resource will need. Provisioning dynamically, based on that need, allows you to limit what you create to what is required.

You can have your storage wait until a container that uses it is created before it’s provisioned, which avoids the wasted overhead of creating storage that is never used.

Using Rancher with Longhorn’s storage classes lets you provision storage dynamically without having to rely on cloud services.
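Both behaviors described above, expansion and provisioning on first use, are single fields on a StorageClass. A sketch using the Longhorn provisioner (the parameter values are illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-on-demand
provisioner: driver.longhorn.io           # Longhorn CSI driver
allowVolumeExpansion: true                # class is expandable
volumeBindingMode: WaitForFirstConsumer   # provision only when a pod needs it
parameters:
  numberOfReplicas: "3"                   # illustrative replica count
```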

Optimize storage based on use

Persistent storage volumes have various properties. Their size is an obvious one, but latency and CPU resources also matter.

When creating persistent storage, make sure that the parameters used reflect what you need to use it for. A service that needs to respond quickly, such as a login service, can be optimized for speed.

Using different storage classes for different purposes is easier with a provider like Longhorn. Longhorn storage classes can specify different disk technologies, such as NVMe, SSD or rotational drives, and these can be linked to specific nodes, allowing you to closely match storage to your requirements.
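Longhorn’s tag-based selectors make this concrete. Assuming you have tagged your SSD-backed disks with an "ssd" tag (the tag names below are assumptions), a speed-optimized class could look like:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "2"
  diskSelector: "ssd"      # only disks tagged "ssd" hold replicas
  nodeSelector: "storage"  # only nodes tagged "storage" are eligible
```

A latency-sensitive service such as a login service would then request volumes from fast-ssd, while bulk workloads use a class backed by rotational disks.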

Stability

Building a stable product means getting the infrastructure right and aggressively looking for errors. That way, your product quality will be as high as possible.

Maximize availability

Outages cost time and money, so avoiding them is an obvious goal.

When they do occur, planning for them is essential. With cloud storage, you can automate reprovisioning of failed volumes to minimize user disruption.

To prevent data loss, you must ensure dynamically provisioned volumes aren’t automatically deleted when a resource is done with them. Kubernetes provides in-use protection on volumes, so they aren’t immediately lost.

You can control the behavior of storage volumes by setting the reclaim policy. Picking the retain option lets you manually choose what to do with the data and prevents it from being deleted automatically.
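In a StorageClass manifest, that choice is the reclaimPolicy field; the sketch below keeps released volumes (and their data) around for manual handling:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-retain
provisioner: driver.longhorn.io
reclaimPolicy: Retain   # released volumes are kept, not deleted
```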

Monitor metrics

As well as challenges, working with cloud volumes also offers advantages. Cloud providers typically include many strong options for monitoring volumes, facilitating a high level of observability.

Rancher makes it easier to monitor Kubernetes clusters. Its built-in Grafana dashboards let you view data for all your resources.

Rancher collects memory and CPU data by default, and you can break this data down by workload using PromQL queries.

For example, if you wanted to know how much data was being read to a disk by a workload, you’d use the following PromQL from Rancher’s documentation:


sum(rate(container_fs_reads_bytes_total{namespace="$namespace",pod_name=~"$podName",container_name!=""}[5m])) by (pod_name)

Longhorn also offers a detailed selection of metrics for monitoring nodes, volumes, and instances. You can also check on the resource usage of your manager, along with the size and status of backups.

The observability these metrics provide has several uses. You should log any detected errors in as much detail as possible, enabling you to identify and solve problems. You should also monitor performance, perhaps setting alerts if it drops below any particular threshold. The same goes for error logging, which can help you spot issues and resolve them before they become too serious.
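For example, an alert on Longhorn’s volume usage metrics might be sketched as follows (the metric names follow Longhorn’s published metrics, but the threshold and namespace are assumptions):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: longhorn-volume-alerts
  namespace: longhorn-system   # assumed Longhorn namespace
spec:
  groups:
    - name: longhorn.rules
      rules:
        - alert: LonghornVolumeUsageHigh
          # Fire when a volume's actual size exceeds 85% of its capacity
          expr: (longhorn_volume_actual_size_bytes / longhorn_volume_capacity_bytes) > 0.85
          for: 5m
          labels:
            severity: warning
```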

Get the infrastructure right for large products

For enterprise-grade products that require fast, reliable distributed block storage, Longhorn is ideal. It provides a highly resilient storage infrastructure. It has features like application-aware snapshots and backups as well as remote replication, meaning you can protect your data at scale.

Longhorn lets you provision distributed block storage on the major cloud providers, with built-in support for Azure, Google Cloud Platform (GCP) and Amazon Web Services (AWS).

Longhorn also lets you spread your storage over multiple availability zones (AZs). However, keep in mind that there can be latency issues if volume replicas reside in different regions.

Conclusion

Managing persistent storage is a key challenge when setting up Kubernetes applications. Because Persistent Volumes work differently from regular containers, you need to think carefully about how they interact; how you set things up impacts your application performance, security and scalability.

With the right software, these issues become much easier to handle. With help from tools like Longhorn and Rancher, you can solve many of the problems discussed here. That way, your applications benefit from Kubernetes while letting you keep a permanent data store your other containers can interact with.

SUSE is an open source software company responsible for leading cloud solutions like Rancher and Longhorn. Longhorn is an easy, fast and reliable Cloud native distributed storage platform. Rancher lets you manage your Kubernetes clusters to ensure consistency and security. Together, these and other products are perfect for delivering business-critical solutions.

SUSE Receives 15 Badges in the Winter G2 Report Across its Product Portfolio

Thursday, 12 January, 2023

I’m pleased to share that G2, the world’s largest and most trusted tech marketplace, has recognized our solutions in its 2023 Winter Report. We received a total of 15 badges across our business units for Rancher, SUSE Linux Enterprise Server (SLES), SLE Desktop and SLE Real Time – including the Users Love Us badge for all products – as well as three badges for the openSUSE community with Leap and Tumbleweed.

We recently celebrated 30 years of service to our customers, partners and the open source communities and it’s wonderful to keep the celebrations going with this recognition by our peers. Receiving 15 badges this quarter reinforces the depth and breadth of our strong product portfolio as well as the dedication that our team provides for our customers.

As the use of hybrid, multi-cloud and cloud native infrastructures grows, many of our customers are looking to containers. For their business success, they look to Rancher, which has been the leading multi-cluster management platform for nearly a decade and has one of the strongest adoption rates in the industry.

G2 awarded Rancher four badges, including High Performer badges in the Container Management and the Small Business Container Management categories and Most Implementable and Easiest Admin in the Small Business Container Management category.

Tacking on to the latest badges that SLES received in October, SLES received Momentum Leader and Leader in the Server Virtualization category once again; Momentum Leader and High Performer in the Infrastructure as a Service category; and two badges in the Mid-Market Server Virtualization category for Best Support and High Performer.

In addition, SLE Desktop was again awarded two High Performer badges in the Mid-Market Operating System and Operating System categories. SLE Real Time also received a High Performer badge in the Operating System category. The openSUSE community distribution Leap was recognized as the Fastest Implementation in the Operating System category. It’s clear that our Business Critical Linux solutions continue to be the cornerstone of success for many of our customers and that we continue to provide excellent service for the open source community.

Here’s what some of our customers said in their reviews on G2:

“[Rancher is a] complete package for Kubernetes.”

“RBAC simple management is one of the best upsides in Rancher, attaching Rancher post creation process to manage RBAC, ingress and [getting] a simple UI overview of what is going on.”

“[Rancher is the] best tool for managing multiple production clusters of Kubernetes orchestration. Easy to deploy services, scale and monitor services on multiple clusters.”

“SLES the best [for] SAP environments. The support is fast and terrific.”

Providing our customers with solutions that they know they can rely on and trust is critical to the work we do every day. These badges are a direct response to customer feedback and product reviews and underscore our ability to serve the needs of our customers for all of our solutions. I’m looking forward to seeing what new badges our team will be awarded in the future as a result of their excellent work.