Why SAP Cloud Adoption Needs a Supported and Secure Enterprise Kubernetes Infrastructure On-Premises to Run Integration Processes

Monday, 6 November, 2023

When you run SAP on-premises, nobody doubts that you need a dedicated, certified Linux environment with enterprise support to run this business-critical application. But what happens when you need to run the new containerized SAP Integration Suite component on-premises? In this blog, we explain why an enterprise-supported Kubernetes such as SUSE’s Rancher Prime is needed and why you should consider a standalone Kubernetes environment.

 

In the ever-evolving landscape of SAP Cloud adoption, two fundamental considerations emerge: the role of a secure Kubernetes infrastructure and the necessity of running integration components on-premises. SUSE’s Rancher Kubernetes, included in Rancher Prime, has been selected by SAP as one of the first supported enterprise Kubernetes platforms for running integration components on-premises. As it previously did with SAP Data Intelligence, SAP has once again chosen SUSE as a trusted Kubernetes provider to run SAP containerized software. This choice prompts us to delve deeper into the criticality of integration layers and the platforms that support them.

The SAP Edge Integration Cell: Keeping Your Data and Applications Secure

At the heart of this discussion is the “SAP Integration Suite,” with a pivotal on-premises component known as the “SAP Edge Integration Cell.” This integration software serves as the linchpin that seamlessly connects your on-premises applications and data with the evolving SAP Cloud, all within the secure confines of your data center. By avoiding direct connections between the Cloud and on-premises applications, it safeguards data confidentiality and ensures the security of your on-premises operations, as explained in the blog “Keeping sensitive data on-premise with Edge Integration Cell”. This synergy aligns perfectly with SAP’s strategic shift towards cloud-based solutions, empowering your business to embrace the future of SAP while maintaining the integrity of your on-premises operations.

The Key Question: How Critical is SAP Integration for Your Business?

As you contemplate the significance of SAP Cloud integration for your business, consider this: what happens if the connection between SAP Cloud and your billing system or factory is disrupted? The answer is clear: if your SAP integration layer is down, your business stops, so downtime is not an option. A related question follows: what happens if your Kubernetes environment is compromised and a hacker breaches it? Security, then, is not optional either. These questions underscore the importance of choosing an enterprise-supported software platform, just as with any other critical SAP software. Such a platform is essential for quickly resolving incidents and ensuring uninterrupted business operations. For on-premises environments, only SUSE’s RKE2 (Rancher Kubernetes Engine 2), supported in Rancher Prime, offers today the enterprise-grade support needed. An enterprise-supported and secure platform on which to run this integration layer becomes paramount to ensure the reliability of your system. SUSE, with its extensive experience, is well-equipped to support this critical SAP environment. Rancher Prime, in turn, provides the necessary infrastructure, much as SUSE Linux Enterprise Server for SAP Applications has supported SAP HANA for years.

SAP Edge Integration Cell running on Rancher by SUSE

Why Use Your Own Kubernetes Environment in Your SAP Project?

As you contemplate the multifaceted world of SAP Cloud integration, another pivotal consideration emerges: the significance of deploying your own Rancher Kubernetes environment within your SAP department.

SAP Integration in a Containerized World

Like many other modern applications, the new Edge Integration Cell for SAP’s Integration Suite is designed for, and operates on, a Kubernetes-based container management environment. Nevertheless, relying on your existing corporate Kubernetes environment for SAP integration may not always be the best solution, because a general-purpose Kubernetes environment may not be architected for SAP’s requirements in terms of availability, life cycle and security. Moreover, not all Kubernetes platforms are certified to host the SAP integration components, so you need a solution tested and trusted for business-critical SAP solutions like the new Edge Integration Cell.

Therefore, there will be challenges that need to be addressed before adopting a Kubernetes environment for your SAP integration layer, some of the most relevant will be:

Avoid Delays in the SAP Project and Control the SAP Environment.

A company’s corporate Kubernetes environment typically falls under the purview of a separate IT department, distinct from the SAP department and the partners in charge of SAP projects. This separation can lead to delays in project execution because of the interaction and coordination required between departments. A dedicated Kubernetes environment may help you avoid those delays and enhance control over the SAP integration project.

The Criticality of the Integration Layer

The SAP Integration Suite plays a central role in connecting critical SAP and enterprise non-SAP applications that handle confidential data. Many corporate Kubernetes environments within organizations are multitenant setups, overseeing thousands of containers, each subject to its security measures and Service Level Agreements (SLAs). Unfortunately, this complex setup often falls short of meeting the criticality and security requirements of the SAP integration layer. And changes in a corporate environment are not easy to manage.

Near to Your Applications Environments, Anywhere Including Edge

Another compelling reason to consider the “SAP Edge Integration Cell” and its supporting infrastructure is its proximity to your connected applications. This proximity might entail various locations, including edge environments such as factories, which require a Kubernetes environment flexible enough to fit anywhere Kubernetes is needed. Rancher is an ideal choice for this approach, as its architecture is more compact than most other enterprise Kubernetes solutions, allowing it to cover a wider set of scenarios and topologies, from the edge to enterprise-grade datacenters.

In multi-site scenarios such as edge environments, the addition of Rancher Management Server becomes invaluable for managing multiple locations seamlessly and centrally. Additionally, SUSE’s Harvester virtualization solution empowers your SAP project by enabling the deployment of virtualization appliances in edge locations to run Rancher Kubernetes clusters. Harvester-backed virtualization appliances can efficiently cover any virtualization needs and allocate the required virtualized resources with the flexibility your SAP projects demand.

SUSE’s Rancher Prime: Streamlining Management

To overcome these challenges, deploying your own dedicated, simple Kubernetes environment within your SAP department for SAP projects becomes an appealing solution. This dedicated environment operates like a specialized appliance designed to efficiently run the necessary SAP components.
In this complex landscape, SUSE’s Rancher solutions provide the necessary tools and support to expedite and simplify SAP environment deployment, management, and security. This approach ensures that you can keep pace with your SAP projects, meet the critical SLAs required for SAP operations, ensure business continuity, and most importantly, operate within an SAP-certified platform. This alignment with industry standards and best practices secures the efficiency and security of your SAP environment.

Conclusion:

As we navigate the intricate world of SAP Cloud integration, one truth becomes evident: the integration of your on-premises processes with the cloud is not a matter of choice but a necessity for uninterrupted business operations. The secure and reliable platform you choose to run these integration layers serves as the foundation for your success.
With SUSE’s Rancher Prime offering, you have the experience, infrastructure, and tools you need to safeguard your critical SAP environment and confidently embrace the future of SAP. Your strategic decisions in this ever-evolving landscape pave the way for efficient SAP management practices, unwavering security, and compliance with industry standards, positioning your organization for a successful journey into the SAP Cloud era.

Business and operational security in the context of Artificial Intelligence

Tuesday, 17 October, 2023

This is a guest blog by Udo Würtz, Fujitsu Fellow, CDO and Business Development Director of Fujitsu’s European Platform Business. Read more about Udo, including how to contact him, below.

 

Deploying AI systems in an organization requires significant investments in technology, talent, and training. There is a fear that the expected ROI (return on investment) will not materialize, especially if the deployment does not meet business needs.

This is where a reference architecture like the AI Test Drive comes into play. It allows companies to test the feasibility and return on investment of AI solutions in a controlled environment before committing to significant investments. AI Test Drive thus addresses not only technical risks, but also commercial risks, enabling companies to make informed decisions.

The field of data science is rapidly evolving, and many professionals are looking for a reliable platform to effectively evaluate AI applications. However, such architectures must support a range of cutting-edge technologies. So let’s examine each technology component and its importance in this context.

  1. Platform and Cluster Management with SUSE Rancher:

Kubernetes has become the gold standard for container orchestration. Rancher, a comprehensive Kubernetes management tool, supports the operations and scalability of AI models. It allows the management of Kubernetes clusters across multiple cloud environments, simplifying the roll-out and management of AI applications.

  2. Hyper-convergence with Harvester:

In contemporary AI environments, which are usually cloud native environments, the capacity for hyper-convergence—integrating computation, storage, and networking into one solution—is invaluable. Harvester offers this capability, leading to enhanced efficiency and scalability for AI applications.

  3. Computational Power through Intel:

Intel technologies, notably the Intel® Xeon® Scalable processors, are fine-tuned for AI applications. Additional features like Intel® Deep Learning Boost accelerate deep learning tasks. In particular, Gen 4 has dedicated AI accelerators on board, which sets this type of processor apart from its predecessors and delivers remarkable performance. In a project involving vehicle detection, Gen 3 achieved an inference rate of 30 frames/s, which was already very good; Gen 4 reached over 5,000(!) frames/s thanks to the accelerators inside the chip.

  4. Storage Solutions with NetApp:

Data is the core of AI. NetApp provides efficient storage solutions specially designed to store and process massive datasets, which is crucial for AI projects.

  5. Parallel Processing with NVIDIA:

The parallel processing capability that NVIDIA GPUs bring to the table is invaluable in AI applications where large datasets must be processed simultaneously. 

  6. Network Infrastructure by Juniper:

The backbone of every AI platform is its networking. Juniper delivers advanced network solutions ensuring efficient, bottleneck-free data traffic flow. This is vital in AI settings where there are demands for low latency and high bandwidth.

Now You Can Evaluate Your AI Projects Practically & Technically:

The Fujitsu AI Test Drive amalgamates tried-and-true technologies into a cohesive platform, granting data scientists the ability to evaluate their AI projects both pragmatically and technically. By accessing such deep technological resources, users can pinpoint the tools and infrastructure that best align with their unique AI challenges.

Share your idea, and we will share knowledge and resources.

What is your vision for a business model that fully exploits the possibilities of innovative IT concepts? Do you already have a vision that you are implementing concretely? Or do you still lack the necessary resources on the way from the idea to realization, for example technical expertise, budget and sufficient test capacities?

We’re pleased to introduce the Fujitsu Lighthouse Initiative, a special program designed to foster prototyping and drive technological endeavors, ensuring businesses harness the full potential of emerging technologies. The initiative isn’t just about gaining support for your digital innovation and prototyping projects; it’s a pathway to joint project realization. Selected projects can benefit from a project support pool of €100,000, used in a way tailored to each project’s unique requirements. Together, we will leverage Fujitsu’s resources, expertise, and vast ecosystem to turn visionary ideas into tangible outcomes.

Register today for the Fujitsu Lighthouse Initiative.

 

Related infographic

About the Author:

Udo Würtz is Chief Data Officer (CDO) of the Fujitsu European Platform Business. In this function he advises customers at C level (CIO, CTO, CEO, CDO, CFO) on strategies, technologies and new trends in the IT business. Before joining Fujitsu, he worked for 17 years as CIO for a large retail company and later for a cloud service provider, where he was responsible for the implementation of secure and highly available IT architectures. Subsequently, he was appointed by the Federal Ministry of Economics and Technology as an expert for the Trusted Cloud Program of the Federal Government in Berlin. Udo Würtz is intensively involved in Fujitsu’s activities in the fields of artificial intelligence (AI), container technologies and the Internet of Things (IoT) and, as a Fujitsu Fellow, gives lectures and live demos on these topics. He also runs his own YouTube channel on the subject of AI.

Getting Started with Cluster Autoscaling in Kubernetes

Tuesday, 12 September, 2023

Autoscaling the resources and services in your Kubernetes cluster is essential if your system is going to meet variable workloads. You can’t rely on manual scaling to help the cluster handle unexpected load changes.

While cluster autoscaling certainly allows for faster and more efficient deployment, the practice also reduces resource waste and helps decrease overall costs. When you can scale up or down quickly, your applications can be optimized for different workloads, making them more reliable. And a reliable system is always cheaper in the long run.

This tutorial introduces you to Kubernetes’s Cluster Autoscaler. You’ll learn how it differs from other types of autoscaling in Kubernetes, as well as how to implement Cluster Autoscaler using Rancher.

The differences between different types of Kubernetes autoscaling

By monitoring utilization and reacting to changes, Kubernetes autoscaling helps ensure that your applications and services are always running at their best. You can accomplish autoscaling through the use of a Vertical Pod Autoscaler (VPA), a Horizontal Pod Autoscaler (HPA) or a Cluster Autoscaler (CA).

VPA is a Kubernetes resource responsible for managing individual pods’ resource requests. It’s used to automatically adjust the resource requests and limits of individual pods, such as CPU and memory, to optimize resource utilization. VPA helps organizations maintain the performance of individual applications by scaling up or down based on usage patterns.
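
As an illustrative sketch only, a minimal VerticalPodAutoscaler object looks like the following. It assumes the VPA custom resource definitions are already installed in the cluster, and the Deployment name sample-app is a placeholder:

---
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: sample-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sample-app
  updatePolicy:
    updateMode: "Auto"  # "Off" would only produce recommendations without applying them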

HPA is a Kubernetes resource that automatically scales the number of replicas of a particular application or service. HPA monitors the usage of the application or service and will scale the number of replicas up or down based on the usage levels. This helps organizations maintain the performance of their applications and services without the need for manual intervention.
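
For comparison, here is a sketch of a HorizontalPodAutoscaler that keeps average CPU utilization around 70 percent; the Deployment name sample-app and the replica limits are placeholders:

---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: sample-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sample-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70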

CA is a Kubernetes resource used to automatically scale the number of nodes in the cluster based on the usage levels. This helps organizations maintain the performance of the cluster and optimize resource utilization.

The main difference between VPA, HPA and CA is that VPA and HPA are responsible for managing the resource requests of individual pods and services, while CA is responsible for managing the overall resources of the cluster. VPA and HPA are used to scale up or down based on the usage patterns of individual applications or services, while CA is used to scale the number of nodes in the cluster to maintain the performance of the overall cluster.

Now that you understand how CA differs from VPA and HPA, you’re ready to begin implementing cluster autoscaling in Kubernetes.

Prerequisites

There are many ways to demonstrate how to implement CA. For instance, you could install Kubernetes on your local machine and set up everything manually using the kubectl command-line tool. Or you could set up a user with sufficient permissions on Amazon Web Services (AWS), Google Cloud Platform (GCP) or Azure to play with Kubernetes using your favorite managed cluster provider. Both options are valid; however, they involve a lot of configuration steps that can distract from the main topic: the Kubernetes Cluster Autoscaler.

An easier solution is one that allows the tutorial to focus on understanding the inner workings of CA and not on time-consuming platform configurations, which is what you’ll be learning about here. This solution involves only two requirements: a Linode account and Rancher.

For this tutorial, you’ll need a running Rancher Manager server. Rancher is perfect for demonstrating how CA works, as it allows you to deploy and manage Kubernetes clusters on any provider conveniently from its powerful UI. Moreover, you can deploy it using several providers, including these popular options:

If you are curious about a more advanced implementation, we suggest reading the Rancher documentation, which describes how to install Cluster Autoscaler on Rancher using Amazon Elastic Compute Cloud (Amazon EC2) Auto Scaling groups. However, please note that implementing CA is very similar on different platforms, as all solutions leverage the Kubernetes Cluster API for their purposes, something that will be addressed in more detail later.

What is Cluster API, and how does Kubernetes CA leverage it

Cluster API is an open source project for building and managing Kubernetes clusters. It provides a declarative API to define the desired state of Kubernetes clusters. In other words, Cluster API can be used to extend the Kubernetes API to manage clusters across various cloud providers, bare metal installations and virtual machines.

In comparison, Kubernetes CA leverages Cluster API to enable the automatic scaling of Kubernetes clusters in response to changing application demands. CA detects when the capacity of a cluster is insufficient to accommodate the current workload and then requests additional nodes from the cloud provider. CA then provisions the new nodes using Cluster API and adds them to the cluster. In this way, the CA ensures that the cluster has the capacity needed to serve its applications.
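
To give a feel for this declarative style, here is a heavily simplified sketch of a Cluster API object. The names are placeholders, and the Docker infrastructure provider (CAPD) is used purely for illustration; real cloud providers substitute their own infrastructure kinds:

---
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: demo-cluster
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["10.244.0.0/16"]
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: demo-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: DockerCluster  # illustrative only; each provider ships its own infrastructure kind
    name: demo-infra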

Because Rancher supports CA, and both RKE2 and K3s work with Cluster API, their combination offers an ideal solution for automated Kubernetes lifecycle management from a central dashboard. This is also true for any other cloud provider that offers support for Cluster API.

Implementing CA in Kubernetes

Now that you know what Cluster API and CA are, it’s time to get down to business. Your first task will be to deploy a new Kubernetes cluster using Rancher.

Deploying a new Kubernetes cluster using Rancher

Begin by navigating to your Rancher installation. Once logged in, click on the hamburger menu located at the top left and select Cluster Management:

Rancher's main dashboard

On the next screen, click on Drivers:

**Cluster Management | Drivers**

Rancher uses cluster drivers to create Kubernetes clusters in hosted cloud providers.

For Linode LKE, you need to activate the specific driver, which is simple. Just select the driver and press the Activate button. Once the driver is downloaded and installed, the status will change to Active, and you can click on Clusters in the side menu:

Activate LKE driver

With the cluster driver enabled, it’s time to create a new Kubernetes deployment by selecting Clusters | Create:

**Clusters | Create**

Then select Linode LKE from the list of hosted Kubernetes providers:

Create LKE cluster

Next, you’ll need to enter some basic information, including a name for the cluster and the personal access token used to authenticate with the Linode API. When you’ve finished, click Proceed to Cluster Configuration to continue:

**Add Cluster** screen

If the connection to the Linode API is successful, you’ll be directed to the next screen, where you will need to choose a region, Kubernetes version and, optionally, a tag for the new cluster. Once you’re ready, press Proceed to Node pool selection:

Cluster configuration

This is the final screen before creating the LKE cluster. In it, you decide how many node pools you want to create. While there are no limitations on the number of node pools you can create, the implementation of Cluster Autoscaler for Linode does impose two restrictions, which are listed here:

  1. Each LKE Node Pool must host a single node (called a Linode).
  2. Each Linode must be of the same type (e.g., 2GB, 4GB and 6GB).

For this tutorial, you will use two node pools, one hosting 2GB RAM nodes and one hosting 4GB RAM nodes. Configuring node pools is easy; select the type from the drop-down list and the desired number of nodes, and then click the Add Node Pool button. Once your configuration looks like the following image, press Create:

Node pool selection

You’ll be taken back to the Clusters screen, where you should wait for the new cluster to be provisioned. Behind the scenes, Rancher is leveraging the Cluster API to configure the LKE cluster according to your requirements:

Cluster provisioning

Once the cluster status shows as active, you can review the new cluster details by clicking the Explore button on the right:

Explore new cluster

At this point, you’ve deployed an LKE cluster using Rancher. In the next section, you’ll learn how to implement CA on it.

Setting up CA

If you’re new to Kubernetes, implementing CA can seem complex. For instance, the Cluster Autoscaler on AWS documentation talks about how to set permissions using Identity and Access Management (IAM) policies, OpenID Connect (OIDC) Federated Authentication and AWS security credentials. Meanwhile, the Cluster Autoscaler on Azure documentation focuses on how to implement CA in Azure Kubernetes Service (AKS), Autoscale VMAS instances and Autoscale VMSS instances, for which you will also need to spend time setting up the correct credentials for your user.

The objective of this tutorial is to leave aside the specifics associated with the authentication and authorization mechanisms of each cloud provider and focus on what really matters: How to implement CA in Kubernetes. To this end, you should focus your attention on these three key points:

  1. CA introduces the concept of node groups, also called autoscaling groups by some vendors. You can think of these groups as the node pools managed by CA. This concept is important, as CA gives you the flexibility to set node groups that scale automatically according to your instructions while excluding other node groups from automatic scaling so they can be scaled manually.
  2. CA adds or removes Kubernetes nodes following certain parameters that you configure. These parameters include the previously mentioned node groups, their minimum size, maximum size and more.
  3. CA runs as a Kubernetes deployment, in which secrets, services, namespaces, roles and role bindings are defined.

The supported versions of CA and Kubernetes may vary from one vendor to another. The way node groups are identified (using flags, labels, environmental variables, etc.) and the permissions needed for the deployment to run may also vary. However, at the end of the day, all implementations revolve around the principles listed previously: auto-scaling node groups, CA configuration parameters and CA deployment.

With that said, let’s get back to business. After pressing the Explore button, you should be directed to the Cluster Dashboard. For now, you’re only interested in looking at the nodes and the cluster’s capacity.

The next steps consist of defining node groups and carrying out the corresponding CA deployment. Start with the simplest task and, following best practices, create a namespace in which to deploy the components that make up CA. To do this, go to Projects/Namespaces:

Create a new namespace

On the next screen, you can manage Rancher Projects and namespaces. Under Projects: System, click Create Namespace to create a new namespace as part of the System project:

**Cluster Dashboard | Namespaces**

Give the namespace a name and select Create. Once the namespace is created, click on the icon shown here (i.e., import YAML):

Import YAML

One of the many advantages of Rancher is that it allows you to perform countless tasks from the UI. One such task is to import local YAML files or create them on the fly and deploy them to your Kubernetes cluster.

To take advantage of this useful feature, copy the following code. Remember to replace <PERSONAL_ACCESS_TOKEN> with the Linode token that you created for the tutorial:

---
apiVersion: v1
kind: Secret
metadata:
  name: cluster-autoscaler-cloud-config
  namespace: autoscaler
type: Opaque
stringData:
  cloud-config: |-
    [global]
    linode-token=<PERSONAL_ACCESS_TOKEN>
    lke-cluster-id=88612
    defaut-min-size-per-linode-type=1
    defaut-max-size-per-linode-type=5
    do-not-import-pool-id=88541

    [nodegroup "g6-standard-1"]
    min-size=1
    max-size=4

    [nodegroup "g6-standard-2"]
    min-size=1
    max-size=2

Next, select the namespace you just created, paste the code in Rancher and select Import:

Paste YAML

A pop-up window will appear, confirming that the resource has been created. Press Close to continue:

Confirmation

The secret you just created is how Linode implements the node group configuration that CA will use. This configuration defines several parameters, including the following:

  • linode-token: This is the same personal access token that you used to register LKE in Rancher.
  • lke-cluster-id: This is the unique identifier of the LKE cluster that you created with Rancher. You can get this value from the Linode console or by running the command curl -H "Authorization: Bearer $TOKEN" https://api.linode.com/v4/lke/clusters, where $TOKEN is your Linode personal access token (see the sketch after this list). In the output, the first field, id, is the identifier of the cluster.
  • defaut-min-size-per-linode-type: This is a global parameter that defines the minimum number of nodes in each node group.
  • defaut-max-size-per-linode-type: This is also a global parameter that sets a limit to the number of nodes that Cluster Autoscaler can add to each node group.
  • do-not-import-pool-id: On Linode, each node pool has a unique ID. This parameter is used to exclude specific node pools so that CA does not scale them.
  • nodegroup (min-size and max-size): This parameter sets the minimum and maximum limits for each node group. The CA for Linode implementation forces each node group to use the same node type. To get a list of available node types, you can run the command curl https://api.linode.com/v4/linode/types.
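
If you prefer the command line, the identifiers referenced above can be pulled straight from the Linode API. The following sketch assumes jq is installed and that TOKEN holds your personal access token; the cluster ID 88612 is the one used in the example configuration above:

# List LKE clusters with their IDs (the id field is the value for lke-cluster-id)
curl -s -H "Authorization: Bearer $TOKEN" https://api.linode.com/v4/lke/clusters | jq '.data[] | {id, label}'

# List the node pools of a cluster (pool IDs can be used for do-not-import-pool-id)
curl -s -H "Authorization: Bearer $TOKEN" https://api.linode.com/v4/lke/clusters/88612/pools | jq '.data[] | {id, type, count}'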

This tutorial defines two node groups, one using g6-standard-1 linodes (2GB nodes) and one using g6-standard-2 linodes (4GB nodes). For the first group, CA can increase the number of nodes up to a maximum of four, while for the second group, CA can only increase the number of nodes to two.

With the node group configuration ready, you can deploy CA to the respective namespace using Rancher. Paste the following code into Rancher (click on the import YAML icon as before):

---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
  name: cluster-autoscaler
  namespace: autoscaler
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-autoscaler
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
rules:
  - apiGroups: [""]
    resources: ["events", "endpoints"]
    verbs: ["create", "patch"]
  - apiGroups: [""]
    resources: ["pods/eviction"]
    verbs: ["create"]
  - apiGroups: [""]
    resources: ["pods/status"]
    verbs: ["update"]
  - apiGroups: [""]
    resources: ["endpoints"]
    resourceNames: ["cluster-autoscaler"]
    verbs: ["get", "update"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["watch", "list", "get", "update"]
  - apiGroups: [""]
    resources:
      - "namespaces"
      - "pods"
      - "services"
      - "replicationcontrollers"
      - "persistentvolumeclaims"
      - "persistentvolumes"
    verbs: ["watch", "list", "get"]
  - apiGroups: ["extensions"]
    resources: ["replicasets", "daemonsets"]
    verbs: ["watch", "list", "get"]
  - apiGroups: ["policy"]
    resources: ["poddisruptionbudgets"]
    verbs: ["watch", "list"]
  - apiGroups: ["apps"]
    resources: ["statefulsets", "replicasets", "daemonsets"]
    verbs: ["watch", "list", "get"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses", "csinodes"]
    verbs: ["watch", "list", "get"]
  - apiGroups: ["batch", "extensions"]
    resources: ["jobs"]
    verbs: ["get", "list", "watch", "patch"]
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["create"]
  - apiGroups: ["coordination.k8s.io"]
    resourceNames: ["cluster-autoscaler"]
    resources: ["leases"]
    verbs: ["get", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cluster-autoscaler
  namespace: autoscaler
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["create","list","watch"]
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["cluster-autoscaler-status", "cluster-autoscaler-priority-expander"]
    verbs: ["delete", "get", "update", "watch"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-autoscaler
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-autoscaler
subjects:
  - kind: ServiceAccount
    name: cluster-autoscaler
    namespace: autoscaler

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cluster-autoscaler
  namespace: autoscaler
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cluster-autoscaler
subjects:
  - kind: ServiceAccount
    name: cluster-autoscaler
    namespace: autoscaler

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: autoscaler
  labels:
    app: cluster-autoscaler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/port: '8085'
    spec:
      serviceAccountName: cluster-autoscaler
      containers:
        - image: k8s.gcr.io/autoscaling/cluster-autoscaler-amd64:v1.26.1
          name: cluster-autoscaler
          resources:
            limits:
              cpu: 100m
              memory: 300Mi
            requests:
              cpu: 100m
              memory: 300Mi
          command:
            - ./cluster-autoscaler
            - --v=2
            - --cloud-provider=linode
            - --cloud-config=/config/cloud-config
          volumeMounts:
            - name: ssl-certs
              mountPath: /etc/ssl/certs/ca-certificates.crt
              readOnly: true
            - name: cloud-config
              mountPath: /config
              readOnly: true
          imagePullPolicy: "Always"
      volumes:
        - name: ssl-certs
          hostPath:
            path: "/etc/ssl/certs/ca-certificates.crt"
        - name: cloud-config
          secret:
            secretName: cluster-autoscaler-cloud-config

In this code, you’re defining some labels; the namespace where you will deploy the CA; and the respective ClusterRole, Role, ClusterRoleBinding, RoleBinding, ServiceAccount and Cluster Autoscaler.

The difference between cloud providers lies near the end of the file, in the command section. Several flags are specified here. The most relevant include the following:

  • --v, which sets the log verbosity level.
  • --cloud-provider; in this case, Linode.
  • --cloud-config, which points to a file that uses the secret you just created in the previous step.

Again, a cloud provider that uses a minimal number of flags was intentionally chosen. For a complete list of available flags and options, read the Cluster Autoscaler FAQ.
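
If you later want to tune the scaling behavior, additional flags can be appended to the same command section of the deployment. The flags below are standard Cluster Autoscaler options, but the values are arbitrary examples rather than recommendations:

          command:
            - ./cluster-autoscaler
            - --v=2
            - --cloud-provider=linode
            - --cloud-config=/config/cloud-config
            # optional tuning flags (illustrative values)
            - --scale-down-unneeded-time=10m    # how long a node must be unneeded before removal
            - --scale-down-delay-after-add=10m  # cool-down after a scale-up before scale-down resumes
            - --expander=least-waste            # strategy for choosing which node group to scale up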

Once you apply the deployment, a pop-up window will appear, listing the resources created:

CA deployment

You’ve just implemented CA on Kubernetes, and now, it’s time to test it.

CA in action

To check to see if CA works as expected, deploy the following dummy workload in the default namespace using Rancher:

Sample workload

Here’s a review of the code:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox-workload
  labels:
    app: busybox
spec:
  replicas: 600
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ['sh', '-c', 'echo Demo Workload ; sleep 600']

As you can see, it’s a simple workload that generates 600 busybox replicas.
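
If you prefer to watch from the command line instead of the dashboard, a quick sketch (assuming your kubeconfig points at the downstream LKE cluster) looks like this:

# Count busybox pods that are still Pending and waiting for capacity
kubectl get pods -n default -l app=busybox --field-selector=status.phase=Pending --no-headers | wc -l

# Watch nodes appear as Cluster Autoscaler scales the node groups up
kubectl get nodes --watch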

If you navigate to the Cluster Dashboard, you’ll notice that the initial capacity of the LKE cluster is 220 pods. This means CA should kick in and add nodes to cope with this demand:

**Cluster Dashboard**

If you now click on Nodes (side menu), you will see how the node-creation process unfolds:

Nodes

New nodes

If you wait a couple of minutes and go back to the Cluster Dashboard, you’ll notice that CA did its job because, now, the cluster is serving all 600 replicas:

Cluster at capacity

This proves that scaling up works. But you also need to test to see scaling down. Go to Workload (side menu) and click on the hamburger menu corresponding to busybox-workload. From the drop-down list, select Delete:

Deleting workload

A pop-up window will appear; confirm that you want to delete the deployment to continue:

Deleting workload pop-up

By deleting the deployment, the expected result is that CA starts removing nodes. Check this by going back to Nodes:

Scaling down

Keep in mind that by default, CA will start removing nodes after 10 minutes. Meanwhile, you will see taints on the Nodes screen indicating the nodes that are candidates for deletion. For more information about this behavior and how to modify it, read “Does CA respect GracefulTermination in scale-down?” in the Cluster Autoscaler FAQ.
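
A quick way to spot those deletion candidates from the command line is to list each node together with its taints; nodes selected for removal carry the upstream ToBeDeletedByClusterAutoscaler taint until they are drained. This is a sketch that assumes kubectl access to the cluster:

# Print every node with its taints; CA's deletion candidates show ToBeDeletedByClusterAutoscaler
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints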

After 10 minutes have elapsed, the LKE cluster will return to its original state with one 2GB node and one 4GB node:

Downscaling completed

Optionally, you can confirm the status of the cluster by returning to the Cluster Dashboard:

**Cluster Dashboard**

And now you have verified that Cluster Autoscaler can scale nodes up and down as required.

CA, Rancher and managed Kubernetes services

At this point, the power of Cluster Autoscaler is clear. It lets you automatically adjust the number of nodes in your cluster based on demand, minimizing the need for manual intervention.

Since Rancher fully supports the Kubernetes Cluster Autoscaler API, you can leverage this feature on major service providers like AKS, Google Kubernetes Engine (GKE) and Amazon Elastic Kubernetes Service (EKS). Let’s look at one more example to illustrate this point.

Create a new workload like the one shown here:

New workload

It’s the same code used previously, only in this case, with 1,000 busybox replicas instead of 600. After a few minutes, the cluster capacity will be exceeded. This is because the configuration you set specifies a maximum of four 2GB nodes (first node group) and two 4GB nodes (second node group); that is, six nodes in total:

**Cluster Dashboard**

Head over to the Linode Dashboard and manually add a new node pool:

**Linode Dashboard**

Add new node

The new node will be displayed along with the rest on Rancher’s Nodes screen:

**Nodes**

Better yet, since the new node has the same capacity as the first node group (2GB), it will be deleted by CA once the workload is reduced.

In other words, regardless of the underlying infrastructure, Rancher relies on CA to create or destroy nodes dynamically in response to load.

Overall, Rancher’s ability to support Cluster Autoscaler out of the box is good news; it reaffirms Rancher as the ideal Kubernetes multi-cluster management tool regardless of which cloud provider your organization uses. Add to that Rancher’s seamless integration with other tools and technologies like Longhorn and Harvester, and the result will be a convenient centralized dashboard to manage your entire hyper-converged infrastructure.

Conclusion

This tutorial introduced you to Kubernetes Cluster Autoscaler and how it differs from other types of autoscaling, such as Vertical Pod Autoscaler (VPA) and Horizontal Pod Autoscaler (HPA). In addition, you learned how to implement CA on Kubernetes and how it can scale up and down your cluster size.

Finally, you also got a brief glimpse of Rancher’s potential to manage Kubernetes clusters from the convenience of its intuitive UI. Rancher is part of the rich ecosystem of SUSE, the leading open Kubernetes management platform. To learn more about other solutions developed by SUSE, such as Edge 2.0 or NeuVector, visit their website.

Driving Innovation with Extensible Interoperability in Rancher’s Spring ’23 Release

Tuesday, 18 April, 2023

We’re on a mission to build the industry’s most open, secure and interoperable Kubernetes management platform. Over the past few months, the team has made significant advancements across the entire Rancher portfolio that we are excited to share today with our community and customers.

Introducing the Rancher UI Extensions Framework

In November last year, we announced the release of v2.7.0, where we took our first steps into developing Rancher into a truly interoperable, extensible platform with the introduction of our extensions catalog. With the release of Rancher v2.7.2 today, we’re proud to announce that we’ve expanded our extensibility capabilities by releasing our ‘User Interface (UI) Extensions Framework.’

Users can now customize their Kubernetes experience. They can build on their Rancher platform and manage their clusters using custom, peer-developed or Rancher-built extensions.

Image 1: Installation of Kubewarden Extension

Supporting this, we’ve also now made three Rancher-built extensions available, including:

  1. Kubewarden Extension delivers a comprehensive method to manage the lifecycle of Kubernetes policies across Rancher clusters.
  2. Elemental Extension provides operators with the ability to manage their cloud native OS and Edge devices from within the Rancher console.
  3. Harvester Extension helps operators load their virtualized Harvester cluster into Rancher to aid in management.

Building a rich community remains our priority as we develop Rancher. That’s why as part of this release, the new UI Extensions Framework has also been implemented into the SUSE One Partner Program. Technology partners are key to our thriving ecosystem and by validating and supporting extensions, we’re eager to see the innovation from our partner community.

You can learn more about the new Rancher UI Extension Framework in this blog by Neil MacDougall, Director of UI/UX. Make sure to join him and our community team at our next Global Online Meetup as he deep dives into the UI Framework.

Adding more value to Rancher Prime and helping customers elevate their performance

In December 2022, we announced the launch of Rancher Prime – our new enterprise subscription, where we introduced the option to deploy Rancher from a trusted private registry. Today we announce the new components we’ve added to the subscription to help our customers improve their time-to-value across their teams, including:

  1. SLA-backed enterprise support for Policy and OS Management via the Kubewarden and Elemental Extensions
  2. The launch of the Rancher Prime Knowledgebase in our SUSE Collective customer loyalty program

 

Image 2: Rancher Prime Knowledgebase in SUSE Collective

We’ve added these elements to help our customers improve their resiliency and performance across their enterprise-grade container workloads. Read this blog from Utsav Sanghani, Director of Product – Enterprise Container Management, for a detailed overview of the upgrades we made in Rancher and Rancher Prime and the value it derives for customers.

Empowering a community of Kubernetes innovators

Image 3: Rancher Academy Courses

Our community team also launched our free online education platform, Rancher Academy, at KubeCon Europe 2023. The cloud native community can now access expert-led courses on demand, covering important topics including fundamentals in containers, Kubernetes and Rancher to help accelerate their Kubernetes journey. Check out this blog from Tom Callway, Vice President of Product Marketing, as he shares in detail the launch and future outlook for Rancher Academy.

Achieving milestones across our open source projects

Finally, we’ve also made milestones across our innovation projects, including these updates:

Rancher Desktop 1.8 now includes configurable application behaviors such as auto-start at login. All application settings are configurable from the command line and experimental settings give access to Apple’s Virtualization framework on macOS Ventura.

Kubewarden 1.6.0 now allows DevSecOps teams to write Policy as Code using both traditional programming languages and domain-specific languages.

Opni 0.9 has several observability feature updates as it approaches its planned GA later in the year.

S3GW (S3 Gateway) 0.14.0 has new features such as lifecycle management, object locking and holds and UI improvements.

Epinio 1.7 now has a UI with Dex integration, the identity service that uses OpenID Connect to drive authentication for other apps, and SUSE’s S3GW.

Keep up to date with all our product release cadences on GitHub, or connect with your peers and us via Slack.

Utilizing the New Rancher UI Extensions Framework

Tuesday, 18 April, 2023

What are Rancher Extensions?

The Rancher by SUSE team wants to accelerate the pace of development and open Rancher to partners, customers, developers and users, enabling them to build on top of it to extend its functionality and further integrate it into their environments.

With Rancher Extensions, you can develop your own extensions to the Rancher UI, completely independently of Rancher. The source code lives in your own repository. You develop, build and release it whenever you like. You can add your extension to Rancher at any time. Extensions are versioned by you and have their own independent release cycle.

Think Chrome browser extensions – but for Rancher.

Could this be the best innovation in Rancher for some time? It might just be!

What can you do?

Rancher defines several extension points which developers can take advantage of to provide extra functionality, for example:

  1. Add new UI screens to the top-level side navigation
  2. Add new UI screens to the navigation of the Cluster Explorer UI
  3. Add new UI for Kubernetes CRDs
  4. Extend existing views in Rancher Manager by adding panels, tabs and actions
  5. Customize the landing page

We’ll be adding more extension points over time.

Our goal is to enable deep integrations into Rancher. We know how important graphical user interfaces are to users, especially in helping users of all abilities to understand and manage complex technologies like Kubernetes. Being able to bring together data from different systems and visualize them within a single-pane-of-glass experience is extremely powerful for users.

With extensions, if you have a system that provides monitoring metadata, for example, we want to enable you to see that data in the context of where it is relevant – if you’re looking at a Kubernetes Pod, for example, we want you to be able to augment Rancher’s Pod view so you can see that data right alongside the Pod information.

Extensions, Extensions, Extensions

The Rancher by SUSE team is using the Extensions mechanism to develop and deliver our own additions to Rancher, initially with extensions for Kubewarden and Elemental. We also use Extensions for our Harvester integration. Over time we’ll be adding more.

Over the coming releases, we will be refactoring the Rancher UI itself to use the extensions mechanism. We plan to build out and use the very same extension mechanism and APIs internally as externally developed extensions will use. This will help ensure those extension points deliver on the needs of developers and are fully supported and maintained.

Elemental

Elemental is a software stack enabling centralized, full cloud native OS management with Kubernetes.

With the Elemental extension for Rancher, we add UI capability for Elemental right into the Rancher user interface.

Image 1: Elemental Extension

The Elemental extension is an example of an extension that provides a new top-level “product” experience. It adds a new “OS Management” item to the top-level navigation menu, which leads to a new experience for managing Elemental. It uses the Rancher component library to ensure a consistent look and feel. Learn more here or visit the Elemental Extension GitHub repository.

Kubewarden

Kubewarden is a policy engine for Kubernetes. Its mission is to simplify the adoption of policy-as-code.

The Kubewarden extension for Rancher makes it easy to install Kubewarden into a downstream cluster and manage Kubewarden and its policies right from within the Rancher Cluster Explorer user interface.

Image 2: Kubewarden Extension

The Kubewarden extension is a great example of an extension that adds to the Cluster Explorer experience. It also showcases how extensions can assist in simplifying the installation of additional components that are required to enable a feature in a cluster.

Unlike Helm charts, extensions have no parameters at install time; there is nothing to configure, because we want extensions to be super simple to install. Learn more here or visit the Kubewarden Extension GitHub repository.

Harvester

The Harvester integration into Rancher also leverages the UI Extensions framework. This enables the management of Harvester clusters right from within Rancher.

Because of the de-coupling that UI Extensions enables, the Harvester UI can be updated completely independently of Rancher. Learn more here or visit the Harvester UI GitHub repository.

Under the Hood

The diagram below shows a high-level overview of Rancher Extensions.

A lot of effort has gone into refactoring Rancher to modularize it and establish the API for extensions.

The end goal is to slim down the core of the Rancher Manager UI into a “Shell” into which Extensions are loaded. The functionality that is included by default will be split out into several “Built-in” extensions.

Image 3: Architecture for Rancher UI 

We are also in the process of splitting out and documenting our component library, so others can leverage it in their extensions to ensure a common look and feel.

A Rancher Extension is a packaged Vue library that provides functionality to extend and enhance the Rancher Manager UI. You’ll need to have some familiarity with Vue to build an extension, but anyone familiar with React or Angular should find it easy to get started.

Once an extension has been authored, it can be packaged up into a simple Helm chart, added to a Helm repository, and then easily installed into a running Rancher system.

Extensions are installed and managed from the new “Extensions” UI available from the Rancher slide-in menu:

Image 4: Rancher Extensions Menu

Rancher shows all the installed Extensions and the available extensions from the Helm repositories added. Extensions can also be upgraded, rolled back and uninstalled. Developers can also enable the ability to load extensions easily during development without the need to build and publish the extension to a Helm repository.

Developers

To help developers get started with Rancher extensions, we’ve published developer documentation, and we’re building out a set of example extensions.

Over time, we will be enhancing and simplifying some of our APIs, extending the documentation, and adding even more examples to help developers get started.

We have also set up a Slack channel exclusively for extensions – check out the #extensions channel on the Rancher User’s Slack.

Join the Party

We’re only just getting started with Rancher Extensions. We introduced them in Rancher 2.7. You can use them today and get started developing your own!

We want to encourage as many users, developers, customers and partners out there as possible to take a look and give them a spin. Join me on the 3rd of May at 11 am US EST where I’ll be going through the Extension Framework live as part of the Rancher Global Online Meetup – you can sign up here.

As we look ahead, we’ll be augmenting the Rancher extensions repository with a partner repository and a community repository to make it easier to discover extensions. Reach out to us via Slack if you have an extension you’d like included in these repositories.

Fasten your seat belts. This is just the beginning. We can’t wait to see what others do with Rancher Extensions!


A Guide to Using Rancher for Multicloud Deployments

Wednesday, 8 March, 2023

Rancher is a Kubernetes management platform that creates a consistent environment for multicloud container operation. It solves several of the challenges around multicloud Kubernetes deployments, such as poor visibility into where workloads are running and the lack of centralized authentication and access control.

Multicloud improves resiliency by letting you distribute applications across providers. It can also be a competitive advantage since you’re able to utilize the benefits of every provider. Moreover, multicloud reduces vendor lock-in because you’re less dependent on any one platform.

However, these advantages are often negated by the difficulty of managing multicloud Kubernetes. Deploying multiple clusters, using them as one unit and monitoring the entire fleet are daunting tasks for team leaders. You need a way to consistently implement authorization, observability and security best practices.

In this article, you’ll learn how Rancher resolves these problems so you can confidently use Kubernetes in multicloud scenarios.

Rancher and multicloud

One of the benefits of Rancher is that it provides a consistent experience when you’re using several environments. You can manage the full lifecycle of all your clusters, whether they’re in the cloud or on-premises. It also abstracts away the differences between Kubernetes implementations, creating a single surface for monitoring your deployments.

Diagram showing how Rancher works with all Kubernetes distributions and cloud platforms courtesy of James Walker

Rancher is flexible enough to work with both new and existing clusters, and there are three possible ways to connect your clusters:

  1. Provision a new cluster using a managed cloud Kubernetes service: Rancher can create new Amazon Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS) and Google Kubernetes Engine (GKE) clusters for you. The process is fully automated within the Rancher UI. You can also import existing clusters.
  2. Provision a new cluster on standalone cloud infrastructure: Rancher can deploy an RKE, RKE2, or K3s cluster by provisioning new compute nodes from your cloud provider. This option supports Amazon Elastic Compute Cloud (EC2), Microsoft Azure, DigitalOcean, Harvester, Linode and VMware vSphere.
  3. Bring your own cluster: You can manually connect Kubernetes clusters running locally or in other cloud environments. This gives you the versatility to combine on-premises and public cloud infrastructure in hybrid deployment situations.

Screenshot of adding a cluster in Rancher

Once you’ve added your multicloud clusters, your single Rancher installation lets you seamlessly manage them all.
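
For teams that prefer a declarative workflow, provisioned clusters (such as the RKE2 option above) can also be described in YAML. The following is only a rough sketch: the API group, field names and version string are assumptions based on Rancher’s provisioning API, so verify them against the documentation for your Rancher release.

# Hedged sketch of a Rancher-provisioned RKE2 cluster definition.
# All names, versions and field values below are illustrative.
apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
  name: demo-rke2-cluster          # hypothetical cluster name
  namespace: fleet-default         # namespace Rancher typically uses for provisioning
spec:
  kubernetesVersion: v1.26.9+rke2r1   # pick a version supported by your Rancher release
  rkeConfig:
    machineGlobalConfig:
      cni: calico                  # cluster-wide CNI choice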

A unified dashboard

One of the biggest multicloud headaches is tracking what’s deployed, where it’s located and whether it’s running correctly. With Rancher, you get a unified dashboard that shows every cluster, including the cloud environment it’s hosted in and its resource utilization:

Screenshot of the Rancher dashboard showing multiple clusters


The Rancher home screen provides a centralized view of the clusters you’ve registered, covering both your cloud and on-premises deployments. Similarly, the sidebar integrates a shortcut list of clusters that helps you quickly move between environments.

After you’ve navigated to a specific cluster, the Cluster Dashboard page offers an at-a-glance view of capacity, utilization, events and deployments:

Screenshot of Rancher's **Cluster Dashboard**

Scrolling further down, you can view precise cluster metrics that help you analyze performance:

Screenshot of viewing cluster metrics in Rancher

Rancher lets you access vital monitoring data for all your Kubernetes environments within one tool, eliminating the need to log into individual cloud provider control panels.

Centralized authorization and access control

Kubernetes has built-in support for role-based access control (RBAC) to limit the actions that individual user accounts can take. However, this is insufficient for multicloud deployments because you have to manage and maintain your policies individually in each of your clusters.

Rancher improves multicloud Kubernetes usability by adding a centralized user authentication system. You can set up user accounts within Rancher or connect an external service using protocols such as LDAP, SAML and OAuth.

Once you’ve created your users, you can assign them specific access control rules to limit their rights within Rancher and your clusters. Global permissions define how users can manage your Rancher installation; for instance, whether they can create and modify cluster connections. Cluster- and project-level roles configure the available actions after selecting a cluster.

To create a new user, click the menu icon in the top-left to expand the sidebar, then select the Users & Authentication link. Press the Create button on the next screen, where your existing users are displayed:

Screenshot of the Rancher UI

Fill out your new user’s credentials on the following screen:

Screenshot of creating a new user in Rancher

Then scroll down the page to begin assigning permissions to the new user.

Set the user’s global permissions, which control their overall level of access within Rancher. Then you can add more fine-grained policies for specific actions from the roles at the bottom. Once you’ve finished, click the Create button on the bottom-right to add the account. The user can now log into Rancher:

Screenshot of assigning a user's global roles in Rancher

Next, navigate to one of your clusters and head to Cluster > Cluster Members in the sidebar. Click the Add button in the top-right to grant a user access to the cluster:

Screenshot of adding a cluster member in Rancher

Use the next screen to search for the user account, then set their role in the cluster. Once you press Create in the bottom-right, the user will be able to perform the cluster interactions you’ve assigned:

Screenshot of setting a cluster member's permissions in Rancher

Adding a cluster role

For more precise access control, you can set up your own roles that build upon Kubernetes RBAC. These can apply at the global (Rancher) level or within a specific cluster or project/namespace. All three are created in a similar way.

To create a cluster role, expand the Rancher sidebar again and return to the Users & Authentication page. Select the Roles link from the menu on the left and then select Cluster from the tab strip. Press the Create Cluster Role button in the top-right:

Screenshot of Rancher's Cluster Roles interface

Give your role a name and enter an optional description. Next, use the Grant Resources interface to define the Kubernetes permissions the role includes. This example permits users to create and list pods in the cluster. Press the Create button to add your role:

Screenshot of defining a cluster role's permissions in Rancher
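
Under the hood, a role like this maps onto standard Kubernetes RBAC. As a point of reference only, the equivalent permissions expressed as a plain Kubernetes ClusterRole would look roughly like the following (the role name is hypothetical):

# Illustrative Kubernetes ClusterRole granting the same "create" and "list"
# verbs on pods as the Rancher cluster role defined above.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-creator                # hypothetical name
rules:
  - apiGroups: [""]                # "" is the core API group, which contains pods
    resources: ["pods"]
    verbs: ["create", "list"]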

The role will now show up when you’re adding new members to your clusters:

Screenshot of selecting a custom cluster role for a cluster member in Rancher

Rancher and multicloud security

Rancher enhances multicloud security by providing active mechanisms for tightening your environments. Besides the security benefits of centralized authentication and RBAC, Rancher also integrates additional security measures that protect your clusters and cloud environments.

Rancher maintains a comprehensive hardening guide based on the Center for Internet Security (CIS) Benchmarks that help you implement best practices and identify vulnerabilities. You can scan a cluster against the benchmark from within the Rancher application.

To do so, navigate to your cluster, then expand Apps > Charts in the left sidebar. Select the CIS Benchmark chart from the list:

Screenshot of the CIS Benchmark app in Rancher's app list

Click the Install button on the next screen:

Screenshot of the CIS Benchmark app's details page in Rancher

Follow the steps to complete the installation in your cluster:

Screenshot of the CIS Benchmark app's installation screen in Rancher

It could take several minutes for the process to finish — you’ll see a “SUCCESS” message in the logs pane when it’s done:

Screenshot of the CIS Benchmark app's installation logs in Rancher

Now, navigate back to your cluster. You’ll find a new CIS Benchmark item in Rancher’s sidebar. Expand this menu and click the Scan link; then press the Create button on the page that appears:

Screenshot of the CIS Benchmark interface in Rancher

On the next screen, you’ll be prompted to select a scan profile. This defines the hardening checks that will be performed. You can change the default to choose a different benchmark or Kubernetes version. Press the Create button to start the scan:

Screenshot of creating a CIS Benchmark scan in Rancher

The scan run will then show in the Scans table back on the CIS Benchmark > Scan screen:

Screenshot of the CIS Benchmark **Scans** interface in Rancher, with a running scan displayed

Once it is finished, you can view the results in your browser by selecting the scan from the table:

Screenshot of viewing CIS Benchmark scan results in the Rancher UI
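
If you run scans across many clusters, it can be convenient to trigger them declaratively instead of through the UI. The sketch below assumes the ClusterScan custom resource installed by the CIS Benchmark chart; the API version and profile name are assumptions, so check them against the chart version you installed.

# Hedged sketch: requesting a CIS scan as a custom resource.
apiVersion: cis.cattle.io/v1
kind: ClusterScan
metadata:
  name: nightly-cis-scan           # hypothetical name
spec:
  scanProfileName: rke2-cis-1.23-profile   # illustrative profile; the UI lists the profiles available in your install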

Rancher helps DevOps teams to scale multicloud environments

Multicloud is hard — more resources normally mean higher overhead, a bigger attack surface and a rapidly swelling toolchain. These issues can impede you as you try to scale.

Rancher incorporates unique capabilities that help operators work effectively with different deployments, even when they’re distributed across several environments.

Automatic cluster backups provide safety

Rancher includes a backup system that you can install as an operator in your clusters. This operator backs up your Kubernetes API resources so you can recover from disasters.

You can add the operator by navigating to a cluster and choosing Apps > Charts from the side menu. Then find the Rancher Backups app and follow the prompts to install it:

Screenshot of the Rancher Backups app description in the Rancher interface

You’ll find that a Rancher Backups item appears in the navigation menu. Click the Create button to define a new one-time or recurring backup schedule:

Screenshot of the **Backups** interface in Rancher

Fill out the details to configure your backup routine:

Screenshot of configuring a backup in Rancher

Once you’ve created a backup, you can restore it in the future if data gets accidentally deleted or a disaster occurs. With Rancher, you can create backups for all your clusters with a single consistent procedure, which produces more resilient environments.
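
Backups can also be defined declaratively once the operator is installed. The following is a minimal sketch, assuming the Backup custom resource shipped with the Rancher Backups operator; the schedule, bucket and secret details are illustrative, so adjust them to your environment.

# Hedged sketch: a recurring Rancher backup stored in S3-compatible storage.
apiVersion: resources.cattle.io/v1
kind: Backup
metadata:
  name: nightly-rancher-backup     # hypothetical name
spec:
  resourceSetName: rancher-resource-set    # default resource set installed with the operator
  schedule: "0 2 * * *"                    # every night at 02:00 (cron syntax)
  retentionCount: 7                        # keep the last seven backups
  storageLocation:
    s3:
      bucketName: rancher-backups              # hypothetical bucket
      region: us-east-1
      endpoint: s3.us-east-1.amazonaws.com
      credentialSecretName: s3-credentials     # secret holding the access keys
      credentialSecretNamespace: default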

Rancher integrates with multi-cloud solutions

One of the benefits of Rancher is that it’s built as a single platform for managing Kubernetes in any cluster. But it gets even better when combined with other ecosystem tools. Rancher has integrations with adjacent components that provide more focused support for specific use cases, including the following:

  • Longhorn is distributed Cloud native block storage that runs anywhere and supports automated provisioning, security and backups. You can deploy Longhorn to your clusters from within the Rancher UI, enabling more reliable storage for your workloads.
  • Harvester is a solution for hyperconverged infrastructure on bare-metal servers. It provides a virtual machine (VM) management system that complements Rancher’s capabilities for Kubernetes clusters. By combining Harvester and Rancher, you can effectively manage your on-premises clusters and the infrastructure that hosts them.
  • Helm is the standard package manager for Kubernetes applications. It packages an application’s Kubernetes manifests into a collection called a chart, ready to deploy with a single command. Rancher natively supports Helm charts and provides a convenient interface for deploying them into your cluster via its apps system.

By utilizing Rancher alongside other common tools, you can make multicloud Kubernetes even more powerful. Automated storage, local infrastructure management and packaged applications allow you to scale up freely without the hassle of manually provisioning environments and creating your app’s resources.
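
As a small illustration of the Longhorn integration, once Longhorn is deployed from the Rancher UI a workload can request replicated block storage simply by referencing its storage class. This is a minimal sketch; the claim name and size are illustrative, and the storage class name assumes Longhorn’s default.

# Hedged sketch: a PersistentVolumeClaim backed by Longhorn.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data                  # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn       # default class created by Longhorn
  resources:
    requests:
      storage: 5Gi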

Deploy to large-scale environments with Rancher Fleet

Rancher also helps you deploy applications using automated GitOps methodologies. Rancher Fleet is a dedicated GitOps solution for containerized workloads that offers transparent visibility, flexible control and support for large-scale deployments to multiple environments.

Rancher Fleet manages your Kubernetes manifests, Helm charts and Kustomize templates for you, converting them into Helm charts that can automatically deploy in your clusters. You can set up Fleet in your Rancher installation by clicking the menu icon in the top-left and then choosing Continuous Delivery from the slide-out main menu:

Screenshot of the **Rancher Fleet** landing screen

Click Get started to connect your first Git repository and deploy it to your clusters. Once again, Rancher permits you to use standardized delivery workflows in all your environments. You’re no longer restricted to a single cloud vendor, delivery channel or platform as a service (PaaS):

Screenshot of creating a new Rancher Fleet Git repository connection
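
Under the hood, Fleet represents each repository connection as a GitRepo resource, so the same configuration can be kept in version control. The sketch below is illustrative only; the repository URL, paths and targets are assumptions you would replace with your own.

# Hedged sketch: a Fleet GitRepo that deploys manifests to all registered clusters.
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: sample-app                 # hypothetical name
  namespace: fleet-default         # workspace Fleet typically uses for downstream clusters
spec:
  repo: https://github.com/example/sample-app   # hypothetical repository
  branch: main
  paths:
    - manifests                    # folder in the repo containing Kubernetes manifests
  targets:
    - clusterSelector: {}          # empty selector matches every registered cluster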

Conclusion

Multicloud presents new opportunities for more flexible and efficient deployments. Mixing solutions from several different cloud providers lets you select the best option for each of your components while avoiding the risk of vendor lock-in.

Nonetheless, organizations that use multicloud with containers and Kubernetes often experience operational challenges. It’s difficult to manage clusters that exist in several different environments, such as public clouds and on-premises servers. Moreover, implementing centralized monitoring, access control and security policies yourself is highly taxing.

Rancher solves these challenges by providing a single tool for provisioning infrastructure, installing Kubernetes and managing your deployments. It works with Google GKE, Amazon EKS, Azure AKS and your own clusters, making it the ultimate solution for achieving multicloud Kubernetes interoperability. Try Rancher today to provision and scale multicloud Kubernetes.

Stop the Churn with SUSE eLearning

Wednesday, 15 February, 2023

The Covid pandemic has brought us a lot of new phrases in the workforce. First, there was “quiet quitting” – the desire to do “just” what’s in your job description and no more. Then there was the “great resignation,” where it seemed like everyone was looking for their next opportunity. That led to “quiet hiring,” where the people left in positions took on additional work — with no (or minimal) extra pay.

With all that being said, workplace churn is real.  So, what is a savvy manager to do to retain their top employees and keep their businesses running?

One answer that keeps coming up is the opportunity for continuous learning.  As the technology landscape changes, your technical professionals want to keep their skills both sharp and current.  But can you afford to send your top employees away for a week or more to satisfy their desire for technical knowledge?

Announcing SUSE eLearning for the Enterprise

SUSE has heard the requests from our customers and has expanded our eLearning offering to address the Enterprise needs.  Differing from our Individual eLearning tiers, the Enterprise tier addresses the modern workplace by providing:

  • Up to 5 eLearning subscriptions that you can swap amongst employees
  • Up to 1000 hours of lab time for hands-on experience
  • Up to 10 Certification exam vouchers

The best part – an eLearning subscription provides access to every SUSE technical training course.  And with new courses rolling out frequently, your team will be the first to learn about new technology as it happens.  Getting access to the latest information as soon as it’s released is not only good for your team but also for your business.

But What Is SUSE eLearning?

SUSE eLearning is training designed for today’s workforce.  It’s literally training your way: anywhere, anytime. With just one subscription, you get access to all the technical training for every SUSE product.

Interested in SUSE Manager, NeuVector, or Rancher Prime?  Take the appropriate courses to learn more.  Interested in Harvester?  We’ve got courses for that too.  Whether you are looking for a bite-sized video to solve a specific problem or a defined learning path that leads to certification, SUSE eLearning has exactly what you need to satisfy your employees and move your business forward.

SUSE eLearning has been in the market for over a year for your individual learners, but now there’s a tier defined specifically for the enterprise.

*Includes 200 hours of live labs per user, with a maximum of 1000 hours during the subscription period. 10 certification exam vouchers per subscription, not per user.

So, while the market is churning, keep your employees engaged with continuous learning.  Make eLearning part of your plan to increase job satisfaction.  After all, your business is only as good as your people.  And your people are only good if they are engaged.  It’s up to you to stop the churn.

Learn more about SUSE eLearning Subscriptions and all of SUSE’s training offerings by clicking here.

Using Hyperconverged Infrastructure for Kubernetes

Tuesday, 7 February, 2023

Companies face multiple challenges when migrating their applications and services to the cloud, and one of them is infrastructure management.

The ideal scenario would be that all workloads could be containerized. In that case, the organization could use a managed Kubernetes service from a provider such as Amazon Web Services (AWS), Google Cloud or Azure to deploy and manage applications, services and storage in a cloud native environment.

Unfortunately, this scenario isn’t always possible. Some legacy applications are either very difficult or very expensive to migrate to a microservices architecture, so running them on virtual machines (VMs) is often the best solution.

Considering the current trend of adopting multicloud and hybrid environments, managing additional infrastructure just for VMs is not optimal. This is where a hyperconverged infrastructure (HCI) can help. Simply put, HCI enables organizations to quickly deploy, manage and scale their workloads by virtualizing all the components that make up the on-premises infrastructure.

That being said, not all HCI solutions are created equal. In this article, you’ll learn more about what an HCI is and then explore Harvester, an enterprise-grade HCI software that offers you unique flexibility and convenience when managing your infrastructure.

What is HCI?

Hyperconverged infrastructure (HCI) is a type of data center infrastructure that virtualizes computing, storage and networking elements in a single system through a hypervisor.

Since virtualized abstractions managed by a hypervisor replace all the physical hardware components (computing, storage and networking), an HCI offers benefits, including the following:

  • Easier configuration, deployment and management of workloads.
  • Convenience since software-defined data centers (SDDCs) can also be easily deployed.
  • Greater scalability with the integration of more nodes to the HCI.
  • Tight integration of virtualized components, resulting in fewer inefficiencies and lower total cost of ownership (TCO).

However, the ease of management and the lower TCO of an HCI approach come with some drawbacks, including the following:

  • Risk of vendor lock-in when using closed-source HCI platforms.
  • Most HCI solutions force all resources to be increased in order to increase any single resource. That is, new nodes add more computing, storage and networking resources to the infrastructure.
  • You can’t combine HCI nodes from different vendors, which aggravates the risk of vendor lock-in described previously.

Now that you know what HCI is, it’s time to learn more about Harvester and how it can alleviate the limitations of HCI.

What is Harvester?

According to the Harvester website, “Harvester is a modern hyperconverged infrastructure (HCI) solution built for bare metal servers using enterprise-grade open-source technologies including Kubernetes, KubeVirt and Longhorn.” Harvester is an ideal solution for those seeking a Cloud native HCI offering — one that is both cost-effective and able to place VM workloads on the edge, driving IoT integration into cloud infrastructure.

Because Harvester is open source, this automatically means you don’t have to worry about vendor lock-in. Furthermore, since it’s built on top of Kubernetes, Harvester offers incredible scalability, flexibility and reliability.

Additionally, Harvester provides a comprehensive set of features and capabilities that make it the ideal solution for deploying and managing enterprise applications and services. Among these characteristics, the following stand out:

  • Built on top of Kubernetes.
  • Full VM lifecycle management, thanks to KubeVirt.
  • Support for VM cloud-init templates.
  • VM live migration support.
  • VM backup, snapshot and restore capabilities.
  • Distributed block storage and storage tiering, thanks to Longhorn.
  • Powerful monitoring and logging since Harvester uses Grafana and Prometheus as its observability backend.
  • Seamless integration with Rancher, facilitating multicluster deployments as well as deploying and managing VMs and Kubernetes workloads from a centralized dashboard.

Harvester architectural diagram courtesy of Damaso Sanoja

Now that you know about some of Harvester’s basic features, let’s take a more in-depth look at some of the more prominent features.

How Rancher and Harvester can help with Kubernetes deployments on HCI

Managing multicluster and hybrid-cloud environments can be intimidating when you consider how complex it can be to monitor infrastructure, manage user permissions and avoid vendor lock-in, just to name a few challenges. In the following sections, you’ll see how Harvester, or more specifically, the synergy between Harvester and Rancher, can make life easier for ITOps and DevOps teams.

Straightforward installation

There is no one-size-fits-all approach to deploying an HCI solution. Some vendors sacrifice features in favor of ease of installation, while others require a complex installation process that includes setting up each HCI layer separately.

However, with Harvester, this is not the case. From the beginning, Harvester was built with ease of installation in mind without making any compromises in terms of scalability, reliability, features or manageability.

To do this, Harvester treats each node as an HCI appliance. This means that when you install Harvester on a bare-metal server, what actually happens behind the scenes is that a simplified version of SUSE Linux Enterprise (SLE) is installed, on top of which Kubernetes, KubeVirt, Longhorn, Multus and the other components that make up Harvester are installed and configured with minimal effort on your part. In fact, the manual installation process is no different from that of a modern Linux distribution, save for a few notable exceptions:

  • Installation mode: Early on in the installation process, you will need to choose between creating a new cluster (in which case the current node becomes the management node) or joining an existing Harvester cluster. This makes sense since you’re actually setting up a Kubernetes cluster.
  • Virtual IP: During the installation, you will also need to set an IP address from which you can access the main node of the cluster (or join other nodes to the cluster).
  • Cluster token: Finally, you should choose a cluster token that will be used to add new nodes to the cluster.

When it comes to installation media, you have two options for deploying Harvester: an interactive installation from a bootable ISO image, or an automated installation over the network using PXE boot.

It should be noted that, regardless of the deployment method, you can use a Harvester configuration file to provide various settings. This makes it even easier to automate the installation process and enforce the infrastructure as code (IaC) philosophy, which you’ll learn more about later on.

For your reference, the following is what a typical configuration file looks like (taken from the official documentation):

scheme_version: 1
server_url: https://cluster-VIP:443
token: TOKEN_VALUE
os:
  ssh_authorized_keys:
    - ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB...
    - github:username
  write_files:
  - encoding: ""
    content: test content
    owner: root
    path: /etc/test.txt
    permissions: '0755'
  hostname: myhost
  modules:
    - kvm
    - nvme
  sysctls:
    kernel.printk: "4 4 1 7"
    kernel.kptr_restrict: "1"
  dns_nameservers:
    - 8.8.8.8
    - 1.1.1.1
  ntp_servers:
    - 0.suse.pool.ntp.org
    - 1.suse.pool.ntp.org
  password: rancher
  environment:
    http_proxy: http://myserver
    https_proxy: http://myserver
  labels:
    topology.kubernetes.io/zone: zone1
    foo: bar
    mylabel: myvalue
install:
  mode: create
  management_interface:
    interfaces:
    - name: ens5
      hwAddr: "B8:CA:3A:6A:64:7C"
    method: dhcp
  force_efi: true
  device: /dev/vda
  silent: true
  iso_url: http://myserver/test.iso
  poweroff: true
  no_format: true
  debug: true
  tty: ttyS0
  vip: 10.10.0.19
  vip_hw_addr: 52:54:00:ec:0e:0b
  vip_mode: dhcp
  force_mbr: false
system_settings:
  auto-disk-provision-paths: ""

All in all, Harvester offers a straightforward installation on bare-metal servers. What’s more, out of the box, Harvester offers powerful capabilities, including a convenient host management dashboard (more on that later).

Host management

Nodes, or hosts, as they are called in Harvester, are the heart of any HCI infrastructure. As discussed, each host provides the computing, storage and networking resources used by the HCI cluster. In this sense, Harvester provides a modern UI that gives your team a quick overview of each host’s status, name, IP address, CPU usage, memory, disks and more. Additionally, your team can perform all kinds of routine operations intuitively just by right-clicking on each host’s hamburger menu:

  • Node maintenance: This is handy when your team needs to remove a node from the cluster for a long time for maintenance or replacement. Once the node enters maintenance mode, all VMs are automatically distributed across the rest of the active nodes. This eliminates the need to live migrate VMs separately.
  • Cordoning a node: When you cordon a node, it’s marked as “unschedulable,” which is useful for quick tasks like reboots and OS upgrades.
  • Deleting a node: This permanently removes the node from the cluster.
  • Multi-disk management: This allows adding additional disks to a node as well as assigning storage tags. The latter is useful to allow only certain nodes or disks to be used for storing Longhorn volume data.
  • KSMtuned mode management: In addition to the features described earlier, Harvester allows your team to tune the use of kernel same-page merging (KSM) as it deploys the KSM Tuning Service ksmtuned on each node as a DaemonSet.

To learn more on how to manage the run strategy and threshold coefficient of ksmtuned, as well as more details on the other host management features described, check out this documentation.

As you can see, managing nodes through the Harvester UI is really simple. However, your ops team will spend most of their time managing VMs, which you’ll learn more about next.

VM management

Harvester was designed with great emphasis on simplifying the management of VMs’ lifecycles. Thanks to this, IT teams can save valuable time when deploying, accessing and monitoring VMs. Following are some of the main features that your team can access from the Harvester Virtual Machines page.

Harvester basic VM management features

As you would expect, the Harvester UI facilitates basic operations, such as creating a VM (including creating Windows VMs), editing VMs and accessing VMs. It’s worth noting that in addition to the usual configuration parameters, such as VM name, disks, networks, CPU and memory, Harvester introduces the concept of the namespace. As you might guess, this additional level of abstraction is made possible by Harvester running on top of Kubernetes. In practical terms, this allows your Ops team to create isolated virtual environments (for example, development and production), which facilitate resource management and security.

Furthermore, Harvester also supports injecting custom cloud-init startup scripts into a VM, which speeds up the deployment of multiple VMs.
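
To give a sense of what such a startup script looks like, here is a minimal cloud-init user data sketch; the package and command are examples only and not specific to any Harvester release.

#cloud-config
# Hedged example of cloud-init user data injected into a new VM.
package_update: true
packages:
  - qemu-guest-agent               # guest agent commonly installed in VMs
runcmd:
  - systemctl enable --now qemu-guest-agent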

Harvester advanced VM management features

Today, any virtualization tool allows the basic management of VMs. In that sense, where enterprise-grade platforms like Harvester stand out from the rest is in their advanced features. These include performing VM backup, snapshot and restore; doing VM live migration; adding hot-plug volumes to running VMs; cloning VMs with volume data; and overcommitting CPU, memory and storage.

While all these features are important, Harvester’s ability to ensure the high availability (HA) of VMs is hands down the most crucial to any modern data center. This feature is available on Harvester clusters with three or more nodes and allows your team to migrate live VMs from one node to another when necessary.

Furthermore, not only is live VM migration useful for maintaining HA, but it is also a handy feature when performing node maintenance, when a hardware failure occurs or when your team detects a performance drop on one or more nodes. For that kind of performance monitoring, Harvester provides out-of-the-box integration with Grafana and Prometheus.

Built-in monitoring

Prometheus and Grafana are two of the most popular open source observability tools today. They’re highly customizable, powerful and easy to use, making them ideal for monitoring key VMs and host metrics.

Grafana is a data-focused visualization tool that makes it easy to monitor your VM’s performance and health. It can provide near real-time performance metrics, such as CPU and memory usage and disk I/O. It also offers comprehensive dashboards and alerts that are highly configurable. This allows you to customize Grafana to your specific needs and create useful visualizations that can help you quickly identify issues.

Meanwhile, Prometheus is a monitoring and alerting toolkit designed for large-scale, distributed systems. It collects time series data from your VMs and hosts, allowing you to quickly and accurately track different performance metrics. Prometheus also provides alerts when certain conditions have been met, such as when a VM is running low on memory or disk space.

All in all, using Grafana and Prometheus together provides your team with comprehensive observability capabilities by means of detailed graphs and dashboards that can help them identify why an issue is occurring. This can help you take corrective action more quickly and reduce the impact of any potential issues.
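
If the bundled monitoring stack is based on the Prometheus Operator (as in Rancher’s monitoring chart), alerting conditions such as the low-disk-space scenario mentioned above can also be expressed as PrometheusRule resources. The sketch below checks host filesystems rather than individual VMs; the namespace, metric selection and threshold are assumptions to adapt to your cluster.

# Hedged sketch: alert when a host filesystem drops below 10% free space.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: host-disk-alerts                 # hypothetical name
  namespace: cattle-monitoring-system    # assumed monitoring namespace
spec:
  groups:
    - name: storage
      rules:
        - alert: HostDiskAlmostFull
          expr: >-
            node_filesystem_avail_bytes{fstype!~"tmpfs|overlay"} / node_filesystem_size_bytes < 0.10
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: "Less than 10% disk space left on a host filesystem"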

Infrastructure as Code

Infrastructure as code (IaC) has become increasingly important in many organizations because it allows for the automation of IT infrastructure, making it easier to manage and scale. By defining IT infrastructure as code, organizations can manage their VMs, disks and networks more efficiently while also making sure that their infrastructure remains in compliance with the organization’s policies.

With Harvester, users can define their VMs, disks and networks in YAML format, making it easier to manage and version control virtual infrastructure. Furthermore, thanks to the Harvester Terraform provider, DevOps teams can also deploy entire HCI clusters from scratch using IaC best practices.
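
As an illustration of the VM-as-YAML idea, Harvester builds on KubeVirt, so a virtual machine is ultimately a Kubernetes resource. The following is a hedged, minimal sketch; the names, image and sizes are illustrative and omit Harvester-specific settings such as networks and Longhorn-backed volumes.

# Hedged sketch: a minimal KubeVirt-style VirtualMachine definition.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm                    # hypothetical VM name
  namespace: default
spec:
  running: true                    # start the VM as soon as it is created
  template:
    spec:
      domain:
        cpu:
          cores: 2
        resources:
          requests:
            memory: 4Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo   # public demo image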

This allows users to define the infrastructure declaratively, allowing operations teams to work with developer tools and methodologies, helping them become more agile and effective. In turn, this saves time and cost and also enables DevOps teams to deploy new environments or make changes to existing ones more efficiently.

Finally, since Harvester enforces IaC principles, organizations can make sure that their infrastructure remains compliant with security, regulatory and governance policies.

Rancher integration

Up to this point, you’ve learned about key aspects of Harvester, such as its ease of installation, its intuitive UI, its powerful built-in monitoring capabilities and its convenient automation, thanks to IaC support. However, the feature that takes Harvester to the next level is its integration with Rancher, the leading container management tool.

Harvester integration with Rancher allows DevOps teams to manage VMs and Kubernetes workloads from a single control panel. Simply put, Rancher integration enables your organization to combine conventional and Cloud native infrastructure use cases, making it easier to deploy and manage multi-cloud and hybrid environments.

Furthermore, Harvester’s tight integration with Rancher allows your organization to streamline user and system management, allowing for more efficient infrastructure operations. Additionally, user access control can be centralized in order to ensure that the system and its components are protected.

Rancher integration also allows for faster deployment times for applications and services, as well as more efficient monitoring and logging of system activities from a single control plane. This allows DevOps teams to quickly identify and address issues related to system performance, as well as easily detect any security risks.

Overall, Harvester integration with Rancher provides DevOps teams with a comprehensive, centralized system for managing both VMs and containerized workloads. In addition, this approach provides teams with improved convenience, observability and security, making it an ideal solution for DevOps teams looking to optimize their infrastructure operations.

Conclusion

One of the biggest challenges facing companies today is migrating their applications and services to the cloud. In this article, you’ve learned how you can manage Kubernetes and VM-based environments with the aid of Harvester and Rancher, thus facilitating your application modernization journey from monolithic apps to microservices.

Both Rancher and Harvester are part of the rich SUSE ecosystem that helps your business deploy multi-cloud and hybrid-cloud environments easily across any infrastructure. Harvester is an open source HCI solution. Try it for free today.