Cloud Computing vs. Edge Computing

Tuesday, 13 February, 2024

Cloud Computing vs. Edge Computing: What’s the Difference?

Cloud vs Edge Computing

Introduction to Edge Computing vs. Cloud Computing

In the dynamic world of digital technology, Cloud Computing and Edge Computing have emerged as pivotal paradigms, reshaping how businesses approach data and application management. While they might appear similar at first glance, these two technologies serve different purposes, offering unique advantages. SUSE, a global leader in open source software, including Linux products, plays a significant role in this technological shift, offering solutions that cater to both cloud and edge computing needs. Understanding the distinctions between Cloud Computing and Edge Computing is crucial for businesses, especially those looking to leverage these technologies for enhanced operational efficiency.

The Role of Cloud Computing

Cloud Computing solutions, a cornerstone of modern IT infrastructure, involve processing and storing data on remote servers accessed via the Internet. This approach offers remarkable scalability and flexibility, allowing businesses to handle vast data volumes without the need for substantial physical infrastructure. Cloud services like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform exemplify this model. SUSE complements this ecosystem with its public cloud solutions, providing a secure, scalable, and open source platform that integrates seamlessly with major cloud providers.

The Emergence of Edge Computing

Edge Computing, in contrast, processes data closer to where it is generated, reducing latency and enhancing real-time data processing capabilities. This technology is vital in applications requiring immediate data analysis, such as IoT devices and smart city infrastructure. SUSE acknowledges the importance of Edge Computing, offering tailored Linux-based solutions that facilitate local data processing, ensuring speed and efficiency in data-sensitive operations.

Synergistic Approach

It’s essential to recognize that Cloud and Edge Computing are not mutually exclusive but often work in tandem. Many enterprises use a hybrid model, employing the cloud for extensive data processing and storage, while utilizing edge computing for real-time, localized tasks. SUSE supports this hybrid approach with its range of products, ensuring businesses can leverage both technologies for a comprehensive, efficient IT infrastructure.

What is the Difference Between Edge and Cloud Computing?

While both Edge and Cloud Computing are integral to modern technology infrastructure, they serve distinct purposes and operate on different principles. Their differences lie primarily in how and where data processing occurs, their latency, and their application in various scenarios.

Location of Data Processing

The most significant difference between Cloud and Edge Computing is the location of data processing. In Cloud Computing, data is sent to and processed in remote servers, often located far from the data source. This centralized processing can handle massive amounts of data, making it ideal for complex computations and large-scale data analysis.

Edge Computing, in contrast, processes data close to where it is generated. Devices at the “edge” of the network, like smartphones, industrial machines, or sensors, perform the processing. This proximity reduces the need to send data across long distances, thereby minimizing latency.

Latency and Speed

Latency is another critical differentiator. Cloud Computing can sometimes experience higher latency due to the time taken for data to travel to and from distant servers. This delay, although often minimal, can be critical in applications requiring real-time data processing, such as in autonomous vehicles or emergency response systems.

Edge Computing significantly reduces latency by processing data locally. This immediacy is crucial in time-sensitive applications where even a small delay can have significant consequences.

Application Scenarios

Cloud Computing is best suited for applications that require significant processing power and storage capacity but are less sensitive to latency. It’s ideal for big data analytics, web-based services, and extensive database management.

Edge Computing, on the other hand, is tailored for scenarios where immediate data processing is vital. It’s used in IoT devices, smart cities, healthcare monitoring systems, and real-time data processing tasks.

In summary, while Cloud Computing excels in centralized, large-scale data processing, Edge Computing stands out in localized, real-time data handling. Businesses often leverage both to maximize efficiency, security, and performance in their digital operations.

What Are the Advantages of Edge Computing over Cloud Computing?

Edge Computing, while not a replacement for Cloud Computing, offers unique advantages in specific contexts. Its benefits are particularly pronounced in scenarios where speed, bandwidth, and data locality are of paramount importance. As a leader in open source software solutions, SUSE recognizes these advantages and integrates them into its products, ensuring businesses can leverage the best of Edge Computing in their operations.

Reduced Latency

The most significant advantage of Edge Computing is its ability to drastically reduce latency. By processing data near its source, edge devices deliver faster response times, essential for applications like autonomous vehicles, real-time analytics, and industrial automation. SUSE’s edge solutions are designed to support these low-latency requirements, enabling real-time decision-making and improved operational efficiency.

Bandwidth Optimization

Edge Computing minimizes the data that needs to be transferred over the network, reducing bandwidth usage and associated costs. This is particularly beneficial for businesses operating in bandwidth-constrained environments. SUSE’s edge-focused products enhance this efficiency, ensuring seamless operation even with limited bandwidth.

Enhanced Security

By processing data locally, Edge Computing can also offer enhanced security. SUSE’s edge solutions capitalize on this by providing robust security features, ensuring data integrity and protection against external threats, especially in sensitive industries like healthcare and finance.

Improved Reliability

Edge Computing provides improved reliability, especially in situations where constant connectivity to a central cloud server is challenging. SUSE’s edge solutions are engineered to maintain functionality even in disconnected or intermittently connected environments, ensuring continuous operation.

Customization and Flexibility

SUSE’s approach to Edge Computing emphasizes customization and flexibility. Their Linux-based edge solutions can be tailored to specific industry needs, allowing businesses to optimize their edge infrastructure in alignment with their unique operational requirements.

What Role Does Cloud Computing Play in Edge AI?

Cloud Computing and Edge AI (Artificial Intelligence) are two technological trends that are rapidly converging, each playing a pivotal role in the evolution of the other. This synergy is especially apparent in the solutions offered by SUSE, a leader in open source software, which has been instrumental in integrating Cloud Computing with Edge AI applications.

Complementary Technologies

In the realm of Edge AI, Cloud Computing serves as a complementary technology. It provides the substantial computational power and storage capacity necessary for training complex AI models. These models, once trained in the cloud, can be deployed at the edge, where they perform real-time data processing and decision-making. This approach leverages the cloud’s robustness and the edge’s immediacy, making for an efficient, scalable AI solution.

SUSE’s Edge AI Support

SUSE has recognized this interplay and offers specialized Edge AI support that integrates seamlessly with cloud environments. SUSE’s range of cloud native solutions, together with its Linux offerings, provides strong support for running AI workloads at the edge.

Data Management and Analytics

Cloud Computing also plays a crucial role in managing and analyzing the vast amounts of data generated by Edge AI devices. SUSE’s cloud solutions facilitate the aggregation, storage, and analysis of this data, providing valuable insights that can be used to further refine AI models and improve edge device performance. This continuous cycle of data flow between the edge and the cloud enhances the overall effectiveness and accuracy of Edge AI applications.

Enhanced Security and Scalability

Security and scalability are critical in Edge AI, and Cloud Computing addresses these concerns effectively. SUSE’s cloud and edge solutions offer robust security features, safeguarding data as it moves between the edge and the cloud. Additionally, the scalability of cloud infrastructure ensures that as the number of edge devices grows, the system can adapt and manage the increased data load and processing demands without compromising performance.

Collaboration for Innovation

SUSE fosters a collaborative ecosystem where Cloud Computing and Edge AI coexist and complement each other. By utilizing open source technologies, SUSE encourages innovation, allowing businesses to customize and scale their solutions according to their specific needs. This flexibility is vital for companies looking to stay ahead in the rapidly evolving tech landscape, where the integration of Cloud Computing and Edge AI is becoming increasingly crucial.

How SUSE Can Help

In the ever-changing landscape of digital technology, businesses face the challenge of adopting and integrating complex computing paradigms like Cloud Computing and Edge Computing. SUSE, as a leading provider of open source software solutions, stands at the forefront of this technological revolution, offering a suite of products and services designed to help businesses navigate and leverage these technologies effectively. SUSE’s Edge solution is a key component of this suite, specifically tailored to address the unique demands of Edge Computing.

Tailored Solutions for Diverse Needs

SUSE understands that each business has unique requirements and challenges. To address this, SUSE offers a range of tailored solutions, including SUSE Linux Enterprise, Rancher Prime, and SUSE Edge. These products are designed to cater to different aspects of both Cloud and Edge Computing, ensuring that businesses of all sizes and sectors can find a solution that fits their specific needs.

SUSE Linux Enterprise

SUSE Linux Enterprise is a versatile and robust platform that provides the foundation for both cloud and edge environments. It offers exceptional security, scalability, and reliability, making it ideal for businesses looking to build and manage their cloud infrastructure or deploy applications at the edge.

SLE Micro

SLE Micro, a key offering in this suite, is a lightweight and secure operating system optimized for edge computing environments. It is designed to provide a minimal footprint, which is crucial for edge devices with limited resources. SLE Micro’s robust security features, including secure boot and transactional updates, ensure high reliability and stability, which are essential in the edge’s often challenging operational environments. This makes SLE Micro an ideal choice for businesses looking to deploy applications in edge locations, where resources are constrained and robustness is key.

Rancher Prime

Rancher Prime is an open source container management platform that simplifies the deployment and management of Kubernetes at scale. With Rancher, businesses can efficiently manage their containerized applications across both cloud and edge environments, ensuring seamless operation and integration.

SUSE Edge

SUSE Edge is specifically designed for edge computing scenarios. It provides a lightweight, secure, and easy-to-manage platform, perfect for edge devices and applications. SUSE Edge supports a range of architectures and is optimized for performance in low-bandwidth or disconnected environments.

Enhanced Security and Compliance

In today’s digital world, security and compliance are top priorities. SUSE’s solutions are built with security at their core, offering features like regular updates, security patches, and compliance tools. These features ensure that businesses can protect their data and infrastructure against the latest threats and meet regulatory standards.

Open Source Flexibility and Innovation

As an advocate of open source technology, SUSE offers unparalleled flexibility and access to innovation. Businesses using SUSE products can benefit from the collaborative and innovative nature of the open source community. This access to a broad pool of resources and expertise allows for rapid adaptation to new technologies and market demands.

Scalability and Reliability

SUSE’s solutions are designed to be scalable and reliable, ensuring that businesses can grow and adapt without worrying about their infrastructure. Whether scaling up cloud resources or expanding edge deployments, SUSE’s products provide a stable and scalable foundation.

Expert Support and Services

SUSE offers comprehensive support and services to assist businesses at every step of their technology journey. From initial consultation and deployment to ongoing management and optimization, SUSE’s team of experts is available to provide guidance and support. This service ensures that businesses can maximize the value of their investment in SUSE products.

Empowering Digital Transformation

By choosing SUSE, businesses position themselves at the cutting edge of digital transformation. SUSE’s solutions enable seamless integration of cloud and edge computing, facilitating new capabilities like real-time analytics, IoT, and AI-driven applications. This integration drives efficiency, innovation, and competitive advantage.

In conclusion, SUSE’s range of products and services offers businesses the tools they need to effectively embrace and integrate Cloud and Edge Computing into their operations. With SUSE, businesses gain a partner equipped to help them navigate the complexities of modern technology, ensuring they stay ahead in a rapidly evolving digital landscape.

 

How Does Kubernetes Work?

Wednesday, 7 February, 2024

How Does Kubernetes Work? An In-Depth Overview for Beginners

Introduction

In today’s rapidly evolving digital landscape, understanding Kubernetes has become essential for anyone involved in the world of software development and IT operations. Kubernetes, often abbreviated as K8s, is an open source platform designed to automate deploying, scaling, and operating application containers. Its rise to prominence is not just a trend but a significant shift in how applications are deployed and managed at scale.

Why Understanding Kubernetes is Important

For beginners, Kubernetes can seem daunting; its ecosystem is vast, and its functionality is complex. However, diving into Kubernetes is more than a technical exercise—it’s a necessary step for those looking to stay ahead in the tech industry. Whether you’re a developer, a system administrator, or someone curious about container orchestration, understanding Kubernetes opens doors to modern cloud-native technologies.

The importance of Kubernetes stems from its ability to streamline deployment processes, enhance scalability, and improve the reliability and efficiency of applications. It’s not just about managing containers; it’s about embracing a new paradigm in application deployment and management. With companies of all sizes adopting Kubernetes, knowledge of this platform is becoming a key skill in many IT roles.

In this article, we will explore the fundamentals of Kubernetes: how it works, its core components, and why it’s become an indispensable tool in modern software deployment. Whether you’re starting from scratch or looking to solidify your understanding, this overview will provide the insights you need to grasp the basics of Kubernetes.

Kubernetes Basics

What is Kubernetes?

Kubernetes is an open source platform designed to automate the deployment, scaling, and operation of application containers. It was originally developed by Google and is now maintained by the Cloud Native Computing Foundation.

Definition and Purpose

At its core, Kubernetes is a container orchestration system. It manages the lifecycle of containerized applications and services, ensuring they run efficiently and reliably. The main purpose of Kubernetes is to facilitate both declarative configuration and automation for application services. It simplifies the process of managing complex, containerized applications, making it easier to deploy and scale applications across various environments.

The Evolution of Kubernetes

Kubernetes has evolved significantly since its inception. It was born from Google’s experience running production workloads at scale with its internal Borg system. This evolution reflects the growing need for scalable and resilient container orchestration solutions in the industry.

Key Concepts

Containers and Container Orchestration

Containers are lightweight, standalone packages that contain everything needed to run a piece of software, including the code, runtime, system tools, libraries, and settings. Container orchestration is the process of automating the deployment, management, scaling, networking, and availability of container-based applications.

Node and Pod Cycle

 

Nodes and Clusters

A Kubernetes cluster consists of at least one master (control plane) node and one or more worker nodes. Nodes are machines (VMs or physical servers) that run applications and workloads as containers. A cluster is the set of nodes that work together to run containerized applications.

Pods and Services

A pod is the smallest deployable unit in Kubernetes, often containing one or more containers. Services in Kubernetes are an abstraction that defines a logical set of pods and a policy by which to access them, often through a network.
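
To make these building blocks concrete, here is a minimal Pod manifest; the names, labels, and image are purely illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod          # illustrative name
  labels:
    app: hello             # a label a Service could select on
spec:
  containers:
    - name: web
      image: nginx:1.25    # any container image would do here
      ports:
        - containerPort: 80
```

A Service would then target this Pod (and any others carrying the same label) rather than addressing it by IP.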

Components of Kubernetes

Kubernetes is an intricate system made up of several components working in harmony to provide a seamless method of deploying and managing containerized applications. Understanding these components is crucial for anyone looking to utilize Kubernetes effectively.

Kubernetes Cluster

Master Node

The Master Node is the heart of the Kubernetes architecture, responsible for the global management of the cluster. It makes decisions about the cluster (such as scheduling applications) and detects and responds to cluster events (like starting a new pod when a deployment’s replicas field is unsatisfied).

Control Plane Overview

The Control Plane is the collection of processes that control Kubernetes nodes; this is where all task assignments originate. It comprises the components that run on the master node, typically in the kube-system namespace. The Control Plane’s main function is to maintain the desired state of the cluster, as defined through the Kubernetes API.

API Server, Scheduler, and Controller Manager

  • API Server: The API Server is a key component and serves as the front end for the Kubernetes control plane. It is the only Kubernetes component that talks directly to etcd, the cluster’s shared data store.
  • Scheduler: The Scheduler watches for newly created Pods with no assigned node and selects a node for them to run on.
  • Controller Manager: This component runs controller processes, which are background threads that handle routine tasks in the cluster.
  • etcd: Consistent and highly-available key value store used as Kubernetes’ backing store for all cluster data.

Worker Nodes

Worker nodes run the applications and workloads. Each worker node includes the services necessary to manage the lifecycle of Pods, managed by the control plane. A Kubernetes cluster typically has several worker nodes.

Understanding Node Agents

Node agents, or “kubelets,” are agents that run on each node in the cluster. They ensure that containers are running in a Pod and communicate with the Master Node, reporting back on the health of the host they run on.

Container Runtime (Docker)

The Container Runtime is the software responsible for running containers. Kubernetes is compatible with several runtimes, including containerd and CRI-O. (containerd is, in fact, the container runtime that Docker itself uses under the hood.)

Deploying Applications in Kubernetes

Deploying applications in Kubernetes is a structured and systematic process, involving several key concepts and tools. Understanding these elements is crucial for efficient and scalable application deployments.

Creating Deployments

Deployments are one of the most common methods for deploying applications in Kubernetes. They describe the desired state of an application, such as which images to use, how many replicas of the application should be running, and how updates should be rolled out. Deployments are managed through Kubernetes’ declarative API, which allows users to specify their desired state, and the system works to maintain that state.

Understanding Pods and ReplicaSets

  • Pods: A Pod is the basic execution unit of a Kubernetes application. Each Pod represents a part of a workload running on your cluster and typically contains one or more containers.
  • ReplicaSets: A ReplicaSet ensures that a specified number of pod replicas (identical copies of a Pod) are running at any given time. It is often used to guarantee the availability of a specified number of identical Pods.

YAML Configuration Files

Kubernetes objects are often defined and managed using YAML configuration files. These files provide a template for creating necessary components like Deployments, Services, and Pods. A typical YAML file for a Kubernetes deployment includes specifications like the number of replicas, container images, resource requests, and limits.
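
As a sketch of what such a file can look like, the Deployment below declares three replicas of a hypothetical web application with illustrative resource requests and limits; the names and image are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment              # hypothetical name
spec:
  replicas: 3                       # desired number of identical Pods
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25         # illustrative container image
          resources:
            requests:
              cpu: "100m"
              memory: "128Mi"
            limits:
              cpu: "250m"
              memory: "256Mi"
```

Applying a file like this (for example with `kubectl apply -f`) asks Kubernetes to create and then continuously maintain the declared state; the ReplicaSet generated by the Deployment keeps three matching Pods running.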

Scaling Applications

Scaling is a critical aspect of application deployment, ensuring that applications can handle varying loads efficiently.

  • Horizontal Scaling: This involves increasing or decreasing the number of replicas in a deployment. Kubernetes makes this easy through the ReplicaSet controller, which manages the number of pods based on the specifications in the deployment.
  • Vertical Scaling: This refers to adding more resources to existing pods, such as CPU and memory.

Auto Scaling

Kubernetes also supports automatic scaling, where the number of pod replicas in a deployment can be automatically adjusted based on CPU usage or other select metrics. This is achieved through the Horizontal Pod Autoscaler, which monitors the load and automatically scales the number of pod replicas up or down.
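
A minimal sketch of such an autoscaler, assuming the hypothetical `web-deployment` from the earlier example and the `autoscaling/v2` API, scaling on average CPU utilization:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa                      # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-deployment             # the Deployment to scale
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70     # add replicas above ~70% average CPU
```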

Service Discovery and Load Balancing in Kubernetes

In Kubernetes, service discovery and load balancing are fundamental for directing traffic and ensuring that applications are accessible and efficient. Understanding these concepts is key to managing Kubernetes applications effectively.

Services in Kubernetes

Services in Kubernetes are an abstraction that defines a logical set of Pods and a policy by which to access them. Services enable a loose coupling between dependent Pods. There are several types of Services in Kubernetes:

  • ClusterIP: This is the default Service type, which provides a service inside the Kubernetes cluster. It assigns a unique internal IP address to the service, making it only reachable within the cluster.
  • NodePort: Exposes the service on each Node’s IP at a static port. It makes a service accessible from outside the Kubernetes cluster by adding a port to the Node’s IP address.
  • LoadBalancer: This service integrates with supported cloud providers’ load balancers to distribute external traffic to the Kubernetes pods.
  • ExternalName: Maps the service to the contents of the externalName field (e.g., foo.bar.example.com), by returning a CNAME record with its value.

How Services Work

Services in Kubernetes work by continuously monitoring which Pods are healthy and ready to receive traffic. They direct requests to the appropriate Pods, ensuring high availability and effective load distribution.
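
For illustration, the ClusterIP Service below selects the Pods labeled `app: web` from the earlier Deployment sketch and load-balances cluster-internal traffic across them; the names are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service        # hypothetical name
spec:
  type: ClusterIP          # default type; reachable only inside the cluster
  selector:
    app: web               # routes to Pods carrying this label
  ports:
    - port: 80             # port exposed by the Service
      targetPort: 80       # port the containers listen on
```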

Ingress Controllers

Ingress Controllers in Kubernetes are used for routing external HTTP/HTTPS traffic to services within the cluster. They provide advanced traffic routing capabilities and are responsible for handling ingress, which is the entry point for external traffic into the Kubernetes cluster.

Routing Traffic to Services

Routing traffic in Kubernetes is primarily handled through services and ingress controllers. Services manage internal traffic, while ingress controllers manage external traffic.

TLS Termination

TLS Termination refers to the process of terminating the TLS connection at the ingress controller or load balancer level. This offloads TLS decryption from the application Pods, letting the ingress controller or load balancer handle encryption and decryption.
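
Assuming an ingress controller is installed in the cluster and a TLS certificate has been stored in a Secret, an Ingress along these lines routes external HTTPS traffic to the hypothetical `web-service` above and terminates TLS at the ingress layer; the hostname and Secret name are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress                  # hypothetical name
spec:
  tls:
    - hosts:
        - web.example.com            # placeholder hostname
      secretName: web-tls-cert       # Secret holding the TLS certificate and key
  rules:
    - host: web.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service    # Service defined earlier
                port:
                  number: 80
```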

Kubernetes Networking

Understanding networking in Kubernetes is crucial for ensuring efficient communication between containers, pods, and external services. Kubernetes networking addresses four primary requirements: container-to-container communication, pod-to-pod communication, pod-to-service communication, and external-to-service communication. For more information regarding Kubernetes networking, check out our “Deep Dive into Kubernetes Networking” white paper.

Container Networking

In Kubernetes, each Pod is assigned a unique IP address. Containers within a Pod share the same network namespace, meaning they can communicate with each other using localhost. This approach simplifies container communication and port management.
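
Because containers in a Pod share one network namespace, a sidecar can reach the main container over localhost. A small illustrative sketch (the image names are examples only):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar              # hypothetical name
spec:
  containers:
    - name: web
      image: nginx:1.25               # serves HTTP on port 80
      ports:
        - containerPort: 80
    - name: sidecar
      image: curlimages/curl:8.5.0    # illustrative helper image
      # the sidecar polls the web container over the shared localhost interface
      command: ["sh", "-c", "while true; do curl -s http://localhost:80 > /dev/null; sleep 30; done"]
```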

Pod-to-Pod Communication

Pods need to communicate with each other, often across different nodes. Kubernetes ensures that this communication is seamless, without the need for NAT. The network model of Kubernetes dictates that every Pod should be able to reach every other Pod in the cluster using their IP addresses.

Network Policies

Network policies in Kubernetes allow you to control the traffic between pods. They are crucial for enforcing a secure environment by specifying which pods can communicate with each other and with other network endpoints.
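
A hedged sketch of such a policy: it allows the Pods labeled `app: web` to accept traffic on port 80 only from Pods labeled `role: frontend` in the same namespace (the labels are illustrative, and the cluster’s network plugin must support NetworkPolicy):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-web        # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: web                       # the policy applies to these Pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend         # only these Pods may connect
      ports:
        - protocol: TCP
          port: 80
```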

Cluster Networking

For cluster-wide networking, Kubernetes supports various networking solutions like Flannel, Calico, and Weave. Each of these offers different features and capabilities:

  • Flannel: A simple and easy-to-set-up option that provides a basic overlay network for Kubernetes.
  • Calico: Offers more features including network policies for security.
  • Weave: Provides a resilient and simple network solution for Kubernetes, with built-in network policies.

These network plugins are responsible for implementing the Kubernetes networking model and ensuring pods can communicate with each other efficiently.

Networking Challenges and Solutions

Kubernetes networking can present challenges such as ensuring network security, managing complex network topologies, and handling cross-node communication. Solutions like network policies, service meshes, and choosing the right network plugin can help mitigate these challenges, ensuring a robust and secure network within the Kubernetes environment.

Managing Storage in Kubernetes

Effective storage management is a critical component of Kubernetes, enabling applications to store and manage data efficiently. Kubernetes offers various storage options, ensuring data persistence and consistency across container restarts and deployments.

Volumes and Persistent Storage

In Kubernetes, a volume is a directory, possibly with some data in it, that is accessible to the containers in a pod. Volumes solve the problem of data persistence in containers, which are otherwise ephemeral by nature. When a container restarts or is replaced, the data is retained and reattached to the new container, ensuring persistence. For more information, refer to the official Kubernetes documentation on storage.

Understanding Volume Types

Kubernetes supports several types of volumes:

  • EmptyDir: A simple empty directory used for storing transient data. It’s initially empty and all containers in the pod can read and write to it.
  • HostPath: Used for mounting directories from the host node’s filesystem into a pod.
  • NFS: Mounts an NFS share into the pod.
  • CSI (Container Storage Interface): A standard plugin interface that extends Kubernetes beyond its built-in volume types, allowing many additional storage systems to be integrated through external drivers; most modern storage integrations, including network file systems, are delivered this way.
  • PersistentVolume (PV): Allows a user to abstract the details of how the storage is provided and how it’s consumed.
  • ConfigMap and Secret: Used for injecting configuration data and secrets into pods.

Each type serves different use cases, from temporary scratch space to long-term persistent storage.

Data Persistence in Containers

Data persistence is key in containerized environments. Persistent Volumes (PVs) and Persistent Volume Claims (PVCs) are Kubernetes resources that allow data to persist beyond the lifecycle of a single pod, ensuring data is not lost when pods are cycled.

Storage Classes

StorageClasses in Kubernetes allow administrators to define different classes of storage, each with its own service level, backup policy, or disk type. This abstraction lets users request a certain type of storage without needing to know the details of the underlying infrastructure.

Provisioning and Management

Storage provisioning in Kubernetes can be either static or dynamic:

  • Static Provisioning: A cluster administrator creates several PVs. They carry the details of the real storage, which is available for use by cluster users.
  • Dynamic Volume Provisioning: Allows storage volumes to be created on-demand. This avoids the need for cluster administrators to pre-provision storage, and users can request storage dynamically when needed.

Dynamic provisioning is particularly useful in large-scale environments where managing individual storage volumes and claims can be cumbersome.
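
A sketch of how dynamic provisioning is typically wired together: a StorageClass describing a class of storage and a PersistentVolumeClaim that requests it. The provisioner shown is the AWS EBS CSI driver purely as an example; substitute whatever driver your cluster actually uses:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd                      # hypothetical class name
provisioner: ebs.csi.aws.com          # example CSI provisioner; cluster-specific
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim                    # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-ssd          # request the class defined above
  resources:
    requests:
      storage: 10Gi                   # size of the volume to provision on demand
```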

High Availability and Fault Tolerance in Kubernetes

Kubernetes is designed to offer high availability and fault tolerance for applications running in a cluster, making it an ideal platform for mission-critical applications. These features are achieved through a combination of replication, self-healing mechanisms, and automated management of containerized applications.

Replication Controllers

ReplicationControllers (and their modern successors, ReplicaSets managed through Deployments) are key Kubernetes components that ensure a specified number of pod replicas are running at any given time. This not only provides high availability but also aids in load balancing and scaling. If a pod fails, the controller replaces it, ensuring that the desired number of pods is always maintained.

Ensuring Redundancy

Redundancy is a fundamental aspect of high availability. Kubernetes achieves redundancy by running multiple instances of an application (pods), typically across different nodes. This approach ensures that if one instance fails, other instances can continue to serve user requests, minimizing downtime.

Handling Failures

Kubernetes is designed to handle failures gracefully. It continuously monitors the health of nodes and pods. If a node or pod fails, Kubernetes automatically reschedules the pods to healthy nodes, ensuring the application remains available and accessible.

Self-Healing

Self-healing is one of the most powerful features of Kubernetes. It automatically replaces or restarts containers that fail, reschedules containers when nodes die, kills containers that don’t respond to user-defined health checks, and doesn’t advertise them to clients until they are ready to serve.

Automatic Container Restart

Kubernetes’ ability to automatically restart containers that have failed is crucial for maintaining application continuity. This is managed by the kubelet on each node, which keeps track of the containers running on the node and restarts them if they fail.

Resilience in Kubernetes

Resilience in Kubernetes is not just about keeping applications running, but also about maintaining their performance levels. This involves strategies like rolling updates and canary deployments, which allow for updates and changes without downtime or service disruption.

Monitoring and Logging in Kubernetes

Effective monitoring and logging are essential for maintaining the health and performance of applications running in Kubernetes. They provide insights into the operational aspects of the applications and the Kubernetes clusters, enabling quick identification and resolution of issues.

Kubernetes Monitoring Tools

Kubernetes offers several monitoring tools that provide comprehensive visibility into both the cluster’s state and the applications running on it:

  • Prometheus: An open source monitoring and alerting toolkit widely used in the Kubernetes ecosystem. It’s known for its powerful data model and query language, as well as its ease of integration with Kubernetes.
  • Grafana: Often used in conjunction with Prometheus, Grafana provides advanced visualization capabilities for the metrics collected by Prometheus.
  • Heapster: Although now deprecated, it was traditionally used for cluster-wide aggregation of monitoring and event data.
  • cAdvisor: Integrated into the Kubelet, it provides container users with an understanding of the resource usage and performance characteristics of their running containers.

Application Insights

Gaining insights into applications running in Kubernetes involves monitoring key metrics such as response times, error rates, and resource utilization. These metrics help in understanding the performance and health of the applications and in making informed decisions for scaling and management.

Logging Best Practices

Effective logging practices in Kubernetes are crucial for troubleshooting and understanding application behavior. Best practices include:

  • Ensuring Log Consistency: Logs should be consistent and structured, making them easy to search and analyze.
  • Separation of Concerns: Different types of logs (such as application logs and system logs) should be separated to simplify management and analysis.
  • Retention Policies: Implementing log retention policies to balance between storage costs and the need for historical data for analysis.

Centralized Logging

In a distributed environment like Kubernetes, centralized logging is essential. It involves collecting logs from all containers and nodes and storing them in a central location. Tools like ELK Stack (Elasticsearch, Logstash, and Kibana) or EFK Stack (Elasticsearch, Fluentd, Kibana) are commonly used for this purpose. Centralized logging makes it easier to search and analyze logs across the entire cluster, providing a unified view of the logs.

Security in Kubernetes

Security is paramount in the realm of Kubernetes, as it deals with complex, distributed systems often running critical workloads. Kubernetes provides several mechanisms to enhance the security of applications and the cluster, and container security platforms such as SUSE NeuVector add an important additional layer of protection.

Role-Based Access Control (RBAC)

Role-Based Access Control (RBAC) in Kubernetes is a method for regulating access to computer or network resources based on the roles of individual users within an enterprise. RBAC allows admins to define roles with specific permissions and assign these roles to users, groups, or service accounts. This ensures that only authorized users and applications have access to certain resources.
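
A minimal RBAC sketch, following the pattern used in the Kubernetes documentation: a namespaced Role granting read-only access to Pods, bound to a placeholder user:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader                   # hypothetical role name
rules:
  - apiGroups: [""]                  # "" refers to the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]  # read-only operations
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: default
  name: read-pods
subjects:
  - kind: User
    name: jane                       # placeholder user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```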

User and Service Account Management

In Kubernetes, user accounts are for humans, while service accounts are for processes in pods. Managing these accounts involves creating and assigning appropriate permissions to ensure minimal access rights based on the principle of least privilege.

Authorization Policies

Kubernetes supports several types of authorization policies, such as Node, ABAC, RBAC, and Webhook. These policies control who can access the Kubernetes API and what operations they can perform on different resources.

Pod Security Standards

Pod Security Standards (PSS) are a set of predefined security configurations for Kubernetes pods that provide different levels of protection. Built into Kubernetes itself, PSS is designed to provide a clear framework for securing pods in a Kubernetes environment.

The Pod Security Standards are divided into multiple levels, typically including:

  • Baseline: A minimally restrictive level meant to prevent known privilege escalations and ensure that a pod does not compromise the security of the entire cluster. It’s suitable for applications that need a balance between security and flexibility.
  • Restricted: This level is more secure and includes policies that are recommended for sensitive applications. It restricts some default settings to harden the pods against potential vulnerabilities.
  • Privileged: This is the least restrictive level and allows for the most permissive configurations. It’s typically used for pods that need extensive privileges and is not recommended for most applications due to the potential security risks.

Each of these levels includes a set of policies and configurations that control aspects of pod security, such as:

  • Privilege escalation and permissions
  • Access to host resources and networking
  • Isolation and sandboxing of containers
  • Resource restrictions and quotas

The purpose of the Pod Security Standards is to make it easier for administrators and developers to apply consistent security practices across all applications in a Kubernetes environment. By adhering to these standards, organizations can help ensure that their containerized applications are deployed in a secure and compliant manner.
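
In current Kubernetes versions these standards are applied by labeling a namespace for the built-in Pod Security admission controller. A minimal sketch that enforces the Restricted level (the namespace name is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: payments                                     # hypothetical namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted   # reject non-compliant Pods
    pod-security.kubernetes.io/warn: restricted      # surface warnings to clients
    pod-security.kubernetes.io/audit: restricted     # record violations in audit logs
```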

Controlling Pod Behavior

Controlling pod behavior involves restricting what pods can do and what resources they can access. This includes managing resource usage, limiting network access, and controlling the use of volumes and file systems.

Security at the Pod Level

Security at the pod level can be enhanced by:

  • Using trusted base images for containers.
  • Restricting root access within containers.
  • Implementing network policies to control the traffic flow between pods.
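
The second point, restricting root access, can be expressed directly in a Pod spec through a security context. A hedged sketch with placeholder names and image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app                        # hypothetical name
spec:
  securityContext:
    runAsNonRoot: true                      # refuse to start containers as root
    seccompProfile:
      type: RuntimeDefault                  # apply the runtime's default seccomp profile
  containers:
    - name: app
      image: registry.example.com/app:1.0   # placeholder for a trusted base image
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]                     # drop all Linux capabilities
```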

Future Trends and Conclusion

The Future of Kubernetes

Kubernetes is continuously evolving, with a strong focus on enhancing its security features. Future trends may include more robust automated security policies, enhanced encryption capabilities, and improved compliance and governance tools.

Final Thoughts and Next Steps for Beginners

For beginners, the journey into Kubernetes can start with understanding the basic concepts, and gradually moving towards more complex security practices. It’s essential to stay updated with the latest Kubernetes releases and security recommendations.

Reach Out to SUSE for Help

For additional support and guidance, reaching out to experts at SUSE can provide valuable insights and assistance in navigating the Kubernetes landscape and ensuring a secure and efficient deployment.

NeuVector Releases v 5.3.0: Enhancing Network Security and Automation

Tuesday, 30 January, 2024

We are pleased to announce the release and general availability of NeuVector version 5.3.0! This release adds significant functionality to our market-leading container network security protections, as well as support for GitOps security as code automation. It also expands the breadth of platform compatibility with Arm64 and public cloud marketplace support.

 

Enhanced Zero Trust Network Protections for Kubernetes

This release provides valuable insights into external connections from a Kubernetes cluster. Developers frequently require external connections for API services, external data sources, or even internet-based open source updates. These external connections can be to internal private networks or internet services, and it can be difficult for operations and security teams to know which should be allowed and which are suspicious. With the prevalence of embedded malware, backdoors, and crypto mining, it is critical for external connections from a cluster to be properly identified and secured. In 5.3.0, NeuVector uses its layer 7 (application) inspection of all traffic, including the DNS resolution of fully qualified domain names (FQDNs) into IP addresses, to learn externally referenced hostnames/URLs and report on external connections. With this knowledge, security and operations teams can determine which connections should be allowed, which are suspicious, and which should be blocked. Allowed connections are then codified into the zero trust rules for external access. In addition, NeuVector can now be configured to allow ICMP traffic for monitoring or block ICMP-based attacks.

GitOps Automation for Security As Code

Kubernetes pipelines are highly dynamic and automated, and Kubernetes security policy must also be automated to support these pipelines. NeuVector 5.3.0 expands ‘Security as Code’ support by enabling the export of security policies (yaml-based manifests) to git repositories (GitHub) in the form of NeuVector custom resource definitions (CRDs). This extends an effort begun several years ago to enable all NeuVector security policies to be managed through CRDs. The use of a GitOps workflow for managing security manifests will continue to be expanded in the future through imports from git repositories as well.

Expanded Platform and Public Cloud Marketplaces

This release adds support for Arm64-based architectures running Linux containers and expands support for the Amazon EKS, Microsoft Azure, and Google Cloud marketplaces. Working closely with the technical team at Arm, NeuVector engineers have successfully ported and qualified NeuVector on the Arm64 platform. As an open source security project, NeuVector enables teams such as Arm’s to make significant contributions to the project. This brings full-lifecycle security to containers running on Arm, including bare metal and public cloud offerings such as Amazon EKS on Graviton.

 

What’s Next?

To see all the enhancements and bug fixes, please see the NeuVector 5.3.0 Release Notes.

Redefining Cloud Excellence

Thursday, 4 January, 2024

Meet Christine Puccio: Breaking Cloud Barriers at SUSE

In the three months since Christine Puccio joined SUSE as Global VP of Cloud, a huge wave of energy has swept across the cloud team. The team has gained extra visibility, and there is new excitement about the opportunity ahead for cloud.

I caught up with the transformation powerhouse behind this shift to ask her about herself and the opportunity she sees for cloud at SUSE.

 

Tell us a bit about yourself, Christine

“I’m a native Californian, living with my daughter in Oakland, on the opposite side of the Bay from San Francisco. 

During my career I have embraced different roles in sales, marketing, contracting, and partnering, across companies like Sun Microsystems, Red Hat, NGINX (which was acquired by F5 Networks), and JFrog. This work has allowed me to work with SAP, Microsoft, Google, AWS and other software companies, which has given me immense experience in technology innovation and driven my appetite to become a leader within the IT and cloud sector.”

What has been your most proud moment (in your career)?

I am very proud of the work I did at Red Hat leading the global SAP alliance. While at Sun, I led the SAP Americas market development strategy and learned the business. I translated that to Red Hat where the competition was steep as SUSE was dominating the SAP space! I then led the negotiation of RHEL for SAP HANA – bringing Red Hat’s business with SAP from thousands (USD) to multiple millions (USD) in less than a year. 

It’s interesting to now be at SUSE, realizing how large this business is and how happy I was just to get a small portion (at Red Hat).

 

What’s your approach to transformation?

“I find Geoffrey Moore’s book, Zone to Win: Organizing to Compete in an Age of Disruption, very inspiring. He talks a lot about four zones: performance, productivity, incubation, and transformation. But I think the most important thing I took from him is that a company can’t transform if people aren’t behind it.

A recent article about the value of open source software talks about its rise in terms of benefits versus costs. From my perspective, it’s not just technical benefits but people benefits. A developer who contributes to open source learns about other developers’ perspectives; they learn new skills, tools and technologies. Contribution also gives you confidence and an opportunity to build your reputation. The same can be said about tapping into a diverse workforce. It just makes good business sense.

SUSE’s power is adding value to open source and we need to amplify that

“Transformation is an opportunity to build and scale – bringing ideas, people, and technologies together is the perfect recipe for innovation. That is exactly the open source model. I am a builder at heart. The cloud team are also builders. SUSE’s power is adding value to open source and we need to amplify that. That’s how, together, we’re going to build a world-class cloud business for SUSE.”

 

What’s the cloud opportunity for SUSE?

“The opportunity is for SUSE to stand up a new business: marketplaces. Customers are continuing to move workloads to the cloud, and many ISVs have created a fast path to consuming software through the marketplace. So, marketplaces are now the place to be. However, the marketplace is just one component. We are looking at an entire end-to-end cloud GTM approach that includes our new marketplace business but also incorporates our first-party offers with SLES and SLES for SAP. SUSE is one of the few vendors in the industry where customers can choose from a variety of ways to purchase:

  • Directly on their consumption contract or 
  • Through the marketplace. 

We are designing our Go-to-Market (GTM) to capitalize on both motions. 

This strategy allows SUSE to be a top technology partner, where customers have committed spend with the cloud providers. With over $300B in unspent committed funds, we are now positioned to help customers design a platform to support their workloads in the cloud.

“The question for SUSE is ‘How do we participate?’ We are continuing our strategy of making it easy for customers to purchase SUSE solutions that tap into their committed spend with the cloud providers. We’re already a leading open source company – the opportunity is for us to continue to dominate the market, but increasingly through the marketplace channel.

“That’s not to say that GSIs and channel partners are any less important. They’re massively important to our growth with the cloud. We are working on some exciting programs that include incentives and co-sell opportunities across all three clouds, providing SUSE the opportunity to co-sell with all its partners. It’s about growth, not substitution.

 

Our key objective is to architect our offerings to transact through marketplaces

“Hyperscalers provide the platform and the marketplace – our key objective is to architect our offerings to transact through each marketplace. We need to think of a listing as a product, with its own lifecycle, and get Product, Alliances, Sales, Partners, Marketing and Operations aligned.

What will be the key factors that customers are looking for from the cloud in 2024?

I’ve found that there are four key themes that customers are thinking about.

  1. Security: Customers are looking for cloud providers that offer robust security measures to protect their data and applications from cyber threats. NeuVector Prime and Rancher Prime solutions offer real-time compliance, visibility, and protection for critical applications and data during runtime.
  2. Portability: Customers want to be able to move their applications and data between different cloud providers or back to on-premise infrastructure without significant disruption. NeuVector Prime and Rancher Prime solutions provide information on optimizing cross-cloud workload portability and scale in a consistent way that satisfies KPIs and addresses compliance and security requirements.
  3. Scalability: Customers require cloud infrastructure that can scale up or down quickly to meet changing business needs. NeuVector Prime and Rancher Prime solutions provide container-based solutions that offer automatic deployments, portability, scalability, multi-cloud capabilities, and openness.
  4. Speed: Customers expect cloud infrastructure that can deliver fast and reliable performance for their applications and services. NeuVector Prime and Rancher Prime solutions provide Linux kernel updates to mitigate security risks and vulnerabilities, allowing customers to keep their SUSE product patched and up to date.

 

What’s the most important thing in a successful transformation?

“People”.

“First you need executive support. Cloud is CEO-level driven at SUSE. It has cross-functional engagement and workstreams, with leads who make sure we meet the KPIs from each stream. I have enjoyed helping to build this structure and cadence. The success of cloud at SUSE is because of the leaders and sponsors who have supported the strategy. Our GM and SVP of Global Ecosystems has been critical in driving the importance of the business and providing true support for it. Her openness to changing models and looking at “the art of the possible” with our ecosystem partners has been a game changer. I know the SUSE leadership team has my back in what we’re doing, and that is such a winning recipe.

“We’ve already restructured the cloud sales team to align closer to our sales and partner teams with a specific focus on co-sell with the providers. We’re also hiring a few marketplace blackbelts to handle the multi-million-dollar custom deals through marketplaces. It’s really exciting! 

I encourage the team to take risks. Ask the question, “What would have to be true to make this happen?” To scale and grow a business, it takes risk. I’ve fallen down many times, but I’ve always learned in the process. You can’t grow unless you take risks. I see the team embracing our change with such confidence. I am in this with them and repeat over and over: success will always be at the end.

I want people to find and harness their own power and amplify it

“I love helping people in their careers. I’m a mentor as much as a leader. I want people to find and harness their own superpower and amplify it. In fact, changes were made on the team to do just that. If we continue to do that, individually and as a company, we will undoubtedly ignite a spectacular new future for SUSE in the cloud. That’s what really excites me.”

 

Join the conversation 

Send us an email at cloud@suse.com with your thoughts and/or questions. And, watch this space for more on SUSE’s cloud transformation journey and hear more about how we’re Getting Loud About Cloud.

 

Follow Christine Puccio on LinkedIn.

 

What is Container Security?

Wednesday, 25 October, 2023

Introduction to Container Security

Container security is a critical aspect of modern software development and deployment. At its core, container security is a comprehensive framework of policies, processes, and technologies designed to protect containerized applications and the infrastructure they run on. These security measures are applied throughout a container’s entire lifecycle, from creation to deployment and eventual termination.

Containers have revolutionized the software development world. Unlike traditional methods, containers offer a lightweight, standalone software package that includes everything an application requires to function: its code, runtime, system tools, libraries, and even settings. This comprehensive packaging ensures that applications can operate consistently and reliably across varied computing environments, from an individual developer’s machine to vast cloud-based infrastructures.

With the increasing popularity and adoption of containers in the tech industry, their significance in software deployment cannot be overstated. Given that they encapsulate critical components of applications, ensuring their security is of utmost importance. A security breach in a container can jeopardize not just the individual application but can also pose threats to the broader IT ecosystem. This is due to the interconnected nature of modern applications, where a vulnerability in one can have cascading effects on others.

Therefore, container security doesn’t just protect the containers themselves but also aims to safeguard the application’s data, maintain the integrity of operations, and ensure that unauthorized intrusions are kept at bay. Implementing robust container security protocols ensures that software development processes can leverage the benefits of containers while minimizing potential risks, thus striking a balance between efficiency and safety in the ever-evolving landscape of software development.

Why is Container Security Needed?

The importance of containers in modern application development and deployment cannot be overstated. However, their inherent attributes and operational dynamics present several unique security challenges.

Rapid Scale in Container Technology: Containers, due to their inherent design and architecture, have the unique capability to be instantiated, modified, or terminated in an incredibly short span, often just a matter of seconds. While this rapid lifecycle facilitates flexibility and swift deployment in various environments, it simultaneously introduces significant challenges. One of the most prominent issues lies in the manual management, tracking, and security assurance of each individual container instance. Without proper oversight and mechanisms in place, it becomes increasingly difficult to maintain and ensure the safety and integrity of the rapidly changing container ecosystem.

Shared Resources: Containers operate in close proximity to one another and often share critical resources with their host and fellow containers. This interconnectedness can become a security weak point: if a single container is compromised, it may expose the resources it shares with others to attack.

Complex Architectures: In today’s fast-paced software environment, the incorporation of microservices architecture with container technologies has emerged as a prevalent trend. The primary motivation behind this shift is the numerous advantages microservices offer, including impressive scalability and streamlined manageability. By breaking applications down into smaller, individual services, developers can achieve rapid deployments, seamless updates, and modular scalability, thereby making systems more responsive and adaptable.

Yet, these benefits come with a trade-off. The decomposition of monolithic applications into multiple microservices leads to a web of complex, intertwined networks. Each service can have its own dependencies, communication pathways, and potential vulnerabilities. This increased interconnectivity amplifies the overall system complexity, presenting challenges for administrators and security professionals alike. Overseeing such expansive networks becomes a daunting task, and ensuring their protection from potential threats or breaches becomes even more critical and challenging.

Benefits of Container Security

Reduced Attack Surface: Containers, when designed, implemented, and operated with best security practices in mind, have the capacity to offer a much-reduced attack surface. With meticulous security measures in place, potential vulnerabilities within these containers are significantly minimized. This careful approach to security not only ensures the protection of the container’s contents but also drastically diminishes the likelihood of falling victim to breaches or sophisticated cyber-attacks. In turn, businesses can operate with a greater sense of security and peace of mind.

Compliance and Regulatory Adherence: In a global ecosystem that’s rapidly evolving, industries across the board are moving towards standardization. As a result, regulatory requirements and compliance mandates are becoming increasingly stringent. Ensuring that container security is up to par is paramount. Proper security practices ensure that businesses not only adhere to these standards but also remain shielded from potential legal repercussions, costly penalties, and the detrimental impact of non-compliance on their reputation.

Increased Trust and Business Reputation: In today’s interconnected digital age, trust has emerged as a vital currency for businesses. With data breaches and cyber threats becoming more commonplace, customers and stakeholders are more vigilant than ever about whom they entrust with their data and business. A clear and demonstrable commitment to robust container security can foster trust and confidence among these groups. When businesses prioritize and invest in strong security measures, they don’t just ensure smoother business relationships; they also position themselves favorably in the market, bolstering the company’s overall reputation and standing amidst peers and competitors alike.

How Does Container Security Work?

Container security, by its very nature, is a nuanced and multi-dimensional discipline, ensuring the safety of both the physical host systems and the encapsulated applications. Spanning multiple layers, container security is intricately designed to address the diverse challenges posed by containerization.

Host System Protection: At the base layer is the host system, which serves as the physical or virtual environment where containers reside. Ensuring the host is secure means providing a strong foundational layer upon which containers operate. This includes patching host vulnerabilities, hardening the operating system, and regularly monitoring for threats. In essence, the security of the container is intrinsically tied to the health and security of its host.

Runtime Protection: Once the container is up and running, the runtime protection layer comes into play. This is crucial as containers often have short life spans but can be frequently instantiated. The runtime protection monitors these containers in real-time during their operation. It doesn’t just ensure that they function as intended but also vigilantly keeps an eye out for any deviations that might indicate suspicious or malicious activities. Immediate alerts and responses can be generated based on detected anomalies.

Image Scanning: An essential pre-emptive measure in container security is the image scanning process. Before a container is even deployed, the images on which they’re based are meticulously scanned for any vulnerabilities, both known and potential. This scanning ensures that only images free from vulnerabilities are used, ensuring that containers start their life cycle on a secure footing. Regular updates and patches are also essential to ensure continued security.

Network Segmentation: In a landscape where multiple containers often interact, the potential for threats to move laterally is a concern. Network segmentation acts as a strategic traffic controller, overseeing and strictly governing communications between different containers. By isolating containers or groups of containers, this layer effectively prevents malicious threats from hopping from one container to another, thereby containing potential breaches.
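
In Kubernetes environments, this kind of segmentation is typically expressed through NetworkPolicy resources. The following is a minimal, hypothetical sketch (the namespace, labels and port are placeholders, and enforcement requires a CNI plugin that supports network policies) that allows a backend pod to accept traffic only from its frontend pods and blocks all other ingress:

---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend-only
  namespace: demo                    # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: backend                   # the policy applies to backend pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend          # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 8080                 # hypothetical application port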

 

What are Kubernetes and Docker?

Kubernetes: Emerging as an open-source titan, Kubernetes has firmly established itself in the realm of container orchestration. Designed originally by Google and now maintained by the Cloud Native Computing Foundation, Kubernetes has rapidly become the de facto standard for handling the multifaceted requirements of containerized applications. Its capabilities stretch beyond just deployment; it excels in dynamically scaling applications based on demand, seamlessly rolling out updates, and ensuring optimal utilization of underlying infrastructure resources. Given the pivotal role it plays in the modern cloud ecosystem, ensuring the security and integrity of Kubernetes configurations and deployments is paramount. When implemented correctly, Kubernetes can bolster an organization’s efficiency, agility, and resilience in application management.

Docker: Before the advent of Docker, working with containers was often considered a complex endeavor. Docker changed this narrative. This pioneering platform transformed and democratized the world of containers, making it accessible to a broader range of developers and organizations. At its core, Docker empowers developers to create, deploy, and run applications encapsulated within containers. These containers act as isolated environments, ensuring that the application behaves consistently, irrespective of the underlying infrastructure or platform on which it runs. Whether it’s a developer’s local machine, a testing environment, or a massive production cluster, Docker ensures the application’s behavior remains predictable and consistent. This level of consistency has enabled developers to streamline development processes, reduce “it works on my machine” issues, and accelerate the delivery of robust software solutions.

In summary, while Kubernetes and Docker serve distinct functions, their synergistic relationship has ushered in a new era in software development and deployment. Together, they provide a comprehensive solution for building, deploying, and managing containerized applications, ensuring scalability, consistency, and resilience in the ever-evolving digital landscape.

 

Container Security Best Practices

The exponential rise in the adoption of containerization underscores the importance of robust security practices to shield applications and data. Here’s a deep dive into some pivotal container security best practices:

Use Trusted Base Images: The foundation of any container is its base image. Starting your container journey with images from reputable, trustworthy repositories can drastically reduce potential vulnerabilities. It’s recommended to always validate the sources of these images, checking for authenticity and integrity, to ensure they haven’t been tampered with or compromised.

Limit User Privileges: A fundamental principle in security is the principle of least privilege. By running containers with only the minimum necessary privileges, the attack surface is significantly reduced. This practice ensures that even if a malicious actor gains access to a container, their ability to inflict damage or extract sensitive information remains limited.
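
In Kubernetes, least privilege can be expressed directly in a pod’s securityContext. The snippet below is an illustrative sketch (the pod name and image are placeholders) that runs the container as an unprivileged user, makes the root filesystem read-only, drops all Linux capabilities and forbids privilege escalation:

---
apiVersion: v1
kind: Pod
metadata:
  name: least-privilege-demo              # hypothetical pod name
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0 # placeholder image
      securityContext:
        runAsNonRoot: true                # refuse to start if the image runs as root
        runAsUser: 1000                   # run as an unprivileged UID
        allowPrivilegeEscalation: false   # block setuid-style privilege escalation
        readOnlyRootFilesystem: true      # mount the container filesystem read-only
        capabilities:
          drop: ["ALL"]                   # drop every Linux capability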

Monitor and Log Activities: Continuous monitoring of container activities is the cornerstone of proactive security. By keeping an eagle-eyed vigil over operations, administrators can detect anomalies or suspicious patterns early. Comprehensive logging of these activities, paired with robust log analysis tools, provides a valuable audit trail. This not only aids in detecting potential security threats but also assists in troubleshooting and performance optimization.

Container technology has heralded a revolution in application deployment and management. Yet, as with any technological advancement, mistakes in its implementation can expose systems to threats. Let’s delve into some commonly overlooked container security pitfalls:

Ignoring Unneeded Dependencies: The allure of containers lies in their lightweight and modular nature. Ironically, one common oversight is bloating them with unnecessary tools, libraries, or dependencies. A streamlined container is inherently safer since each additional component increases the potential attack surface. By limiting a container to only what’s essential, one reduces the avenues through which it can be compromised. It’s always recommended to regularly audit and prune containers to ensure they remain lean and efficient.

Using Default Configurations: Out-of-the-box settings are often geared towards ease of setup rather than optimal security. Attackers are well aware of these default configurations and often specifically target them, hoping that administrators have overlooked this aspect. Avoid this pitfall by customizing and hardening container configurations. This not only makes the container more secure but also can enhance its performance and compatibility with specific use cases.

Not Scanning for Vulnerabilities: The dynamic nature of software means new vulnerabilities emerge regularly. A lack of regular and rigorous vulnerability scanning leaves containers exposed to these potential threats. Implementing an automated scanning process ensures that containers are consistently checked for known vulnerabilities, and appropriate patches or updates are applied in a timely manner.

Ignoring Network Policies: Containers often operate within interconnected networks, communicating with other containers, services, or external systems. Without proper network policies in place, there’s an increased risk of threats moving laterally, exploiting one vulnerable container to compromise others. Implementing and enforcing stringent network policies is essential. These policies govern container interactions, defining who can communicate with whom, and under what circumstances, thus adding a robust layer of protection.

 

How SUSE Can Help

SUSE offers a range of solutions and services to help with container security. Here are some ways SUSE can assist:

Container Security Enhancements: SUSE provides tools and technologies to enhance the security of containers. These include Linux capabilities, seccomp, SELinux, and AppArmor. These security mechanisms help protect containers from vulnerabilities and unauthorized access.
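
As a hedged illustration of how two of these mechanisms surface in Kubernetes, the sketch below applies the container runtime’s default seccomp profile and, on nodes where AppArmor is enabled, the default AppArmor profile. The pod name, container name and image are placeholders, and the annotation shown is the pre-Kubernetes-1.30 syntax:

---
apiVersion: v1
kind: Pod
metadata:
  name: hardened-demo                     # hypothetical pod name
  annotations:
    # AppArmor via the pre-1.30 annotation; requires an AppArmor-enabled node
    container.apparmor.security.beta.kubernetes.io/app: runtime/default
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault                # filter syscalls with the runtime's default seccomp profile
  containers:
    - name: app
      image: registry.example.com/app:1.0 # placeholder image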

Securing Container Workloads in Kubernetes: SUSE offers solutions such as Kubewarden to secure container workloads within Kubernetes clusters. This includes using Pod Security Admission (PSA) to define security policies for pods and secure container images themselves. 
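
Pod Security Admission itself is configured by labeling namespaces (Kubewarden adds its own policy resources on top). As a minimal sketch with a hypothetical namespace name, the labels below enforce the restricted profile and additionally surface warnings:

---
apiVersion: v1
kind: Namespace
metadata:
  name: payments                                    # hypothetical namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted  # reject pods that violate the restricted profile
    pod-security.kubernetes.io/warn: restricted     # also warn clients about violations
    pod-security.kubernetes.io/warn-version: latest # evaluate warnings against the latest profile version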

SUSE NeuVector: SUSE NeuVector is a container security platform designed specifically for cloud-native applications running in containers. It provides zero-trust container security, real-time inspection of container traffic, vulnerability scanning, and protection against attacks.

DevSecOps Strategy: SUSE emphasizes the importance of adopting a DevSecOps strategy, where software engineers understand the security implications of the software they maintain and managers prioritize software security. SUSE supports companies in implementing this strategy to ensure a high level of security in day-to-day applications.

By leveraging SUSE’s expertise and solutions, organizations can enhance the security of their container environments and protect their applications from vulnerabilities and attacks.

 

Business and operational security in the context of Artificial Intelligence

Tuesday, 17 October, 2023

This is a guest blog by Udo Würtz, Fujitsu Fellow, CDO and Business Development Director of Fujitsu’s European Platform Business. Read more about Udo, including how to contact him, below.

 

Deploying AI systems in an organization requires significant investments in technology, talent, and training. There is a fear that the expected ROI (return on investment) will not materialize, especially if the deployment does not meet business needs.

This is where a reference architecture like the AI Test Drive comes into play. It allows companies to test the feasibility and return on investment of AI solutions in a controlled environment before committing to significant investments. AI Test Drive thus addresses not only technical risks, but also commercial risks, enabling companies to make informed decisions.

The field of data science is rapidly evolving, and many professionals are looking for a reliable platform to effectively evaluate AI applications. However, such architectures must support a range of cutting-edge technologies. So let’s examine each technology component and its importance in this context.

  1. Platform and Cluster Management with SUSE Rancher:

Kubernetes has become the gold standard for container orchestration. Rancher, a comprehensive Kubernetes management tool, supports the operations and scalability of AI models. It allows the management of Kubernetes clusters across multiple cloud environments, simplifying the roll-out and management of AI applications.

  2. Hyper-convergence with Harvester:

In contemporary AI environments, which are usually cloud native environments, the capacity for hyper-convergence—integrating computation, storage, and networking into one solution—is invaluable. Harvester offers this capability, leading to enhanced efficiency and scalability for AI applications.

  3. Computational Power through Intel:

Intel technologies, notably the Intel® Xeon® Scalable processors, are fine-tuned for AI applications. Additional features like Intel® Deep Learning Boost accelerate deep learning tasks. In particular, the 4th generation includes dedicated AI accelerators on the chip, which sets this processor apart from previous generations and delivers remarkable performance. In a project involving vehicle detection, Gen 3 achieved inference at around 30 frames per second, which was already very good; Gen 4 reached over 5,000 frames per second thanks to the accelerators inside the chip.

  4. Storage Solutions with NetApp:

Data is the core of AI. NetApp provides efficient storage solutions specially designed to store and process massive datasets, which is crucial for AI projects.

  5. Parallel Processing with NVIDIA:

The parallel processing capability that NVIDIA GPUs bring to the table is invaluable in AI applications where large datasets must be processed simultaneously. 

  6. Network Infrastructure by Juniper:

The backbone of every AI platform is its networking. Juniper delivers advanced network solutions ensuring efficient, bottleneck-free data traffic flow. This is vital in AI settings where there are demands for low latency and high bandwidth.

Now You Can Evaluate Your AI Projects Practically & Technically:

The Fujitsu AI Test Drive amalgamates tried-and-true technologies into a cohesive platform, granting data scientists the ability to evaluate their AI projects both pragmatically and technically. By accessing such deep technological resources, users can pinpoint the tools and infrastructure that best align with their unique AI challenges.

Share your idea, and we will share knowledge and resources.

What is your vision for a business model that fully exploits the possibilities of innovative IT concepts? Do you already have a vision that you are implementing concretely? Or are you still missing the resources needed to get from idea to realization, such as technical expertise, budget or sufficient test capacity?

We’re pleased to introduce the Fujitsu Lighthouse Initiative, a special program designed to foster prototyping and drive technological endeavors, ensuring businesses harness the full potential of emerging technologies. The initiative isn’t just about gaining support for your digital innovation and prototyping projects; it’s a pathway to joint project realization. Selected projects can benefit from a project support pool of €100,000, used in line with each project’s unique requirements. Together, we will leverage Fujitsu’s resources, expertise, and vast ecosystem to turn visionary ideas into tangible outcomes.

Register today for the Fujitsu Lighthouse Initiative.

 

Related infographic

About the Author:

Udo Würtz is Chief Data Officer (CDO) of Fujitsu’s European Platform Business. In this role, he advises customers at C level (CIO, CTO, CEO, CDO, CFO) on strategies, technologies and new trends in the IT business. Before joining Fujitsu, he worked for 17 years as CIO for a large retail company and later for a Cloud Service Provider, where he was responsible for the implementation of secure and highly available IT architectures. Subsequently, he was appointed by the Federal Ministry of Economics and Technology as an expert for the Trusted Cloud Program of the Federal Government in Berlin. Udo Würtz is intensively involved in Fujitsu’s activities in the fields of artificial intelligence (AI), container technologies and the Internet of Things (IoT) and, as a Fujitsu Fellow, gives lectures and live demos on these topics. He also runs his own YouTube channel on the subject of AI.

Advancing Technology Innovation: Join SUSE + Intel® at SAP TechEd Bangalore, Nov 2-3

Monday, 9 October, 2023

The SAP TechEd conference in Bangalore will be here before you know it and, as always, SUSE will be there. This time we are joined by our co-sponsor and co-innovation partner Intel. Come to the booth and learn why SUSE and Intel are the foundation preferred by SAP customers. We will have experts on hand who can talk in detail about ways to improve the resilience and security of your SAP infrastructure or how you can leverage AI in your SAP environment.

SAP TechEd 2023, Nov 2-3, is the premier SAP tech conference for technologists, engineers, and developers. If you need a more detailed discussion with SUSE’s and Intel’s technical experts in a private one-on-one setting, send an email to sapalliance@suse.com. Briefly state what you’d like to discuss, and we’ll make sure we have the right people available to help address your needs. There are a limited number of time slots, and meetings are reserved on a first-come, first-served basis, so please book early.

Be sure to add these presentations to your agenda:

  • Cybersecurity Next Steps – Confidential Computing

November 3, 2023, 16:00, location: L3

Data breaches cost companies millions of dollars every year. No customer wants their workloads compromised. Customers need the highest levels of data privacy to innovate, build, and securely operate their applications, especially in public cloud deployments. Confidential computing is a new security approach that encrypts workloads while they are being processed. Join the Intel and SUSE session to learn about the importance and benefits of confidential computing in securing data. Let us show you how you can start your confidential computing journey.

  • Increase IT Resilience – Say Goodbye to Downtime

November 2, 2023, 14:30, location: L3

You rely on your mission-critical SAP systems like SAP S/4HANA to help you drive innovation for your business and your customers. What happens when these critical systems are down? Whether because of planned outages to fix security risks or unplanned interruption, it reduces productivity, revenues, and customer satisfaction, while potentially increasing costs. Join the Intel and SUSE session and learn how you can minimize your server downtime, maximize your service availability, and build a digital infrastructure that keeps SAP HANA running 24×7, 365 days a year.

Get a 20% discount on purchasing SUSE eLearning Subscription

Come to the SUSE booth and get a 20% discount on purchasing SUSE eLearning Subscription Silver and Gold from the SUSE Shop.

If you want to learn more about SUSE eLearning, please take a look at the SUSE eLearning pages.

All courses in the SUSE Linux Enterprise Server for SAP Applications Learning Path are available in the eLearning Subscription.

We are looking forward to meeting you in Bangalore.

 

Getting Started with Cluster Autoscaling in Kubernetes

Tuesday, 12 September, 2023

Autoscaling the resources and services in your Kubernetes cluster is essential if your system is going to meet variable workloads. You can’t rely on manual scaling to help the cluster handle unexpected load changes.

While cluster autoscaling certainly allows for faster and more efficient deployment, the practice also reduces resource waste and helps decrease overall costs. When you can scale up or down quickly, your applications can be optimized for different workloads, making them more reliable. And a reliable system is always cheaper in the long run.

This tutorial introduces you to Kubernetes’s Cluster Autoscaler. You’ll learn how it differs from other types of autoscaling in Kubernetes, as well as how to implement Cluster Autoscaler using Rancher.

The differences between the types of Kubernetes autoscaling

By monitoring utilization and reacting to changes, Kubernetes autoscaling helps ensure that your applications and services are always running at their best. You can accomplish autoscaling through the use of a Vertical Pod Autoscaler (VPA), Horizontal Pod Autoscaler (HPA) or Cluster Autoscaler (CA).

VPA is a Kubernetes resource responsible for managing individual pods’ resource requests. It’s used to automatically adjust the resource requests and limits of individual pods, such as CPU and memory, to optimize resource utilization. VPA helps organizations maintain the performance of individual applications by scaling up or down based on usage patterns.
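
As a brief illustration (not part of the original tutorial), a VPA object targeting a hypothetical deployment named my-app could look like the following; it assumes the VPA components are installed in the cluster:

---
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa                # hypothetical name
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                  # deployment whose pods get resized
  updatePolicy:
    updateMode: "Auto"            # apply the recommended requests/limits automatically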

HPA is a Kubernetes resource that automatically scales the number of replicas of a particular application or service. HPA monitors the usage of the application or service and will scale the number of replicas up or down based on the usage levels. This helps organizations maintain the performance of their applications and services without the need for manual intervention.
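
For comparison, here is a minimal HPA sketch, again targeting a hypothetical my-app deployment, that scales between two and ten replicas to keep average CPU utilization around 70 percent:

---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa                # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                  # deployment whose replica count is adjusted
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # target average CPU utilization across replicas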

CA is a Kubernetes resource used to automatically scale the number of nodes in the cluster based on the usage levels. This helps organizations maintain the performance of the cluster and optimize resource utilization.

The main difference between VPA, HPA and CA is that VPA and HPA are responsible for managing the resource requests of individual pods and services, while CA is responsible for managing the overall resources of the cluster. VPA and HPA are used to scale up or down based on the usage patterns of individual applications or services, while CA is used to scale the number of nodes in the cluster to maintain the performance of the overall cluster.

Now that you understand how CA differs from VPA and HPA, you’re ready to begin implementing cluster autoscaling in Kubernetes.

Prerequisites

There are many ways to demonstrate how to implement CA. For instance, you could install Kubernetes on your local machine and set up everything manually using the kubectl command-line tool. Or you could set up a user with sufficient permissions on Amazon Web Services (AWS), Google Cloud Platform (GCP) or Azure to play with Kubernetes using your favorite managed cluster provider. Both options are valid; however, they involve a lot of configuration steps that can distract from the main topic: the Kubernetes Cluster Autoscaler.

An easier solution is one that allows the tutorial to focus on understanding the inner workings of CA and not on time-consuming platform configurations, which is what you’ll be learning about here. This solution involves only two requirements: a Linode account and Rancher.

For this tutorial, you’ll need a running Rancher Manager server. Rancher is perfect for demonstrating how CA works, as it allows you to deploy and manage Kubernetes clusters on any provider conveniently from its powerful UI. Moreover, Rancher itself can be deployed on several popular providers.

If you are curious about a more advanced implementation, we suggest reading the Rancher documentation, which describes how to install Cluster Autoscaler on Rancher using Amazon Elastic Compute Cloud (Amazon EC2) Auto Scaling groups. However, please note that implementing CA is very similar across platforms, as all solutions leverage the Kubernetes Cluster API for their purposes, something that will be addressed in more detail later.

What is Cluster API, and how does Kubernetes CA leverage it?

Cluster API is an open source project for building and managing Kubernetes clusters. It provides a declarative API to define the desired state of Kubernetes clusters. In other words, Cluster API can be used to extend the Kubernetes API to manage clusters across various cloud providers, bare metal installations and virtual machines.

In comparison, Kubernetes CA leverages Cluster API to enable the automatic scaling of Kubernetes clusters in response to changing application demands. CA detects when the capacity of a cluster is insufficient to accommodate the current workload and then requests additional nodes from the cloud provider. CA then provisions the new nodes using Cluster API and adds them to the cluster. In this way, the CA ensures that the cluster has the capacity needed to serve its applications.

Because Rancher supports CA, and both RKE2 and K3s work with Cluster API, their combination offers the ideal solution for automated Kubernetes lifecycle management from a central dashboard. This is also true for any other cloud provider that offers support for Cluster API.


Implementing CA in Kubernetes

Now that you know what Cluster API and CA are, it’s time to get down to business. Your first task will be to deploy a new Kubernetes cluster using Rancher.

Deploying a new Kubernetes cluster using Rancher

Begin by navigating to your Rancher installation. Once logged in, click on the hamburger menu located at the top left and select Cluster Management:

Rancher's main dashboard

On the next screen, click on Drivers:

Cluster Management | Drivers

Rancher uses cluster drivers to create Kubernetes clusters in hosted cloud providers.

For Linode LKE, you need to activate the specific driver, which is simple. Just select the driver and press the Activate button. Once the driver is downloaded and installed, the status will change to Active, and you can click on Clusters in the side menu:

Activate LKE driver

With the cluster driver enabled, it’s time to create a new Kubernetes deployment by selecting Clusters | Create:

Clusters | Create

Then select Linode LKE from the list of hosted Kubernetes providers:

Create LKE cluster

Next, you’ll need to enter some basic information, including a name for the cluster and the personal access token used to authenticate with the Linode API. When you’ve finished, click Proceed to Cluster Configuration to continue:

Add Cluster screen

If the connection to the Linode API is successful, you’ll be directed to the next screen, where you will need to choose a region, Kubernetes version and, optionally, a tag for the new cluster. Once you’re ready, press Proceed to Node pool selection:

Cluster configuration

This is the final screen before creating the LKE cluster. In it, you decide how many node pools you want to create. While there are no limitations on the number of node pools you can create, the implementation of Cluster Autoscaler for Linode does impose two restrictions, which are listed here:

  1. Each LKE Node Pool must host a single node (called Linode).
  2. Each Linode must be of the same type (e.g., 2GB, 4GB or 6GB).

For this tutorial, you will use two node pools, one hosting 2GB RAM nodes and one hosting 4GB RAM nodes. Configuring node pools is easy; select the type from the drop-down list and the desired number of nodes, and then click the Add Node Pool button. Once your configuration looks like the following image, press Create:

Node pool selection

You’ll be taken back to the Clusters screen, where you should wait for the new cluster to be provisioned. Behind the scenes, Rancher is leveraging the Cluster API to configure the LKE cluster according to your requirements:

Cluster provisioning

Once the cluster status shows as active, you can review the new cluster details by clicking the Explore button on the right:

Explore new cluster

At this point, you’ve deployed an LKE cluster using Rancher. In the next section, you’ll learn how to implement CA on it.

Setting up CA

If you’re new to Kubernetes, implementing CA can seem complex. For instance, the Cluster Autoscaler on AWS documentation talks about how to set permissions using Identity and Access Management (IAM) policies, OpenID Connect (OIDC) Federated Authentication and AWS security credentials. Meanwhile, the Cluster Autoscaler on Azure documentation focuses on how to implement CA in Azure Kubernetes Service (AKS), Autoscale VMAS instances and Autoscale VMSS instances, for which you will also need to spend time setting up the correct credentials for your user.

The objective of this tutorial is to leave aside the specifics associated with the authentication and authorization mechanisms of each cloud provider and focus on what really matters: How to implement CA in Kubernetes. To this end, you should focus your attention on these three key points:

  1. CA introduces the concept of node groups, also called autoscaling groups by some vendors. You can think of these groups as the node pools managed by CA. This concept is important, as CA gives you the flexibility to set node groups that scale automatically according to your instructions while excluding other node groups, which remain under manual scaling.
  2. CA adds or removes Kubernetes nodes following certain parameters that you configure. These parameters include the previously mentioned node groups, their minimum size, maximum size and more.
  3. CA runs as a Kubernetes deployment, in which secrets, services, namespaces, roles and role bindings are defined.

The supported versions of CA and Kubernetes may vary from one vendor to another. The way node groups are identified (using flags, labels, environmental variables, etc.) and the permissions needed for the deployment to run may also vary. However, at the end of the day, all implementations revolve around the principles listed previously: auto-scaling node groups, CA configuration parameters and CA deployment.

With that said, let’s get back to business. After pressing the Explore button, you should be directed to the Cluster Dashboard. For now, you’re only interested in looking at the nodes and the cluster’s capacity.

The next steps consist of defining node groups and carrying out the corresponding CA deployment. Start with the simplest task and, following best practices, create a namespace in which to deploy the components that make up CA. To do this, go to Projects/Namespaces:

Create a new namespace

On the next screen, you can manage Rancher Projects and namespaces. Under Projects: System, click Create Namespace to create a new namespace as part of the System project:

Cluster Dashboard | Namespaces

Give the namespace a name and select Create. Once the namespace is created, click on the icon shown here (i.e., import YAML):

Import YAML

One of the many advantages of Rancher is that it allows you to perform countless tasks from the UI. One such task is to import local YAML files or create them on the fly and deploy them to your Kubernetes cluster.

To take advantage of this useful feature, copy the following code. Remember to replace <PERSONAL_ACCESS_TOKEN> with the Linode token that you created for the tutorial:

---
apiVersion: v1
kind: Secret
metadata:
  name: cluster-autoscaler-cloud-config
  namespace: autoscaler
type: Opaque
stringData:
  cloud-config: |-
    [global]
    linode-token=<PERSONAL_ACCESS_TOKEN>
    lke-cluster-id=88612
    defaut-min-size-per-linode-type=1
    defaut-max-size-per-linode-type=5
    do-not-import-pool-id=88541

    [nodegroup "g6-standard-1"]
    min-size=1
    max-size=4

    [nodegroup "g6-standard-2"]
    min-size=1
    max-size=2

Next, select the namespace you just created, paste the code in Rancher and select Import:

Paste YAML

A pop-up window will appear, confirming that the resource has been created. Press Close to continue:

Confirmation

The secret you just created is how Linode implements the node group configuration that CA will use. This configuration defines several parameters, including the following:

  • linode-token: This is the same personal access token that you used to register LKE in Rancher.
  • lke-cluster-id: This is the unique identifier of the LKE cluster that you created with Rancher. You can get this value from the Linode console or by running the command curl -H "Authorization: Bearer $TOKEN" https://api.linode.com/v4/lke/clusters, where $TOKEN is your Linode personal access token. In the output, the first field, id, is the identifier of the cluster.
  • defaut-min-size-per-linode-type: This is a global parameter that defines the minimum number of nodes in each node group.
  • defaut-max-size-per-linode-type: This is also a global parameter that sets a limit to the number of nodes that Cluster Autoscaler can add to each node group.
  • do-not-import-pool-id: On Linode, each node pool has a unique ID. This parameter is used to exclude specific node pools so that CA does not scale them.
  • nodegroup (min-size and max-size): This parameter sets the minimum and maximum limits for each node group. The CA for Linode implementation forces each node group to use the same node type. To get a list of available node types, you can run the command curl https://api.linode.com/v4/linode/types.

This tutorial defines two node groups, one using g6-standard-1 linodes (2GB nodes) and one using g6-standard-2 linodes (4GB nodes). For the first group, CA can increase the number of nodes up to a maximum of four, while for the second group, CA can only increase the number of nodes to two.

With the node group configuration ready, you can deploy CA to the respective namespace using Rancher. Paste the following code into Rancher (click on the import YAML icon as before):

---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
  name: cluster-autoscaler
  namespace: autoscaler
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-autoscaler
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
rules:
  - apiGroups: [""]
    resources: ["events", "endpoints"]
    verbs: ["create", "patch"]
  - apiGroups: [""]
    resources: ["pods/eviction"]
    verbs: ["create"]
  - apiGroups: [""]
    resources: ["pods/status"]
    verbs: ["update"]
  - apiGroups: [""]
    resources: ["endpoints"]
    resourceNames: ["cluster-autoscaler"]
    verbs: ["get", "update"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["watch", "list", "get", "update"]
  - apiGroups: [""]
    resources:
      - "namespaces"
      - "pods"
      - "services"
      - "replicationcontrollers"
      - "persistentvolumeclaims"
      - "persistentvolumes"
    verbs: ["watch", "list", "get"]
  - apiGroups: ["extensions"]
    resources: ["replicasets", "daemonsets"]
    verbs: ["watch", "list", "get"]
  - apiGroups: ["policy"]
    resources: ["poddisruptionbudgets"]
    verbs: ["watch", "list"]
  - apiGroups: ["apps"]
    resources: ["statefulsets", "replicasets", "daemonsets"]
    verbs: ["watch", "list", "get"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses", "csinodes"]
    verbs: ["watch", "list", "get"]
  - apiGroups: ["batch", "extensions"]
    resources: ["jobs"]
    verbs: ["get", "list", "watch", "patch"]
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["create"]
  - apiGroups: ["coordination.k8s.io"]
    resourceNames: ["cluster-autoscaler"]
    resources: ["leases"]
    verbs: ["get", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cluster-autoscaler
  namespace: autoscaler
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["create","list","watch"]
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["cluster-autoscaler-status", "cluster-autoscaler-priority-expander"]
    verbs: ["delete", "get", "update", "watch"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-autoscaler
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-autoscaler
subjects:
  - kind: ServiceAccount
    name: cluster-autoscaler
    namespace: autoscaler

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cluster-autoscaler
  namespace: autoscaler
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cluster-autoscaler
subjects:
  - kind: ServiceAccount
    name: cluster-autoscaler
    namespace: autoscaler

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: autoscaler
  labels:
    app: cluster-autoscaler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/port: '8085'
    spec:
      serviceAccountName: cluster-autoscaler
      containers:
        - image: k8s.gcr.io/autoscaling/cluster-autoscaler-amd64:v1.26.1
          name: cluster-autoscaler
          resources:
            limits:
              cpu: 100m
              memory: 300Mi
            requests:
              cpu: 100m
              memory: 300Mi
          command:
            - ./cluster-autoscaler
            - --v=2
            - --cloud-provider=linode
            - --cloud-config=/config/cloud-config
          volumeMounts:
            - name: ssl-certs
              mountPath: /etc/ssl/certs/ca-certificates.crt
              readOnly: true
            - name: cloud-config
              mountPath: /config
              readOnly: true
          imagePullPolicy: "Always"
      volumes:
        - name: ssl-certs
          hostPath:
            path: "/etc/ssl/certs/ca-certificates.crt"
        - name: cloud-config
          secret:
            secretName: cluster-autoscaler-cloud-config

In this code, you’re defining some labels; the namespace where you will deploy the CA; and the respective ClusterRole, Role, ClusterRoleBinding, RoleBinding, ServiceAccount and the Cluster Autoscaler Deployment itself.

The part that differs between cloud providers is near the end of the file, in the container’s command section. Several flags are specified there. The most relevant include the following:

  • --v, which sets the log verbosity level.
  • --cloud-provider; in this case, Linode.
  • --cloud-config, which points to a file mounted from the secret you created in the previous step.

Again, a cloud provider that requires a minimal number of flags was intentionally chosen. For a complete list of available flags and options, read the Cluster Autoscaler FAQ.

Once you apply the deployment, a pop-up window will appear, listing the resources created:

CA deployment

You’ve just implemented CA on Kubernetes, and now, it’s time to test it.

CA in action

To check to see if CA works as expected, deploy the following dummy workload in the default namespace using Rancher:

Sample workload

Here’s a review of the code:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox-workload
  labels:
    app: busybox
spec:
  replicas: 600
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        imagePullPolicy: IfNotPresent
        
        command: ['sh', '-c', 'echo Demo Workload ; sleep 600']

As you can see, it’s a simple workload that generates 600 busybox replicas.

If you navigate to the Cluster Dashboard, you’ll notice that the initial capacity of the LKE cluster is 220 pods. This means CA should kick in and add nodes to cope with this demand:

Cluster Dashboard

If you now click on Nodes (side menu), you will see how the node-creation process unfolds:

Nodes

New nodes

If you wait a couple of minutes and go back to the Cluster Dashboard, you’ll notice that CA did its job because, now, the cluster is serving all 600 replicas:

Cluster at capacity

This proves that scaling up works. But you also need to test scaling down. Go to Workload (side menu) and click on the hamburger menu corresponding to busybox-workload. From the drop-down list, select Delete:

Deleting workload

A pop-up window will appear; confirm that you want to delete the deployment to continue:

Deleting workload pop-up

By deleting the deployment, the expected result is that CA starts removing nodes. Check this by going back to Nodes:

Scaling down

Keep in mind that by default, CA will start removing nodes after 10 minutes. Meanwhile, you will see taints on the Nodes screen indicating the nodes that are candidates for deletion. For more information about this behavior and how to modify it, read “Does CA respect GracefulTermination in scale-down?” in the Cluster Autoscaler FAQ.
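
If you want to experiment with this behavior, the relevant timing is controlled by flags on the Cluster Autoscaler itself. For example, the command section of the deployment you created earlier could be extended like this (the five-minute values are purely illustrative):

          command:
            - ./cluster-autoscaler
            - --v=2
            - --cloud-provider=linode
            - --cloud-config=/config/cloud-config
            - --scale-down-unneeded-time=5m    # how long a node must be unneeded before removal (default 10m)
            - --scale-down-delay-after-add=5m  # how long to wait after a scale-up before evaluating scale-down (default 10m)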

After 10 minutes have elapsed, the LKE cluster will return to its original state with one 2GB node and one 4GB node:

Downscaling completed

Optionally, you can confirm the status of the cluster by returning to the Cluster Dashboard:

Cluster Dashboard

And now you have verified that Cluster Autoscaler can scale nodes up and down as required.

CA, Rancher and managed Kubernetes services

At this point, the power of Cluster Autoscaler is clear. It lets you automatically adjust the number of nodes in your cluster based on demand, minimizing the need for manual intervention.

Since Rancher fully supports the Kubernetes Cluster Autoscaler API, you can leverage this feature on major service providers like AKS, Google Kubernetes Engine (GKE) and Amazon Elastic Kubernetes Service (EKS). Let’s look at one more example to illustrate this point.

Create a new workload like the one shown here:

New workload

It’s the same code used previously, only in this case, with 1,000 busybox replicas instead of 600. After a few minutes, the cluster capacity will be exceeded. This is because the configuration you set specifies a maximum of four 2GB nodes (first node group) and two 4GB nodes (second node group); that is, six nodes in total:

Cluster Dashboard

Head over to the Linode Dashboard and manually add a new node pool:

Linode Dashboard

Add new node

The new node will be displayed along with the rest on Rancher’s Nodes screen:

Nodes

Better yet, since the new node has the same capacity as the first node group (2GB), it will be deleted by CA once the workload is reduced.

In other words, regardless of the underlying infrastructure, Rancher works with CA so that nodes are created or destroyed dynamically in response to load.

Overall, Rancher’s ability to support Cluster Autoscaler out of the box is good news; it reaffirms Rancher as the ideal Kubernetes multi-cluster management tool regardless of which cloud provider your organization uses. Add to that Rancher’s seamless integration with other tools and technologies like Longhorn and Harvester, and the result will be a convenient centralized dashboard to manage your entire hyper-converged infrastructure.

Conclusion

This tutorial introduced you to Kubernetes Cluster Autoscaler and how it differs from other types of autoscaling, such as Vertical Pod Autoscaler (VPA) and Horizontal Pod Autoscaler (HPA). In addition, you learned how to implement CA on Kubernetes and how it can scale up and down your cluster size.

Finally, you also got a brief glimpse of Rancher’s potential to manage Kubernetes clusters from the convenience of its intuitive UI. Rancher is part of the rich ecosystem of SUSE, the leading open Kubernetes management platform. To learn more about other solutions developed by SUSE, such as Edge 2.0 or NeuVector, visit their website.

What is Linux?

Monday, 4 September, 2023

Join us in this review of ‘What is Linux’, tracing its evolution, the significance of open source, and SUSE’s role in this journey. From humble origins to future aspirations, we spotlight the challenges and milestones that define Linux’s legacy, rooted firmly in the ethos of open-source collaboration.

Table of contents:

Introduction to Linux

Understanding Open Source

Linux Distributions

Linux internals

Linux in the Enterprise

Future Trends and Developments

SUSE, Linux and the Open-Source movement

Conclusion


 

Introduction to Linux

Linux is an open-source kernel, similar to Unix, that forms the base for various operating system distributions. While the term “Linux” is commonly used to refer both to the kernel and the entire operating system built around it, a more precise term is “GNU/Linux”. This name highlights the combination of the Linux kernel with the extensive tooling provided by the GNU Project, turning something that was just a kernel into a full-fledged operating system.

Linux stands as a testament to the power of community collaboration. It has significantly shaped the software landscape through the combined efforts of tens of thousands of developers, leading to a broad collection of software. For those interested in a detailed history, we recommend this Wikipedia entry.

Given the recent turbulence in the Linux landscape, it makes sense to take a step back and look at what is Linux: its beginnings, its core structure, and its main milestones.

Going over its journey and key achievements will give us a clearer idea of how to better deal with the challenges coming ahead, and the potential developments that could help shape it for the next 30 years.

Understanding Open Source

Beyond its technical excellence, one of the key achievements of the GNU/Linux project has been the widespread adoption of the open-source development model, where developers share code for others to use, rebuild, and redistribute.

The legal foundation for this approach is primarily provided by the GNU Public License and other OSI-compliant licenses. These licenses have nurtured a broad open ecosystem and facilitated the growth of a plethora of software solutions, fostering a vibrant and innovative ecosystem.

It’s vital to remember that a genuine commitment to open source is a core reason for the success of GNU/Linux compared to other projects. It has even surpassed its closed-source counterparts in success. This is a testament to countless individual contributors and companies. And it’s a legacy that we should safeguard, no matter what challenges lie ahead.

Companies built on open source should always remember their roots. They’ve stood on the shoulders of giants, so recent events, like HashiCorp’s sudden license change or Red Hat’s moves to severely limit access to their distribution source code, endanger the true spirit of open source.

Linux Distributions

The initial complexity of configuring and compiling a Linux kernel and adding on top all the necessary existing GNU tooling to build a running system (partitioning, file systems, command interpreters, GUI, …) led to the birth of the so-called Linux Distributions.

A Linux Distribution is a way of packaging all the required software, together with an installer and all the necessary life-cycle managing tooling to be able to deploy, configure and keep updated over time a GNU/Linux environment.

SLS is considered the first really comprehensive distribution, while Slackware, published in 1993, was the first distribution as we know them today. Founded in that very same year, SUSE was the first company to introduce an enterprise Linux distribution, back in 1995.

There’s a very interesting timeline covering the origins and evolution of all Linux distributions available on Wikipedia.

Linux internals

Linux Kernel

The Linux kernel is the central component of the Linux operating system, bridging software applications with the computer’s hardware. When a program or command is executed, it’s the kernel’s duty to interpret this request for the hardware. Its primary functions include:

  • Interfacing with hardware through modules and device drivers.
  • Managing resources like memory, CPU, processes, networking, and filesystems.
  • Serving as a conduit for applications and facilitating communications through system libraries, user space libraries or container engines.
  • Providing support for virtualization through hypervisors and virtual drivers
  • Overseeing foundational security layers of the OS.

By 2023, the Linux kernel comprises more than 30 million lines of code, distinguishing it as the largest open-source project in history, with the broadest collaboration base.

Command-Line Interface (CLI)

Echoing Unix’s design, from which Linux draws inspiration, the primary interaction mode with the OS is through the Command-Line Interface. Of the various CLIs available, BASH is the most widely adopted.

Graphical User Interface (GUI)

For those preferring visual interaction, Linux offers diverse GUIs. Historically rooted in the X-Windows system, there’s a noticeable shift towards modern platforms like Wayland. On top of these foundational systems, environments like GNOME, KDE, or XFCE serve as comprehensive desktop interfaces. They provide users with organized workspaces, application launching capabilities, window management, and customization options, all while integrating seamlessly with the core Linux kernel.

Linux Applications and Software Ecosystem

Understanding an operating system involves not only grasping its core mechanics but also the myriad applications it supports. For GNU/Linux, an intrinsic part of its identity lies in the vast array of software that’s been either natively developed for it or ported over. This wealth of software stands testament to the versatility and adaptability of Linux as an operating system platform.

  • Diverse Software Availability: Linux boasts a plethora of applications catering to almost every imaginable need, from office suites and graphics design tools to web servers and scientific computing utilities.
  • Package Managers and Repositories: One of the distinctive features of Linux is its package management systems. Tools like apt (used by Debian and Ubuntu), dnf (used by Red Hat-based systems), zypper (for SUSE/openSUSE), and more recently, universal packaging systems like flatpak, enable users to easily install, update, and manage software in a confined model that simplifies portability across distributions. These package managers pull software from repositories, which are vast online libraries of pre-compiled software and dependencies.
  • Emergence of Proprietary Software: While open-source software is the cornerstone of the Linux ecosystem, proprietary software companies have also recognized its value. They understand the importance of providing compatibility and packages for Linux platforms, further expanding the user base.

Linux in the Enterprise

Originally started as a hobby and a collection of research projects and tools, the potential of GNU/Linux as a platform for enterprise workloads rapidly became apparent. The closed nature of Unix, coupled with the fragmentation among Unix-based solutions back in the day, opened doors for Linux. This was particularly prominent as Linux exhibited its compatibility with widely adopted tools, such as GNU’s GCC, bash or the X-Windows system. Moreover, the dot-com bubble further spotlighted Linux’s prowess, with a surge in Linux-based services driving internet businesses that started to transform the IT landscape and set the roots for the Linux dominance in the server space that we can see today.

And how did it make its way from a hobbyist’s playground to a powerhouse in the enterprise world?

  • Open-Source Advantage: The open-source model became an invaluable asset in the corporate realm. As Linux showcased, the more developers and specialists that could access, review, and enhance the code, the higher the resultant software quality. This open-review mechanism ensured rapid identification and rectification of security concerns and software bugs.
  • Emergence of Enterprise Vendors: Enterprise solutions providers, notably Red Hat and SUSE, went beyond mere software distribution. These vendors began offering comprehensive support packages, ensuring businesses received consistent, reliable assistance. These packages, underpinned by enterprise-grade Service Level Agreements (SLAs), encompassed a wide range of offerings – from hardware and software certifications to implementation of security standards and legal assurances concerning software use.

Today, Linux reigns in the enterprise ecosystem. It is not only the go-to platform for a vast majority of new projects but also the backbone for the lion’s share of cloud-based services. This widespread adoption is a testament to Linux’s reliability, scalability, and adaptability to diverse business needs.

Despite having celebrated its 30th anniversary, Linux’s journey of expansion and adoption shows no signs of deceleration:

  • Containerization Surge: Modern software deployment has been revolutionized by containerization, with Linux playing a pivotal role. Containers package software with its required dependencies, ensuring consistent behavior across diverse environments. Linux underpins this movement, providing the foundation for technologies like Docker and Kubernetes.
  • Cloud Services Boom: The phenomenal growth of cloud services, powered by giants like AWS, Azure, and Google Cloud, has further solidified Linux’s dominance. This platform’s adaptability, security, and performance make it the choice foundation for these expansive digital infrastructures.
  • AI and Supercomputing: Linux stands at the forefront of cutting-edge technologies. Every significant AI initiative today relies on Linux. Furthermore, the top 500 supercomputers globally, including those currently under construction, are Linux-powered, showcasing its unmatched capabilities in high-performance computing.
  • IoT and Edge Computing: The proliferation of Internet of Things (IoT) devices and the growth of edge computing highlight another avenue where Linux shines. Its lightweight nature, modularity, and security features make it the preferred OS for these devices.

However, as the proverbial horizon brightens, challenges loom. While Linux has technically outpaced competitors and cemented itself as the de-facto standard for many new products and technologies, preserving its essence is crucial. The ethos of Linux and open-source, characterized by community, transparency, and collaboration, must be safeguarded. Initiatives like the Linux Foundation’s CNCF, which offers a blueprint for effective open source software development and governance far beyond just Linux, or the Open Enterprise Linux Association (OpenELA), are dedicated to keeping that spirit alive.

SUSE, Linux and the Open-Source movement

Introduction to SUSE

Originating as a German software company, SUSE has a long-standing history with Linux. It’s not only one of the earliest Linux distributions around but also one of the most prominent advocates of the open-source philosophy.

Features and Benefits

SUSE Linux Enterprise Server (SLES) stands out for its enterprise-grade support, extensive HW and SW certifications database, robustness, and commitment to security.

SLES can be used on desktops, servers, HPC, in the cloud, or on IoT/Edge devices. It works with many architectures like AMD64/Intel 64 (x86-64), POWER (ppc64le), IBM Z (s390x), and ARM 64-Bit (AArch64).

SUSE’s Position in the Enterprise World

In the enterprise world, SLES is recognized as a reliable, secure, and innovative Linux distribution. It’s at the core of many demanding environments and powers business-critical systems, including those for SAP and the world’s largest supercomputers.

SLES isn’t just a standalone product; it’s part of a broader enterprise solutions portfolio. This includes, among others, SUSE Manager for scalable Linux systems management, Rancher Prime as a Kubernetes management platform, and NeuVector for enterprise-level Zero-Trust security for cloud-native applications.

The Open-Open Movement

Beyond its product offerings, SUSE’s commitment to the “open-open” philosophy sets it apart from other players. It embraces not only open-source but also open communities and open interoperability. This ensures that SUSE’s solutions promote flexibility and freedom while remaining true to the principles of the open-source movement.

Evidence of this commitment is visible across our entire portfolio. For instance, SUSE Manager has the capability to manage and support up to 12 different Linux distributions. Similarly, Rancher Prime doesn’t only run on SLES; it’s also compatible with openSUSE Leap, RHEL, Oracle Linux, Ubuntu, and Microsoft Windows. Additionally, it’s interoperable with major managed Kubernetes providers and public cloud vendors such as GCP, Azure, AWS, Linode, DigitalOcean, and many more. This commitment extends beyond our product lineup. SUSE also financially supports and donates software to organizations like the CNCF, as seen with K3s, and leads initiatives like the Open Enterprise Linux Association.

These initiatives highlight SUSE’s commitment to delivering solutions that promote genuine openness and user choice, while avoiding the pitfalls of single-vendor ecosystems that claim to be “open-source” yet offer non-interoperable software stacks or restrict access to source code.

Conclusion

Over the past 30 years, this community effort has consolidated, transforming the way software is built, licensed, and distributed. Linux, now ubiquitous, continues to grow steadily, serving as the foundation for the latest IT solutions and technologies.

Now it’s time to transform how Linux distributions are built and delivered to achieve even higher levels of speed and flexibility. Initiatives like SUSE’s ALP Project aim to shape how Linux distributions will be built in the future, allowing for more use cases and scenarios, and a more flexible foundation to integrate the Linux kernel, along with the tooling and applications.

Want to join the open-open revolution? SUSE is growing and always looking for talent. Check all the open positions on our Jobs Website.