Advanced Monitoring and Observability Tips for Kubernetes Deployments

Monday, 28 August, 2023

Cloud deployments and containerization let you provision infrastructure as needed, meaning your applications can grow in scope and complexity. The results can be impressive, but the ability to expand quickly and easily makes it harder to keep track of your system as it develops.

In this type of Kubernetes deployment, it’s essential to track your containers to understand what they’re doing. You need to not only monitor your system but also ensure your monitoring delivers meaningful observability. The numbers you track need to give you actionable insights into your applications.

In this article, you’ll learn why monitoring and observability matter and how you can best take advantage of them. That way, you can get all the information you need to maximize the performance of your deployments.

Why you need monitoring and observability in Kubernetes

Monitoring and observability are often confused but worth clarifying for the purposes of this discussion. Monitoring is the means by which you gain information about what your system is doing.

Observability is a more holistic term, indicating the overall capacity to view and understand what is happening within your systems. Logs, metrics and traces are core elements. Essentially, observability is the goal, and monitoring is the means.

Observability can include monitoring as well as logging, tracing, continuous integration and even chaos engineering. Focusing on each facet gets you as close as possible to full coverage. If you’ve overlooked one of these areas, correcting that can improve your observability.

In addition, using black boxes, such as third-party services, can limit observability by making monitoring harder. Increasing complexity can also add problems. Your metrics may not be consistent or relevant if collected from different services or regions.

You need to work to ensure the metrics you collect are taken in context and can be used to provide meaningful insights into where your systems are succeeding and failing.

At a higher level, there are several uses for monitoring and observability. Performance monitoring tells you whether your apps are delivering quickly and what resources they’re consuming.

Issue tracking is also important. Observability can be focused on specific tasks, letting you see how well they’re doing. This can be especially relevant when delivering a new feature or hunting a bug.

Improving your existing applications is also vital. Examining your metrics and looking for areas you can improve will help you stay competitive and minimize your costs. It can also prevent downtime if you identify and fix issues before they lead to performance drops or outages.

Best practices and tips for monitoring and observability in Kubernetes

With distributed applications, collecting data from all your various nodes and containers is more involved than with a standard server-based application. Your tools need to handle the additional complexity.

The following tips will help you build a system that turns information into the elusive observability that you need. All that data needs to be tracked, stored and consolidated. After that, you can use it to gain the insights you need to make better decisions for the future of your application.

Avoid vendor lock-in

The major Kubernetes management services, including Amazon Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS) and Google Kubernetes Engine (GKE), provide their own monitoring tools. While these tools include useful features, be wary of becoming overdependent on any that belong to a particular platform, which can lead to vendor lock-in. Ideally, you should be able to change technologies and keep the majority of your metric-gathering system.

Rancher, a complete software stack, consolidates information from other platforms, helping to solve the issues that arise when companies use different technologies without integrating them seamlessly. It lets you capture data from a wealth of tools and pipe your logs and metrics to external management platforms, such as Grafana and Prometheus, meaning your monitoring isn’t tightly coupled to any other part of your infrastructure. This gives you the flexibility to swap parts of your system in and out without too much expense, and with platform-agnostic monitoring tools, you can replace other parts of your system more easily.

Pick the right metrics

Collecting metrics sounds straightforward, but it requires careful implementation. Which metrics do you choose? In a Kubernetes deployment, you need to ensure all layers of your system are monitored. That includes the application, the control plane components and everything in between.

CPU and memory usage are important but can be tricky to use across complex deployments. Other metrics, such as API response, request and error rates, along with latency, can be easier to track and give a more accurate picture of how your apps are performing. High disk utilization is a key indicator of problems with your system and should always be monitored.

At the cluster level, you should track node availability and how many running pods you have and make sure you aren’t in danger of running out of nodes. Nodes can sometimes fail, leaving you short.

Within individual pods, as well as resource utilization, you should check application-specific metrics, such as active users or parts of your app that are in use. You also need to track the metrics Kubernetes provides to verify pod health and availability.
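As a quick sketch of how to spot-check these numbers from the command line (assuming the metrics-server add-on is installed, which the kubectl top commands rely on):

# node readiness and resource headroom
kubectl get nodes
kubectl top nodes

# resource usage and non-running pods across all namespaces
kubectl top pods --all-namespaces
kubectl get pods --all-namespaces --field-selector=status.phase!=Running

Commands like these are useful for ad hoc checks; for ongoing observability, the same data should flow into your monitoring stack rather than be gathered by hand.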

Centralize your logging

[Diagram: multiple Kubernetes clusters piping data to Rancher, which sends it to a centralized logging store. Courtesy of James Konik]

Kubernetes pods keep their own logs, but having logs in different places is hard to keep track of. In addition, if a pod crashes, you can lose them. To prevent the loss, make sure any logs or metrics you require for observability are stored in an independent, central repository.

Rancher can help with this by giving you a central management point for your containers. With logs in one place, you can view the data you need together. You can also make sure it is backed up if necessary.
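As an illustrative sketch, the logging operator that Rancher’s logging feature builds on lets you declare a cluster-wide destination and route all logs to it. The resource kinds below come from that operator; the Loki URL and namespace are placeholder assumptions, so check the fields against the version you run:

apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterOutput
metadata:
  name: central-loki
  namespace: cattle-logging-system
spec:
  loki:
    # placeholder endpoint for the central log store
    url: http://loki.logging.example.internal:3100
    configure_kubernetes_labels: true
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterFlow
metadata:
  name: all-logs
  namespace: cattle-logging-system
spec:
  globalOutputRefs:
    - central-loki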

In addition to piping logs from different clusters to the same place, Rancher can also help you centralize authorization and give you coordinated role-based access control (RBAC).

Transferring large volumes of data will have a performance impact, so you need to balance your requirements with cost. Critical information should be logged immediately, but other data can be transferred on a regular basis, perhaps using a queued operation or as a scheduled management task.

Enforce data correlation

Once you have feature-rich tools in place, with an impressive range of metrics to monitor and elaborate methods for viewing them, it’s easy to lose sight of the reason you’re collecting the data in the first place.

Ultimately, your goal is to improve the user experience. To do that, you need to make sure the metrics you collect give you an accurate, detailed picture of what the user is experiencing and correctly identify any problems they may be having.

Lean toward this in the metrics you pick and in those you prioritize. For example, you might want to track how many people who use your app are actually completing actions on it, such as sales or logins.

You can track these by monitoring task success rates as well as how long actions take to complete. If you see a drop in activity on a particular node, that can indicate a technical problem that your other metrics may not pick up.

You also need to think about your alerting systems and pick alerts that spot performance drops, preferably detecting issues before your customers do.
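As a sketch of what such an alert can look like with the Prometheus Operator used by Rancher monitoring (the http_requests_total metric and its labels are assumptions that depend on how your application is instrumented, and rule selection depends on your Prometheus configuration):

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: app-error-rate
  namespace: cattle-monitoring-system
spec:
  groups:
    - name: app.rules
      rules:
        - alert: HighErrorRate
          # fire when more than 5% of requests return 5xx for 10 minutes
          expr: |
            sum(rate(http_requests_total{code=~"5.."}[5m]))
              / sum(rate(http_requests_total[5m])) > 0.05
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: More than 5% of requests are failing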

With Kubernetes operating in a highly dynamic way, metrics in different pods may not directly correspond to one another. You need to contextualize different results and develop an understanding of how performance metrics correspond to the user’s experience and business outcomes.

Artificial intelligence (AI)-driven observability tools can help with that, tracking millions of data points and determining whether changes are caused by the dynamic fluctuations that happen in massive, scaling deployments or whether they represent issues that need to be addressed.

If you understand the implications of your metrics and what they mean for users, then you’re best suited to optimize your approach.

Favor scalable observability solutions

As your user base grows, you need to deal with scaling issues. Traffic spikes, resource usage and latency all need to be kept under control. Kubernetes can handle some of that for you, but you need to make sure your monitoring systems are scalable as well.

Implementing observability is especially complex in Kubernetes because Kubernetes itself is complicated, especially in multi-cloud deployments. The complexity has been likened to an iceberg.

It gets more difficult when you have to consider problems that arise when you have multiple servers duplicating functionality around the world. You need to ensure high availability and make your database available everywhere. As your deployment scales up, so do these problems.

Rancher’s observability tools allow you to deploy new clusters and monitor them along with your existing clusters from the same location. You don’t need to work to keep up as you deploy more widely. That allows you to focus on what your metrics are telling you and lets you spend your time adding more value to your product.

Conclusion

Kubernetes enables complex deployments, but that means monitoring and observability aren’t as straightforward as they would otherwise be. You need to take special care to ensure your solutions give you an accurate picture of what your software is doing.

Taking care to pick the right metrics makes your monitoring more helpful. Avoiding vendor lock-in gives you the agility to change your setup as needed. Centralizing your metrics brings efficiency and helps you make critical big-picture decisions.

Enforcing data correlation helps keep your results relevant, and thinking about scalability ahead of time stops your system from breaking down when things change.

Rancher can help and makes managing Kubernetes clusters easier. It provides a vast range of Kubernetes monitoring and observability features, ensuring you know what’s going on throughout your deployments. Check it out and learn how it can help you grow. You can also take advantage of free, community training for Kubernetes & Rancher at the Rancher Academy.

Check it out: documentation.suse.com featuring new search!

Thursday, 20 July, 2023

The content of the following article has been contributed by Gayathri Gandaboyina, Web Developer at the SUSE documentation team.

 

 

The team behind documentation.suse.com have unveiled an exciting new search tool powered by Google’s Programmable Search Engine. You can access it directly from our documentation landing page. This enhanced functionality empowers you to not just search the documentation content, but filter search results according to the various documentation types we offer.

 

Moreover, you can narrow down your results by selecting specific product versions, categories, or file formats.  And you can even sort results by relevance or date. But let’s dig a little deeper and explore the benefits of the search options and how they boost your user experience on documentation.suse.com.

Easy filtering by documentation type

The recently implemented search functionality brings increased flexibility because, as mentioned, it allows you to filter search results based on your preferred documentation type. These are currently: Product Documentation, SUSE Best Practices (SBP), Technical Reference Documents (TRD), and Smart Docs.

 

To give you some examples: whether you require comprehensive insights into how to deploy any SUSE Linux Enterprise-based product, want to explore several SAP-specific best practices, delve into our new smart documentation articles, or access an in-depth reference configuration with a technology partner, the search tool presents a refined set of results tailored to your specific needs. This lets you quickly locate and access the information that is most relevant to you and, in consequence, helps you save valuable time and effort.

Refined search results by product and version

We are well aware that SUSE offers a broad range of products with many different versions. That’s why the new search functionality takes usability one step further: It enables you to filter results by a product and its specific version. The granular filtering ensures that you can easily locate documentation pertinent to your desired SUSE product and its corresponding version.

 

Whether it’s the latest release or a previous iteration, you can access the most up-to-date and version-specific information with ease.

Category filters for precise results

Recognizing the importance of proper categorization, the search on documentation.suse.com introduces category filters under Smart Docs, TRD, and SBP. These filters allow you to further refine your search results by technical topics within the specific documentation types. Whether you are looking for topics such as containerization, systems management, tuning and performance, deployment, or SAP applications, the category filters help you navigate through the vast repository of information.

 

Our intention behind this should be obvious: as we maintain roughly 55,000 pages of documentation (for supported products, and in English only!), we strive to present you with the most precise results for your search.

Flexible file format selection

We understand that you may have specific preferences for accessing documentation in different formats. Thus, the search on documentation.suse.com supports filtering by file format. You can refine your search results based on file types such as HTML, single-HTML, and PDF. This capability empowers you to access documentation in the format that best suits your requirements. The goal is to provide you with a seamless and customized user experience.

 

Improved UX

Speaking of user experience: with the introduction of the new search options, we have prioritized user-centricity and usability on documentation.suse.com. The ability to filter search results by predefined, useful parameters allows you to quickly locate the precise information you seek. With a focused list of results, you can bypass unnecessary content and access only the most relevant documents. Such an enhanced user experience usually translates into improved decision-making and increased productivity.

Check it out and share your feedback

The new search functionality on our documentation website, powered by Google’s Programmable Search Engine, marks a significant milestone in providing you with a much better documentation experience. By allowing you to filter search results based on documentation type, product, version, topic or category, and file format, we want to arm you with a powerful tool for efficient information retrieval. And we hope you will rely on documentation.suse.com as your trusted resource for all things SUSE documentation. As always, we would be grateful for feedback, so don’t hesitate to send your comments to doc-team@suse.com.

By the way: providing a search functionality on documentation.suse.com was one of the main requests we’ve received via our yearly documentation survey. You see, we take your feedback very seriously 😃! If you want to be heard, and if you want to help us to further enhance the documentation, please participate in our doc survey 2023 (hosted on the Qualtrics platform). Thank you 💚!

NeuVector by SUSE release 5.2 is now available!

Thursday, 6 July, 2023

I am pleased to announce the availability of version 5.2 of the NeuVector container security platform. This release packs a significant number of valuable enhancements and bug fixes for users requiring full-lifecycle security for their Kubernetes container pipeline and deployments. 

Vulnerability scanning and admission controls are critical NeuVector features for ensuring supply chain security. In NeuVector 5.2, users can require NeuVector to verify that images are signed by specific parties before they can be deployed into the production environment, through an integration with Sigstore/Cosign. Scanning enhancements include a pluggable Harbor adapter, a new CVE database lookup service, and scanning of Golang dependencies.
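For reference, the signing side of that workflow uses the standard Cosign CLI in your build pipeline; NeuVector’s admission control can then require a valid signature before an image is deployed (the image name below is a placeholder):

# one-time: generate a signing key pair
cosign generate-key-pair

# sign the image as part of your CI pipeline
cosign sign --key cosign.key registry.example.com/myapp:1.2.3

# verify the signature manually
cosign verify --key cosign.pub registry.example.com/myapp:1.2.3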

As we recently announced, NeuVector 5.2 supports monthly usage-based billing through the AWS Marketplace and will be followed by similar billing options through Google Cloud and Microsoft Azure. As we see increased public cloud and hybrid cloud usage for business-critical workloads, our customers are requesting convenient billing options for NeuVector subscriptions. 

We continue to enhance the security of NeuVector itself by supporting token-based access to the REST API (in addition to username/password), admin controls of user sessions and passwords, encrypted (TLS) SYSLOG alerts, and distinct least-privileged permissions for each of the NeuVector containers. 

NeuVector 5.2 also continues support of regulated and government use cases where customizable login banners, logos and agreements as well as classification headers and footers ensure proper access to NeuVector. 

Other enhancements for NeuVector paid subscription customers include: 

  • A new Vulnerability (CVE) Database lookup service.  This new SaaS service provides an online database lookup for any CVE in the latest NeuVector CVE database. It also provides views of vulnerabilities by OS, application, package or library as well as unfixed vulnerabilities. This service can be accessed by requesting it through the SUSE Customer Center (SCC) and the SUSE Collective service for customers.
  • Advanced performance and tuning guide and advice. Also available through SUSE Collective is a new performance-tuning asset to assist customers with properly sizing and tuning deployments of NeuVector in large clusters, edge (constrained resource), or heavy security feature usage environments. Support subscribers can also query the NeuVector support team for assistance as well as engage SUSE professional services for more complex deployments. 

We’re excited to bring these security enhancements to the Kubernetes and container community to help our users worldwide achieve the visibility, protection, and defence in depth needed for critical cloud-native workloads. 

NeuVector is available on Docker Hub, with full documentation available and Helm-based installations supported.

10 Reasons to Migrate from CentOS to openSUSE

Monday, 3 July, 2023

Migrate from CentOS to openSUSE

When it comes to choosing a reliable and powerful Linux distribution for your workloads, CentOS and openSUSE are both popular options. However, recent changes in the CentOS project have left many users seeking alternatives. In this blog post, we will explore ten compelling reasons why migrating from CentOS to openSUSE might be a smart move.

1. Stable and Reliable

openSUSE offers a rock-solid, enterprise-grade operating system that is known for its stability. The openSUSE Leap release, based on SUSE Linux Enterprise (SLE), provides a stable foundation with long-term support. This reliability is crucial for workloads requiring uninterrupted operation.

Are you satisfied with openSUSE Leap and considering a transition to SLE for enterprise support? Simply install a package, and your openSUSE Leap environment will be converted to SUSE Linux Enterprise Server.

Always looking for the latest and greatest? If you prefer a rolling release model, you can switch to openSUSE Tumbleweed without having to reinstall.

2. SUSE’s Added Value to openSUSE

Migrating to openSUSE leverages the strengths of SUSE in several key areas:

  • Quality Assurance: on top of openQA’s extensive suite of testing automation, openSUSE inherits SUSE’s rigorous hand-crafted QA, ensuring reliability and performance.

  • EAL4+ Certification: SUSE Linux Enterprise’s impressive EAL4+ security achievements carry over to openSUSE, which attains some of the highest security standards among community Linux distributions.

  • Hardware Compatibility: In a similar vein, SUSE Linux Enterprise customers benefit from an array of comprehensive hardware certifications, which in turn provides openSUSE users with a higher level of hardware support compared to other community Linux distributions.

  • Cloud Integration: SUSE’s affiliations with major Cloud Providers, such as AWS, Azure, and GCP, facilitate seamless deployment of openSUSE in the cloud.

3. Open Build Service (OBS)

openSUSE’s Open Build Service allows users to create and package software for various distributions. With OBS, you can easily customize and build packages tailored to your specific requirements. This flexibility makes it easier to maintain and deploy software across your workloads.

OBS is not just limited to software packages; it can also be employed to create customized versions of openSUSE Leap itself. This is especially useful for those who need tailored distributions targeting a wide range of architectures and platforms.
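As a brief sketch of the typical workflow, the osc command-line client is the usual way to work with OBS (home:example/hello and the openSUSE_Leap_15.5 build target are hypothetical names for illustration):

# check out a package from your OBS home project
osc checkout home:example/hello
cd home:example/hello

# build it locally against a Leap repository and architecture
osc build openSUSE_Leap_15.5 x86_64

# commit your changes so OBS rebuilds and publishes the package
osc commit -m "update to new upstream release"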

4. YaST Control Center

openSUSE’s YaST (Yet another Setup Tool) Control Center is a powerful, user-friendly system management tool that has been around since 1996! It provides an intuitive graphical interface for managing system settings, network configurations, software installations, and more. YaST simplifies the administration of your workloads, saving time and effort.

5. Btrfs File System

openSUSE supports the Btrfs file system, offering advanced features like snapshots, subvolumes, and RAID support. These capabilities enhance data integrity, simplify backup and recovery, and improve overall system performance.

Btrfs, combined with Snapper, allows for full system rollbacks and offers powerful ‘Time Machine-like’ features.
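A minimal sketch of that workflow with Snapper on a default installation (the snapshot number is an example; use one from your own list):

# list existing snapshots of the root configuration
sudo snapper list

# take a snapshot before a risky change
sudo snapper create --description "before config change"

# roll back to a known-good snapshot, then reboot into it
sudo snapper rollback 42
sudo reboot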

6. Transactional OS support

openSUSE Leap can also be configured to operate in a transactional model by using the Transactional Server system role during installation. The Transactional Server role is designed for environments where the system requires remote updates and management. This is especially useful in scenarios where you want to minimize the maintenance window and ensure high availability.
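On a system installed with the Transactional Server role, changes go into a new Btrfs snapshot and only take effect after a reboot; a minimal sketch:

# install a package into a new snapshot (the running system is untouched)
sudo transactional-update pkg install vim

# pull in all pending updates as a new snapshot
sudo transactional-update up

# boot into the new snapshot to activate the changes
sudo reboot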

7. Container and Virtualization Support

openSUSE stands out for its extensive support for containers and virtualization. Whether using Docker, containerd or Podman for containerization, or KVM and Xen for virtualization, openSUSE ensures streamlined and efficient handling of your workloads. Additionally, openSUSE MicroOS, optimized for containers and integrating Kubernetes, presents a specialized option for container orchestration. This broad support makes openSUSE Leap and its derivatives adaptable and versatile for various deployment strategies.

8. Rich Package Repository

openSUSE boasts a vast package repository with thousands of pre-built software packages. Whether you need development tools, databases, or specialized software, openSUSE’s repository is likely to have what you need. This extensive collection simplifies software installation and dependency management.

9. Flexibility and Customization

On top of the extensive number of packages included, openSUSE offers a high degree of flexibility and customization options. You can choose from various desktop environments, such as KDE Plasma or GNOME, allowing you to tailor the user experience to your preferences. Additionally, openSUSE provides fine-grained control over system configurations, enabling you to optimize your workloads.

Furthermore, openSUSE supports both Ansible and Salt, enabling you to automate your configurations with ease.
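For instance, a small hypothetical Ansible task file using the community.general.zypper module could keep a baseline package set installed on your openSUSE hosts:

---
# tasks/packages.yml (hypothetical example)
- name: Ensure base tooling is installed
  community.general.zypper:
    name:
      - git
      - htop
    state: present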

10. Active Community

And last but not least, openSUSE has a vibrant and supportive community of users and contributors. The community provides forums, mailing lists, and other platforms for sharing knowledge and seeking assistance. Joining this community allows you to tap into a vast pool of expertise and collaborate with like-minded individuals.


Migrating from CentOS to openSUSE brings numerous benefits, including stability, SUSE’s support to the community, powerful system management tools, advanced distro features, and access to a rich package repository. The active openSUSE community and easy migration tools further enhance the transition process. If you are seeking a robust and reliable Linux distribution for your workloads, you should consider openSUSE.

Looking for further insights into what you can achieve by migrating to SUSE and openSUSE? Check out the other blogs in this series:

 


Ready to experience the power and flexibility of openSUSE Leap?

Download openSUSE Leap Now!

It’s THE time: SUSE doc survey 2023 ‘call to action’

Thursday, 29 June, 2023

You might already have noticed: I never tire of emphasizing that documentation is an essential part of any product. This is especially true for enterprise software, which covers many use cases. Most software solutions only become usable thanks to detailed documentation. We’ve received direct feedback from you, our customers and partners, about how much you rely on documentation to get your tasks done. If you are responsible for a functioning IT environment and smooth processes, missing or poor documentation can impact your daily work and even the success of your business.

Shaping the docs

We hold ourselves to a high standard: to produce and deliver high-quality documentation and localization services. But we need your help: you use the docs for your daily work, and you know where they are lacking and where they are good enough. Thus, every year we, the SUSE (BCL) documentation team, conduct a global survey to gather your concrete feedback. Of course, we do not just “listen to” and “read about” your insights; we want to understand how to provide you with better services, because our goal is to continuously improve the documentation and make it easier for you to use.

Some actions taken

Thanks to your survey feedback from the last few years, we have already been able to improve a number of things. One major pain point with the docs, according to the survey findings, was ease of use. Another requirement was to focus more on explaining specific solutions and use cases. In 2021, we enhanced the online appearance of our documents for a better user experience: we introduced a three-column design and several useful additional functions for easier navigation. Additionally, we started to work on a new approach to documentation.

Smart new approach

Even if huge monolithic product manuals and guides are still useful and might never completely vanish, we realize that they are difficult to consult during your daily business. The SmartDocs are a project to address, among other things, the above-mentioned requirements for general ease of use and enhanced navigation. Our first priority lies with the user, not with the product. Thus we

  • try to write in a way that helps you complete tasks, instead of lecturing you about the product,
  • focus on our users’ needs, and not on the product’s features,
  • and aim to instruct you on how to solve a problem, and not just how to use the product interface.

Our goal is to move more and more towards modular documentation. Providing solution- or topic-based information (as we already do with the SUSE Best Practices and the Technical Reference Documentation) will speed up productivity.

Another important aspect is further improved navigation. Modularity enhances readability and makes topics easier for you to consume. And finally, small articles rank better in search engines than huge manuals, and they can easily be consumed by AI tools. We already see that our SmartDocs rank higher in Google search results. So, in the future, when you are searching for content or for help, you will hopefully have the relevant information quickly at your fingertips.

Striving for perfection

Well – let me be clear: we are not living in a perfect world, and there will never be perfection in the software business 😁. However, we will never stop pushing things forward, or trying to understand what information is vital to the usage of SUSE products and solutions, how the requirements evolve, and what we could do better in the future. To keep us going, we heavily depend on feedback from … YOU!

Sharing is caring

So, this call goes out to the whole ecosystem: please donate a small amount of your time and fill out the survey. Be assured, everyone will benefit. Let’s double-check with you some of the actions we’ve taken so far. Tell us where you still see gaps and shortcomings in the documentation. Feel free to let us know what you think is good 😉. And simply share your thoughts.

This is possible in many different languages. And it’s easy: choose from preset options, share additional comments if you feel like it, and provide the level of feedback that you are comfortable with. The survey is open for participation until the end of September. As a small token of our appreciation, we raffle off some SUSE gift packages. Now, just have a look yourself:

And don’t forget to have a lot of fun!

 

Disclaimer: The text at hand has not been reviewed by a native speaker. If you find typos or language mistakes, please send them to me (meike.chabowski@suse.com) – or if you like them, just keep them and feed them. 😁

Empowering retailers to innovate and scale fast

Thursday, 29 June, 2023

Guest blog – Inside the partnership: Flooid and SUSE


Retailers moving to the ‘store of the future’ require new levels of speed, flexibility and open innovation, without compromising on security or performance. In this hyper-connected world, success depends upon transferring real-time information seamlessly between the cloud, the store estate and tens of thousands of devices at the edge.

Flooid is equipping its customers to reap the benefits of this interoperable, information-rich, agile environment. And as part of our value-add approach, we ensure our clients have access to best-of-breed partners and technology, so they can stay ahead of both the competition and customer demands.

One of our key ecosystem partners is SUSE, a technology leader that enhances open innovation through Enterprise Linux solutions, edge-enablement and containerization services. SUSE Account Manager Jeff Mazar explains more.

Jeff, how do Flooid and SUSE work together?
“We’re a true partnership, one in which each technology provider enhances the performance of the other. Flooid is a leading unified-commerce platform provider, helping top-tier retail leaders to transform and innovate in the way that they sell to customers — often across multiple retail verticals and almost always across physical and digital channels. SUSE complements Flooid with open source capabilities, and an additional layer of security and resilience across critical systems, ensuring near-zero vulnerabilities during periods of rapid digital transformation.

Since 2008, we’ve been working with Flooid to help retailers change the way they operate in line with industry trends. Our partnership is truly global. We’ve worked together on projects for large retail supermarket, convenience and fast-moving consumer goods businesses in the UK, South Africa, Canada, the USA, and specialty businesses in Europe.”

How do retailers benefit from the partnership?
“The common strand is we enable retailers to evolve the way they sell and serve the customer, faster, more securely and with reduced cost and risk. Every ambitious retailer is trying new concepts and looking to scale new capabilities, such as checkout-less stores, artificial intelligence, self-service models and modern inventory techniques. With Flooid and SUSE, retailers benefit from a unified, adaptable commerce platform, and the open architecture that gives them the agility they need to try, test and scale new concepts.”

Can you explain a little more about the technology?
“In a nutshell, Flooid provides the omnichannel point-of-service (POS) technology, and SUSE provides the Linux operating system on which the POS runs. Business-critical workloads move across cloud and resilient on-premise devices while containerized applications maximize development agility — including in edge use cases, which are of course of increasing interest to ambitious retailers.”

What are the key challenges for retailers now?
“Customer expectations are evolving faster than ever. This necessitates new technical solutions, but to innovate at the speed required, retailers need a flexible core, rather than a series of monolithic systems that operate in silos. Going digital is not enough; because much customer experience differentiation will take place at the edge, retailers must adopt end-to-end infrastructure that can cope with real-time transactions and insights from tens or hundreds of thousands of endpoints. Business-critical applications, such as the POS platform or ERP, need to stay up and running around the clock. And in straitened post-pandemic conditions, retailers need to find a way to ‘get to great’, while driving down cost and ensuring no disruption to business as usual. That’s where Flooid and SUSE can help.”

Where does the cloud fit in?
“Retailers are at different stages of their cloud journeys. It’s true that the cloud aids flexibility, can drive down costs, and increase security, but perhaps its most striking feature is its ability to help retailers to innovate faster. With SUSE, Flooid and Flooid’s cloud partner Google, retailers can use open-source technologies to make the difference for their business. Containers, essential to running cloud-native applications for core to cloud to edge, can be easily managed with SUSE solutions. Kubernetes – the de facto standard for developing and running software – can also be utilized.”

What are SUSE’s core value propositions for the retail industry?
“With SUSE, you are a step closer to realizing your store of the future. As an open source edge solutions provider, we support retail customer experience transformation by enabling agile, secure, and optimized infrastructure. Our purpose is to give customers the freedom to innovate everywhere. We put the ‘open’ back in open source, giving customers the ability to tackle innovation challenges today and the freedom to evolve their strategy and solutions tomorrow.”

Any final words on the Flooid/SUSE partnership?
“Transformation is not just an option for retailers – it’s a necessity. From fixed POS systems to fully autonomous shopping experiences, each retailer is at a different stage on their transformation journey. But every retailer shares common goals – to delight consumers, drive efficiencies and open new revenue generating opportunities. All face a similar challenge; to securely manage end-to-end infrastructure while grappling with budget, legacy IT, and integration constraints. Flooid and SUSE believe and have demonstrated that transformation can be enhanced and accelerated by open innovation. The most successful retailers will be those that capitalize on interoperable solutions that let them harness modernization, no matter where it occurs. Successful retailers will be those that can unlock technological potential to create their store of the future — working with Flooid and SUSE puts them on the right path.”

https://www.suse.com/sector/retail/

 

Fleet: Multi-Cluster Deployment with the Help of External Secrets

Wednesday, 21 June, 2023

Fleet, also known as “Continuous Delivery” in Rancher, deploys application workloads across multiple clusters. However, most applications need configuration and credentials. In Kubernetes, we store confidential information in secrets. For Fleet’s deployments to work on downstream clusters, we need to create these secrets on the downstream clusters themselves.

When planning multi-cluster deployments, our users ask themselves: “I won’t embed confidential information in the Git repository for security reasons. However, managing the Kubernetes secrets manually does not scale as it is error prone and complicated. Can Fleet help me solve this problem?”

To ensure Fleet deployments work seamlessly on downstream clusters, we need a streamlined approach to create and manage these secrets across clusters.
A wide variety of tools exists for Kubernetes to manage secrets, e.g., the SOPS operator and the external secrets operator.

A previous blog post showed how to use the external-secrets operator (ESO) together with the AWS secret manager to create sealed secrets.

ESO supports a wide range of secret stores, from Vault to Google Cloud Secret Manager and Azure Key Vault. This article uses the Kubernetes secret store on the control plane cluster to create derivative secrets on a number of downstream clusters, which can be used when we deploy applications via Fleet. That way, we can manage secrets without any external dependency.

We will have to deploy the external secrets operator on each downstream cluster. We will use Fleet to deploy the operator, but each operator needs a secret store configuration. The configuration for that store could be deployed via Fleet, but as it contains credentials to the upstream cluster, we will create it manually on each cluster.
[Diagram: ESO using a Kubernetes namespace as a secret store]
As a prerequisite, we need to gather the control plane’s API server URL and certificate.

Let us assume the API server is reachable on “YOUR-IP.sslip.io”, e.g., “192.168.1.10.sslip.io:6443”. You might need a firewall exclusion to reach that port from your host.

export API_SERVER=https://192.168.1.10.sslip.io:6443

Deploying the External Secrets Operator To All Clusters

Note: Instead of pulling secrets from the upstream cluster, an alternative setup would install ESO only once and use PushSecrets to write secrets to downstream clusters. That way we would only install one External Secrets Operator and give the upstream cluster access to each downstream cluster’s API server.
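For completeness, a PushSecret in that alternative setup would look roughly like the sketch below; the downstream-store name is hypothetical, and the exact fields should be verified against the ESO documentation for your version:

apiVersion: external-secrets.io/v1alpha1
kind: PushSecret
metadata:
  name: push-database-credentials
spec:
  refreshInterval: 1m
  secretStoreRefs:
    # hypothetical store pointing at a downstream cluster's API server
    - name: downstream-store
      kind: SecretStore
  selector:
    secret:
      name: database-credentials
  data:
    - match:
        secretKey: password
        remoteRef:
          remoteKey: database-credentials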

Since we don’t need a git repository for ESO, we’re installing it directly to the downstream Fleet clusters in the fleet-default namespace by creating a bundle.

Instead of creating the bundle manually, we convert the Helm chart with the Fleet CLI. Run these commands:

cat > targets.yaml <<EOF
targets:
- clusterSelector: {}
EOF

mkdir app
cat > app/fleet.yaml <<EOF
defaultNamespace: external-secrets
helm:
  repo: https://charts.external-secrets.io
  chart: external-secrets
EOF

fleet apply --compress --targets-file=targets.yaml -n fleet-default -o - external-secrets app > eso-bundle.yaml

Then we apply the bundle:

kubectl apply -f eso-bundle.yaml

Each downstream cluster now has one ESO installed.

Make sure you use a cluster selector in targets.yaml, that matches all clusters you want to deploy to.

Create a Namespace for the Secret Store

We will create a namespace that holds the secrets on the upstream cluster. We also need a service account with a role binding to access the secrets. We use the role from the ESO documentation.

kubectl create ns eso-data
kubectl apply -n eso-data -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: eso-store-role
rules:
- apiGroups: [""]
  resources:
  - secrets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - authorization.k8s.io
  resources:
  - selfsubjectrulesreviews
  verbs:
  - create
EOF
kubectl create -n eso-data serviceaccount upstream-store
kubectl create -n eso-data rolebinding upstream-store --role=eso-store-role --serviceaccount=eso-data:upstream-store
token=$( kubectl create -n eso-data token upstream-store )

Add Credentials to the Downstream Clusters

We could use a Fleet bundle to distribute the secret to each downstream cluster, but we don’t want credentials outside of k8s secrets. So, we use kubectl on each cluster manually. The token was added to the shell’s environment variable so we don’t leak it in the host’s process list when we run:

for ctx in downstream1 downstream2 downstream3; do 
  kubectl --context "$ctx" create secret generic upstream-token --from-literal=token="$token"
done

This assumes the given kubectl contexts exist in our kubeconfig; you can check with kubectl config get-contexts.

Configure the External Secret Operators

We need to configure the ESOs to use the upstream cluster as a secret store. We will also provide the CA certificate to access the API server. We create another Fleet bundle and re-use the targets.yaml from before.

mkdir cfg
ca=$( kubectl get cm -n eso-data kube-root-ca.crt -o go-template='{{index .data "ca.crt"}}' )
kubectl create cm --dry-run=client upstream-ca --from-literal=ca.crt="$ca" -oyaml > cfg/ca.yaml

cat > cfg/store.yaml <<EOF
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: upstream-store
spec:
  provider:
    kubernetes:
      remoteNamespace: eso-data
      server:
        url: "$API_SERVER"
        caProvider:
          type: ConfigMap
          name: upstream-ca
          key: ca.crt
      auth:
        token:
          bearerToken:
            name: upstream-token
            key: token
EOF

fleet apply --compress --targets-file=targets.yaml -n fleet-default -o - external-secrets cfg > eso-cfg-bundle.yaml

Then we apply the bundle:

kubectl apply -f eso-cfg-bundle.yaml

Request a Secret from the Upstream Store

We create an example secret in the upstream cluster’s secret store namespace.

kubectl create secret -n eso-data generic database-credentials --from-literal username="admin" --from-literal password="$RANDOM"

On any of the downstream clusters, we create an ExternalSecret resource to copy from the store. This will instruct the External-Secret Operator to copy the referenced secret from the upstream cluster to the downstream cluster.

Note: We could have included the ExternalSecret resource in the cfg bundle.

kubectl apply -f - <<EOF
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: database-credentials
spec:
  refreshInterval: 1m
  secretStoreRef:
    kind: SecretStore
    name: upstream-store
  target:
    name: database-credentials
  data:
  - secretKey: username
    remoteRef:
      key: database-credentials
      property: username
  - secretKey: password
    remoteRef:
      key: database-credentials
      property: password
EOF

This should create a new secret in the default namespace. You can check the k8s event log for problems with kubectl get events.
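To verify that the copy worked, you can inspect the ExternalSecret’s status and decode the resulting secret:

kubectl get externalsecret database-credentials

kubectl get secret database-credentials -o jsonpath='{.data.username}' | base64 -d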

 

We can now use the generated secrets to pass credentials as helm values into Fleet multi-cluster deployments, e.g., to use a database or an external service with our workloads.
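For example, a bundle’s fleet.yaml can pull Helm values from such a secret via valuesFrom. This is a sketch: the chart name and secret name are placeholders, and the referenced secret key is expected to contain a YAML fragment of chart values rather than individual credential keys:

defaultNamespace: my-app
helm:
  chart: my-app
  valuesFrom:
    # placeholder secret whose values.yaml key holds chart values
    - secretKeyRef:
        name: database-credentials-values
        namespace: my-app
        key: values.yaml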

The path to a more secure SAP platform with a comprehensive guide for safeguarding your SAP

Thursday, 8 June, 2023

Organizations are concerned about the security of their SAP systems, as these are the backbone of their business operations. They recognize the importance of having a secure SAP environment, but they are often unsure where to begin. How should you design your SAP platform to guarantee a higher level of security? What are the key aspects that you need to consider? These are the crucial questions addressed in the new e-book “The Gorilla Guide to A Secure SAP Platform: How to Secure Your SAP Platform,” a comprehensive guide to safeguarding your SAP platform.

"A Secure SAP Platform Gorilla Guide" cover
The e-book is a comprehensive guide for safeguarding your SAP platform. It covers the pillars of a secure SAP platform and the importance of adhering to best practices. Explaining why it is essential to have hardened systems, utilize management and monitoring tools, and continuously validate best practices when deploying secure systems to ensure the SAP platform’s effectiveness and reliability. We’ll also introduce new concepts like the patching paradox, how it affects SAP systems and solutions to overcome it. For more information on the patching paradox, check out my blog post titled “Solving the patching paradox challenge: How important is it to enforce a security policy in an SAP environment.”

We elaborate on the key topics and explain how a leading provider of enterprise-grade Linux and open-source solutions, like SUSE, helps in each subject. This guide equips you with the knowledge and tools to fortify and protect your SAP infrastructure from potential threats.

The guide will provide actionable insights, expert advice, and real-world examples to empower you to secure your SAP platform effectively. With “The Gorilla Guide to A Secure SAP Platform,” you’ll gain the knowledge to mitigate risks, protect your critical business data, and ensure the integrity of your SAP operations.

We emphasize the importance of the SAP platform and show you solutions to issues as well as some tricks and tips. Of course, we include how to have an OS endorsed by SAP, like SUSE Linux Enterprise Server for SAP applications, which is the platform’s foundation.
Here’s a glimpse into the chapters covered in “The Gorilla Guide to A Secure SAP Platform”:

  • Chapter 1 – Introduction to SAP security: Gain a solid foundation in SAP security by understanding the security pyramid, exploring the various components of SAP security, and identifying the top threats to your SAP platform.
  • Chapter 2 – Building Blocks for a Secure SAP Platform: Learn about the crucial building blocks that form the foundation of a secure SAP platform, including platform security, compliance, and reliability, with insights and solutions from SUSE.
  • Chapter 3 – Keeping Up with Patches and Updates: Discover the importance of regular patching and updates, and establish effective policies to ensure the timely application of necessary fixes, including best practices provided by SUSE.
  • Chapter 4 – Vulnerability Management: Understand the difference between patches and vulnerabilities, explore the characteristics of vulnerabilities, and learn how to catalog and remediate them efficiently, leveraging SUSE’s expertise.
  • Chapter 5 – Improving on Limited Visibility: Enhance your visibility into SAP configurations, performance, and infrastructure changes to detect and address potential security gaps with insights from SUSE’s innovative solutions.
  • Chapter 6 – Secure SAP Best Practices: Implement best practices for minimizing the attack surface, deploying firewalls, enabling data encryption, and adopting effective patching and live patching strategies, leveraging SUSE’s comprehensive security solutions.
  • Chapter 7 – The Role of Management and Automation Tools: Discover the crucial role of management and automation tools in ensuring server lifecycle management, security management, and SAP performance monitoring, with insights and solutions provided by SUSE like SUSE Manager and projects like Trento.
  • Chapter 8 – Challenges of a Secure SAP Environment in Public Clouds: Navigate the specific challenges of securing an SAP environment in popular public cloud platforms such as Microsoft Azure, Amazon Web Services, and Google Cloud Platform, with guidance from SUSE’s cloud security solutions.
  • Chapter 9 – SAP Hardening Guidelines: Explore comprehensive hardening guidelines for SAP HANA® systems, including security settings, firewalls, disk encryption, package selection, and container security, with expertise and solutions from SUSE.
  • Chapter 10 – Next Steps: Wrap up your journey through this guide with practical recommendations for the next steps to strengthen your SAP platform security, including further collaboration with SUSE.

This guide will give you a better understanding of the importance of maintaining a secure SAP platform and how an OS like SUSE Linux Enterprise Server for SAP applications and management tools like SUSE Manager can help achieve this goal. Read more about how to have a more secure SAP platform at www.suse.com/secure-sap and download the guide here: more.suse.com/Secure_SAP_Guide

Get ready for SUSE Manager 4.3.6!

Tuesday, 9 May, 2023

Get Ready for SUSE Manager 4.3.6

In just a few weeks, SUSE Manager 4.3.6 will be available for you to download!  While this is not a major release, plenty of new features come with it. There are many reasons to upgrade to this new release as soon as possible.  Let’s talk about a few:

Freedom of choice

SUSE Manager is the only infrastructure management solution that manages more than 15 different Linux distributions.  And with this newest release, we will add to that list with the management of all the RHEL 9 variations (Rocky Linux, Alma Linux, RHEL 9) and of course support our very own, and latest, SUSE Linux Enterprise 15 SP5.  So if you are running your business critical workloads on SUSE Linux Enterprise Server (SLES), or even if you are not running any SLES at all, isn’t it time you had a single management solution for all your distributions that you can manage from a single console?  As we like to say SUSE Manager manages ANY Linux, ANY Where, at ANY Scale.  Take charge of your IT infrastructure with SUSE Manager and experience real freedom!

Security taken seriously

Security is no laughing matter and understandably so.  According to the IBM Cost of a Data Breach 2022 Report, the average cost of a data breach in the United States is $9.44M – almost twice the global amount of $4.35M.  And, when you consider more than 60% of breaches can be prevented with good patching habits, having an IT management solution that manages all your Linux infrastructure becomes invaluable.

SUSE Manager is that solution.  From automated CVE updates to the ability to scan against SCAP profiles using OpenSCAP, SUSE Manager practically automates security for you.  Use Prometheus and Grafana to monitor your entire infrastructure in real-time and visually display results.  Automate Patch Management and take advantage of Live Patching, in SUSE Linux Enterprise, to minimize downtime.  It’s all available in this single solution.

With this new release, we enable you to install Program Temporary Fixes (PTFs) directly from your SUSE Manager console.  That means no more manually patching servers – saving your admins valuable time.

If your company is taking security seriously, it’s time to upgrade to a better IT Management Solution – SUSE Manager.

Extended support

Who among us has the time to continually update a system?   Yet to get the support, you need to upgrade.  It’s quite a conundrum!  Well, check this out – The SUSE Manager 4.3 series will be supported until June 2025 (Which includes 4.3.6 and upcoming updates).  That means if you upgrade today, you’ll get access to our outstanding Technical Support for more than two years!  This also gives you time to experience our upcoming release of SUSE Manager 5.0, which will have many, new innovative features to make infrastructure management even simpler!

All the updates, patches, PTFs, and new features will be readily available to you with 4.3.6, and even more coming with the upcoming maintenance updates.  And if you are still running 4.2, it’s time to upgrade as support ends October 31, 2023.  And updating from 4.2 to 4.3 is super simple – something that can be completed in less than an hour (depending on the size of your database).

Stay tuned as we’ll have a blog dedicated to upgrading from SUSE Manager 4.2 to SUSE Manager 4.3 in the upcoming weeks.

Management in the cloud – coming soon

SUSE Manager has always been available in the cloud as a Bring Your Own Subscription (BYOS) offering. Our big news is that, with the introduction of SUSE Manager 4.3.7, we will be offering SUSE Manager as a PAYG offering in the AWS Marketplace (followed later by PAYG releases on Microsoft Azure and Google Cloud). This allows you to manage infrastructure from the cloud on your terms.

PAYG provides many benefits to you – including the ability for metered usage, scalability on your terms, and single billing from your cloud provider.  More on these options will be featured in a future blog.

Better together – SUMA and Services

Are you ready to get started, but looking for help?  We’ve got the perfect solutions for you.  SUSE is announcing packages to help you reduce your time to production!

Three levels of packages provide an option for every size of business, and every package includes SUMA subscriptions, SUSE consulting, and eLearning subscriptions. The Enterprise and SAP packages also include access to Premium Support Services, our white-glove technical support. Please see the overview below and contact your account executive for more information.

SUSE Manager Solutions

Starter

  • SUSE Manager: subscriptions to attach and manage a small footprint (30-50 machines). Suitable for small environments to make them more efficient, or as a lab/testing starting point in large environments.
  • SUSE Consulting: one week of consulting services to deploy and do the initial customization to fit your infrastructure and shorten the time to production.
  • SUSE Training: two annual subscriptions to SUSE eLearning, providing access to every technical training course SUSE offers.
  • SUSE Premium Support Services: not applicable.

Enterprise

  • SUSE Manager: subscriptions to attach and manage an important footprint (100+ machines).
  • SUSE Consulting: two weeks of consulting services to design and deploy features such as structured registration (activation keys), baselines (Dev/Test/Prod stages), base templates and preparation for system patching.
  • SUSE Training: two annual subscriptions to SUSE eLearning, providing access to every technical training course SUSE offers.
  • SUSE Premium Support Services: Bronze tier, giving you access to a named premium support engineer and white-glove technical support.

SAP

  • SUSE Manager: subscriptions capable of managing a typical SLES for SAP deployment (30+ machines).
  • SUSE Consulting: three weeks of consulting services providing advanced customization of the deployment (including SUSE Manager, SUSE Manager Monitoring and Trento), system registration and preparation of SAP-specific automation.
  • SUSE Training: two annual subscriptions to SUSE eLearning, providing access to every technical training course SUSE offers.
  • SUSE Premium Support Services: Bronze tier, giving you access to a named premium support engineer and white-glove technical support.

This is an exciting time for the SUSE Manager team.  As we get closer to the official launch, we’ll be publishing blog posts on some of the most interesting features of our new release.

And of course, stay tuned for more information on SUSE Manager 5.0 – the next generation of SUSE Manager.