SAP on Google Cloud Platform – Brunch and Learn Series

Thursday, 10 June, 2021

 

 

 

  • Are you moving your SAP workload to Google Cloud?
  • Are you thinking – where do I start?
  • How can you make certain your SAP environment in Google Cloud is resilient, highly available, and consistent across development, QA, and production?
  • Do you want the same stack on-premises as in Google Cloud?

We hear you!

I am delighted to offer “SAP Solutions on Google Cloud Brunch and Learn Series” to answer these questions and more.
In addition, I will demonstrate some of the SUSE automation and monitoring that comes “out-of-the-box” on Google Cloud Platform. This is a five-session series for SAP solutions operating on Google Cloud. I will discuss the following topics:

  • SAP Solutions on Google Cloud Overview: how to create and prepare a Google Cloud environment to host SAP workloads
  • SAP HANA High Availability Cluster: how to plan, deploy, and monitor an SAP HANA High Availability Cluster on Google Cloud using Google Cloud Deployment Manager automation
  • SAP HANA High Availability Cluster: how to plan, deploy, and monitor an SAP HANA High Availability Cluster on Google Cloud using the SUSE SAP Automation Platform
  • SAP NetWeaver High Availability Cluster: how to plan, deploy, and monitor an SAP NetWeaver High Availability Cluster on Google Cloud using the SUSE SAP Automation Platform
  • SAP S/4HANA High Availability Cluster: how to plan, deploy, and monitor an SAP S/4HANA High Availability Cluster on Google Cloud using the SUSE SAP Automation Platform

 

That is not everything! Moving to Google Cloud can be a bit challenging for some, so the first session will focus on the preparation to move to Google Cloud by discussing the following topics:

  • An overview of the SUSE-supported SAP solutions on Google Cloud.
  • Best practices to prepare an SAP environment on Google Cloud.
  • SUSE High Availability Innovation on Google Cloud for SAP environments.

We are keen to reach all of you, so we offer every session in two different time zones:

  • A friendly EMEA and Asia Pacific time zone session
  • A friendly North America and Latin America time zone session

Please pick the time that suits you best using the following registration links:

SUSE SAP Automation on Google Cloud

Tuesday, 8 June, 2021

 

We are happy to announce that SUSE SAP Automation v7.2.0 has been released with major enhancements and features for automating SAP solutions on Google Cloud.

For the first time, the SAP S/4HANA ENSA2 high availability deployment scenario is supported, alongside the existing NetWeaver ENSA1 high availability cluster deployment.

The SUSE engineering team has introduced the following beta components and features:

  • The GCP load balancer can be used as the Virtual IP implementation, which is Google’s officially recommended approach.
  • SAP deployments can be provisioned inside a protected network that is not exposed to the internet following SUSE and Google Best practices.

Bug fixes: more than 10 reported GCP-related bugs have been resolved.
For more info, please refer to https://github.com/SUSE/ha-sap-terraform-deployments/releases/tag/7.2.0
Happy deployment!

The Best Reason to move your SAP workloads to Google Cloud with SUSE

Friday, 22 January, 2021

 

There are many technical reasons to move your SAP workloads to Google Cloud Platform with SUSE. Google owns the network; in fact, 40% of all internet traffic uses its network at some point. Simply selecting a Confidential VM for your Compute Engine instance gives you an environment where your data remains encrypted in use, without changing your application! And you have the convenience of paying for only what you use. Running your workloads on the operating system SAP develops on, tests on, and uses in production can’t hurt either. Not to mention the deployment automation SUSE provides, along with the Live Patching and High Availability functions baked into our SAP offering. But these are not the best reasons to move your SAP workloads to Google Cloud Platform with SUSE.

The best reason is… Committed Use Discounts!

What do Committed Use Discounts mean to you, our joint SAP customers? Simply put, you save money! On both your Google infrastructure and your SUSE Linux Enterprise Server for SAP software. And it is easy to enroll in the discount: you simply sign up and commit to a one- or three-year term, with 30 days to change your mind. Google bills you monthly during the term for the infrastructure and SUSE software, with no up-front payment. You get CapEx pricing with OpEx convenience!

Here is a partial list of the benefits of using Committed Use Discounts:

  • CapEx pricing, OpEx convenience with no up-front costs
    • You save up to 64% on the SUSE software
  • Unified support to manage any issues – no guessing on who to call for help
  • Compliance is built in. No worrying about whether the software is installed on too many servers
  • Your SUSE subscription counts towards your Google spend commit

I think you will agree, Committed Use Discounts are the best reason to migrate your SAP workloads to Google Cloud Platform… and the technology rocks.

 

For more information, contact us at google@suse.com; we’d love to hear from you!

SUSE at Google Next OnAir

Sunday, 12 July, 2020

Co-branded chameleons
Welcome to SUSE “at” Google Next OnAir!

I must admit, folks here at SUSE were looking forward to Google Next in San Francisco: seeing customers, colleagues, and friends, and sharing in person how the SUSE relationship with Google has grown! We even had a networking breakfast planned for customers and partners. Since we are living in rather strange times of social distancing, temperature taking, and a heightened understanding of viral transmission, I will attempt to convey some of the SUSE–Google news through this blog, with links to more information. And you can always email us at google@suse.com with any questions. SUSE is excited about the progress made within the SUSE–Google partnership.

We have been hard at work to make SUSE offerings on Google Cloud Platform the best they can be. Our engineers have been collaborating with Google to make your deployment as smooth and resilient as possible. We used our longstanding co-innovation relationship with SAP (fun fact: SUSE runs over 90% of SAP HANA implementations, including SAP’s own, and over 70% of all SAP Linux deployments) to provide things like High Availability and Live Patching for your deployments. And SUSE Manager can be used to keep all your Linux servers updated, whether on your site or in Google Cloud – and that includes Linux distributions that are not SUSE green!

Highlights:

  • SUSE is the first technology partner included in Committed Use Discounts – CapEx pricing with OpEx convenience! Sign up to get discounted pricing without the upfront expense with one- and three-year committed use discounts.
  • SUSE High Availability Extension foundational stack for Google Cloud Platform is now available with its new Fencing Agent and a new Floating IP resource agent.
  • Check out Things to Consider When Migrating SAP to GCP to make sure you have everything covered in your SAP migration to GCP. Our ecosystem partner Managecore, which specializes in moving SAP workloads to GCP, created this with us.
  • SUSE business on Google Cloud continues to double year-over-year since 2015.

Giving back:

SUSE believes in the power of many and in giving back to the community. Recognizing that we live in crazy times, during the nine weeks of Next OnAir (or until the budget runs out), SUSE is running a Global Giving initiative with our SUSE–Google customers. Joint customers can receive $1,000 or $3,000 to give to a charity of their choice through Global Giving. Please complete the interest form, and we will contact you with details.

I close, hoping all is well with you, your family, and friends. COVID-19 has changed the way we connect but not the need for connection. Make someone’s day and reach out to say hello. We hope to see you at Next in 2021 – we will have lots of cute chameleons for you!

Here is the registration page, if you need to register for Google Next OnAir.

Google Cloud Kubernetes: Deploy Your First Cluster on GKE

Wednesday, 15 April, 2020

Google, the original developer of Kubernetes, also provides the veteran managed Kubernetes service, Google Kubernetes Engine (GKE).

GKE is easy to set up and use, but can get complex for large deployments or when you need to support enterprise requirements like security and compliance. Read on to learn how to take your first steps with GKE, get important tips for daily operations and learn how to simplify enterprise deployments with Rancher.

In this article you will learn:

  • What is Google Kubernetes Engine?
  • How GKE is priced
  • How to create a Kubernetes cluster on Google Cloud
  • GKE best practices
  • How to simplify GKE for enterprise deployments with Rancher

What is Google Kubernetes Engine (GKE)?

Kubernetes was created by Google to orchestrate its own containerized applications and workloads. Google was also the first cloud vendor to provide a managed Kubernetes service, in the form of GKE.

GKE is a managed, upstream Kubernetes service that you can use to automate many of your deployment, maintenance and management tasks. It integrates with a variety of Google cloud services and can be used with hybrid clouds via the Anthos service.

Google Cloud Kubernetes Pricing

Part of deciding whether GKE is right for you requires understanding the cost of the service. The easiest way to estimate your costs is with the Google Cloud pricing calculator.

Pricing for cluster management

Beginning in June 2020, Google will charge a cluster management fee of $0.10 per cluster per hour. This fee does not apply to Anthos clusters, however, and you do get one zonal cluster free. Billing is calculated on a per-second basis. At the end of the month, the total is rounded to the nearest cent.

Pricing for worker nodes

Your cost for worker nodes depends on which Compute Engine Instances you choose to use. All instances have a one-minute minimum use cost and are billed per second. You are billed for each instance you use and continue to be charged until you delete your nodes.

Creating a Google Kubernetes Cluster

Creating a cluster in GKE is a relatively straightforward process:

1. Setup
To get started you need to first enable API services for your Kubernetes project. You can do this from the Google Cloud Console on the Kubernetes Engine page. Select your project and enable the API. While waiting for these services to be enabled, you should also verify that you’ve enabled billing for your project.

2. Choosing a shell
When setting up clusters, you can use either your local shell or Google’s Cloud Shell. The Cloud Shell is designed for quick startup and comes preinstalled with the kubectl and gcloud CLI tools. The gcloud tool is used to manage cloud functions and kubectl is used to manage Kubernetes. If you want to use your local shell, just make sure to install these tools first.
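If you opt for your local shell, a minimal setup might look like the following sketch (it assumes the gcloud CLI is already installed; “my-sap-project” is a placeholder project ID):

```shell
# Install kubectl as a Cloud SDK component
gcloud components install kubectl

# Authenticate and select the project that will own the cluster
gcloud auth login
gcloud config set project my-sap-project
```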

3. Creating a GKE cluster

Clusters are composed of one or more masters and multiple worker nodes. When creating nodes, you use virtual machine (VM) instances which then host your applications and services.

To create a simple, one-node cluster, you can use the following command. Note, however, that a single-node cluster is not fit for production, so you should use it only for testing.

gcloud container clusters create CLUSTER_NAME --num-nodes=1

4. Get authentication credentials for the cluster

Once your cluster is created, you need to set up authentication credentials before you can interact with it. You can do so with the following command, which configures kubectl with your credentials.

gcloud container clusters get-credentials CLUSTER_NAME
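Once credentials are configured, a quick sanity check confirms that kubectl is talking to the new cluster:

```shell
# Show which cluster context kubectl is currently using
kubectl config current-context

# List the worker nodes; a healthy single-node cluster shows one Ready node
kubectl get nodes
```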

Google Cloud Kubernetes Best Practices

Once you’ve gotten familiar with deploying clusters to GKE, there are a few best practices you can implement to optimize your deployment. Below are a few practices to start with.

Manage resource use

Kubernetes is highly scalable, but this can become an issue if you scale beyond your available resources. To ensure that you are not creating too many replicas or allowing pods to consume too many resources, you can enforce resource request and limit policies. These policies help ensure that your resources are fairly distributed and can prevent issues due to overprovisioning.
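As a sketch of what such a policy looks like, the pod spec below (names and values are illustrative) declares both requests, which the scheduler reserves, and limits, which are enforced at runtime:

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: limits-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:          # reserved for scheduling decisions
        cpu: 250m
        memory: 128Mi
      limits:            # hard cap enforced by the kubelet
        cpu: 500m
        memory: 256Mi
EOF
```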

Avoid privileged containers

Privileged containers enable contained processes to gain unrestricted access to your host. This is because a privileged container’s uid is mapped to that of the host. To avoid the security risk that is created by these privileges, you should avoid operating containers in privileged mode whenever possible. You should also ensure that privilege escalation is not allowed.
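A minimal sketch of locking this down in a pod spec (the pod and image names are illustrative):

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: restricted-demo
spec:
  containers:
  - name: app
    image: nginx
    securityContext:
      privileged: false                # do not map the container to host root
      allowPrivilegeEscalation: false  # block setuid-style escalation
EOF
```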

Perform health checks

Once you reach the production stage, your Kubernetes deployment is likely highly complex and can be difficult to manage. Rather than waiting for something to go wrong and then trying to find it, you should perform periodic health checks.

Health checks are a way of verifying that your components are working as expected. These checks are performed with probes, like the readiness and liveness probes.
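As an illustrative sketch, the pod below wires both probe types to an HTTP endpoint (the image and paths are assumptions, not a prescribed setup):

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: web
    image: nginx
    livenessProbe:          # failing this restarts the container
      httpGet:
        path: /
        port: 80
      periodSeconds: 10
    readinessProbe:         # failing this removes the pod from service endpoints
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
EOF
```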

Containers should be stateless and immutable

While you can use stateful applications in Kubernetes, it is designed for use with stateless processes. Stateless processes do not include persistent memory and contained data only exists while your container does. For data to be retained, containers must be attached to external storage.

Ideally, your containers should be both immutable and stateless. This enables Kubernetes to smoothly take down or replace containers as needed, reattaching to external storage as needed.

The immutable aspect means that a container does not change during its lifetime. If you need to make changes, such as updates or configuration changes, you make the change as needed and then build a new image to deploy. There is an option to get around this for some configuration, however. Using ConfigMaps and Secrets, you can externalize your configuration. From there you can make changes without needing to rebuild your image after each change.
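For instance, configuration can live in a ConfigMap and be updated without touching the image (the names and keys below are hypothetical):

```shell
# Create the externalized configuration
kubectl create configmap app-config --from-literal=LOG_LEVEL=info

# Change a value later without rebuilding any container image
kubectl create configmap app-config --from-literal=LOG_LEVEL=debug \
  --dry-run=client -o yaml | kubectl apply -f -
```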

Use role-based access control (RBAC)

RBAC is an efficient and effective way to manage permissions within your deployment. In GKE, it is applied as an authorization method that is layered on the Kubernetes API.

With RBAC, all access is denied by default and it is up to you to define granular permissions to individual users. Keep in mind, any user roles you create only apply to one namespace. To work across namespaces, you need to define cluster roles.
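A minimal sketch of a namespaced role and its binding (the namespace, role, and user names are made up for illustration):

```shell
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
EOF

# Grant the role to a single user, scoped to the dev namespace only
kubectl create rolebinding pod-reader-binding \
  --role=pod-reader --user=jane@example.com --namespace=dev
```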

Simplify monitoring

Monitoring and logging events is a requirement for proper management of your applications. Commonly, monitoring in Kubernetes is done through Prometheus, a built-in integration that enables you to automatically discover services and pods.

Prometheus works by exposing metrics to an HTTP endpoint. These metrics can then be ingested by the monitoring tool of your choice. For example, you can use a tool like Stackdriver, which includes its own Prometheus version.
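To see what Prometheus would ingest, you can inspect an instrumented pod’s metrics endpoint by hand (the pod name and port here are assumptions):

```shell
# Forward the pod's metrics port to the local machine
kubectl port-forward pod/my-app-pod 8080:8080 &

# Fetch the plain-text metrics Prometheus scrapes
curl -s http://localhost:8080/metrics | head
```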

Simplifying GKE for Enterprise Deployments with Rancher

Rancher is a Kubernetes management platform that simplifies setup and ongoing operations for Kubernetes clusters at any scale. It can help you run mission critical workloads in production and easily scale up Kubernetes with enterprise-grade capabilities.

Rancher enhances GKE if you also manage Kubernetes clusters on different substrates—including Amazon’s Elastic Kubernetes Service (EKS) or the Azure Kubernetes Service (AKS), on-premises or at the edge. Rancher lets you centrally configure policies on all clusters. It provides the following capabilities beyond what is offered in native GKE:

Centralized user authentication and role-based access control (RBAC)

Rancher integrates with Active Directory, LDAP and SAML, letting you define access control policies within GKE. There is no need to maintain user accounts or groups across multiple platforms. This makes compliance easier and promotes self-service for Kubernetes administrators: it is possible to delegate permission for clusters or namespaces to specific administrators.

Consistent, unified experience across cloud providers

Alongside Google Cloud, Rancher supports AWS, Azure and other cloud computing environments. This allows you to manage Kubernetes clusters on Google Cloud and other environments through a single pane of glass. It also enables one-click deployment of Istio, Fluentd, Prometheus and Grafana, and Longhorn across all your clusters.

Comprehensive control via one intuitive user interface

Rancher allows you to deploy and troubleshoot workloads consistently, whether they run on Google Cloud or elsewhere, and regardless of the Kubernetes version or distribution you use. This allows DevOps teams to become productive quickly, even when working on Kubernetes distributions or infrastructure providers they are not closely familiar with.

Enhanced cluster security

Rancher allows security teams to centrally define user roles and security policies across multiple cloud environments, and instantly assign them to one or more Kubernetes clusters.

Global app catalog and multi-cluster apps

Rancher provides an application catalog you can use across numerous clusters, allowing you to easily pick an application and deploy it on a Kubernetes cluster. It also allows applications to run on several Kubernetes clusters.

Learn more about the Rancher managed Kubernetes platform.


Running Google Cloud Containers with Rancher

Monday, 13 April, 2020
Read our free white paper: How to Build a Kubernetes Strategy

Rancher is the enterprise computing platform to run Kubernetes on-premises, in the cloud and at the edge. It’s an excellent platform to get started with containers or for those who are struggling to scale up their Kubernetes operations in production. However, in a world increasingly dominated by public infrastructure providers like Google Cloud, it’s reasonable to ask how Rancher adds value to services like Google’s Kubernetes Engine (GKE).

This blog provides a comprehensive overview of how Rancher can help your ITOps and DevOps teams who are invested in Google’s Kubernetes Engine (GKE) but also looking to diversify their capabilities on-prem, with additional cloud providers or with edge computing.

Google Cloud (sometimes referred to as GCP) is a leading provider of computing resources for deploying and operating containerized applications. Google Cloud continues to grow rapidly: they recently launched new cloud regions in India, Qatar, Australia and Canada. That makes a total of 22 cloud regions across 16 countries, in support of their growing number of users.

As the creators of Kubernetes, Google has a rich history in its container offerings, design and community. Google Cloud’s GKE service was the first managed Kubernetes service on the market — and is still one of the most advanced.

GKE has quickly gained popularity with users because it’s designed to eliminate the need to install, manage and operate your Kubernetes clusters. GKE is particularly popular with developers because it’s easy to use and packed with robust container orchestration features including integrated logging, autoscaling, monitoring and private container registries. ITOps teams like running Kubernetes on Google Cloud because GKE includes features like creating or resizing container clusters, upgrading container clusters, creating container pods and resizing application controllers.

Despite its undeniable convenience, if an enterprise chooses only Google Cloud Container services for all their Kubernetes needs, they’re locking themselves into a single vendor ecosystem. For example, by choosing Google Cloud Load Balancer for load distribution, Google Cloud Container Registry to manage your Docker images or Anthos Service Mesh with GKE, a customer’s future deployment options narrow. It’s little wonder that many GKE customers look to Rancher to help them deliver greater consistency when pursuing a multi-cloud strategy for Kubernetes.

The Benefits of Multi Cloud

As the digital economy grows, cloud adoption has increasingly become the norm across organizations from large-scale enterprise to startups. In a recent Gartner survey of public cloud users, 81 percent of respondents said they were already working with two or more cloud providers.

So, what does this mean for your team? By leveraging a multi-cloud approach, organizations avoid vendor lock-in, improving their cost savings and creating an environment that fosters agility and performance optimization. You are no longer constrained to the functionality of GKE alone. Instead, multi-cloud enables teams to diversify their organization’s architecture and provides greater access to best-in-class technology vendors.

The shift to multi-cloud has also influenced Kubernetes users. Users are mirroring the same trends by architecting their containers to run on any certified Kubernetes distribution – shifting away from the single vendor strategy. By taking a multi-cloud approach to your Kubernetes environment and using an orchestration tool like Rancher, your team will spend less time managing specific platform workflows and configurations and more time optimizing your applications and containers.

Google Cloud Containers: Using Rancher to Manage Google Kubernetes Engine

Rancher enhances your container orchestration with GKE as it allows you to easily manage Kubernetes clusters across multiple providers, whether it’s on EKS, AKS or with edge computing. Rancher’s orchestration tool is integrated with workload management capabilities, allowing users to centrally configure policies across all their clusters and ensure consistency across their environment. These capabilities include:

1) Streamlined administration of your Kubernetes environment

Compliance requirements and administration of any Kubernetes environment is a key functionality requirement for users. With Rancher, consistent role-based access control (RBAC) is enforced across GKE and any other Kubernetes environments through its integration with Active Directory, LDAP or SAML-based authentication.

By centralizing RBAC, administrators of Kubernetes environments reduce the overhead required to maintain user or group profiles across multiple cloud platforms. Rancher makes it easier for administrators to manage compliance requirements, and it enables self-administration by users of any Kubernetes cluster or namespace.

RBAC controls in Rancher

2) Comprehensive control from an intuitive user interface

Troubleshooting errors and maintaining control of the environment can become a bottleneck as your team matures in its usage of Kubernetes and continually builds more containers while deploying more applications. By using Rancher, teams have access to an intuitive web user interface that allows them to deploy and troubleshoot workloads across any Kubernetes provider’s environment within the Rancher platform.

This means your teams spend less time figuring out the operational nuances of each provider and more time building; all team members use the same features and configurations, and new team members can quickly launch applications into production across your Kubernetes distributions.

Multi-cluster management with Rancher

3) Secure clusters

With complex technology environments and multiple users, security is a core requirement for any successful enterprise-grade tool. Rancher provides administrators and their security teams with the ability to define and control how users of the tool should interact with the Kubernetes environment they are managing via policies. For example, administrators can customize how containerized workloads operate across each environment and infrastructure provider. Once these policies are defined, they can be assigned across to any cluster within the Kubernetes environment.

Adding custom pod security policies

4) A global catalog of applications and multi-cluster applications

Get access to Rancher’s global network of applications to minimize your team’s operational requirements across your Kubernetes environment. Maximize your team’s productivity and improve your architecture’s reliability by integrating these multi-cluster applications into your environment.

Selecting multi-cluster apps from Rancher’s catalog

5) Streamlined day-2 operations for multi-cloud infrastructure

Once you’ve provisioned Kubernetes clusters in a multi-cloud environment with Rancher, your ongoing operational requirements are streamlined. From day 2, the operation of your environment is centralized in Rancher’s single pane of glass, giving users push-button deployments of upstream Istio for service mesh, Fluentd for logging, Prometheus and Grafana for observability, and Longhorn for highly available persistent storage.

Added to these benefits, if you ever decide to stop using Rancher, we provide a clean uninstall process for imported GKE clusters so that you can manage them independently as if we were never there.

Although a single cloud platform like GKE is often sufficient, as your architecture becomes more complex, selecting the right cloud strategy becomes critical to your team’s output and performance. A multi-cloud strategy incorporating an orchestration tool like Rancher can remove technical and commercial limitations seen in single cloud environments.


Going to Google Next? Yes, SUSE will be there!

Thursday, 7 March, 2019

Are you headed to Google Next in San Francisco this April? Consider yourself invited to the SUSE exhibit in booth S1757. Come see how SUSE, the open, open source company provides the foundation for business innovation on Google Cloud Platform.

What to expect:

  • First of all, we’ll be highlighting SUSE Linux Enterprise Server (SLES), our open source foundation for SAP S/4HANA
  • Also, we’ll be demonstrating our Cloud Application Platform on Google Kubernetes Engine

 

All running on Google Cloud Platform!

Additionally, SUSE presents in the Cloud Theater

Hear about how SAP uses open source technology and how SUSE makes open source technology enterprise ready. This is why SUSE is the foundation for over 90% of S/4 HANA customers.

Exciting times for SUSE and Google

  • Our business together is expanding!
  • SUSE is joining the Google Technology Partner Program
  • SUSE continues to add technologies to the Google Cloud Marketplace
  • Google will present in the SUSE Theater at SAPPHIRE in May

 

 

Our partnership grows stronger every day.

#GoogleNext19

UTF8 issue on Google Compute serial console

Tuesday, 16 October, 2018

While trying to use YaST in the Google Compute Engine serial console, you will likely notice a UTF-8 issue. This causes the borders to be drawn with letters instead of dashed lines, which can make some menus very difficult to read. While the issue has been reported, I want to point to a simple solution: typing ‘export NCURSES_NO_UTF8_ACS=1’ into the Bash CLI will fix the format of the YaST menu.
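For convenience, the workaround can also be persisted so every new serial-console session picks it up (a sketch; adjust for your shell of choice):

```shell
# Apply the workaround for the current session, before launching YaST
export NCURSES_NO_UTF8_ACS=1

# Persist it for future Bash sessions
echo 'export NCURSES_NO_UTF8_ACS=1' >> ~/.bashrc
```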


Examine Public Cloud: SAP HANA on SUSE runs on AWS, MS Azure, and Google Cloud Engine!

Thursday, 24 August, 2017

The conversation around public cloud adoption has matured significantly in recent years. More and more enterprise customers are realizing the benefits of moving their core business applications to the cloud. Companies that move further into the cloud are finding new ways – both inward- and outward-facing – to innovate, such as producing better user experiences and mobile solutions for a happier and more productive workforce, as well as delivering innovative new applications and services for a global audience.

It’s only natural that the leading public cloud providers would turn their eye to your SAP landscape – and begin the arms race necessary to win yours over. AWS (Amazon Web Services) offers a wide selection of SAP-certified on-demand/IaaS infrastructure for SAP HANA, including the largest memory sizes. In 2016, SAP and Microsoft certified SAP’s HANA in-memory database to run development, test and production workloads on Microsoft’s Azure public cloud, including SAP S/4HANA. Since then, Microsoft and AWS have been competing on price and performance. One year later, at SAPPHIRE 2017, it was announced that Google provides SAP customers and partners with SAP-certified cloud infrastructure to run SAP HANA. With three excellent options in the public cloud space, you are now spoilt for choice.

Before you move your SAP applications to the public cloud, there are a couple of questions you should ask:

  • Are you ready for the cloud?
  • What is your true velocity?
  • What’s your view on hybrid cloud?

Examine public cloud for your SAP applications! Download the full SAP Insider article by Eamonn Coleman, Senior Marketing Manager Public Cloud @ SUSE.

Whichever you choose, you’ll find that SUSE Linux Enterprise Server for SAP Applications, the leading operating system for SAP landscapes, is available on demand across all three public cloud platforms: AWS, Microsoft Azure, and Google Compute Engine. SUSE solutions for SAP applications enable you to achieve the level of reliability, availability, and security you are used to on-premises, but in a scalable cloud infrastructure.
