Join SUSE in Booth #4011 at AWS re:Invent, Las Vegas, December 2-6th!

Friday, 15 November, 2019

Plan on attending the upcoming AWS re:Invent in Las Vegas, December 2-6th? Add SUSE to your list and visit us in our booth (#4011) in the Partner Expo of the Sands Conference Center. We will be on hand to showcase how to design and run SAP HANA and S/4HANA for high-availability configurations and scale-out scenarios using SUSE Linux Enterprise Server for SAP Applications on the AWS Cloud.

We can walk you through how to provision, deploy, and configure SAP HANA workloads using the SAP HANA and S/4HANA Quick Start reference deployment guides on AWS. Each Quick Start includes AWS CloudFormation templates that automate the deployment of a production-ready environment, plus a guide that discusses the suggested reference architecture and provides step-by-step deployment instructions following AWS best practices for security and availability. You may also download a new Technical Solution Brief on the Quick Start guides for more detail. You can learn more anytime about SAP solutions on AWS, including how to register for webinars, read case studies, and find our offerings on the AWS Marketplace, by visiting our SAP on AWS page.
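
As an illustration of what the Quick Starts automate, a CloudFormation template can also be launched from the AWS CLI; the stack name, template URL, and parameters below are placeholders rather than the actual Quick Start values:

  • aws cloudformation create-stack --stack-name sap-hana-quickstart --template-url https://example-bucket.s3.amazonaws.com/templates/sap-hana-master.template --parameters ParameterKey=KeyPairName,ParameterValue=my-keypair --capabilities CAPABILITY_IAM
  • aws cloudformation describe-stacks --stack-name sap-hana-quickstart --query 'Stacks[0].StackStatus'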

If you are looking to accelerate containerized application delivery and lifecycle management of traditional and cloud-native applications, plan to stop by our booth. We will discuss how to manage Kubernetes applications using SUSE Cloud Application Platform on Amazon Elastic Kubernetes Service (EKS), a managed Kubernetes infrastructure service available from Amazon Web Services. We will be showcasing the SUSE Cloud Application Platform Quick Start guide on AWS, which allows customers to automatically deploy SUSE Cloud Application Platform in an Amazon EKS environment in about an hour. But you do not have to wait until AWS re:Invent! You can download this Quick Start today along with a Solution Brief and request AWS credits to help you get started on the SUSE Cloud Application Platform on AWS Solution Space page.

See you in Vegas!

How AWS combined with the SUSE open source mindset leads to your success

Thursday, 24 October, 2019

Although this isn’t a fairy tale, I’d still like to start a long time ago.

It’s been more than a year since we had a meeting with our partner AWS in the SAP Linux Lab. The purpose of this meeting was to address a request from AWS customers who wanted to use SAP Adaptive Server Enterprise (ASE) database replication with all its features on the AWS infrastructure, but without needing additional AWS instances. Was there something we could use from the open source basket that SUSE has?

So what exactly does this mean?

The SAP ASE database has a built-in HA feature to replicate data between a primary and a companion database, which is actually quite similar to what many of you may know from SAP HANA. This SAP ASE replication uses the Fault Manager (FM) to monitor the replication between the primary and companion database and execute a takeover if necessary, e.g. after a primary database failure. Now, because the FM is monitoring the database replication, it must of course run somewhere outside of the database replication nodes to ensure high availability. Hence a third node is required.

What we did!

We saw the advantages for our customers and said we were interested in working with AWS on such a solution. AWS started by preparing the infrastructure; the SAP ASE team was then involved and assisted in implementing the SAP ASE databases and the replication. During this project the FM was modified to support the new deployment scenario. So two of four parts were already done when we at SUSE continued the game. To make the scenario more relevant to SAP customers, a full SAP NetWeaver system was deployed and connected to the SAP ASE DB replication pair. AWS and SUSE now have a new joint paper for an SAP NetWeaver HA solution using SAP ASE:
https://documentation.suse.com/sbp/all/single-html/SAP_HA740_SetupGuide_AWS/

We used this best practice and set up an ASCS/ERS cluster. Now, I mentioned above that the FM monitors the ASE DB replication, but who monitors the FM itself? What happens if the FM fails? Well, if the FM isn’t available then there is no monitoring and no replication takeover. So what if we could use the existing cluster for ASCS and ERS and integrate the FM into that solution too? This turned out to be a great idea, since it not only fulfilled the requirement to install the FM outside of the ASE DB hosts; it also meant the SUSE HA cluster could monitor the FM, making the FM itself highly available.
As it turns out, this was all quite easy to do. The flexibility of pacemaker and our sap-suse-cluster-connector gave us the option to add the FM as a new service. The FM now runs and is monitored in a similar way to the ASCS and ERS processes we made highly available. When the FM fails, the saphostagent first tries to restart the failed FM (sybdbfm) process, but after some retries it gives up.

With the cluster implementation we close this gap: pacemaker takes care of everything necessary to get the FM instance up and running again.
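
For illustration only, the FM could be added to the cluster with a pacemaker primitive along the lines below (crm shell syntax); the resource name, instance name, and operation timings are hypothetical placeholders, and the exact, supported configuration is the one documented in the guide linked below:

  • crm configure primitive rsc_sap_ASE_fm ocf:heartbeat:SAPInstance params InstanceName=HA1_SYBFM99_fmhost op monitor interval=120 timeout=60 op start timeout=600 op stop timeout=600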

We have documented this advantage as an additional chapter in the AWS SAP NetWeaver HA Guide:
https://documentation.suse.com/sbp/all/single-html/SAP_HA740_SetupGuide_AWS/#_additional_implementation_scenarios

Done, four of four parts 🙂

So in summary, your key takeaways should be:
– SUSE HA protects your ASE DB replication
– joint cooperation between AWS, SAP and SUSE
– open source products with flexibility and enterprise support
– you request, we adapt, you succeed

This is Manas (AWS), Burkhard and Wajeehuddin (SAP ASE team) and Bernd (SUSE) from the SAP LinuxLab.

Join SUSE, AWS, SAP and Lemongrass in Sydney for a half-day workshop

Tuesday, 15 October, 2019

SUSE, SAP, AWS and Lemongrass are joining forces to bring a free half-day SAP HANA migration workshop to Sydney, Australia!

REGISTER HERE

Any company that has invested in an SAP infrastructure knows its success is critical to business operations. The move to SAP HANA and SAP S/4HANA delivers the real-time operations with reduced complexity that you need for the digital economy, but this means a transformation of your SAP infrastructure. It’s made all the more complicated if you’re also considering a move to the public cloud.

We want to make that simpler.

SUSE, SAP, AWS and Lemongrass are joining forces in Sydney, Australia to bring you a half-day SAP HANA migration workshop: SAP+3 Sydney. This workshop offers face-to-face information and demonstrations from four industry leaders, and is aimed at SAP customers and anyone else considering a move to SAP S/4HANA.

In a half-day of presentations, interactive discussions and Q&A, you’ll hear from industry experts and gain information that will prove invaluable as you chart a course and execute your SAP S/4HANA plan.

Topics:

  • SAP’s strategy for the most recent HANA 2.0 release and Linux-based infrastructures
  • The latest cloud deployment options for SAP HANA on the AWS cloud
  • Open source software solutions for your SAP workloads that are cloud-centric

 

So if you’re in Sydney and would like a chance to hear from experts about SAP HANA and cloud migration, register now to join us next week, Monday October 21st, for this unique workshop event.

REGISTER HERE

We’ll see you soon!

How Wipro Modernizes Application Delivery for the Retail Industry with SUSE and AWS

Thursday, 29 August, 2019

Wipro case study

We recently published an interesting case study about how Wipro solved an application delivery challenge for their customers using SUSE Cloud Application Platform and Amazon EKS.

Wipro’s Fortune 500 retail industry customers wanted to modernize their application development and delivery processes but were challenged by legacy infrastructure, slow application development cycles, and an inability to customize customer experiences at scale. Wipro worked with SUSE and AWS to create a new platform for retail customers that is more scalable, agile, and seamless to operate. Time to deployment was reduced by up to 40% and overall deployments increased by 30%. The application-release pipeline was accelerated by up to 40% through automation, and costs were reduced by improving resource utilization per service by nearly 30%.

Wipro can now offer digital transformation services that leverage SUSE Cloud Application Platform. Its customers will be in a position to realize greater value from their cloud environment when combining scalable, modular, and highly resilient applications leveraging microservices and containers. This allows them to deliver applications faster and with greater customization, without straining any additional IT resources.

Read the full case study here.

Webinar: Enhance your SAP environment with SUSE running on AWS

Thursday, 1 August, 2019

SUSE and AWS present…

There are always new things to learn about how SUSE works with our partners. Just a few weeks ago, the SUSE and AWS teams in APAC came together to record a brand new webinar focused on how you can achieve high availability and performance with SUSE Linux Enterprise Server for SAP Applications on AWS.

With its agility and flexibility, the public cloud is a fast-growing platform on which to build, host, and scale SAP landscapes powered by SAP HANA. SUSE can help you achieve near-zero downtime and sustain high performance levels, while AWS delivers a broad and deep set of cloud services that are certified to meet the compute, memory, and storage requirements of SAP HANA.

The session covered: 

  • SUSE, AWS and SAP’s history of collaboration
  • SAP HANA-powered landscapes with SUSE Linux Enterprise Server for SAP Applications on AWS
  • How to get started with SUSE resources for SAP and AWS Quick Starts

 

A recording of the full webinar is available on-demand here. And for more information about SUSE on AWS for SAP applications, visit our website.

 

Cloud Native Applications in AWS supporting Hybrid Cloud – Part 2

Wednesday, 31 July, 2019

In my previous post, I wrote about using SUSE Cloud Application Platform on AWS for cloud native application delivery. In this follow-up, I’ll discuss how to get SUSE Cloud Application Platform installed on AWS and how to configure the service broker.

In this blog I am focusing on installing SUSE Cloud Application Platform (CAP) on AWS using Helm charts.

Installing SUSE Cloud Application Platform on AWS:

Note: When copying the YAML files below, use :set paste in the vi editor and make sure to remove any extra lines and spaces.

Note: For the commands, whenever you get a permission error, use sudo. And whenever a command fails, check for stray \ line-continuation characters that may have crept into the command when copying.

1- From your machine install eksctl and AWS CLI:
First install AWS cli:
  • pip3 install awscli --upgrade --user
  • pip install --upgrade pip
  • sudo cp ~/.local/bin/aws /sbin
Configure AWS:
  • aws configure

Log in to the AWS Console and get the access credentials from the IAM service, then provide them together with the AWS region you are using and the output format. JSON is the recommended output format.

Install eksctl:
  • curl --silent --location "https://github.com/weaveworks/eksctl/releases/download/latest_release/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
  • sudo mv /tmp/eksctl /usr/local/bin

Note: Make sure the timezone and the date/time are set up correctly. If needed, run the command below to fix the time:

  • sudo date -s "$(wget -qSO- --max-redirect=0 google.com 2>&1 | grep Date: | cut -d' ' -f5-8)Z"
Install Kubectl:
  • curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
  • chmod +x ./kubectl
  • sudo mv ./kubectl /usr/local/bin/kubectl
2- Create a cluster.

You can have as many workers as you need. SUSE Cloud Application Platform runs on Kubernetes as pods, and the recommendation is to have three worker nodes, each in a different Availability Zone, to support a highly available deployment. For the purposes of this test, we will use two worker nodes. The minimum, as per the SUSE Cloud Application Platform documentation, is a t2.large instance type with at least a 100 GB volume.

The figure below illustrates one of the recommended solutions for running SUSE Cloud Application Platform in HA mode on AWS.

  • eksctl create cluster --name suse-cap --nodegroup-name suse-cap-node-group --node-type t2.large --node-volume-size 100 --nodes 2 --nodes-min 1 --nodes-max 4 --node-ami auto

Once the cluster is created, you can see that a CloudFormation stack has been created, which you can manage in the future to change the number of worker nodes (min and max) as well as the node types.

You may need to configure kubectl using the command below if you have already created the cluster from the console or from another machine:

  • aws eks --region XXXX update-kubeconfig --name XXXXX
3- Install Tiller on the Amazon EKS cluster

The SUSE Cloud Application Platform deployment is done using Helm charts, so the Helm server component (Tiller) must be installed first.

  • cat <<EoF > ~/tiller-rbac.yaml
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: tiller
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRoleBinding
    metadata:
      name: tiller
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
      - kind: ServiceAccount
        name: tiller
        namespace: kube-system
    EoF
  • kubectl apply -f ~/tiller-rbac.yaml
  • curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
  • chmod +x get_helm.sh
  • ./get_helm.sh
  • helm init --service-account tiller --upgrade
4- Create the domain which will be used by SUSE Cloud Application Platform using AWS Route53:
  • Log in to the AWS Console and select the Route53 service. Navigate to Register Domain and register a domain; in our example we will name it susecapaws.org. Wait until the domain registration is done before continuing with the next steps. Once it is registered, you can see the domain in the hosted zones.
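
If you prefer the CLI, you can verify that the hosted zone exists once registration completes (the domain name matches the example above):

  • aws route53 list-hosted-zones-by-name --dns-name susecapaws.org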
5- Create AWS S3 storage for SUSE Cloud Application Platform using the following yaml file (Aws-ebs.yaml):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: aws-ebs
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  labels:
    kubernetes.io/cluster-service: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
allowVolumeExpansion: true

then run:

  • kubectl create -f Aws-ebs.yaml
6- Navigate to the EC2 service in the region you used to create the cluster and edit any of the security groups assigned to the worker node EC2 instances by adding the required ports to the inbound rules.

Once this is done, all required ports are open on all cluster worker nodes.
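
As a sketch only, inbound rules can also be added from the CLI; the security group ID below is a placeholder and the ports shown (the UAA port from scf-config-values.yaml plus HTTP/HTTPS for the router) are just examples, so apply the complete port list from the SUSE Cloud Application Platform documentation:

  • aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 2793 --cidr 0.0.0.0/0
  • aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 0.0.0.0/0
  • aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 0.0.0.0/0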

7- Create the SUSE Cloud Application Platform configuration file. It should look like the following (scf-config-values.yaml):
env:
  DOMAIN: susecapaws.org
  UAA_HOST: uaa.susecapaws.org
  UAA_PORT: 2793
  GARDEN_ROOTFS_DRIVER: overlay-xfs
  GARDEN_APPARMOR_PROFILE: ""

services:
  loadbalanced: true

kube:
  storage_class:
    # Change the value to the storage class you use
    persistent: "aws-ebs"
    shared: "gp2"

  # The default registry images are fetched from registry.suse.com
  registry:
    hostname: "registry.suse.com"
    username: ""
    password: ""
  organization: "cap"

secrets:
  # Create a very strong password for user 'admin'
  CLUSTER_ADMIN_PASSWORD: xxxxx

  # Create a very strong password, and protect it because it
  # provides root access to everything
  UAA_ADMIN_CLIENT_SECRET: xxxxx

enable:
  uaa: true

 

8- Add the SUSE Helm repo using the below command:
  • helm repo add suse https://kubernetes-charts.suse.com/
9- Deploy UAA (the authorization and authentication Component of SUSE Cloud Application Platform):
  • helm install suse/uaa --name susecf-uaa --namespace uaa --values scf-config-values.yaml
10- Watch the pods (uaa-0 and mysql-0) until they are all successfully up and running.
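For example:
  • kubectl get pods --namespace uaa --watch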
11- Map the services to the hosted domain created:
  • Run the following command to get the uaa-uaa-public service external IP:
kubectl get services -o wide -n uaa
  • Copy the external IP of the uaa-uaa-public service, navigate to the created hosted zone, then create a record set of type A and mark it as an alias (a scripted alternative is sketched after the note below).

  • Repeat the previous step but let the name be *.uaa

Note: if you changed the UAA host name in scf-config-values.yaml from uaa, then change it in the created A records accordingly.
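
If you prefer to script this step instead of using the console, an UPSERT along the lines below creates the alias record; the Route53 hosted zone ID, the ELB DNS name, and the ELB's canonical hosted zone ID are placeholders (the AliasTarget HostedZoneId must be the ELB's own hosted zone ID, not your Route53 zone ID):

  • aws route53 change-resource-record-sets --hosted-zone-id YOUR_ROUTE53_ZONE_ID --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"uaa.susecapaws.org","Type":"A","AliasTarget":{"HostedZoneId":"ELB_CANONICAL_ZONE_ID","DNSName":"uaa-elb-example.us-east-1.elb.amazonaws.com","EvaluateTargetHealth":false}}}]}'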

12- The UAA console is now ready at https://uaa.susecapaws.org:2793/login
13- Install the SUSE Cloud Application Platform Chart:
  • SECRET=$(kubectl get pods --namespace uaa --output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')
  • CA_CERT="$(kubectl get secret $SECRET --namespace uaa --output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"
  • helm install suse/cf --name susecf-scf --namespace scf --values scf-config-values.yaml --set "secrets.UAA_CA_CERT=${CA_CERT}"
14- Watch the pods until they are all successfully up and running.
15- Map the services to the hosted domain created:
  • Run the following command to get the external IPs of the diego-ssh-ssh-proxy-public, router-gorouter-public and tcp-router-tcp-router-public services:
kubectl get services -n scf | grep elb
  • Copy the external IP of each service and map it to the correct pattern. Navigate to the created hosted zone, create record sets of type A, mark them as aliases, and paste the service addresses into the target field:
susecapaws.org -> router-gorouter-public
*.susecapaws.org -> router-gorouter-public
tcp.susecapaws.org -> tcp-router-tcp-router-public
ssh.susecapaws.org -> diego-ssh-ssh-proxy-public

 

16- Run the following command to update the healthcheck port for the tcp-router-tcp-router-public service:
  • kubectl patch service tcp-router-tcp-router-public --namespace scf --type strategic --patch '{"spec": {"ports": [{"name": "healthcheck", "port": 8080}]}}'
17- Run the following command to get the name of the ELB associated to the tcp-router-tcp-router-public:
  • kubectl get service tcp-router-tcp-router-public  -n scf

Take the first part of the load balancer hostname (for example, if the load balancer address is a72653c48b2ab11e9a7f20aeea98fb86-2035732797.us-east-1.elb.amazonaws.com, then the ELB name is a72653c48b2ab11e9a7f20aeea98fb86) and run the following command to delete the port 8080 listener from the ELB:

  • aws elb delete-load-balancer-listeners --load-balancer-name a72653c48b2ab11e9a7f20aeea98fb86 --load-balancer-ports 8080
Please note that we have only deleted a port listener from the AWS load balancer. To validate this, open the AWS console, navigate to the EC2 Dashboard, click Load Balancers, select the load balancer (a72653c48b2ab11e9a7f20aeea98fb86), open the Listeners tab, and confirm that port 8080 has been removed.

18- Now SUSE Cloud Application Platform is deployed so we will deploy the AWS service broker.
19- Create a Dynamodb table which will be used by the AWS service broker:
  • aws dynamodb create-table --attribute-definitions AttributeName=id,AttributeType=S AttributeName=userid,AttributeType=S AttributeName=type,AttributeType=S --key-schema AttributeName=id,KeyType=HASH AttributeName=userid,KeyType=RANGE --global-secondary-indexes 'IndexName=type-userid-index,KeySchema=[{AttributeName=type,KeyType=HASH},{AttributeName=userid,KeyType=RANGE}],Projection={ProjectionType=INCLUDE,NonKeyAttributes=[id,userid,type,locked]},ProvisionedThroughput={ReadCapacityUnits=5,WriteCapacityUnits=5}' --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 --region us-east-1 --table-name awsservicebrokertb

Note: For simplicity, I called the table awsservicebrokertb and used us-east-1 as my AWS region, but you can choose any other name and region.

20- Set the namespace that will host the AWS service broker:
  • BROKER_NAMESPACE=aws-sb

Note: Don’t change the namespace; for now it must be kept as aws-sb.

21- Create the required IAM policy and role for the deployment of the service broker:
  • Create a policy named AWS-SB-Provisioner from the IAM console (or via the AWS CLI). Here is the policy JSON:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "ssm:PutParameter",
        "s3:GetObject",
        "cloudformation:CancelUpdateStack",
        "cloudformation:DescribeStackEvents",
        "cloudformation:CreateStack",
        "cloudformation:DeleteStack",
        "cloudformation:UpdateStack",
        "cloudformation:DescribeStacks"
      ],
      "Resource": [
        "arn:aws:s3:::awsservicebroker/templates/*",
        "arn:aws:ssm:REGION_NAME:ACCOUNT_NUMBER:parameter/asb-*",
        "arn:aws:cloudformation:REGION_NAME:ACCOUNT_NUMBER:stack/aws-service-broker-*/*"
      ]
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": [
        "sns:*",
        "s3:PutAccountPublicAccessBlock",
        "rds:*",
        "s3:*",
        "redshift:*",
        "s3:ListJobs",
        "dynamodb:*",
        "sqs:*",
        "athena:*",
        "iam:*",
        "s3:GetAccountPublicAccessBlock",
        "s3:ListAllMyBuckets",
        "kms:*",
        "route53:*",
        "lambda:*",
        "ec2:*",
        "kinesis:*",
        "s3:CreateJob",
        "s3:HeadBucket",
        "elasticmapreduce:*",
        "elasticache:*"
      ],
      "Resource": "*"
    }
  ]
}

Note: Please replace REGION_NAME and ACCOUNT_NUMBER with the AWS region you are using for the cluster and your AWS account ID.

  • Attach the policy to the worker node group's IAM role:

22- Install the service catalog and the service broker using Helm
  • Set up the certificates between the installation machine and the AWS cluster nodes:
  • mkdir /tmp/aws-service-broker-certificates && cd $_
  • kubectl get secret --namespace scf --output jsonpath='{.items[*].data.internal-ca-cert}' | base64 -di > ca.pem
  • kubectl get secret --namespace scf --output jsonpath='{.items[*].data.internal-ca-cert-key}' | base64 -di > ca.key
  • openssl req -newkey rsa:4096 -keyout tls.key.encrypted -out tls.req -days 365 -passout pass:1234 -subj '/CN=aws-servicebroker.'${BROKER_NAMESPACE} -batch </dev/null
  • openssl rsa -in tls.key.encrypted -passin pass:1234 -out tls.key
  • openssl x509 -req -CA ca.pem -CAkey ca.key -CAcreateserial -in tls.req -out tls.pem

Install the service catalog:

  • helm repo add svc-cat https://svc-catalog-charts.storage.googleapis.com
  • helm install svc-cat/catalog --name catalog --namespace catalog
  • Watch for the service catalog pods to be available.
  • Install the svcat CLI using the following commands:
  • curl -sLO https://download.svcat.sh/cli/latest/linux/amd64/svcat
  • chmod +x ./svcat
  • sudo mv ./svcat /usr/local/bin/
  • svcat version --client
  • Now set up the certificates between the AWS service broker and SUSE Cloud Application Platform:
  • mkdir /tmp/aws-service-broker-certificates_CF && cd $_
  • kubectl get secret --namespace scf --output jsonpath='{.items[*].data.internal-ca-cert}' | base64 -di > ca.pem
  • kubectl get secret --namespace scf --output jsonpath='{.items[*].data.internal-ca-cert-key}' | base64 -di > ca.key
  • openssl req -newkey rsa:4096 -keyout tls.key.encrypted -out tls.req -days 365 -passout pass:1234 -subj '/CN=aws-servicebroker-aws-servicebroker.aws-sb.svc.cluster.local' -batch </dev/null
  • openssl rsa -in tls.key.encrypted -passin pass:1234 -out tls.key
  • openssl x509 -req -CA ca.pem -CAkey ca.key -CAcreateserial -in tls.req -out tls.pem

Add the AWS Service Broker repo to Helm repos:

  • helm repo add aws-sb https://awsservicebroker.s3.amazonaws.com/charts
  • To understand the parameters of the chart, you can inspect it using the following command:
  • helm inspect aws-sb/aws-servicebroker
  • Install the AWS service broker:
  • helm install aws-sb/aws-servicebroker --name aws-servicebroker --namespace aws-sb --set aws.secretkey=$aws_access_key --set aws.accesskeyid=$aws_key_id --set tls.cert="$(base64 -w0 tls.pem)" --set tls.key="$(base64 -w0 tls.key)" --set-string aws.targetaccountid=ACCOUNT_ID --version 1.0.1 --set aws.tablename=awsservicebrokertb --set aws.vpcid=$vpcid --set aws.region=REGION_NAME --set authenticate=false --wait

Note: Please replace REGION_NAME and ACCOUNT_ID with the AWS region you are using for the cluster and your AWS account ID.

  • Run the following command to get the service broker link and wait for the broker to be in a ready state:
  • svcat get brokers
23- Install the cf-cli on SLES 15 SP1
  • sudo zypper addrepo --refresh https://download.opensuse.org/repositories/system:/snappy/openSUSE_Leap_15.0 snappy
  • sudo zypper --gpg-auto-import-keys refresh
  • sudo zypper dup --from snappy
  • sudo zypper install snapd
  • sudo systemctl enable snapd
  • sudo systemctl start snapd
  • Reboot, then run the following command:
  • sudo snap install cf --beta
24- Connect to SCF using the cf CLI:
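As a minimal sketch, assuming the example domain from scf-config-values.yaml and the admin password set there:
  • cf api https://api.susecapaws.org --skip-ssl-validation
  • cf login -u admin -p <CLUSTER_ADMIN_PASSWORD>
  • cf create-org demo-org
  • cf create-space demo-space -o demo-org
  • cf target -o demo-org -s demo-space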
25- Connect to SUSE Cloud Application Platform and bind the service broker to it:
  • cf create-service-broker aws-servicebroker dummyusername dummypassword https://aws-servicebroker-aws-servicebroker.aws-sb.svc.cluster.local
26- Deploy Stratos, the SUSE Cloud Application Platform dashboard and web console:
  • Create a separate storage class which will host the metrics and logs gathered from the platforms and applications configured in Stratos (create retainStorageClass.yaml):
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: retained-aws-ebs-storage
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  # this is the EBS region and zone; use one of the zones of the created worker nodes
  zone: "us-east-1a"
reclaimPolicy: Retain
mountOptions:
  - debug
  • kubectl create -f retainStorageClass.yaml
  • Run kubectl get storageclass and make sure that you have only one default storage class.
  • Install the Stratos Helm chart:
  • helm install suse/console --name susecf-console --namespace stratos --values scf-config-values.yaml --set kube.storage_class.persistent=retained-aws-ebs-storage --set services.loadbalanced=true --set console.service.http.nodePort=8080
  • Watch for all Stratos pods to be up and running.
  • Create an A record for the Stratos service.
  • Run the following command to get the external address of the load balancer service:
  • kubectl get service susecf-console-ui-ext --namespace stratos

 

  • Copy the external IP of the service and map it to Stratos: navigate to the created hosted zone, create a record set of type A, mark it as an alias, and paste the service address into the target field.

27- Deploy Metrics and link it to Stratos:
  • Create the Prometheus configuration (metrics-config.yaml):
env:
  DOPPLER_PORT: 443
kubernetes:
  # replace this with the Kubernetes API server URL
  authEndpoint: XXXX
prometheus:
  kubeStateMetrics:
    enabled: true
nginx:
  username: username
  password: password
useLb: true
  • Apply the configuration: kubectl create -f metrics-config.yaml
  • Install the Helm Chart:
  • helm install suse/metrics \    --name susecf-metrics \    --namespace metrics \    --values scf-config-values.yaml \    --values metrics-config.yaml
  • Wait for all metrics pods to get created.
  • Create an Alias record in the hosted zone for the metrics service
  • Run the following command to get the service external address:
kubectl get service susecf-metrics-metrics-nginx --namespace metrics
  • Copy the external IP of the service and map it to the metrics endpoint: navigate to the created hosted zone, create a record set of type A, mark it as an alias, and paste the service address into the target field.
  • Navigate to the Stratos console, log in using the admin user and the cluster password, then click on the Endpoints tab and click the add (+) button.

  • Click the Register button, enter the username and password, and click Connect.

28- You can connect the Kubernetes cluster in the same way to monitor its metrics as well.

Cloud Native Applications in AWS supporting Hybrid Cloud – Part 1

Wednesday, 31 July, 2019

Let us first talk about what cloud native means and the benefits of SUSE Cloud Application Platform and AWS when building cloud native applications.

So, what is a cloud native app?

People often confuse a cloud native app with a 12-factor app, and that is not correct. The 12 factors are a set of development best practices that should be in place when developing a distributed application that involves many teams/vendors or requires fast, agile development cycles.

How does that differ from developing a cloud native app? As the name implies, cloud native apps are applications built natively on a cloud platform. What exactly does that mean?

First let us talk about the requirements of a cloud native application:

  1. It needs to meet the target load, with a high SLA.
  2. It must adapt to variable loads. Load is not easily predictable when the application is available for public usage, as with YouTube or Facebook.
  3. It may be pay-as-you-go, so it must be cost efficient.
  4. It must be responsive to changes in requirements and have a fast time to market.
  5. It must enable smooth integration with other apps, services, and APIs, with no restrictions, to enable digital transformation. Put simply, this is the ability to let different business contexts interact, learn from each other, and be highly responsive. If you want more details, you can read the interview I did on digital transformation: Digital Transformation: An Interview with Rania Mohamed (Global Services).
  6. Application development time must be extremely fast without impacting the quality.
  7. It must be highly integrated with the underlying cloud services, to enable native consumption of those services.
  8. It must be portable: even though it leverages the underlying cloud services, it must remain portable across different clouds.
  9. It must support and enable polyglot development languages and methods.

Now that we understand CNA (Cloud Native Applications), let us discuss how that is related to MSA (Microservices Architecture).

MSA is an architecture design principle which mainly emphasizes breaking down the application/problem into smaller chunks which are loosely coupled from each other. Such chunks are called microservices. So simply think of a microservice as the smallest standalone service that can live independently by itself. OK, but what are the benefits of MSA and how is it linked to CNA?

MSA's main benefits are:
  • Efficient scalability, as just the required service can be scaled rather than the whole application, service, or module.
  • A high SLA, as we now have a set of microservices that can operate independently, so downtime in one does not impact another microservice, app, or service.
  • Improved development productivity, as it enables and supports agile development.

So is MSA another technical term for CNA?

Simply, no. You may have an MSA application which is not a CNA, but it is very hard, I would even say impossible, to have a CNA which is not designed as an MSA. Because MSA fully supports CNA principles, it achieves some of the most important ones, such as handling variable load in a very cost-efficient way and supporting increased development speed and productivity.

Here is the main question: how do SUSE Cloud Application Platform and AWS together help with CNA?

SUSE Cloud Application Platform is a cloud native application platform that offers application runtimes based on the application properties, defined using a buildpack. It enables developers to focus on what they do best: writing code. Let me put on the developer cap (which I am always honoured to wear and would never put down) and explain the steps I go through when using SUSE Cloud Application Platform:

  1. I am a Java developer so I just focus on writing my Spring MSA and the APIs I consume and offer.
  2. I need a database, so I add the required configuration for the supported databases to Maven and the Spring initialization. I choose to support PostgreSQL and MySQL.
  3. I then create a manifest file, which is used by a buildpack, and do a cf push to push the application into SUSE Cloud Application Platform on AWS (see the sketch after this list).
  4. Then I provision the database using the SUSE Cloud Application Platform marketplace, which uses the AWS service broker to provision a service instance, and bind it to my application.
  5. Voila, I have my application ready to test in the cloud.
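
To make steps 2 to 4 above concrete, here is a hedged sketch. The application name, memory settings, and the service name and plan (rdspostgresql/dev) are illustrative placeholders; the real offerings are whatever your configured AWS service broker lists in cf marketplace.

A minimal manifest.yml:

applications:
- name: spring-demo
  memory: 1G
  instances: 1
  path: target/spring-demo.jar

Push the app, then provision and bind a database from the marketplace:

  • cf push -f manifest.yml
  • cf marketplace
  • cf create-service rdspostgresql dev spring-demo-db
  • cf bind-service spring-demo spring-demo-db
  • cf restage spring-demo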

So as a developer, I didn't have to dig into the AWS cloud or even learn how to set up a database. I just focus on writing code and my application's needs, and the platform takes care of the rest.

Now if my application wants to integrate with AWS SQS, all I need to do is define that dependency and let the service broker, together with SUSE Cloud Application Platform, handle the load and the binding of the service instances to my application instance.

So in simple terms, SUSE Cloud Application Platform enables native cloud application development, as it is able to provision and manage service instances in the cloud using the underlying cloud's native language. How does that happen? The magic words are "service broker", using the Open Service Broker API.

By using SUSE Cloud Application Platform and AWS, services can be automatically provisioned based on the application workload. Here is what happens during the development lifecycle of an MSA or an app:

  1. The developer decides on the prerequisites for the app, for example the required services and runtime from the underlying CNA platform, regardless of the target cloud platform.
  2. The developer chooses the technologies to be used in developing the application.
  3. The developer builds the app and uses cf push to push the application into SUSE Cloud Application Platform.
  4. SUSE Cloud Application Platform offers a marketplace and service catalog showing all the available cloud services based on the configured service broker(s). Think of the service broker as the link between the cloud(s) used by the platform and SUSE Cloud Application Platform. We can have as many service brokers as we want, as long as they support the Open Service Broker API standard. This is how SUSE Cloud Application Platform enables and supports hybrid cloud native applications.
  5. The developer or the operator uses the CF CLI to provision the required services in AWS using the configured AWS Service broker. They can also use SUSE Cloud Application Platform’s Stratos UI to provision the instance and bind it to the pushed application instance.

The following picture depicts the marketplace embedded in Stratos which displays all services offered by the configured service brokers.

In my next post, I'll discuss how to get SUSE Cloud Application Platform installed on AWS and how to configure the service broker.

An application a year to an application a week on AWS

Thursday, 20 June, 2019

At the recent SUSECON conference in Nashville, Ryan Niksch from AWS discussed how shifting the focus from writing code to deploying applications to production has become more critical as business agility tops the list of customer requirements. He then introduced the benefits of Cloud Foundry in general and SUSE Cloud Application Platform specifically, including the AWS service broker. SUSE Cloud Application Platform is a containerized distribution of Cloud Foundry that can be deployed to AWS very quickly and easily using a Quick Start template.

SUSE has posted all recorded talks from SUSECON on YouTube. Check them out if you want to learn more about what SUSE has to offer. We’re not just Linux anymore! I’ll be posting more SUSE Cloud Application Platform talks here over the coming days. Watch Ryan’s talk below:

Visit SUSE at SAPPHIRE NOW 2019 and learn more about SUSE solutions for SAP on AWS

Tuesday, 7 May, 2019

Join SUSE and our partners in Booth #2246 at SAPPHIRE NOW 2019, where you can learn more about SUSE solutions for SAP running on Amazon Web Services (AWS) and more. You can download the full SUSE Session Catalog from the SUSE at SAP SAPPHIRE NOW landing page.

The following SUSE partners are presenting topics related to running SUSE solutions for SAP applications on the AWS Cloud in the SUSE Theater located in our booth (Booth #2246).

  • Tuesday, May 7th at 3:30 PM EDT |  SAP S/4 HANA: Whys and Hows  | Speaker: Darren Mitchell | Director, Managed Services Solutions/Innovations | Itelligence
  • Wednesday, May 8th at 11:00 AM EDT | Optimize Your Digital Transformation Journey – Learn how to Migrate to SAP HANA, or SAP S/4HANA on SUSE into the Cloud | Speaker: Patrick Osterhaus | Chief Technology Officer | Protera Technologies
  • Wednesday, May 8th at 11:30 AM EDT | SAP on AWS: Serving the Need of the Most Demanding SAP Customers | Speaker: Brian Griffin | SAP Cloud Architect | AWS
  • Wednesday, May 8th at 12:00 PM EDT | Accelerate S4 – Deploy a Fully Production-ready SAP S4 Environment in Hours | Speaker: Ben Lingwood | CTO | Lemongrass

 

Be sure to also check out SUSE's and our partners' sessions being presented at the AWS Booth Theater (Booth #2000). The full schedule, including other AWS sessions, is located on the AWS at SAPPHIRE NOW landing page.

  • Tuesday, May 7th at 11:30 AM EDT | Enhancing High Availability of SAP on AWS with SAP SUSE Cluster Connector | Speaker: Peter Schinagl | Senior Technical Architect | SUSE
  • Wednesday, May 8th at 2:30 PM EDT | L.B. Foster Use Case: SAP S/4HANA Migration to the AWS Cloud with 100% Project Success | Speaker: Patrick Osterhaus | Chief Technology Officer | Protera Technologies
  • Wednesday, May 8th at 4:00 PM EDT | SAP on AWS Enabled Innovation | Speaker: Eamonn O’Neill | Co-Founder and CEO Americas | Lemongrass
  • Thursday, May 9th at 1:00 PM EDT | SAP Subscription-Based Licensing for Life Sciences Firms | Speaker: Brian Everett| Industry Solution Principal, Life Sciences | Itelligence
  • Thursday, May 9th at 3:00 PM EDT | SAP to AWS – Insight from the Middle of a Migration | Speaker: Eamonn O’Neill | Co-Founder and CEO Americas | Lemongrass