Connecting SUSE Manager’s Virtual Host Manager to AWS

Friday, 19 March, 2021

One of the newer features of SUSE Manager is the Virtual Host Manager. This allows the SUSE Manager server to connect to the AWS Cloud and gather information about instances running there. This information can then be displayed in the SUSE Manager Web UI.

For customers managing their own subscription on AWS, this data can be useful when performing operations such as subscription matching.

To configure the VHM and connect it to an AWS account, follow these steps:

Firstly, install the required packages.

We need a mechanism that lets SUSE Manager connect to AWS; this is provided by the ‘virtual-host-gatherer-libcloud’ package. It is not installed by default when launching a SUSE Manager instance from the images published in AWS, but once the instance is registered with the SUSE Customer Center, the latest version of the package is available in the ‘SLE-Module-SUSE-Manager-Server-4.x-Updates’ channel.
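With the channel available, installation is a single zypper transaction on the SUSE Manager server (a minimal sketch):

# Install the libcloud-based gatherer module
zypper install virtual-host-gatherer-libcloud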

Secondly, connect SUSE Manager to AWS.

In the SUSE Manager UI, go to the Systems > Virtual Host Managers menu, click Create, select Amazon EC2 from the drop-down menu and fill out the required fields. It is on this page that the AWS Access Key ID and Secret Access Key are provided, enabling SUSE Manager to gather the instance information.


The Least Privilege

One question that gets asked regularly, and the reason for this article, is ‘Which AWS permissions are required for the Virtual Host Manager to function?’

The standard security advice when using AWS is to always grant the least privilege possible for a task to be performed, so using an access key belonging to a user with excessive AWS permissions is not advised.

In order for SUSE Manager to gather the required information from AWS, the VHM needs permission to describe EC2 instances and addresses. One method to grant this is to create a new IAM user specific to this task, create a policy as below and attach it to the user.


{
    "Version": "2012-10-17",
    "Statement":[
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeAddresses",
                "ec2:DescribeInstances"
            ],
            "Resource": "*"
        }
    ]
}
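If you prefer to script this, the same setup can be done with the AWS CLI (a sketch; the user and policy names here are only examples, and the policy document is the JSON above saved as vhm-policy.json):

# Create a dedicated IAM user for the Virtual Host Manager
aws iam create-user --user-name suma-vhm-reader

# Create the policy from the JSON document shown above
aws iam create-policy --policy-name suma-vhm-describe \
    --policy-document file://vhm-policy.json

# Attach the policy to the user (substitute your account ID)
aws iam attach-user-policy --user-name suma-vhm-reader \
    --policy-arn arn:aws:iam::<account-id>:policy/suma-vhm-describe

# Generate the access key pair to enter in the SUSE Manager UI
aws iam create-access-key --user-name suma-vhm-reader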


You can limit permissions further by restricting access to specific regions. Additional detail on creating ‘read-only’ users in AWS can be found at:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ExamplePolicies_EC2.html#iam-example-read-only
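As an illustration of region scoping (my own example, not taken from the page above), the aws:RequestedRegion condition key can pin the two Describe actions to the region the VHM actually polls, eu-west-2 in the log excerpts further down:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeAddresses",
                "ec2:DescribeInstances"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {"aws:RequestedRegion": "eu-west-2"}
            }
        }
    ]
}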


Monitoring Activity

For the very curious, it’s also possible to monitor the AWS operations that the Virtual Host Manager sends to AWS.  The gatherer.log file in the /var/log/rhn/ directory will provide detail of both the requests sent to the EC2 Endpoint from SUSE Manager and the responses back.

2021-03-17 11:11:54 urllib3.connectionpool - DEBUG: https://ec2.eu-west-2.amazonaws.com:443 "GET /?Action=DescribeInstances&Version=2016-11-15 HTTP/1.1" 200 

2021-03-17 11:11:54 urllib3.connectionpool - DEBUG: https://ec2.eu-west-2.amazonaws.com:443 "GET /?Action=DescribeAddresses&Version=2016-11-15 HTTP/1.1" 200

To see this level of output in the gatherer log, the logging level should be temporarily increased to debug.
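One way to see more detail (a sketch based on the virtual-host-gatherer command line; the exact options can vary between versions, so check virtual-host-gatherer --help first) is to run a gather manually with verbose logging:

# List the installed gatherer modules; the libcloud module should appear
virtual-host-gatherer --list-modules

# Run a gather by hand with verbose output; the input file describes the
# VHM connection (module, access key, secret key, region)
virtual-host-gatherer --infile ./aws-vhm.json --outfile ./result.json --verbose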

Finally, a big thank you to Pablo Suárez Hernández from SUSE Engineering for bringing his SUSE Manager knowledge to this article.

Links

Check out the SUSE Manager Client Configuration Guide on the SUSE documentation site at:

https://documentation.suse.com/#suma


SUSE Achieves AWS Outposts Ready Designation

Monday, 23 November, 2020

SUSE today announced it has achieved AWS Outposts Ready designation, as part of the Amazon Web Services (AWS) Service Ready Program. This designation recognizes that SUSE Linux Enterprise Server and SUSE Linux Enterprise Server for SAP Applications have demonstrated successful integration with AWS Outposts deployments. AWS Outposts is a fully managed service that extends AWS infrastructure, AWS services, APIs, and tools to virtually any datacenter, co-location space, or on-premises facility for a truly consistent hybrid experience.

Joshua Burgin, General Manager, AWS Outposts, Amazon Web Services, Inc., said, “We are delighted that SUSE Linux Enterprise Server and SUSE Linux Enterprise Server for SAP Applications have been tested and validated on AWS Outposts and we welcome them to the AWS Outposts Service Ready program. With SUSE Linux Enterprise Server and SUSE Linux Enterprise Server for SAP Applications running on AWS Outposts, customers gain a consistent hybrid experience between the AWS Region and their on-premises environment for business-critical workloads.”

Achieving the AWS Outposts Ready designation differentiates SUSE as an AWS Partner with a product fully tested on AWS Outposts. AWS Outposts Ready products are generally available and supported for AWS customers, with clear deployment documentation for AWS Outposts. AWS Service Ready Partners have demonstrated success building products integrated with AWS services, helping AWS customers evaluate and use their technology productively, at scale and varying levels of complexity.

You can learn more about SUSE Linux Enterprise Server and SUSE Linux Enterprise Server for SAP Applications on AWS Outposts on our website. There you will also find more technical information, including a reference architecture to help you get started.

SUSE and AWS – 10 Years of Collaboration and Innovation

Saturday, 31 October, 2020

This year marks 10 years since Amazon Web Services (AWS) and SUSE first collaborated to bring SUSE’s open source technologies to AWS customers looking for elastic, scalable and cost-effective cloud solutions. Over the past decade, SUSE has been a leader in driving innovation for Linux solutions on AWS, accumulating an ever-growing list of milestones and achievements along the way.

Webinar: Supercharge your SAP environment with AWS & SUSE

Thursday, 21 November, 2019

Maximizing SAP Operations with High Availability on AWS with SUSE

REGISTER NOW

Do you want to maximize your SAP environment’s availability and performance? This webinar will cover how AWS and SUSE have collaborated to provide solutions that help you supercharge your SAP environment.

Learn how Amazon Web Services (AWS) and SUSE can help you deploy your SAP workloads from NetWeaver to S/4 HANA on AWS cloud and are helping enterprise customers achieve new levels of agility with unique deployment and procurement options.

Join subject matter experts Santosh Choudhary and Michael Bukva as they show you how to supercharge your SAP Operations with High Availability. The webinar will include a live Q&A session.

  • SAP on AWS overview
  • How SUSE works with SAP and AWS
  • SUSE Linux Enterprise Server for SAP Applications
  • SAP HANA on AWS Quick Start


Join us at 4pm PT 26th November / 11am AEDT 27th November, for this technical session. Register now to guarantee your spot!

Join SUSE in Booth #4011 at AWS re:Invent, Las Vegas, December 2-6th!

Friday, 15 November, 2019

Plan on attending the upcoming AWS re:Invent in Las Vegas, December 2-6th? Add SUSE to your list and visit us in our booth (#4011) in the Partner Expo of the Sands Conference Center. We will be on hand to showcase how to design and run SAP HANA and S/4HANA for high-availability configurations and scale-out scenarios using SUSE Linux Enterprise Server for SAP Applications on the AWS Cloud.

We can walk you through how to provision, deploy, and configure SAP HANA workloads using the SAP HANA and S/4HANA Quick Start reference deployment guides on AWS. Each Quick Start includes AWS CloudFormation templates that automate the deployment of a production-ready environment and a guide that discusses the suggested reference architecture and provides step-by-step deployment instructions using AWS best practices for security and availability. You may also download a new Technical Solution Brief on the Quick Start guides for more detail. You can learn more anytime about SAP solutions on AWS, including how to register for webinars, read case studies, and get it on the AWS Marketplace, by visiting our SAP on AWS page.

If you are looking to accelerate containerized application delivery and lifecycle management of traditional and cloud-native applications, plan to stop by our booth. We will discuss how to manage Kubernetes applications using SUSE Cloud Application Platform on the Amazon Elastic Kubernetes Service (EKS), a managed Kubernetes infrastructure service available from Amazon Web Services.  We will be showcasing the SUSE Cloud Application Platform Quick Start guide on AWS, which allows customers to automatically deploy the SUSE Cloud Application Platform in an Amazon EKS environment on AWS in about an hour. But you do not have to wait until AWS re:Invent! You can download this Quick Start today along with a Solution Brief and request AWS credits to help you get started on the SUSE Cloud Application Platform on AWS Solution Space page.

See you in Vegas!

How AWS combined with the SUSE open source mindset leads to your success

Thursday, 24 October, 2019

Although this isn’t a fairy tale, I’d still like to start a long time ago.

It’s been more than a year since we had a meeting with our partner AWS in the SAP Linux Lab. The purpose of this meeting was to address a request from AWS customers who wanted to use Adaptive Server Enterprise (ASE) database replication with all its features in the AWS infrastructure, but without needing additional AWS instances. Was there something we could use from the open source basket that SUSE has?

So what exactly does this mean?

The SAP ASE database has a built-in HA feature to replicate data between a primary and a companion database, which is actually quite similar to what many of you may know from SAP HANA. This SAP ASE replication uses the Fault Manager (FM) to monitor the replication between the primary and companion database and to execute a take-over if necessary, e.g. on a primary database failure. Because the FM monitors the database replication, it must of course run somewhere outside of the database replication nodes to ensure high availability. Hence a third node is required.

What we did!

We saw the advantages for our customers and said we were interested in working with AWS on such a solution. AWS started by preparing the infrastructure; then the SAP ASE team was involved and assisted in implementing the SAP ASE databases and the replication. During this project the FM was modified to support the new scenario for this deployment. So two of four parts were already done when we at SUSE continued the game. To make the scenario more relevant to SAP customers, a full SAP NetWeaver system was deployed and connected to the SAP ASE DB replication pair. AWS and SUSE now have a new joint paper for an SAP NetWeaver HA solution using SAP ASE:
https://documentation.suse.com/sbp/all/single-html/SAP_HA740_SetupGuide_AWS/

We used this best practice and set up an ASCS/ERS cluster. Now, I mentioned above that the FM monitors the ASE DB replication, but who monitors the FM itself? What happens if the FM fails? Well, if the FM isn’t available then there is no monitoring and no replication take-over. So what if we could use the existing cluster for ASCS and ERS and implement the FM into that solution too? This turned out to be a great idea, since it not only fulfilled the requirement to install the FM outside of the ASE DB hosts, it also meant the SUSE HA cluster could monitor the FM, making the FM itself highly available.
As it turned out, this was all quite easy to do. The flexibility of Pacemaker and our sap-suse-cluster-connector gave us the option to add the FM as a new service. The FM now runs and is monitored in a similar way to how we made the ASCS and ERS processes highly available. When the FM first fails, the saphostagent tries to restart the failed FM (sybdbfm) process, but after some retries it gives up.

With the cluster implementation we close this gap: Pacemaker takes care of everything necessary to get the FM instance up and running again.
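To give a flavour of what this looks like (a hypothetical sketch only, with placeholder paths; the guide linked below contains the actual resource configuration), the FM can be modelled as one more Pacemaker primitive:

# Hypothetical illustration with placeholder values, not taken from the guide.
# The 'anything' resource agent lets Pacemaker start, stop and monitor an
# arbitrary daemon, here the Fault Manager (sybdbfm) process.
crm configure primitive rsc_sybdbfm ocf:heartbeat:anything \
    params binfile="/usr/sap/<SID>/fault_manager/sybdbfm" user="<sid>adm" \
    op start timeout=60s \
    op stop timeout=60s \
    op monitor interval=120s timeout=60s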

We have documented this advantage as an additional chapter in the AWS SAP NetWeaver HA Guide:
https://documentation.suse.com/sbp/all/single-html/SAP_HA740_SetupGuide_AWS/#_additional_implementation_scenarios

Done, four of four parts 🙂

So in summary, your key take-aways should be:
– SUSE HA to protect your ASE DB replication
– joint cooperation between AWS, SAP and SUSE
– open source products with flexibility and enterprise support
– you request, we adapt, you succeed

This is Manas (AWS), Burkhard and Wajeehuddin (SAP ASE team) and Bernd (SUSE) from the SAP LinuxLab.

Join SUSE, AWS, SAP and Lemongrass in Sydney for a half-day workshop

Tuesday, 15 October, 2019

SUSE, SAP, AWS and Lemongrass are joining forces to bring a free half-day SAP HANA migration workshop to Sydney, Australia!

REGISTER HERE

Any company that has invested in an SAP infrastructure knows its success is critical to business operations. The move to SAP HANA and SAP S/4HANA delivers the real-time operations with reduced complexity that you need for the digital economy, but this means a transformation of your SAP infrastructure. It’s made all the more complicated if you’re also considering a move to the public cloud.

We want to make that simpler.

SUSE, SAP, AWS and Lemongrass are joining forces in Sydney, Australia to bring you a half-day SAP HANA migration workshop: SAP+3 Sydney. This workshop offers face-to-face information and demonstrations from four industry leaders, and is aimed at SAP customers and anyone else considering a move to SAP S/4HANA.

In a half-day of presentations, interactive discussions and Q&A, you’ll hear from industry experts and gain information that will prove invaluable as you chart a course and execute your SAP S/4HANA plan.

Topics:

  • SAP’s strategy for the most recent HANA 2.0 release and Linux-based infrastructures
  • The latest cloud deployment options for SAP HANA on the AWS cloud
  • Open source software solutions for your SAP workloads that are cloud-centric


So if you’re in Sydney and would like a chance to hear from experts about SAP HANA and cloud migration, register now to join us next week, Monday October 21st, for this unique workshop event.

REGISTER HERE

We’ll see you soon!

How Wipro Modernizes Application Delivery for the Retail Industry with SUSE and AWS

Thursday, 29 August, 2019

Wipro case study

We recently published an interesting case study about how Wipro solved an application delivery challenge for their customers using SUSE Cloud Application Platform and Amazon EKS.

Wipro’s Fortune 500 retail industry customers wanted to modernize their application development and delivery processes but were challenged by legacy infrastructure, slow application development cycles, and an inability to customize customer experiences at scale. Wipro worked with SUSE and AWS to create a new platform for retail customers that is more scalable, agile, and seamless to operate. Time to deployment was reduced by up to 40% and overall deployments increased by 30%. The overall application-release pipeline accelerated by up to 40% through automation, and costs were reduced by improving resource utilization per service by nearly 30%.

Wipro can now offer digital transformation services that leverage SUSE Cloud Application Platform. Its customers will be in a position to realize greater value from their cloud environment when combining scalable, modular, and highly resilient applications leveraging microservices and containers. This allows them to deliver applications faster and with greater customization, without straining any additional IT resources.

Read the full case study here.

Webinar: Enhance your SAP environment with SUSE running on AWS

Thursday, 1 August, 2019

SUSE and AWS present…

There are always new things to learn about how SUSE works with our partners. Just a few weeks ago, the SUSE and AWS teams in APAC came together to record a brand new webinar focused on how you can achieve high availability and performance with SUSE Linux Enterprise Server for SAP Applications on AWS.

With the agility and flexibility it offers, public cloud is a fast-growing platform on which to build, host, and scale the SAP landscape powered by SAP HANA. SUSE can help you achieve near-zero downtime and sustain high performance levels, while AWS delivers a broad and deep set of cloud services that are certified to fulfill the compute, memory, and storage requirements of SAP HANA.

The session covered: 

  • SUSE, AWS and SAP’s history of collaboration
  • SAP HANA-powered landscapes with SUSE Linux Enterprise Server for SAP Applications on AWS
  • How to get started with SUSE resources for SAP and AWS Quick Starts


A recording of the full webinar is available on-demand here. And for more information about SUSE on AWS for SAP applications, visit our website.


Cloud Native Applications in AWS supporting Hybrid Cloud – Part 2

Wednesday, 31 July, 2019

In my previous post, I wrote about using SUSE Cloud Application Platform on AWS for cloud native application delivery. In this follow-up, I’ll discuss how to install SUSE Cloud Application Platform on AWS and how to configure the AWS service broker.

In this blog I am focusing on installing SUSE CAP on AWS using Helm charts.

Installing SUSE Cloud Application Platform on AWS:

Note: For all YAML scripts, please use :set paste in the vi editor and make sure to remove any extra lines and spaces.

Note: For the commands, whenever you get a permission error, please use sudo. And whenever you get an error, please remove the \ character, which represents a line continuation in the command.

1- From your machine, install eksctl and the AWS CLI:
First install the AWS CLI:
  • pip3 install awscli --upgrade --user
  • pip install --upgrade pip
  • cp ./.local/bin/aws /sbin
Configure AWS:
  • aws configure

Log in to the AWS Console and get the access credentials from the IAM service, as well as the AWS region used and the output format. It is recommended to set the output format to JSON.

Install eksctl:
  • curl --silent --location "https://github.com/weaveworks/eksctl/releases/download/latest_release/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
  • sudo mv /tmp/eksctl /usr/local/bin

Note: Make sure the timezone and the date/time are set up correctly. If needed, run the below command to fix the time:

  • sudo date -s "$(wget -qSO- --max-redirect=0 google.com 2>&1 | grep Date: | cut -d' ' -f5-8)Z"
Install Kubectl:
  • curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
  • chmod +x ./kubectl
  • sudo mv ./kubectl /usr/local/bin/kubectl
2- Create a cluster.

You can have as many workers as you need. SUSE Cloud Application Platform runs on Kubernetes as pods, and the recommendation is to have three worker nodes, each in a different AZ, to support a high-availability deployment. For the purposes of this test, we will use two worker nodes. The minimum, as per the SUSE Cloud Application Platform documentation, is t2.large with a minimum 100 GB volume.

The figure below shows one of the recommended solutions for running SUSE Cloud Application Platform in HA mode on AWS.

  • eksctl create cluster --name suse-cap --nodegroup-name suse-cap-node-group --node-type t2.large --node-volume-size 100 --nodes 2 --nodes-min 1 --nodes-max 4 --node-ami auto

Once the cluster is created, you can see that a CloudFormation template has been created, which you can manage in the future to change the number of worker nodes (min and max) as well as the node types.

You may need to configure kubectl if you have already created the cluster from the console or another machine, using the below command:

  • aws eks --region XXXX update-kubeconfig --name XXXXX
3- Install Tiller on the AWS EKS cluster.

The SUSE Cloud Application Platform deployment is done using Helm charts, so the Helm server (Tiller) must be installed first.

  • cat <<EoF > ~/tiller-rbac.yaml
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: tiller
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRoleBinding
    metadata:
      name: tiller
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
      - kind: ServiceAccount
        name: tiller
        namespace: kube-system
    EoF
  • kubectl apply -f ~/tiller-rbac.yaml
  • curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
  • chmod +x get_helm.sh
  • ./get_helm.sh
  • helm init --service-account tiller --upgrade
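Before continuing, it is worth confirming that Tiller came up (an optional check, not part of the original steps):

# Wait for the Tiller deployment to roll out, then verify client and server versions
kubectl -n kube-system rollout status deploy/tiller-deploy
helm version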
4- Create the domain which will be used by SUSE Cloud Application Platform using AWS Route53:
  • Log in to the AWS Console and select the Route53 service. Navigate to Register Domain and select a domain; in our example we will name it susecapaws.org. Wait until the domain registration is done before continuing with the next steps. Once it is registered, you can see the domain in the hosted zones.
5- Create an AWS EBS storage class for SUSE Cloud Application Platform using the following YAML file (Aws-ebs.yaml):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: aws-ebs
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  labels:
    kubernetes.io/cluster-service: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
allowVolumeExpansion: true

then run:

  • kubectl create -f Aws-ebs.yaml
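An optional check at this point: make sure the new class is marked as the default (on EKS the pre-created gp2 class may also carry the default annotation, in which case only one of the two should keep it):

# aws-ebs should show (default) next to its name
kubectl get storageclass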
6- Navigate to the EC2 service in the region you used for the creation of the cluster and edit any of the security groups assigned to the worker nodes’ EC2 instances by adding the required inbound ports (see the SUSE Cloud Application Platform documentation for the full port list).

Once this is done, all defined ports are enabled/opened on all cluster worker nodes.
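As an illustration (a sketch with a placeholder security group ID; repeat for each required port), opening the UAA port 2793 used later in scf-config-values.yaml would look like:

# Open one inbound port on a worker-node security group
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 2793 --cidr 0.0.0.0/0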

7- Create the SUSE Cloud Application Platform configuration file. It should look like the following (scf-config-values.yaml):
env:
  DOMAIN: susecapaws.org
  UAA_HOST: uaa.susecapaws.org
  UAA_PORT: 2793
  GARDEN_ROOTFS_DRIVER: overlay-xfs
  GARDEN_APPARMOR_PROFILE: ""

services:
  loadbalanced: true

kube:
  storage_class:
    # Change the value to the storage class you use
    persistent: "aws-ebs"
    shared: "gp2"
  # The default registry images are fetched from registry.suse.com
  registry:
    hostname: "registry.suse.com"
    username: ""
    password: ""
    organization: "cap"

secrets:
  # Create a very strong password for user 'admin'
  CLUSTER_ADMIN_PASSWORD: xxxxx
  # Create a very strong password, and protect it because it
  # provides root access to everything
  UAA_ADMIN_CLIENT_SECRET: xxxxx

enable:
  uaa: true


8- Add the SUSE Helm repo using the below command:
  • helm repo add suse https://kubernetes-charts.suse.com/
9- Deploy UAA (the authorization and authentication component of SUSE Cloud Application Platform):
  • helm install suse/uaa --name susecf-uaa --namespace uaa --values scf-config-values.yaml
10- Watch the pods (uaa-0 and mysql-0) until they are all successfully up and running.
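For example (an optional check using plain kubectl):

# Watch until both pods report Running and ready
kubectl get pods --namespace uaa --watch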
11- Map the services to the hosted domain created:
  • Run the following command to get the uaa-uaa-public service external IP:
kubectl get services -o wide -n uaa
  • Copy the external IP of the uaa-uaa-public service, navigate to the created hosted zone, then create a record set of type A. Mark it as an alias.

  • Repeat the previous step but let the name be *.uaa

Note: if you changed the UAA host name in scf-config-values.yaml from uaa, then change it in the created A records too.

12- Now the UAA console is ready at https://uaa.susecapaws.org:2793/login
13- Install the SUSE Cloud Application Platform Chart:
  • SECRET=$(kubectl get pods --namespace uaa --output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')
  • CA_CERT="$(kubectl get secret $SECRET --namespace uaa --output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"
  • helm install suse/cf --name susecf-scf --namespace scf --values scf-config-values.yaml --set "secrets.UAA_CA_CERT=${CA_CERT}"
14- Watch the pods until they are all successfully up and running.
15- Map the services to the hosted domain created:
  • Run the following command to get the external IPs of the diego-ssh-ssh-proxy-public, router-gorouter-public and tcp-router-tcp-router-public services:
kubectl get services -n scf | grep elb
  • Copy the external IP of each service and map it to the correct pattern: navigate to the created hosted zone, create record sets of type A, mark them as aliases and paste the matching service into the target:
susecapaws.org → router-gorouter-public
*.susecapaws.org → router-gorouter-public
tcp.susecapaws.org → tcp-router-tcp-router-public
ssh.susecapaws.org → diego-ssh-ssh-proxy-public


16- Run the following command to update the healthcheck port for the tcp-router-tcp-router-public service:
  • kubectl patch service tcp-router-tcp-router-public --namespace scf --type strategic --patch '{"spec": {"ports": [{"name": "healthcheck", "port": 8080}]}}'
17- Run the following command to get the name of the ELB associated to the tcp-router-tcp-router-public:
  • kubectl get service tcp-router-tcp-router-public  -n scf

Take the first part of the load balancer address (for example, if the load balancer address is a72653c48b2ab11e9a7f20aeea98fb86-2035732797.us-east-1.elb.amazonaws.com then the ELB name will be a72653c48b2ab11e9a7f20aeea98fb86) and run the following command to delete the 8080 listener from the ELB:

  • aws elb delete-load-balancer-listeners --load-balancer-name a72653c48b2ab11e9a7f20aeea98fb86 --load-balancer-ports 8080
Please note that we have only deleted a port listener from the AWS load balancer. You may validate that in the AWS console: navigate to the EC2 Dashboard, click on Load Balancers, select the load balancer (a72653c48b2ab11e9a7f20aeea98fb86), open the Listeners tab and validate that port 8080 is deleted.
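The same validation can be done from the CLI (optional; this uses the classic ELB API for the load balancer the service created):

# The listener list should no longer contain port 8080
aws elb describe-load-balancers --load-balancer-names a72653c48b2ab11e9a7f20aeea98fb86 \
    --query 'LoadBalancerDescriptions[0].ListenerDescriptions'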

18- Now that SUSE Cloud Application Platform is deployed, we will deploy the AWS service broker.
19- Create a DynamoDB table which will be used by the AWS service broker:
  • aws dynamodb create-table --attribute-definitions AttributeName=id,AttributeType=S AttributeName=userid,AttributeType=S AttributeName=type,AttributeType=S --key-schema AttributeName=id,KeyType=HASH AttributeName=userid,KeyType=RANGE --global-secondary-indexes 'IndexName=type-userid-index,KeySchema=[{AttributeName=type,KeyType=HASH},{AttributeName=userid,KeyType=RANGE}],Projection={ProjectionType=INCLUDE,NonKeyAttributes=[id,userid,type,locked]},ProvisionedThroughput={ReadCapacityUnits=5,WriteCapacityUnits=5}' --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 --region us-east-1 --table-name awsservicebrokertb

Note: for simplicity, I called the table awsservicebrokertb and used us-east-1 as my AWS region, but you can choose any other name and any other region.
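To confirm the table and its secondary index are ready before installing the broker (an optional check):

# TableStatus should be ACTIVE and the type-userid-index GSI present
aws dynamodb describe-table --table-name awsservicebrokertb --region us-east-1 \
    --query 'Table.{Status:TableStatus,Indexes:GlobalSecondaryIndexes[].IndexName}'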

20- Set the namespace that will contain the AWS service broker:
  • BROKER_NAMESPACE=aws-sb

Note: Don’t change the namespace; right now it must be kept as aws-sb.

21- Create the required IAM roles for the deployment of the service broker:
  • Create a policy with the name AWS-SB-Provisioner from the AWS IAM console or using the AWS CLI; here is the policy JSON text:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "ssm:PutParameter",
                "s3:GetObject",
                "cloudformation:CancelUpdateStack",
                "cloudformation:DescribeStackEvents",
                "cloudformation:CreateStack",
                "cloudformation:DeleteStack",
                "cloudformation:UpdateStack",
                "cloudformation:DescribeStacks"
            ],
            "Resource": [
                "arn:aws:s3:::awsservicebroker/templates/*",
                "arn:aws:ssm:REGION_NAME:ACCOUNT_NUMBER:parameter/asb-*",
                "arn:aws:cloudformation:REGION_NAME:ACCOUNT_NUMBER:stack/aws-service-broker-*/*"
            ]
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "sns:*",
                "s3:PutAccountPublicAccessBlock",
                "rds:*",
                "s3:*",
                "redshift:*",
                "s3:ListJobs",
                "dynamodb:*",
                "sqs:*",
                "athena:*",
                "iam:*",
                "s3:GetAccountPublicAccessBlock",
                "s3:ListAllMyBuckets",
                "kms:*",
                "route53:*",
                "lambda:*",
                "ec2:*",
                "kinesis:*",
                "s3:CreateJob",
                "s3:HeadBucket",
                "elasticmapreduce:*",
                "elasticache:*"
            ],
            "Resource": "*"
        }
    ]
}

Note: Please replace REGION_NAME and ACCOUNT_NUMBER with the AWS region you are using for the cluster and your AWS account ID.

  • Attach the policy to the node group role (the instance role of the worker nodes), as sketched below.
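From the CLI, this could look like the following sketch (the policy JSON above saved as sb-policy.json; the role name is a placeholder for the node group instance role that eksctl created):

# Create the policy from the JSON document above
aws iam create-policy --policy-name AWS-SB-Provisioner \
    --policy-document file://sb-policy.json

# Attach it to the worker-node instance role (placeholder name)
aws iam attach-role-policy \
    --role-name eksctl-suse-cap-nodegroup-NodeInstanceRole \
    --policy-arn arn:aws:iam::ACCOUNT_NUMBER:policy/AWS-SB-Provisioner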

22- Install the service catalog and the service broker using Helm:
  • Set up the certificates between the installation machine and the AWS cluster nodes:
  • mkdir /tmp/aws-service-broker-certificates && cd $_
  • kubectl get secret --namespace scf --output jsonpath='{.items[*].data.internal-ca-cert}' | base64 -di > ca.pem
  • kubectl get secret --namespace scf --output jsonpath='{.items[*].data.internal-ca-cert-key}' | base64 -di > ca.key
  • openssl req -newkey rsa:4096 -keyout tls.key.encrypted -out tls.req -days 365 -passout pass:1234 -subj '/CN=aws-servicebroker.'${BROKER_NAMESPACE} -batch </dev/null
  • openssl rsa -in tls.key.encrypted -passin pass:1234 -out tls.key
  • openssl x509 -req -CA ca.pem -CAkey ca.key -CAcreateserial -in tls.req -out tls.pem

Start by installing the service catalog:

  • helm repo add svc-cat https://svc-catalog-charts.storage.googleapis.com
  • helm install svc-cat/catalog --name catalog --namespace catalog
  • Watch for the service catalog pods to be available
  • Install the svcat CLI using the following commands:
  • curl -sLO https://download.svcat.sh/cli/latest/linux/amd64/svcat
  • chmod +x ./svcat
  • sudo mv ./svcat /usr/local/bin/
  • svcat version --client
  • Now set up the certificates between the AWS service broker and SUSE Cloud Application Platform:
  • mkdir /tmp/aws-service-broker-certificates_CF && cd $_
  • kubectl get secret --namespace scf --output jsonpath='{.items[*].data.internal-ca-cert}' | base64 -di > ca.pem
  • kubectl get secret --namespace scf --output jsonpath='{.items[*].data.internal-ca-cert-key}' | base64 -di > ca.key
  • openssl req -newkey rsa:4096 -keyout tls.key.encrypted -out tls.req -days 365 -passout pass:1234 -subj '/CN=aws-servicebroker-aws-servicebroker.aws-sb.svc.cluster.local' -batch </dev/null
  • openssl rsa -in tls.key.encrypted -passin pass:1234 -out tls.key
  • openssl x509 -req -CA ca.pem -CAkey ca.key -CAcreateserial -in tls.req -out tls.pem

Add the AWS Service Broker repo to Helm repos:

  • helm repo add aws-sb https://awsservicebroker.s3.amazonaws.com/charts
  • To understand the parameters of the chart, you can inspect it using the following command:
  • helm inspect aws-sb/aws-servicebroker
  • Install the AWS service broker:
  • helm install aws-sb/aws-servicebroker --name aws-servicebroker --namespace aws-sb --set aws.secretkey=$aws_access_key --set aws.accesskeyid=$aws_key_id --set tls.cert="$(base64 -w0 tls.pem)" --set tls.key="$(base64 -w0 tls.key)" --set-string aws.targetaccountid=ACCOUNT_ID --version 1.0.1 --set aws.tablename=awsservicebrokertb --set aws.vpcid=$vpcid --set aws.region=REGION_NAME --set authenticate=false --wait

Note: Please replace REGION_NAME and ACCOUNT_ID with the AWS region you are using for the cluster and your AWS account ID.

  • Run the following command to get the service broker link, wait for the broker to be in a ready state:
  • svcat get brokers
23- Install the cf-cli on SLES 15 SP1
  • sudo zypper addrepo --refresh https://download.opensuse.org/repositories/system:/snappy/openSUSE_Leap_15.0 snappy
  • sudo zypper --gpg-auto-import-keys refresh
  • sudo zypper dup --from snappy
  • sudo zypper install snapd
  • sudo systemctl enable snapd
  • sudo systemctl start snapd
  • Reboot, then run the following command:
  • sudo snap install cf --beta
24- Connect to scf using the cf CLI (see the sketch below):
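The original post does not show the commands for this step; here is a minimal sketch, assuming the domain from scf-config-values.yaml, the conventional api. prefix on that domain, and the cluster admin password set earlier:

# Point the cf CLI at the platform API endpoint and log in as admin
cf api --skip-ssl-validation https://api.susecapaws.org
cf login -u admin -p <CLUSTER_ADMIN_PASSWORD>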
25- Connect to SUSE Cloud Application Platform and bind the service broker to it:
  • cf create-service-broker aws-servicebroker dummyusername dummypassword https://aws-servicebroker-aws-servicebroker.aws-sb.svc.cluster.local
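After registering the broker, the AWS services it exposes should show up in the marketplace (an optional check; individual services may still need to be enabled with cf enable-service-access):

# The AWS service broker's offerings should be listed
cf marketplace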
26- Deploy Stratos, the SUSE Cloud Application Platform dashboard and web console:
  • Create a separate storage class which will host the gathered metrics and logs from the platforms and apps configured in Stratos (create retainStorageClass.yaml):
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: retained-aws-ebs-storage
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  # this is the EBS region and zone; use one of the zones of the created worker nodes
  zone: "us-east-1a"
reclaimPolicy: Retain
mountOptions:
  - debug
  • kubectl create -f retainStorageClass.yaml
  • Run kubectl get storageclass and make sure that you have only one default storage class
  • Install the Stratos Helm chart:
  • helm install suse/console --name susecf-console --namespace stratos --values scf-config-values.yaml --set kube.storage_class.persistent=retained-aws-ebs-storage --set services.loadbalanced=true --set console.service.http.nodePort=8080
  • Watch for all Stratos pods to be up and running.
  • Create an A record for the Stratos service
  • Run the following command to get the external address of the load balancer service:
  • kubectl get service susecf-console-ui-ext --namespace stratos


  • Copy the external IP of the service and map it to Stratos: navigate to the created hosted zone, create a record set of type A, mark it as an alias and paste the service address into the target.

27- Deploy Metrics and link it to Stratos:
  • Create the Prometheus configuration (metrics-config.yaml):
env:
  DOPPLER_PORT: 443
kubernetes:
  authEndpoint: XXXX # replace this with the Kubernetes API server URL
prometheus:
  kubeStateMetrics:
    enabled: true
nginx:
  username: username
  password: password
useLb: true
  • Install the Helm chart, passing in the configuration file created above (it is a Helm values file, so no kubectl apply is needed):
  • helm install suse/metrics --name susecf-metrics --namespace metrics --values scf-config-values.yaml --values metrics-config.yaml
  • Wait for all metrics pods to be created.
  • Create an alias record in the hosted zone for the metrics service:
  • Run the following command to get the service’s external address:
kubectl get service susecf-metrics-metrics-nginx --namespace metrics
  • Copy the external IP of the service and map it to the metrics endpoint: navigate to the created hosted zone, create a record set of type A, mark it as an alias and paste the service address into the target.
  • Navigate to the Stratos console, log in with the admin user and the cluster admin password, then click on the Endpoints tab and click the add (+) button.

  • Click the Register button, enter the user name and password, and click Connect.

28- You can connect the Kubernetes cluster in the same way to monitor its metrics as well.