
This how-to describes how to get proper certificates in your SUSE CaaS Platform environment using Let’s Encrypt certificates and an Ingress controller. At the final stage, a SUSE Cloud Application Platform deployment will be done.

The purpose of this how-to is to show how proper certificates can be used in your testing or development environments; some shortcuts have been taken, so it is not recommended for a production environment. Many of the components deployed (cert-manager, k9s, etc.) are not provided or supported by SUSE.

Prerequisites

  • a working SUSE CaaS Platform environment
  • a domain name – this guide will be using suse.ninja, modify all references to YOUR domain
  • a wildcard domain name pointing to one of the nodes in your cluster, in this guide *.cap1.suse.ninja
  • a Cloudflare account to manage the public DNS for the domain name
  • your Cloudflare Global API Key (go to your Cloudflare profile > API Tokens)
  • a working nfs server for storage claims (or an existing working StorageClass)
  • the files used in this how-to can be downloaded from GitHub

A handy tool to have installed is k9s, for tracking created pods, reading logs and so on.

Note: Ensure you have a Cloudflare account set up with domain delegation for your domain. We use Cloudflare to provide the DNS-01 challenge response for Let’s Encrypt. You could absolutely use HTTP-01 instead if you want, but that would require exposing ports 80/443 to the public internet for your domain (out of scope for this article). The DNS-01 method lets us get signed certificates for domains we own, even for clusters that are offline or completely behind the firewall.

The environment used

> skuba cluster status
NAME OS-IMAGE KERNEL-VERSION KUBELET-VERSION CONTAINER-RUNTIME HAS-UPDATES HAS-DISRUPTIVE-UPDATES
caasmaster1.mynet SUSE Linux Enterprise Server 15 SP1 4.12.14-197.34-default v1.16.2 cri-o://1.16.1 <none> <none>
caasworker1.mynet SUSE Linux Enterprise Server 15 SP1 4.12.14-197.34-default v1.16.2 cri-o://1.16.1 <none> <none>
caasworker2.mynet SUSE Linux Enterprise Server 15 SP1 4.12.14-197.34-default v1.16.2 cri-o://1.16.1 <none> <none>
caasworker3.mynet SUSE Linux Enterprise Server 15 SP1 4.12.14-197.34-default v1.16.2 cri-o://1.16.1 <none> <none>
 
 
> kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
caasmaster1.mynet Ready master 81m v1.16.2 10.1.1.69 <none> SUSE Linux Enterprise Server 15 SP1 4.12.14-197.34-default cri-o://1.16.1
caasworker1.mynet Ready <none> 53m v1.16.2 10.1.1.72 <none> SUSE Linux Enterprise Server 15 SP1 4.12.14-197.34-default cri-o://1.16.1
caasworker2.mynet Ready <none> 53m v1.16.2 10.1.1.73 <none> SUSE Linux Enterprise Server 15 SP1 4.12.14-197.34-default cri-o://1.16.1
caasworker3.mynet Ready <none> 52m v1.16.2 10.1.1.74 <none> SUSE Linux Enterprise Server 15 SP1 4.12.14-197.34-default cri-o://1.16.1
 
 
> kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system cilium-2rpz9 1/1 Running 0 51m
kube-system cilium-b5rvb 1/1 Running 0 50m
kube-system cilium-operator-97cfc4756-xwqww 1/1 Running 0 78m
kube-system cilium-s46jq 1/1 Running 0 50m
kube-system cilium-wgnbj 1/1 Running 0 78m
kube-system coredns-88dfb894c-pfxng 1/1 Running 0 78m
kube-system coredns-88dfb894c-pj6jw 1/1 Running 0 78m
kube-system etcd-caasmaster1.mynet 1/1 Running 0 77m
kube-system kube-apiserver-caasmaster1.mynet 1/1 Running 0 77m
kube-system kube-controller-manager-caasmaster1.mynet 1/1 Running 0 77m
kube-system kube-proxy-5zsbz 1/1 Running 0 51m
kube-system kube-proxy-95h6s 1/1 Running 0 78m
kube-system kube-proxy-tdwpw 1/1 Running 0 50m
kube-system kube-proxy-x2vgj 1/1 Running 0 50m
kube-system kube-scheduler-caasmaster1.mynet 1/1 Running 0 77m
kube-system kured-2x5pn 1/1 Running 0 77m
kube-system kured-dgn8s 1/1 Running 0 49m
kube-system kured-pp8qm 1/1 Running 0 48m
kube-system kured-zkc6p 1/1 Running 0 49m
kube-system oidc-dex-799996b768-5rjbp 1/1 Running 0 78m
kube-system oidc-dex-799996b768-wn5qq 1/1 Running 0 78m
kube-system oidc-dex-799996b768-xm584 1/1 Running 0 78m
kube-system oidc-gangway-5f7496c7df-4c2tl 1/1 Running 0 78m
kube-system oidc-gangway-5f7496c7df-564d7 1/1 Running 0 78m
kube-system oidc-gangway-5f7496c7df-xxg6g 1/1 Running 0 78m

Screenshot from k9s

Step-by-step guide

Enable helm/tiller

kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:tiller
helm init \
--tiller-image registry.suse.com/caasp/v4/helm-tiller:2.16.1 \
--service-account tiller

Add SUSE repo

helm repo add suse https://kubernetes-charts.suse.com

Setup a default storage class

helm install \
--name nfs-client-provisioner stable/nfs-client-provisioner \
--namespace nfs-client-provisioner \
--values nfs-config-values.yaml
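For reference, a minimal nfs-config-values.yaml for the stable/nfs-client-provisioner chart might look like the sketch below. The server address and export path are placeholders; substitute your own NFS server details:

```yaml
nfs:
  server: 10.1.1.100          # placeholder: your NFS server address
  path: /srv/nfs/kubernetes   # placeholder: your exported path
storageClass:
  name: nfs-client
  defaultClass: true          # make this the cluster's default StorageClass
```

Setting defaultClass: true means PersistentVolumeClaims that do not name a StorageClass will be served by this provisioner.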

Deploy the ingress controller

Edit the nginx-ingress-config-values.yaml file and ensure the external IPs are changed to the masters and workers of your cluster. We use externalIPs to expose the ingress controller on port 443; port 80 is disabled by default with this configuration. This will not work as-is on a cloud provider, where you must change the config-values file to use a LoadBalancer service instead.
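The relevant part of nginx-ingress-config-values.yaml could look roughly like this sketch. The IP addresses are placeholders for your own master and worker nodes, and the key names assume the upstream nginx-ingress chart layout that the SUSE chart is based on:

```yaml
controller:
  service:
    enableHttp: false   # expose HTTPS (443) only
    externalIPs:        # placeholders: list your master and worker node IPs
      - 10.1.1.69
      - 10.1.1.72
      - 10.1.1.73
      - 10.1.1.74
```

On a cloud provider you would drop externalIPs and set controller.service.type to LoadBalancer instead.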

helm install --name nginx-ingress suse/nginx-ingress --namespace nginx-ingress --values nginx-ingress-config-values.yaml

Check that you can access the ingress default backend by going to https://<masterip>. You should get a 404 error after accepting a self-signed certificate. If you don’t get this far, stop now and troubleshoot, as everything beyond this point depends on a working ingress.

Cert manager

cert-manager is a native Kubernetes certificate management controller which automates certificate provisioning, challenges (to prove you own a domain), and renewal. It adds a set of CustomResourceDefinitions to extend the Kubernetes API with additional endpoints: where traditionally your object type was a Pod or a Deployment, it can now be a CertificateRequest or an Order.

We deploy cert-manager in a specific configuration and could spend hours talking about why we do it that way and how it all works, but ultimately, as long as you’ve added a domain to Cloudflare and have your API credentials, this how-to will work. cert-manager can also work with local CAs and self-signed certificates, and has many other integrations with corporate CA providers and other DNS providers. Teaching cert-manager concepts is out of scope for this article.
Start by adding the CRDs:
kubectl apply --validate=false -f https://raw.githubusercontent.com/jetstack/cert-manager/v0.13.1/deploy/manifests/00-crds.yaml
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install --name cert-manager \
--namespace cert-manager \
--version v0.13.1 jetstack/cert-manager \
--set ingressShim.defaultIssuerName=letsencrypt-production \
--set ingressShim.defaultIssuerKind=ClusterIssuer \
--set ingressShim.defaultIssuerGroup=cert-manager.io \
--set 'extraArgs={--dns01-recursive-nameservers=8.8.8.8:53\,1.1.1.1:53}'
Get your Cloudflare Global API key and turn it into a k8s secret:
echo -n 'cloudflare global API key' | base64

Put the generated key inside cloudflare.yaml in the designated space then apply it:
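If you are building cloudflare.yaml from scratch, a minimal Secret of this shape works. The name cloudflare-api-key and the data key apikey are assumptions; whatever names you pick must match the apiKeySecretRef in letsencrypt.yaml:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cloudflare-api-key   # assumed name; must match letsencrypt.yaml's apiKeySecretRef
  namespace: cert-manager
type: Opaque
data:
  apikey: <paste the base64-encoded Global API key here>
```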

kubectl apply -f cloudflare.yaml

Edit your letsencrypt.yaml and ensure the domains at the bottom of the file reflect the domains you wish to use for your Ingress Controller and have been added to your Cloudflare account. This file creates a ClusterIssuer (allowing cert-manager to issue certificates for domains without being scoped to a namespace). If you have an ingress rule for kubecf.com but that domain isn’t in the letsencrypt.yaml file, the certificate will stay pending forever, as there will be no Issuer matching the target domain in your ingress rule.
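For orientation, the ClusterIssuer defined by letsencrypt.yaml has roughly the shape below. The email addresses are placeholders, the Secret name is an assumption that must match your cloudflare.yaml, and the selector dnsNames list is the part you must edit for your own domain:

```yaml
apiVersion: cert-manager.io/v1alpha2     # API group used by cert-manager v0.13
kind: ClusterIssuer
metadata:
  name: letsencrypt-production           # must match ingressShim.defaultIssuerName above
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com               # placeholder: your contact address
    privateKeySecretRef:
      name: letsencrypt-production       # where the ACME account key is stored
    solvers:
    - dns01:
        cloudflare:
          email: you@example.com         # placeholder: your Cloudflare login
          apiKeySecretRef:
            name: cloudflare-api-key     # assumed: the Secret from cloudflare.yaml
            key: apikey
      selector:
        dnsNames:
        - '*.cap1.suse.ninja'            # replace with your wildcard domain
```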

kubectl apply -f letsencrypt.yaml

If you get a warning about a mutating webhook when applying letsencrypt.yaml, wait a couple of minutes and try again. The cert-manager pod generates some public/private keys on startup, and this takes a bit of time. The YAML will not apply successfully until all the cert-manager pods are ready.

Deploy Stratos

Edit the stratos-values.yaml file to change the password, and change the domain name in the ingress section to match your domain.
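A sketch of the relevant stratos-values.yaml sections, assuming the Stratos console chart’s console.service.ingress and console.localAdminPassword keys (replace the password and domain with your own):

```yaml
console:
  localAdminPassword: changeme        # placeholder: pick a strong password
  service:
    ingress:
      enabled: true
      host: stratos.cap1.suse.ninja   # replace with stratos.<your-domain>
```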

helm install suse/console --name stratos-console --namespace stratos --values stratos-values.yaml

Open a web browser and head to https://stratos.cap1.suse.ninja, replacing cap1.suse.ninja with your domain.
Be proud that you get no SSL warning and that you’re using a real certificate!

NOTE: It may take up to 10 minutes to validate that you own the domain name and issue the certificate through Let’s Encrypt. During this time, Stratos will be behind a self-signed certificate. You can check the status of the certificate challenge by going to the Kubernetes dashboard > Custom Resource Definitions and browsing through the Certificate, Order, CertificateRequest and Challenge API objects. They all update in real time, and you can view the status of each in the raw YAML (it will say something like “pending domain validation of X”). If you see something like “Unable to find a resolver that matches DOMAINNAME”, then you haven’t updated letsencrypt.yaml with the correct domain name you want to use.

Deploy monitoring

Create an authfile using the htpasswd command, or use https://hostingcanada.org/htpasswd-generator/ (use the Apache-specific salted MD5 format).
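If htpasswd is not installed on your management node, openssl can produce the same Apache salted-MD5 (apr1) hash. A sketch, with placeholder credentials — substitute your own username and password:

```shell
# Generate an Apache salted-MD5 (apr1) basic-auth entry without htpasswd.
# "admin" and "S3cretPass" are placeholder credentials -- choose your own.
printf 'admin:%s\n' "$(openssl passwd -apr1 'S3cretPass')" > authfile
cat authfile   # admin:$apr1$<random-salt>$<hash>
```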

Create the Kubernetes namespace and add the secret:

kubectl create ns monitoring 
kubectl create secret generic -n monitoring prometheus-basic-auth --from-file=authfile

On a master node:

cd /etc/kubernetes
sudo kubectl --kubeconfig=admin.conf -n monitoring create secret generic etcd-certs \
--from-file=/etc/kubernetes/pki/etcd/ca.crt \
--from-file=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
--from-file=/etc/kubernetes/pki/etcd/healthcheck-client.key

Switch back to your management node and install the Prometheus helm chart:

helm install --name prometheus suse/prometheus --namespace monitoring --values prometheus-config-values.yaml

Look into:
https://prometheus-alertmanager.cap1.suse.ninja/
https://prometheus-server.cap1.suse.ninja/

NOTE: use login/password that was generated from htpasswd and replace cap1.suse.ninja with your domain name

kubectl create -f prometheus-map.yaml
kubectl apply -f grafana-datasources.yaml
helm install --name grafana suse/grafana \
--namespace monitoring \
--values grafana-config-values.yaml
kubectl apply -f https://raw.githubusercontent.com/SUSE/caasp-monitoring/master/grafana-dashboards-caasp-cluster.yaml
kubectl apply -f https://raw.githubusercontent.com/SUSE/caasp-monitoring/master/grafana-dashboards-caasp-etcd-cluster.yaml
kubectl apply -f https://raw.githubusercontent.com/SUSE/caasp-monitoring/master/grafana-dashboards-caasp-namespaces.yaml
kubectl apply -f https://raw.githubusercontent.com/SUSE/caasp-monitoring/master/grafana-dashboards-caasp-nodes.yaml
kubectl apply -f https://raw.githubusercontent.com/SUSE/caasp-monitoring/master/grafana-dashboards-caasp-pods.yaml

Look into:
https://grafana.cap1.suse.ninja/
NOTE: use admin for the username with the same password generated from htpasswd and replace cap1.suse.ninja with your domain name

Something like this should show up

Deploy SUSE Cloud Application Platform

helm repo add suse https://kubernetes-charts.suse.com/

Edit the scf-values.yaml file with the correct domain name, and enable Eirini if you want to.
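The key sections of the values file look roughly like the sketch below. The field names assume the classic SCF chart layout; the domain, passwords and StorageClass name are placeholders you must replace:

```yaml
env:
  DOMAIN: cap1.suse.ninja             # replace with your domain
  UAA_HOST: uaa.cap1.suse.ninja       # replace with uaa.<your-domain>
  UAA_PORT: 2793
kube:
  storage_class:
    persistent: nfs-client            # the StorageClass created earlier
secrets:
  CLUSTER_ADMIN_PASSWORD: changeme    # placeholder: Cloud Foundry admin password
  UAA_ADMIN_CLIENT_SECRET: changeme   # placeholder
```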

helm install suse/cf --name susecf-scf --namespace scf --values scf-values.yaml

Wait until all the pods are deployed and ready.

Now you can run

cf login -a https://api.cap1.suse.ninja -u admin

NOTE: Because you now have a valid certificate, you can skip the --skip-ssl-validation flag.

 

You should now have a working setup with valid certificates recognized by most browsers. Enjoy!

 

Many thanks to SUSE’s CTO of Enterprise Cloud Products Rob de Canha-Knight for his endless patience and support teaching me Kubernetes and related technologies.


This entry was posted Tuesday, 7 April, 2020 at 1:15 pm
