SUSE Virtualization – Traffic Split With Kubernetes Gateway API


SUSE Virtualization is a cloud-native hyperconverged infrastructure (HCI) solution optimized for running virtual machine and container workloads in data center, multi-cloud, and edge environments.

Designed and built with modern stacks and use cases in mind, SUSE Virtualization integrates cohesively with the Kubernetes API. This article demonstrates how to use the Kubernetes Gateway API to implement a traffic split strategy among virtual machines.

The Gateway API is an official project of the Kubernetes SIG-Network community, built to improve and standardize service networking in Kubernetes. It provides APIs that enable multiple traffic gateway controllers to run on the same cluster, and it supports advanced L4 and L7 capabilities for both North-South (ingress) and East-West (mesh) traffic.

About Traffic Split

As service applications evolve and become more sophisticated to satisfy the needs of their users, deliberate and calculated release strategies like canary deployments and blue/green rollouts can be used to reduce the chance of outages during upgrades.

Canary deployment makes a new version of the service available to a small subset of users, while keeping the old version online to serve the rest of the user base. Using a weighted route configuration, traffic can be distributed across different versions of the service. Once the new version is deemed to operate correctly, its share of the traffic is gradually increased until, eventually, all user requests are directed to the new version. The old version can then be taken offline. If problems arise during the transition, a rollback routes user traffic back to the old version.

The following scenario shows how to use the Gateway API HTTPRoute resource to associate weighted routes to two different backends.

[Figure: The components involved in a traffic split setup, including the load balancer, router, and backend services.]

Environment Setup

To perform the steps in this section, the kubectl and helm CLIs must be in your shell path.

Download and install Harvester 1.6.

Once Harvester is ready, retrieve its kubeconfig file.
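
Assuming the kubeconfig is downloaded from the Harvester UI (it is available on the Support page) and saved as harvester.yaml in the current directory, point kubectl at it and verify connectivity:

# Use the Harvester cluster for all subsequent kubectl and helm commands
export KUBECONFIG=$PWD/harvester.yaml

# Confirm the cluster is reachable
kubectl get nodes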

Gateway Configuration

Install Envoy Gateway v1.5.4, whose Helm chart bundles the Gateway API v1.3.0 custom resource definitions:

helm -n envoy-gateway-system install envoy-gateway oci://docker.io/envoyproxy/gateway-helm \
   --version v1.5.4 \
   --create-namespace

Wait for the gateway to be ready:

kubectl -n envoy-gateway-system wait --timeout=5m deployment/envoy-gateway --for=condition=Available
deployment.apps/envoy-gateway condition met
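
To double-check that the Gateway API custom resource definitions were installed by the chart:

kubectl get crd gatewayclasses.gateway.networking.k8s.io gateways.gateway.networking.k8s.io httproutes.gateway.networking.k8s.io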

Create the cluster-scoped GatewayClass resource for the Envoy Gateway:

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: envoy-gateway
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
EOF
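
Before proceeding, you can verify that the Envoy Gateway controller has accepted the class; the ACCEPTED column should report True:

kubectl get gatewayclass envoy-gateway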

Create a demo namespace with a Gateway resource:

kubectl create ns demo

cat <<EOF | kubectl -n demo apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: envoy-gateway
spec:
  gatewayClassName: envoy-gateway
  listeners:
  - name: http
    protocol: HTTP
    port: 80
EOF

This gateway is configured to listen on port 80 for demonstration purposes only. For production environments, ensure the gateway is TLS-enabled with a listener on port 443.
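
The following sketch shows what a TLS-enabled variant of the listener could look like, terminating HTTPS on port 443 with a certificate stored in a Kubernetes TLS Secret. The Secret name backend-example-com-tls is an assumption for illustration; it must be created beforehand (for example, with kubectl create secret tls):

cat <<EOF | kubectl -n demo apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: envoy-gateway
spec:
  gatewayClassName: envoy-gateway
  listeners:
  - name: https
    protocol: HTTPS
    port: 443
    tls:
      mode: Terminate
      certificateRefs:
      # Assumed TLS Secret holding the certificate and key for
      # backend.example.com; create it before applying this manifest.
      - kind: Secret
        name: backend-example-com-tls
EOF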

Confirm that the gateway is ready:

kubectl -n demo get gateway envoy-gateway -ocustom-columns="\
NAME:.metadata.name,\
PROGRAMMED:.status.listeners[0].conditions[0].status,\
ACCEPTED:.status.listeners[0].conditions[1].status,\
RESOLVED:.status.listeners[0].conditions[2].status"
NAME          PROGRAMMED ACCEPTED RESOLVED
envoy-gateway True       True     True

Service Configuration

This section describes the steps to create version 1 (v1) and version 2 (v2) of a sample HTTP service backed by Nginx.

Use the Harvester UI to upload the Ubuntu Noble image to the demo namespace:

  • Namespace: demo
  • Name: ubuntu-24.04-minimal-cloudimg-amd64.img
  • URL: https://cloud-images.ubuntu.com/minimal/releases/noble/release-20251001/ubuntu-24.04-minimal-cloudimg-amd64.img
  • Storage class: harvester-longhorn

Create a new Nginx virtual machine named http-v1 with the following properties:

  • Namespace: demo
  • Name: http-v1
  • CPU: 2
  • Memory: 4 GiB
  • Volume image: ubuntu-24.04-minimal-cloudimg-amd64.img
  • Instance labels:
    • Key: service.version
    • Value: v1
  • User data to install Nginx (under Advanced Options):
#cloud-config
package_update: true
packages:
  - qemu-guest-agent
  - nginx
runcmd:
  - - systemctl
    - enable
    - --now
    - qemu-guest-agent.service
  - - systemctl
    - enable
    - --now
    - nginx.service
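
Provisioning can take a few minutes while cloud-init installs the packages. One way to check on the virtual machine is to query its VirtualMachineInstance; Harvester is built on KubeVirt, so the vmi resource is available:

# The PHASE column should report Running once the VM is up
kubectl -n demo get vmi http-v1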

Once the http-v1 VM is ready, expose it via the following backend-v1 service:

cat <<EOF | kubectl -n demo apply -f -
apiVersion: v1
kind: Service
metadata:
  name: backend-v1
spec:
  selector:
    service.version: v1
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
EOF

The selector of the service must match the instance label of the virtual machine instance defined above.
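
One way to confirm that the selector matched is to check that the service has endpoints; an EndpointSlice carrying the virtual machine's address should exist:

kubectl -n demo get endpointslices -l kubernetes.io/service-name=backend-v1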

Once the backend-v1 service is ready, use a curl pod to validate the connection:

cat <<EOF | kubectl -n demo create -f -
apiVersion: v1
kind: Pod
metadata:
  name: curl
spec:
  containers:
  - name: curl
    image: alpine/curl
    command: ["sleep", "infinity"]
EOF
kubectl -n demo exec curl -- curl backend-v1

Expect the service to respond with Nginx's default welcome page.

Repeat the steps above to set up the http-v2 virtual machine, using the instance label service.version: v2.

Deploy the following service to proxy traffic to the v2 virtual machine:

cat <<EOF | kubectl -n demo apply -f -
apiVersion: v1
kind: Service
metadata:
  name: backend-v2
spec:
  selector:
    service.version: v2
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
EOF

Just like with v1, use the following command to confirm that the v2 service is ready:

kubectl -n demo exec curl -- curl backend-v2

Canary Route Configuration

Deploy the following HTTPRoute resource to set up the backend.example.com canary route:

cat <<EOF | kubectl -n demo apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: backend-demo
spec:
  parentRefs:
  - name: envoy-gateway
  hostnames:
  - backend.example.com
  rules:
  - backendRefs:
    - name: backend-v1
      port: 80
      weight: 1
    - name: backend-v2
      port: 80
      weight: 0
    matches:
    - path:
        type: PathPrefix
        value: /
EOF

The initial route weight configuration ensures that 100% of the incoming traffic is routed to the backend-v1 service.
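
At any point during the rollout, the current weights can be read back from the route spec:

kubectl -n demo get httproute backend-demo -o jsonpath='{.spec.rules[0].backendRefs[*].weight}'
1 0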

Confirm that the route is ready:

kubectl -n demo get httproute backend-demo -ocustom-columns="\
NAME:.metadata.name,\
ACCEPTED:.status.parents[0].conditions[0].status,\
RESOLVED:.status.parents[0].conditions[1].status"
NAME           ACCEPTED   RESOLVED
backend-demo   True       True

Use the curl pod to verify that the route is reachable via Envoy:

ENVOY_SERVICE=$(kubectl get svc -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-namespace=demo,gateway.envoyproxy.io/owning-gateway-name=envoy-gateway -o jsonpath='{.items[0].metadata.name}')

kubectl -n demo exec curl -- curl -s -H "Host: backend.example.com" ${ENVOY_SERVICE}.envoy-gateway-system

Expect the service to respond with Nginx's default welcome page.

Traffic Split With Canary Routing

To validate the correctness of the traffic split setup, install and launch tcpdump on the http-v1 and http-v2 virtual machines:

sudo apt install tcpdump

Sniff for incoming HTTP traffic at port 80:

sudo tcpdump -i enp1s0 -s 0 -A 'tcp port 80'
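
Alternatively, tailing the Nginx access log on each virtual machine surfaces the same information, assuming Ubuntu's default log location:

sudo tail -f /var/log/nginx/access.log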

The network sniffer remains quiet until the curl pod starts sending traffic to the canary endpoint:

kubectl -n demo exec curl -- /bin/sh -c "while true; do curl -s -H 'Host: backend.example.com' ${ENVOY_SERVICE}.envoy-gateway-system; sleep 3; done"

Since the HTTPRoute resource is configured to direct all incoming traffic to the backend-v1 service, only the tcpdump running on the http-v1 virtual machine shows traces of incoming traffic.

Patch the HTTPRoute configuration to send 50% of the traffic to the backend-v2 service:

kubectl -n demo patch httproute backend-demo --type="json" -p='[{"op":"replace","path":"/spec/rules/0/backendRefs/0/weight","value":5},{"op":"replace","path":"/spec/rules/0/backendRefs/1/weight","value":5}]'

Expect the tcpdump sniffer in the http-v2 virtual machine to pick up traces of HTTP packets.
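
tcpdump confirms the split at the packet level. To also observe the ratio from the client side, you can serve distinct content from each version, for example by overwriting the default page on each virtual machine (this assumes Ubuntu's default Nginx document root of /var/www/html):

# On the http-v1 virtual machine:
echo "backend v1" | sudo tee /var/www/html/index.html

# On the http-v2 virtual machine:
echo "backend v2" | sudo tee /var/www/html/index.html

The output of the curl loop should then alternate between the two responses at a roughly even rate.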

Finally, shift all the incoming traffic entirely to the backend-v2 service by changing its corresponding weight to 1:

kubectl -n demo patch httproute backend-demo --type="json" -p='[{"op":"replace","path":"/spec/rules/0/backendRefs/0/weight","value":0},{"op":"replace","path":"/spec/rules/0/backendRefs/1/weight","value":1}]'

As expected, the http-v1 virtual machine no longer receives any traffic. All traffic now heads to the http-v2 virtual machine.
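
If problems had surfaced during the transition, a rollback is simply the inverse patch, restoring all traffic to the v1 backend:

kubectl -n demo patch httproute backend-demo --type="json" -p='[{"op":"replace","path":"/spec/rules/0/backendRefs/0/weight","value":1},{"op":"replace","path":"/spec/rules/0/backendRefs/1/weight","value":0}]'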

Conclusion

This article demonstrated how to use the Kubernetes Gateway API to implement canary deployments among virtual machines. Harvester provides a seamless API abstraction that simplifies the management of virtual machines. Combined with the Kubernetes Gateway API, cluster administrators can implement advanced release and rollback strategies to reduce the chance of outages during upgrades.

Taking it a step further, readers can implement more advanced routing scenarios involving header matching and cross-namespace routing among virtual machines using Harvester and the Gateway API, to build more resilient routing setups and improve service availability.

To remove all the demo resources, run:

kubectl delete ns demo
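
If the Envoy Gateway installation is no longer needed, remove it along with the cluster-scoped GatewayClass:

kubectl delete gatewayclass envoy-gateway
helm -n envoy-gateway-system uninstall envoy-gateway
kubectl delete ns envoy-gateway-system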

Ivan Sim is a Principal Software Engineer on SUSE Virtualization and a contributor to Kubernetes and Kubernetes CSI.