Ingress Controllers: Getting In and Getting It Right

The SUSE CaaS Platform team recently released a new feature that is available to all subscribers using version 3: an ingress controller. It is a valuable part of Kubernetes networking, especially as clusters grow in node count and, even more so, in application count.

From Outside In

There are several ways to configure external network access to services – or, alternatively, to block it. Services that only communicate with other services inside the cluster don’t need external access configured – in fact, they should explicitly be configured without external access. By declaring a service to be of type ClusterIP, you declare it to be reachable only from within the cluster.

apiVersion: v1
kind: Service
metadata:  
  name: my-internal-service
spec:
  selector:    
    app: my-app
  type: ClusterIP
  ports:  
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP

(Actually, that’s not entirely accurate: external access can be configured using the Kubernetes proxy. But because this requires running kubectl as an authenticated user, it should really only be used as a temporary debugging tool in non-production environments that are not Internet-accessible.)
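
For illustration, a debugging session against the ClusterIP service above might look like the following – a minimal sketch, assuming the default namespace and kubectl’s default proxy port:

# Start a local proxy to the Kubernetes API server (debugging only)
kubectl proxy --port=8001

# The service can then be reached through the API server's service proxy path:
# http://localhost:8001/api/v1/namespaces/default/services/my-internal-service:http/proxy/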

The easiest way to make services accessible from outside the cluster is to define them as type NodePort. This defines the service as accessible on a specific port on every node of the cluster.

apiVersion: v1
kind: Service
metadata:  
  name: my-nodeport-service
spec:
  selector:    
    app: my-app
  type: NodePort
  ports:  
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30036
    protocol: TCP

In this example, the service will listen on port 30036 on every node of the cluster. (The port selected must be unused and in the range 30000-32767; if you omit the nodePort selection, Kubernetes will choose one for you.) This service is the only one that can use port 30036 in this cluster – every service needs its own port number. In a large, complex cluster, this can be a significant limiting factor.

If you are working in a public cloud that offers load balancers, or you have a supported load balancer in your on-premises network in front of the cluster, you can configure your service with type LoadBalancer. The load balancer will allocate an IP address per service. This can get expensive in a public cloud, and it can be severely limited by your external IP address allocation in an on-premises deployment.
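
For comparison, a LoadBalancer declaration for the same application might look like this – a minimal sketch with an assumed service name, where the external IP is allocated by the cloud provider or the on-premises load balancer controller:

apiVersion: v1
kind: Service
metadata:
  name: my-loadbalancer-service
spec:
  selector:
    app: my-app
  # The external IP address is allocated by the cloud provider or
  # on-premises load balancer controller once the service is created.
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP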

What is an Ingress Controller?

A much more flexible and powerful approach is to use an ingress controller. In Kubernetes, ingress is a separate resource type, not a service type, and the ingress controller is the component that implements it. It sits in front of multiple services, and enables and controls access to them.

Here’s an example of an ingress controller service declaration.

apiVersion: v1
kind: Service
metadata:
  name: my-ingress-controller
  namespace: default
  labels:
    k8s-app: my-ingress-controller
spec:
  type: LoadBalancer
  ports:
  - port: 80
    nodePort: 30021
    name: http
  - port: 443
    nodePort: 30022
    name: https
  selector:
    k8s-app: my-ingress-controller

Note that the service identifies itself as type LoadBalancer. This is typical, since it allows the use of multiple ports, but if you want to restrict it to one port, you can use type NodePort. (If you do, you will have to choose one of HTTP or HTTPS to serve, not both.)

Behind an ingress controller, the individual services can be of type ClusterIP – since the outside network talks to the ingress controller, not the services themselves.

More than Just a Bridge

Of course, the ingress controller can connect external IP addresses and ports to internal services. But that’s not its only capability. Features offered by ingress controllers can include:

  • Fanout Multiplexing (Path-Based Routing): Unlike the other approaches, which require one IP address and/or port per service, an ingress controller can use the same IP address and port for multiple services, using the URL path to distinguish between them (see the example manifest after this list). For example, on a game server cluster, these could all use the same IP address and port:
    https://www.mygameserver.com/baseball
    https://www.mygameserver.com/basketball
    https://www.mygameserver.com/hockey
  • Name-based Virtual Hosting (Host-Based Routing): The same approach can be applied to totally different domains sharing the same address. Using the above services, I could also expose them as subdomains:
    https://baseball.mygameserver.com
    https://basketball.mygameserver.com
    https://hockey.mygameserver.com
  • TLS termination: You can set up secure, encrypted access via Transport Layer Security (TLS) by specifying a Kubernetes secret that contains a private key and certificate. The certificate should cover each FQDN (or a matching wildcard) being served. (Note that SUSE CaaS Platform also uses TLS between services and components of the cluster, and supplies its own internal certificates.)
  • Load Balancing: Basic load balancing properties such as the load balancing algorithm and service weights are implemented in many ingress controllers. More advanced capabilities require a hardware or software load balancer.
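
To make the first three features concrete, here is a minimal Ingress resource sketch for the game server example above. The host names, backend service names, and TLS secret name are assumptions; the extensions/v1beta1 API group matches the Kubernetes versions shipped with SUSE CaaS Platform 3, while newer clusters use networking.k8s.io/v1 with a slightly different backend syntax.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-game-ingress
spec:
  tls:
  - hosts:
    - www.mygameserver.com
    secretName: mygameserver-tls    # secret containing tls.crt and tls.key
  rules:
  # Fanout: one host, multiple services selected by URL path
  - host: www.mygameserver.com
    http:
      paths:
      - path: /baseball
        backend:
          serviceName: baseball
          servicePort: 80
      - path: /basketball
        backend:
          serviceName: basketball
          servicePort: 80
  # Name-based virtual hosting: a subdomain routed to its own service
  - host: hockey.mygameserver.com
    http:
      paths:
      - backend:
          serviceName: hockey
          servicePort: 80

All of these hosts and paths share the single IP address and the ports exposed by the ingress controller service shown earlier; each rule simply names the backend service and port to route to.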

Ingress controllers add flexibility and security to Kubernetes environments, and remove potential IP address and namespace obstacles to scaling.

Experience the Benefits

If you’re a subscribed user of SUSE CaaS Platform 3, you can download the ingress controller from the registry and try it yourself. If you aren’t, start a 60-day free trial.
