How to edit the upstream nameservers used by CoreDNS in a Rancher Kubernetes Engine (RKE) or Rancher v2.x provisioned Kubernetes cluster

This document (000020122) is provided subject to the disclaimer at the end of this document.

Situation

Task

By default, CoreDNS pods inherit the nameserver configuration from the node they run on. In certain circumstances, it may be desirable to override this and use a specific set of nameservers for external queries.

Note: These steps update the nameservers only for Pods that use either the ClusterFirst (default) or ClusterFirstWithHostNet DNS policy. Nameserver configuration for nodes and other Pods will not be affected.
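
To see where CoreDNS currently forwards external queries, the Corefile held in the CoreDNS ConfigMap can be inspected. The commands below are a sketch and assume the default RKE addon object names (the coredns ConfigMap in the kube-system namespace and the k8s-app=kube-dns pod label):

    # Print the Corefile; the "forward" line shows where external queries
    # are sent (by default the node's /etc/resolv.conf).
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'

    # List the CoreDNS pods serving cluster DNS.
    kubectl -n kube-system get pods -l k8s-app=kube-dns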

Prerequisites

  • A Kubernetes cluster provisioned by the Rancher Kubernetes Engine (RKE) CLI or Rancher v2.x, with the CoreDNS addon enabled.

Note: The same steps can also be used when creating new clusters.

Steps

Option A: Update the cluster.yaml

The cluster configuration YAML provides the upstreamnameservers option, which configures a list of upstream nameservers, per the example below:

  1. Add the upstreamnameservers option, with the list of nameservers, to the cluster configuration YAML. For RKE provisioned clusters, add this to the cluster.yml file. For a Rancher provisioned cluster, navigate to Cluster Management in the Rancher UI, select Edit Config for the cluster, then click Edit as YAML.

    dns:
      provider: coredns
      upstreamnameservers:
      - 1.1.1.1
      - 8.8.8.8
  2. Update the cluster with the new configuration. For RKE provisioned clusters, invoke rke up --config cluster.yml (ensure the cluster.rkestate file is present in the working directory when invoking rke up). For Rancher provisioned clusters, click Save in the Rancher UI Edit as YAML view.

Note: This option is recommended as it requires minimal change. See the RKE add-ons documentation for more information.
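
Once the update completes, the change can be verified from any workstation with kubectl access to the cluster. This is a sketch, assuming the default coredns ConfigMap name and using a throwaway busybox pod for a test lookup:

    # The forward directive should now reference the configured upstream
    # nameservers rather than /etc/resolv.conf.
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep forward

    # Confirm external names still resolve from inside the cluster.
    kubectl run dnstest --image=busybox:1.28 --restart=Never --rm -it -- nslookup suse.com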

Option B: Update the kubelet resolv.conf

By default, the kubelet will refer to the /etc/resolv.conf file as the source for nameserver configuration.

It is possible to override this by adding an extra_args option to the kubelet service; this is also configured in the cluster configuration YAML.

A custom resolv.conf file can then be used by the kubelet instead, per the example below:

  1. On each of the nodes in the cluster, create the custom nameserver configuration file with a nameserver IP address:

    echo "nameserver 8.8.8.8" > /etc/k8s-resolv.conf
  2. Add the resolv-conf flag, referencing the custom nameserver configuration file, to the extra_args option for the kubelet service in the cluster configuration YAML. For RKE provisioned clusters, add this to the cluster.yml file. For a Rancher provisioned cluster, navigate to Cluster Management in the Rancher UI, select Edit Config for the cluster, then click Edit as YAML.

    services:
      kubelet:
        extra_args:
          resolv-conf: /host/etc/k8s-resolv.conf
  3. Update the cluster with the new configuration. For RKE provisioned clusters, invoke rke up --config cluster.yml (ensure the cluster.rkestate file is present in the working directory when invoking rke up). For Rancher provisioned clusters, click Save in the Rancher UI Edit as YAML view.

See the RKE services documentation for more information.

Note: Because kubelet flags are updated, the kubelet component will be restarted on each node.
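
A quick way to confirm the flag was applied is to inspect the kubelet container on a node. This is a sketch, assuming the default RKE container name kubelet:

    # The kubelet container should now include the --resolv-conf flag
    # pointing at the custom file.
    docker inspect kubelet | grep resolv-conf

    # The host-side file referenced by the flag should list the desired
    # nameserver(s).
    cat /etc/k8s-resolv.conf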

Option C: Update the node resolv.conf

If the nameserver configuration should be consistent between the OS and Kubernetes pods, updating the node /etc/resolv.conf file is recommended.

This could be because the nameservers are changing, or because a caching configuration (for example, systemd-resolved) is not desired.

How a systemd-managed resolv.conf is changed depends on the Linux distribution; refer to the documentation for the distribution used in the cluster.
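
As one illustration only (the exact procedure varies by distribution), on hosts where /etc/resolv.conf is a symlink to the systemd-resolved stub resolver, it can instead be pointed at the file that lists the real upstream nameservers:

    # Check whether /etc/resolv.conf is managed by systemd-resolved.
    ls -l /etc/resolv.conf

    # Example only: bypass the 127.0.0.53 stub listener by linking to the
    # file containing the upstream nameservers known to systemd-resolved.
    sudo ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf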

Note: The kubelet component reads the /etc/resolv.conf file at start time; as such, the kubelet must be restarted manually on each node after the /etc/resolv.conf file is updated.

This can be accomplished in a number of ways:

  • Running docker restart kubelet on each node
  • Draining and restarting each node (see the sketch below)
  • Replacing nodes in the cluster with the updated configuration
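
The drain and restart approach might look like the sketch below, which assumes a hypothetical node named worker-1 and SSH access to it; adjust the node name, kubectl flags, and restart command to the environment:

    # Move workloads off the node before restarting the kubelet.
    kubectl drain worker-1 --ignore-daemonsets --delete-emptydir-data

    # Restart the kubelet container so it re-reads /etc/resolv.conf.
    ssh worker-1 'sudo docker restart kubelet'

    # Allow workloads to be scheduled on the node again.
    kubectl uncordon worker-1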

Disclaimer

This Support Knowledgebase provides a valuable tool for SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.

  • Document ID: 000020122
  • Creation Date: 06-May-2021
  • Modified Date: 26-Jun-2024
    • SUSE Rancher
