Mastering Your SLES Estate: A Beginner’s Guide to Ansible on SLES 16

In the world of enterprise Linux, consistency and efficiency are essential. As your infrastructure expands, managing each server manually becomes unfeasible. This is where automation comes in, and for administrators of SUSE Linux Enterprise Server (SLES), there is a powerful, integrated, and fully supported solution: Ansible.

This blog post is your comprehensive starting point for a journey to explore SLES automation on Google Cloud. We will walk through, step by step, how to set up a complete Ansible environment on SLES 16, configure it using modern best practices, and run powerful automations that will save you time and streamline your daily operations.
Whether you’re a seasoned SLES administrator new to automation or a developer looking to manage your own infrastructure, by the end of this post, you will be ready to start managing your server fleet effectively.

What is SLES automation with Ansible?

SLES automation with Ansible is a powerful approach to managing your SLES infrastructure as code. By using Ansible’s simple, human-readable playbooks, you can automate repetitive tasks like patching, configuration, and software deployment across hundreds of servers simultaneously.
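As a taste of what that looks like, here is a minimal patching sketch. It assumes the community.general collection is available (the full ansible package we install later in this post ships it) and a host group named demo_servers, which we define later:

---
# Minimal sketch: apply all available package updates to a group of SLES hosts
- name: Patch all servers in the demo_servers group
  hosts: demo_servers
  become: true
  tasks:
    - name: Refresh repositories and update every package with zypper
      community.general.zypper:
        name: '*'
        state: latest
        update_cache: true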

Why Ansible on SLES 16?

SUSE has fully integrated Ansible into its management and automation strategy. When you use Ansible on SLES 16, you’re not just using a popular open source tool; you’re leveraging a technology that is deeply embedded and supported within the SUSE ecosystem. Ansible is available directly from the SLES repositories, ensuring a smooth and reliable installation. This powerful combination allows you to codify your infrastructure, turning manual, repetitive tasks into dependable, repeatable, and well-documented automation playbooks. From patching and configuration management to complex application deployments, Ansible on SLES provides a solid foundation for building a fully automated enterprise.

How do SLES System Roles simplify your automation?

One of the biggest benefits of automating on SLES is the inclusion of SLES System Roles. These are pre-packaged, SUSE-supported Ansible roles specifically designed and optimized to manage common SLES services and configurations. Think of them as expert-level automation blueprints, ready for you to use. Instead of figuring out all the individual steps to configure a service like Cockpit or set up a security policy with AIDE, you can simply call the official System Role. This not only saves a lot of time but also ensures that your configurations follow SUSE’s recommended best practices, providing a secure, stable, and supportable automated environment from day one.
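To illustrate the pattern, here is a hedged sketch that calls a time synchronization role through include_role. The role name (suse.linux_system_roles.timesync) and its timesync_ntp_servers variable follow the upstream linux-system-roles conventions and are assumptions here; confirm they are present in your installed collection (the directory listing in Step 2 shows exactly which roles ship with it):

---
# Sketch only: configure NTP through a System Role instead of editing chrony by hand
- name: Configure time synchronization with a System Role
  hosts: demo_servers
  become: true
  vars:
    # Variable name assumed from the upstream linux-system-roles timesync role
    timesync_ntp_servers:
      - hostname: pool.ntp.org
        iburst: true
  tasks:
    - name: Apply the timesync role
      ansible.builtin.include_role:
        # Confirm this role exists in your installed collection before relying on it
        name: suse.linux_system_roles.timesync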

Prerequisites: What do you need to start?

Before we dive into Ansible, let’s ensure our workshop setup is ready. A clean and properly configured environment is vital for a smooth learning experience.

Google Cloud infrastructure

To follow this post, you should already have a basic Google Cloud setup. This generally includes:

  • Google Cloud Project: A specific GCP project with billing activated.
  • APIs Enabled: The Compute Engine API is activated in your project.
  • Networking: A VPC network and firewall rules that enable SSH access to your virtual machines (see the example rule after this list).
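If you still need the SSH rule, a minimal example looks like the following; the rule name and source range are placeholders, so restrict the range to your own administrative network:

gcloud compute firewall-rules create allow-ssh-ingress \
  --network=default \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:22 \
  --source-ranges=<YOUR ADMIN IP RANGE>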

Foundational skills

  • Linux Fundamentals: A solid understanding of the Linux command line is essential.
  • Google Cloud SDK: Ensure the gcloud CLI is installed and that you are authenticated with your project (a quick check is shown after this list).
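A quick way to confirm both is shown below; these are standard gcloud commands and the project ID is a placeholder:

gcloud auth login
gcloud config set project <PROJECT NAME>
gcloud config list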

Step 1: Setting up our lab environment

Every good experiment needs a lab. We will set up two SLES 16 virtual machines in GCP as follows:

  1. Ansible control node (control-node): Acts as our orchestration server.
  2. Managed node (node1): Serves as a target for our automation.

NOTE

  • The following commands use placeholder values for project ID, zone, and service account. Please replace these with the correct values for your Google Cloud environment.
  • Always verify that you use the latest SLES 16 image for your project. Use the SUSE PINT tool to list SLES 16 images on Google Cloud, or simply check the newest SLES 16 version in the Google Cloud Console (an example listing command follows this note).
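For example, the following command lists the SLES 16 images published in the suse-cloud project; the filter expression is just one way to narrow the output and may need adjusting to the current image naming:

gcloud compute images list \
  --project=suse-cloud \
  --no-standard-images \
  --filter="name~'sles-16'"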

Ansible control node (control-node)

Create the Ansible control node Google Compute Engine (GCE) instance:

gcloud compute instances create control-node \
  --project=<PROJECT NAME> \
  --zone=<ZONE NAME> \
  --machine-type=e2-medium \
  --network-interface=network-tier=PREMIUM,stack-type=IPV4_ONLY,subnet=default \
  --maintenance-policy=MIGRATE \
  --provisioning-model=STANDARD \
  --service-account=<SERVICE ACCOUNT>@developer.gserviceaccount.com \
  --scopes=https://www.googleapis.com/auth/devstorage.read_only,https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/monitoring.write,https://www.googleapis.com/auth/service.management.readonly,https://www.googleapis.com/auth/servicecontrol,https://www.googleapis.com/auth/trace.append \
  --create-disk=auto-delete=yes,boot=yes,device-name=control-node,image=projects/suse-cloud/global/images/sles-16-0-v20251112-x86-64,mode=rw,size=30,type=pd-balanced \
  --no-shielded-secure-boot \
  --shielded-vtpm \
  --shielded-integrity-monitoring \
  --labels=goog-ec-src=vm_add-gcloud \
  --reservation-affinity=any

Update the system

# Connect to the new Ansible control node via SSH first
sudo zypper refresh
sudo zypper -n patch
sudo reboot

The managed node (node1)

Create node1 GCE instance:

gcloud compute instances create node1 \
  --project=<PROJECT NAME> \
  --zone=<ZONE NAME> \
  --machine-type=e2-medium \
  --network-interface=network-tier=PREMIUM,stack-type=IPV4_ONLY,subnet=default \
  --maintenance-policy=MIGRATE \
  --provisioning-model=STANDARD \
  --service-account=<SERVICE ACCOUNT>@developer.gserviceaccount.com \
  --scopes=https://www.googleapis.com/auth/devstorage.read_only,https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/monitoring.write,https://www.googleapis.com/auth/service.management.readonly,https://www.googleapis.com/auth/servicecontrol,https://www.googleapis.com/auth/trace.append \
  --create-disk=auto-delete=yes,boot=yes,device-name=node1,image=projects/suse-cloud/global/images/sles-16-0-v20251112-x86-64,mode=rw,size=30,type=pd-balanced \
  --no-shielded-secure-boot \
  --shielded-vtpm \
  --shielded-integrity-monitoring \
  --labels=goog-ec-src=vm_add-gcloud \
  --reservation-affinity=any
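
Before moving on, you can optionally confirm that both instances are up and running; the filter below is just one way to narrow the listing:

gcloud compute instances list --filter="name:(control-node node1)"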

Step 2: Installing and configuring Ansible

Now, let’s set up our central automation hub: the control-node GCE instance.

Install Ansible and System Roles

Install the Ansible packages by executing the following command, which installs both the Ansible Core engine and the powerful SLES System Roles.

sudo zypper -n in ansible ansible-linux-system-roles

Verify the installation. The following command shows the version of Ansible running on the control node:

ansible --version

List the available Ansible roles in the Ansible control node:

ls /usr/share/ansible/collections/ansible_collections/suse/linux_system_roles/roles/ 

Configure secure and passwordless connectivity

Ansible communicates over SSH. To enable seamless and secure automation, we will configure SSH key-based authentication between the Ansible control node and node1.

Generate a secure SSH key. We will use the modern ed25519 algorithm:

ssh-keygen -t ed25519 -f ~/.ssh/ansible -q -N ""

Copy the Ansible control node public key to node1. The ssh-copy-id command is an ideal tool for this task:

ssh-copy-id -i ~/.ssh/ansible.pub <YOUR USERNAME>@node1

Test the connection between the Ansible control node and node1. You should see node1’s hostname printed to your console without being prompted for a password:

ssh -i ~/.ssh/ansible node1 hostname

Create the Ansible inventory

The inventory is the core of Ansible’s understanding of your infrastructure. We will create an inventory.yml file to specify our managed nodes.

In the project directory, create the inventory.yml file as follows:

# This is the top-level group that contains all other groups and hosts
all:
  # This section defines global variables that apply to all hosts in the file
  vars:
    # Sets the default user for SSH connections
    ansible_user: <YOUR USERNAME>
    # Specifies the path to the Python interpreter on the remote hosts
    ansible_python_interpreter: /usr/bin/python3
    # Specifies the path to the SSH private key used for authentication
    ansible_ssh_private_key_file: ~/.ssh/ansible

  # This section defines groups of hosts
  children:
    # This is the 'demo_servers' group
    demo_servers:
      # This section lists the members of the 'demo_servers' group
      hosts:
        # The first host in the group
        node1:

NOTE
Replace the Ansible username with the one corresponding to your environment.
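
Before running any modules, you can optionally ask Ansible to show how it parsed the inventory; this confirms that node1 landed in the demo_servers group:

ansible-inventory -i inventory.yml --graph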

Verify Ansible connectivity by running your first ad-hoc command. The ping module is the classic way to confirm everything is working correctly:

ansible demo_servers -i inventory.yml -m ping

Seeing that green SUCCESS message is a great sign! It means your control node is ready to manage your infrastructure.
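The ping module is only one of many modules you can run ad hoc. For example, the following command runs uptime on every host in the group; substitute any module and arguments you like:

ansible demo_servers -i inventory.yml -m ansible.builtin.command -a "uptime"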

Step 3: Our first automation playbook

It’s time to put Ansible to work.  In your project directory, create a demo_files directory to store your playbook files, as follows:

mkdir demo_files

Ensuring essential packages are installed

Our first playbook is straightforward but practical: make sure essential tools and packages are installed on node1.

Create demo_files/install_basic.yml playbook with the following content:

---
# Name of the playbook
- name: Ensure essential packages are present
  # Target hosts for this play
  hosts: demo_servers
  # Execute tasks with elevated privileges
  become: true
  # List of tasks to be executed
  tasks:
    # Name of the task
    - name: Install curl, vim, and w3m
      # Module to use for package management
      ansible.builtin.package:
        # Name of the packages to install
        name:
          - curl
          - vim
          - w3m
        # Ensure the package is installed
        state: present

Run the Playbook as follows:

ansible-playbook -i inventory.yml demo_files/install_basic.yml

TIP
Notice how Ansible reports ‘ok’ if the packages already exist, or ‘changed’ if it had to install them. This concept, known as idempotency, is a fundamental principle of reliable automation.
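
If you want to preview what a playbook would change without touching the system, Ansible’s check mode gives you a dry run; it only predicts changes and applies nothing:

ansible-playbook -i inventory.yml demo_files/install_basic.yml --check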

Deploying Cockpit with SLES System Role

Let’s use a SLES System Role to deploy the Cockpit web console. This playbook will install the software, start the service, and correctly configure the system firewall.

Create demo_files/deploy_cockpit.yml playbook file with the following content:

---
# Name of the playbook
- name: Install Cockpit and automatically open firewall port
  # Target hosts for this play
  hosts: demo_servers
  # Execute tasks with elevated privileges
  become: true
  # Variables for the Cockpit role
  vars:
    # Ensure the Cockpit service is enabled
    cockpit_enabled: true
    # Ensure the Cockpit service is started
    cockpit_started: true
    # Allow the role to manage firewall rules
    cockpit_manage_firewall: true
  # List of tasks to be executed
  tasks:
    # Name of the task
    - name: Dynamically install Cockpit and open firewall
      # Include the Cockpit system role
      ansible.builtin.include_role:
        # Name of the role to include
        name: suse.linux_system_roles.cockpit

Run the playbook as follows:

ansible-playbook -i inventory.yml demo_files/deploy_cockpit.yml

Once finished, open a web browser and go to https://<NODE1_IP_ADDRESS>:9090 to view the Cockpit interface, fully installed and set up in just a few minutes. This showcases the power of role-based automation!
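
You can also confirm the service state from the control node with an ad-hoc command; this sketch assumes Cockpit’s default socket-activated setup (cockpit.socket). Keep in mind that the System Role manages the firewall on node1 itself, so if the page is not reachable from your browser, a separate Google Cloud firewall rule allowing TCP port 9090 may still be required:

ansible demo_servers -i inventory.yml -b -m ansible.builtin.command -a "systemctl is-active cockpit.socket"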

How can you start automating today? Your journey starts now!

Today, you’ve done more than just run a few commands. You’ve laid the foundation for a scalable, efficient, and repeatable way to manage your SUSE Linux Enterprise Server infrastructure. You’ve set up an Ansible control node, built a portable inventory, and performed both simple and advanced automation using a SUSE-supported System Role.

The power to automate is now in your hands. Explore the other SLES System Roles. Begin considering the repetitive, manual tasks you do every day and how you could convert them into straightforward Ansible playbooks. Welcome to the world of Infrastructure as Code.

For further reading, please refer to the SUSE documentation and guides on Ansible and SLES System Roles.

Abdelrahman Mohamed is an advocate for the new SUSE solutions, acting as a public speaker at SUSE conferences, delivering partner workshops, contributing to the SUSE best practices series, and contributing regularly to the official SUSE blogs. As the Global Solutions Architect for the Google Alliance at SUSE, he helps invent and enhance SUSE solutions operating on Google Cloud and contributes digital transformation guidance to simplify, modernize, and accelerate go-to-market efforts.