CentOS Alternatives: Automating With Ansible - Part 2 | SUSE Communities

CentOS Alternatives: Migrating Workloads From CentOS To OpenSUSE Leap – Automating With Ansible Part 2


In this blog post, we’ll dive into adapting your Ansible code written for CentOS to openSUSE Leap, ensuring seamless compatibility. In the first part, we provided advice and a general introduction to ease your way into the process. In this second part, we’ll delve into practical examples of troubleshooting and adapting existing roles.

Together, we’ll navigate the challenges and rediscover the freedom of choice by making your playbooks OS-agnostic.

 

Set up openSUSE Leap as the Ansible control node

Before we dive into playbook adaptation, let’s ensure Ansible is set up on your openSUSE Leap system.

We need a place to run our playbooks and some target systems.

For this blog I used one CentOS managed node and two openSUSE Leap systems, one as a managed node and the other as the control node:

  • myopensuse.mydemo.lab: openSUSE Leap 15.5, control node
  • myopensuse15.mydemo.lab: openSUSE Leap 15.5, managed node, existing HTTP/S services on ports 8080/8443
  • mycentos9.mydemo.lab: CentOS Stream 9, managed node, existing HTTP/S services on ports 8080/8443

 

Directory structure for this exercise:

  • /var/tmp/git: will contain the public Git repositories we clone.

  • /var/tmp/ansible: will contain the Ansible playbooks, roles, inventory and configuration.

Notes:

  • The Control node has never run Ansible before, so there isn’t any existing configuration other than the default that comes with the Ansible package.
  • All the commands, unless stated otherwise, are to be run from the control node.
  • This setup is not intended for production; the main purpose of this post is to help understand and adapt existing playbooks to work with openSUSE.
  • There is no need to install Ansible on any of the managed nodes.

 

The first step is to install Ansible on the control node. To do so, execute the following command:

zypper install ansible

Let’s proceed to set up the Ansible environment by running the following commands:

  • Create the directories:

    mkdir -p /var/tmp/{git,ansible} /var/tmp/ansible/roles

     

  • Create the Ansible configuration file:

    vim /var/tmp/ansible/ansible.cfg

    that will contain the following:

    [defaults]
    inventory = ./inventory

     

  • Create the initial inventory file:

    vim /var/tmp/ansible/inventory

    with the following content:

    [proxyservers]
    myopensuse.mydemo.lab
    mycentos9.mydemo.lab
    
    [mysqlservers]
    myopensuse15.mydemo.lab
    mycentos9.mydemo.lab
    

     

    Two things to notice here. First, I am using the INI format, but other formats such as YAML can also be used; you can easily convert the inventory to YAML with this command:

    ansible-inventory -i inventory -y --list > inventory.yml

     

    Second, I am using the CentOS managed host in both groups, just to have a baseline to compare against.
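    For reference, converting the inventory above should produce YAML roughly like the following (a sketch; the exact ansible-inventory output may also include an "ungrouped" entry):

```yaml
all:
  children:
    mysqlservers:
      hosts:
        mycentos9.mydemo.lab: {}
        myopensuse15.mydemo.lab: {}
    proxyservers:
      hosts:
        mycentos9.mydemo.lab: {}
        myopensuse.mydemo.lab: {}
```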

 

We will use SSH to communicate with the managed nodes. Here is an example of how to set up your control node to connect seamlessly to them:

  1. Create an SSH key. You will be asked for a passphrase and a location; for the location you can use the default.

    ssh-keygen -t ed25519

     

  2. Start an SSH agent:

    eval `ssh-agent`

     

  3. Add the key to your SSH agent. If you set a passphrase when creating your key, you will be asked for it here:

    ssh-add

     

  4. Copy the key to your managed nodes. You will be asked for each managed node’s password:

    for managed_node in myopensuse15.mydemo.lab mycentos9.mydemo.lab; do ssh-copy-id ${managed_node}; done

     

You will need to repeat steps 2 and 3 every time you log out of the shell, to avoid typing the passphrase repeatedly in case you chose to set one for your key. Please remember this setup is not intended for production.

 

First example: Install a load balancer.

In this example we are going to use an Ansible role to install and configure HAProxy to act as a load balancer for two existing web servers. We will adapt an existing Ansible role hosted in the rhtconsulting GitHub project for this purpose, together with a playbook of our own.

Details:

  • Original Ansible role used: https://github.com/rhtconsulting/ansible-role_haproxy.git
  • Web servers (hostname/port): myopensuse15.mydemo.lab/8443, mycentos9.mydemo.lab/8443
  • Hostname for the proxy: webserver.mydemo.lab (resolves to an IP available on myopensuse.mydemo.lab)

 

First, we will clone the repository containing the role and link it to our Ansible roles folder:

cd /var/tmp/git ; git clone https://github.com/rhtconsulting/ansible-role_haproxy.git
cd /var/tmp/ansible/; ln -s /var/tmp/git/ansible-role_haproxy roles/haproxy

 

After that we will create a new playbook that will make use of the new role we just cloned:

vim install_haproxy.yml

with the following content:

- name: Install HAProxy
  hosts: proxyservers
  roles:
  - role: haproxy
    haproxy_applications:
    - name: web_server
      domain: webserver.mydemo.lab
      expose_https: True
      redirect_http_to_https: True
      servers:
      - name: mycentos9.mydemo.lab
        address: mycentos9.mydemo.lab
        port_https: 8443
      - name: myopensuse15.mydemo.lab
        address: myopensuse15.mydemo.lab
        port_https: 8443

 

Now let’s run the playbook to see if it works:

ansible-playbook install_haproxy.yml

This command will return an error on our openSUSE server:

...
TASK [haproxy : HAProxy | Install | Start and Enable Service] ***************************************************************************************************************
ok: [mycentos9.mydemo.lab]
fatal: [myopensuse.mydemo.lab]: FAILED! => {"changed": false, "msg": "Unable to start service haproxy: Job for haproxy.service failed because the control process exited with error code.\nSee \"systemctl status haproxy.service\" and \"journalctl -xeu haproxy.service\" for details.\n"}
...
PLAY RECAP ******************************************************************************************************************************************************************
mycentos9.mydemo.lab : ok=10 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
myopensuse.mydemo.lab : ok=3 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0

Let’s investigate. If we look at the logs by executing “journalctl -xeu haproxy.service”, we can see the following:

...
░░ The job identifier is 3158.
Jul 28 09:16:19 myopensuse.mydemo.lab haproxy[7865]: [ALERT] (7865) : Could not open configuration file /etc/haproxy/haproxy.cfg : Permission denied
Jul 28 09:16:19 myopensuse.mydemo.lab systemd[1]: haproxy.service: Control process exited, code=exited, status=1/FAILURE
...

It seems HAProxy can’t open the configuration file. Let’s look at the file permissions:

myopensuse:/var/tmp/ansible # ls -lh /etc/haproxy/haproxy.cfg
-rw-r----- 1 haproxy haproxy 3.4K Jul 28 08:34 /etc/haproxy/haproxy.cfg

Now let’s see what has changed since the original configuration file was deployed by the application RPM:

myopensuse:/var/tmp/ansible # rpm -Va haproxy
S.5..U.T. c /etc/haproxy/haproxy.cfg

 

We can see the file mode is intact, but the user ownership differs (the “U” flag in the rpm -Va output). Let’s restore it with this command (we will ignore the output):

rpm --setugids haproxy

Let’s check the file again:

myopensuse:/var/tmp/ansible # ls -lh /etc/haproxy/haproxy.cfg
-rw-r----- 1 root haproxy 3.4K Jul 28 08:34 /etc/haproxy/haproxy.cfg

 

Now we can see the file was originally owned by root. It may not look like a problem, but because openSUSE protects HAProxy with AppArmor, this change in ownership conflicts with the security policy.

Let’s look at the role we downloaded:

cat roles/haproxy/tasks/configure.yml
...
- name: HAProxy | Configure | Update haproxy.cfg
  template:
    src: templates/haproxy.cfg.j2
    dest: /etc/haproxy/haproxy.cfg
    owner: haproxy
    group: haproxy
...

 

We can see it specifies the owner and group of the file. This is fine, but it limits compatibility with other operating systems, because they may use a different user and group. Normally the main configuration files are deployed by the installation package, so there is no need to specify the owner and group.
To solve this, let’s modify the role to leave the user and group untouched:

...
- template:
    src: templates/haproxy.cfg.j2
    dest: /etc/haproxy/haproxy.cfg
    # owner: haproxy
    # group: haproxy
...
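A more flexible alternative to commenting the lines out is to make the ownership configurable, defaulting to omit. The variable names haproxy_cfg_owner and haproxy_cfg_group below are our own invention, not part of the original role; this is only a sketch of the pattern:

```yaml
- name: HAProxy | Configure | Update haproxy.cfg
  template:
    src: templates/haproxy.cfg.j2
    dest: /etc/haproxy/haproxy.cfg
    # Hypothetical variables: when left undefined, the attribute is
    # omitted entirely and the ownership set by the package is kept.
    owner: "{{ haproxy_cfg_owner | default(omit) }}"
    group: "{{ haproxy_cfg_group | default(omit) }}"
```

With this in place, the role keeps the packaged defaults on any distribution, while callers that really need a specific owner can still set one via a variable.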

and let’s run it again; this time we can see it succeeded on both CentOS and openSUSE:

myopensuse:/var/tmp/ansible # ansible-playbook install_haproxy.yml
...
PLAY RECAP ******************************************************************************************************************************************************************
mycentos9.mydemo.lab : ok=10 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
myopensuse.mydemo.lab : ok=10 changed=1 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0

 

Now let’s test it:

myopensuse:/var/tmp/ansible # curl -k https://webserver.mydemo.lab/
Hello from myopensuse15!
myopensuse:/var/tmp/ansible # curl -k https://mycentos9.mydemo.lab:8443/
Hello from mycentos9!
myopensuse:/var/tmp/ansible # curl -k https://myopensuse15.mydemo.lab:8443/
Hello from myopensuse15!

 

We can see the load balancer and both web servers work.

 

Second example: Install a MySQL database.

In this example we are going to use an Ansible role to install and configure a MariaDB database, a fork of MySQL. We will use an Ansible role hosted in the CentOS GitHub project for this purpose, together with a playbook of our own.

Details:

  • Original Ansible role used: https://github.com/CentOS/ansible-role-mysql
  • Managed hosts: myopensuse15.mydemo.lab, mycentos9.mydemo.lab

 

First, we will clone the repository containing the role and link it to our Ansible roles folder:

cd /var/tmp/git ; git clone https://github.com/CentOS/ansible-role-mysql
cd /var/tmp/ansible/; ln -s /var/tmp/git/ansible-role-mysql roles/mysql

 

After that we will create a new playbook that will call this role:

vim install_mariadb.yml

with the following content:

- name: Install mariadb
  hosts: mysqlservers
  roles:
  - role: mysql

 

Now let’s run the playbook to see if it works:

ansible-playbook install_mariadb.yml

 

The execution will fail on both CentOS and openSUSE:

...
TASK [mysql : Importing specific distro variables] **************************************************************************************************************************
fatal: [myopensuse15.mydemo.lab]: FAILED! => {"msg": "No file was found when using first_found. Use errors='ignore' to allow this task to be skipped if no files are found"}
ok: [mycentos9.mydemo.lab] => (item=/var/tmp/git/ansible-role-mysql/vars/CentOS-9.yml)
...
TASK [Ensuring backup user and jobs] ****************************************************************************************************************************************
ERROR! the role 'centos-backup' was not found in /var/tmp/ansible/roles:/root/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles:/var/tmp/ansible

 

The first error occurs because the role expects a variables file, named after the OS and its version, containing all the required values. This is the relevant part of the code:

cat roles/mysql/tasks/main.yml

Output:

…
- name: Importing specific distro variables
  include_vars: "{{ item }}"
  with_first_found:
  - "{{ ansible_distribution }}-{{ ansible_distribution_major_version }}.yml"
  - "{{ ansible_distribution }}.yml"
…

 

Although this is valid and allows better control over which OS the role can be applied to, it limits flexibility by forcing us to change the role every time we want to support a new OS. Let’s make this step optional and let users define the required variables via the inventory.

We will add “ignore_errors” to the task:

vim roles/mysql/tasks/main.yml
…
- name: Importing specific distro variables
  include_vars: "{{ item }}"
  with_first_found:
  - "{{ ansible_distribution }}-{{ ansible_distribution_major_version }}.yml"
  - "{{ ansible_distribution }}.yml"
  ignore_errors: True
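As an alternative to ignore_errors, the first_found lookup itself supports a skip flag, which skips the task cleanly when no matching file exists instead of recording an ignored failure. A sketch of the same task using that form:

```yaml
- name: Importing specific distro variables
  include_vars: "{{ item }}"
  with_first_found:
    - files:
        - "{{ ansible_distribution }}-{{ ansible_distribution_major_version }}.yml"
        - "{{ ansible_distribution }}.yml"
      skip: true
```

With skip: true, the play recap reports the task as skipped rather than ignored when no distro file is found.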

 

Now let’s see which variables are needed by looking at the file used for CentOS:

cat /var/tmp/git/ansible-role-mysql/vars/CentOS-9.yml

Output:

---
mysql_pkgs_list:
- mysql-server
- mysql
- python3-PyMySQL

mysql_service_name: mysqld

 

We can see it uses two variables: “mysql_pkgs_list”, which contains the list of packages to install, and “mysql_service_name”, which contains the name of the service. We will also add an extra variable named “mysql_socket”, which we will use later in this example.

We will define these variables in the inventory file by adding them to our openSUSE host entry under the “mysqlservers” group:

vim inventory
…
[mysqlservers]
myopensuse15.mydemo.lab mysql_pkgs_list="['mariadb','mariadb-client','python3-PyMySQL']" mysql_service_name=mariadb mysql_socket='/run/mysql/mysql.sock'
mycentos9.mydemo.lab
...
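Inline host variables work, but the line gets hard to read as the list grows. An equivalent, arguably cleaner layout is a host_vars file next to the inventory; the path below follows Ansible’s host_vars convention, and creating it is our own choice rather than something the role requires:

```yaml
# host_vars/myopensuse15.mydemo.lab.yml
# Same variables as the inline inventory entry, in YAML form.
mysql_pkgs_list:
  - mariadb
  - mariadb-client
  - python3-PyMySQL
mysql_service_name: mariadb
mysql_socket: /run/mysql/mysql.sock
```

Ansible picks these up automatically for that host, and the inventory line shrinks back to just the hostname.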

 

The second error happens because Ansible can’t find a role named “centos-backup”. This role is part of a set used in the CentOS infrastructure; we don’t need it in this case, so we will comment it out:

vim roles/mysql/tasks/main.yml
…
# Backup
#- name: Ensuring backup user and jobs
#  include_role:
#    name: centos-backup
#    tasks_from: client
...

 

If some tasks don’t apply to all our systems, we can use a “when” statement. For example, if we want a task to apply only to CentOS, we could add the following:

#- name: Ensuring backup user and jobs
#   include_role:
#   ...
#   when: ansible_facts['distribution'] == "CentOS"

 

Looking at the task file (roles/mysql/tasks/main.yml) we can notice that it makes use of the “yum” Ansible module. This module is specific to distributions that ship yum or dnf, so let’s change it to the distribution-agnostic “package” module instead:

...
- name: install the MySQL packages
  yum:
    name: "{{ mysql_pkgs_list }}"
    state: installed
…

To:

...
- name: install the MySQL packages
  package:
    name: "{{ mysql_pkgs_list }}"
    state: installed
…

 

Now let’s try again:

The first thing we notice is that, although the task still fails, Ansible now continues:

…
TASK [mysql : Importing specific distro variables] **************************************************************************************************************************
fatal: [myopensuse15.mydemo.lab]: FAILED! => {"msg": "No file was found when using first_found. Use errors='ignore' to allow this task to be skipped if no files are found"}
...ignoring

 

Now we encounter a new error:

TASK [set mysql root password] **********************************************************************************************************************************************
ok: [mycentos9.mydemo.lab]
fatal: [myopensuse15.mydemo.lab]: FAILED! => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false}

 

If we look at the task that failed in the role:

cat roles/mysql/tasks/main.yml
...
- name: set mysql root password
  mysql_user:
    name: root
    password: "{{ mysql_root_pass }}"
  no_log: True

 

We can see it contains a “no_log: True” statement, which is why we don’t see an error message. This is often used in tasks that handle passwords or other sensitive information, to avoid leaking them in the logs. If we temporarily comment it out and run the playbook again, we will see the following error:

...
fatal: [myopensuse15.mydemo.lab]: FAILED! => {"changed": false, "msg": "unable to connect to database, check login_user and login_password are correct or /root/.my.cnf has the credentials. Exception message: (1698, \"Access denied for user 'root'@'localhost'\")"}
...

 

It complains that it can’t log in, but if we try to log in ourselves, it works:

mysql -e 'show databases;'

Output:

+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
| test               |
+--------------------+

 

This is because the “mysql_user” Ansible module is trying to connect through a Unix socket but can’t find its path. To solve this, we are going to use one of the options of the “mysql_user” module to specify the socket:

vim roles/mysql/tasks/main.yml
…
- name: set mysql root password
  mysql_user:
    name: root
    password: "{{ mysql_root_pass }}"
    login_unix_socket: "{{ mysql_socket |default(omit) }}"
  no_log: True
...

Instead of hardcoding the value, we assign it from the “mysql_socket” variable and add “default(omit)”: when the variable isn’t defined, the whole option is simply omitted. Notice this variable was already defined when we changed the inventory file, and it is set only for the openSUSE host, not for the CentOS host.

 

Now let’s run the playbook again:

ansible-playbook install_mariadb.yml

 

This time it finished without failure, since we also removed the failing backup task. Notice the recap shows one ignored task, the one where we added “ignore_errors: True”, and no failed tasks.

PLAY RECAP ******************************************************************************************************************************************************************
mycentos9.mydemo.lab : ok=8 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
myopensuse15.mydemo.lab : ok=8 changed=3 unreachable=0 failed=0 skipped=1 rescued=0 ignored=1

 

Now let’s test it:

myopensuse:/var/tmp/ansible # mysql -e 'show databases;'

+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+

 

[root@centostream9 ~]# mysql -e 'show databases;'

+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+

 

We can see MariaDB is installed and doesn’t have the test database. The user and password are configured inside /root/.my.cnf.
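For reference, that credentials file follows the standard MySQL option-file format, along these lines (an illustration only; the actual file and password are generated by the role):

```ini
[client]
user=root
password=********
```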

 

Conclusion

Adapting your Ansible code to work with platforms other than CentOS is not as complicated as it seems. The two examples I have chosen are roles created by people very close to CentOS, and adapting them wasn’t a big effort. Having said that, there may be edge cases where it takes a bit more work, but the advice given in the first part should cover at least 80% of the cases.

As a CentOS alternative, openSUSE brings numerous benefits, including stability, SUSE’s support to the community, powerful system management tools, advanced distro features, and access to a rich package repository. The active openSUSE community and easy migration tools further enhance the transition process. If you are seeking a robust and reliable Linux distribution for your workloads, you should consider openSUSE.

Looking for further insights into what you can achieve by migrating to openSUSE? Check out the other blogs in this series:

 


Ready to experience the power and flexibility of openSUSE Leap?

Download openSUSE Leap Now!

Raul Mahiques, Technical Marketing Manager with a focus on Security.