Recommended update for kubernetes-salt and velum

Announcement ID: SUSE-RU-2019:0974-1
Rating: moderate
References:
Affected Products:
  • SUSE CaaS Platform 3.0

An update that has 12 fixes can now be installed.

Description:

This update resolves the following issues:

Velum:

  • Node removal would fail when orchestration was incorrectly registered as still in progress
  • All nodes would show as failed after an update
  • Incorrect information shown on how to download/use the kubeconfig file
  • The velum user had more permissions on the MariaDB database than necessary

    Please check if your installation is affected by running:

      docker exec -it $(docker ps -qf name=velum-mariadb) \
        mysql -p$(cat /var/lib/misc/infra-secrets/mariadb-root-password) -e "SHOW GRANTS FOR velum@localhost"

    The user permissions should return:

      +-----------------------------------------------------------------------------------------------------------------+
      | Grants for velum@localhost                                                                                       |
      +-----------------------------------------------------------------------------------------------------------------+
      | GRANT USAGE ON *.* TO 'velum'@'localhost' IDENTIFIED BY PASSWORD ''                                              |
      | GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER ON `velum_production`.* TO 'velum'@'localhost'  |
      +-----------------------------------------------------------------------------------------------------------------+

    If the user account still has GRANT ALL PRIVILEGES, please adjust the privileges for the user by running:

      docker exec -it $(docker ps -qf name=velum-mariadb) \
        mysql -p$(cat /var/lib/misc/infra-secrets/mariadb-root-password) \
        -e "REVOKE ALL PRIVILEGES ON velum_production.* FROM velum@localhost; \
            GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER ON velum_production.* TO velum@localhost"

  • Nodes could become unresponsive if too many resources were reserved
  • System-wide certificates removed from Velum were not removed from the cluster nodes
  • Certificates with Windows line endings could cause errors during external LDAP setup

Kubernetes Salt:

  • Removal of the system-wide proxy configuration was not applied correctly, so the configuration remained in place
  • Bootstrap of the cluster would fail
  • Removed an obsolete custom module
  • Modules for the reactor component were synchronized from multiple operations, which could cause race conditions on the saved state
  • The automatic transactional-update timer did not remain disabled during an upgrade (see the note after this list)
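
Regarding the transactional-update timer item above: the state of the timer on a node can be inspected with standard systemd tooling. This is only a general illustration for verification, assuming the standard unit name transactional-update.timer; it is not part of the update procedure itself:

  # Show whether the timer is currently active and whether it is enabled at boot
  systemctl status transactional-update.timer
  systemctl is-enabled transactional-update.timer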

CaaSP Container Manifests:

  • Admin node container would fail to start

Patch Instructions:

To install this SUSE update, use the SUSE recommended installation methods such as YaST online_update or "zypper patch".
Alternatively, you can run the command listed for your product:

  • SUSE CaaS Platform 3.0
    To install this update, use the SUSE CaaS Platform Velum dashboard. It will inform you if it detects new updates and let you then trigger updating of the complete cluster in a controlled way.
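
As a general illustration of the zypper-based method mentioned above, pending patches can be listed and applied on a SUSE Linux Enterprise host as follows. This is only a generic sketch; on SUSE CaaS Platform 3.0 the cluster-wide update mechanism described above is the recommended path:

  # Refresh repositories, list applicable patches, then install them
  zypper refresh
  zypper list-patches
  zypper patch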

Package List:

  • SUSE CaaS Platform 3.0 (noarch)
    • kubernetes-salt-3.0.0+git_r969_5d274fb-3.61.1
    • caasp-container-manifests-3.0.0+git_r305_95f7c0b-3.17.1
  • SUSE CaaS Platform 3.0 (x86_64)
    • sles12-velum-image-3.1.13-3.47.2
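
Once the update has been applied, the installed package versions can be verified directly on the affected nodes, for example:

  # Query the installed versions of the updated packages
  rpm -q kubernetes-salt caasp-container-manifests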
