SUSE OpenStack Cloud 7

Release Notes

These release notes are generic for all SUSE OpenStack Cloud 7 components. Some parts may not apply to a particular component.

Documentation can be found in the docu language directories on the media. Documentation (if installed) is available below the /usr/share/doc/ directory of an installed system. The latest documentation is also available online.

Publication Date: 2020-07-24 , Version: 7.20180803

1 SUSE OpenStack Cloud

Powered by OpenStack™, SUSE OpenStack Cloud is an open source enterprise cloud computing platform that enables easy deployment and seamless management of an Infrastructure-as-a-Service (IaaS) private cloud.

2 Support Statement for SUSE OpenStack Cloud

To receive support, customers need an appropriate subscription with SUSE.

SUSE OpenStack Cloud 7 provides Monasca for Monitoring and Logging under a separate subscription as SUSE OpenStack Cloud Monitoring, which is required in addition to a SUSE OpenStack Cloud subscription. Our Monasca solution will monitor all OpenStack services and gather Libvirt and Ceph metrics from all compute and storage nodes provisioned by Crowbar. Visualization is provided through Grafana 4 dashboards integrated with OpenStack Horizon.

SUSE OpenStack Cloud 7 provides Swift for Object Storage under a separate subscription as SUSE OpenStack Cloud Swift, which is required in addition to a SUSE OpenStack Cloud subscription.

3 Major Changes in SUSE OpenStack Cloud 7

SUSE OpenStack Cloud 7 is a major update to SUSE OpenStack Cloud and comes with many new features, improvements and bug fixes. The following list highlights a selection of the major changes:

  • OpenStack has been updated to the Newton release, and the deployment framework has been updated accordingly to support new features. On top of the new features that come by default with Mitaka and Newton, here are some notable additions:

    • The configuration schema for the different OpenStack services changed. Instead of configuring the different services via a single configuration file, directories with configuration snippets are now used. For more details, see the README.config file in the different configuration directories and the Administration guide.

    • The Container Module for OpenStack (Magnum) and the Alarming Service for OpenStack (Aodh) are fully integrated, and the controller side can be deployed with High Availability. A Kubernetes image based on SUSE Linux Enterprise Server is also made available in the openstack-magnum-k8s-image-x86_64 package to provide a ready-to-use and fully supported experience for the Container Module for OpenStack (Magnum).

    • Fernet tokens are used by default in OpenStack Identity (Keystone). On a related note, there is no admin token anymore and the bootstrap process for OpenStack Identity (Keystone) is now used.

    • Cross-Origin Resource Sharing (CORS) can be configured for OpenStack Image (Glance).

    • The v1 API for OpenStack Image (Glance) is disabled by default and deprecated; if required, it can however be enabled through an expert setting.

    • An endpoint for the v3 API for OpenStack Block storage (Cinder) is created automatically.

    • The Hitachi HUSVM and NFS backends for OpenStack Block storage (Cinder) can be configured directly from the web interface. The enterprise driver for Hitachi HUSVM can be downloaded from the HDS website.

    • VXLAN can be used with the Linuxbridge mechanism in OpenStack Networking (Neutron).

    • The port_security extension for the ML2 driver is enabled, which makes it possible to disable anti-spoof rules for packet filtering to allow protocols such as DHCP.

    • The Load-Balancer-as-a-Service v2 API is now used in OpenStack Networking (Neutron); in addition, support for the F5 driver for Load-Balancer-as-a-Service has been implemented.

    • The integration with Infoblox has been re-implemented in OpenStack Networking (Neutron).

    • While the EC2 API compatibility has been removed from OpenStack Compute (Nova), support for the new ec2-api component of OpenStack has been implemented.

    • OpenStack Compute (Nova) can be tuned to allow overcommitting disk when spawning instances, and to reserve memory for compute hosts.

    • The OpenStack Dashboard (Horizon) can now be deployed even when OpenStack Compute (Nova) has not been deployed.

    • The convergence engine is used by default in OpenStack Orchestration (Heat).

    • In combination with a SUSE Enterprise Storage cluster, the File Share Module for OpenStack (Manila) can now export shared filesystems to guests using the Ceph network protocol. Guests require a Ceph client in order to mount the filesystem. Refer to the documentation about known restrictions and security implications.

    • Several OpenStack components have now been moved to WSGI applications behind Apache, and all other OpenStack components now use standard systemd services.

    • The Key Manager Module for OpenStack (Barbican) and the Data Processing Module for OpenStack (Sahara) are integrated as technology preview, and the controller side can be deployed with High Availability.

    • The Application Catalog for OpenStack (Murano) is available as technology preview. It must be installed and configured manually, though. A future maintenance update will add full integration.

    • The Admin User Guide has been reorganized and renamed to Administration Guide.

    • The MariaDB database backend for OpenStack has been integrated with full Galera support. PostgreSQL will be deprecated as a backend in favor of MariaDB in the next SUSE OpenStack Cloud release. We recommend using MariaDB for new SUSE OpenStack Cloud deployments.

    • Several expert settings have been added, such as:

      • the ability to disable the creation of the default user and the creation of the default flavors;

      • the ability to configure a fixed key for encrypted OpenStack Block Storage (Cinder) volumes;

      • the ability to enable multipath for OpenStack Block Storage (Cinder) volumes;

      • several expert options for OpenvSwitch mechanism in OpenStack Networking (Neutron);

      • the ability to define the network used to live migrate instances with OpenStack Compute (Nova);

      • the ability to enable serial console access to the instances in OpenStack Compute (Nova), instead of the usual VNC access;

      • the ability to define custom vendor data for the OpenStack Compute (Nova) metadata server, which will be passed to instances;

      • and many more!

  • The Administration Server and all nodes used for OpenStack are now using SUSE Linux Enterprise Server 12 SP2 as operating system.

  • SUSE OpenStack Cloud 7 integrates with SUSE Enterprise Storage 4. It can either deploy Ceph as an integrated solution of SUSE OpenStack Cloud 7 or can connect to an externally deployed SUSE Enterprise Storage cluster. Ceph support requires a subscription for SUSE Enterprise Storage.

  • The Crowbar deployment framework also comes with several highlights:

    • Crowbar now uses PostgreSQL as database, for better performance and improved support for concurrent requests.

    • The deployment framework behaves much better at scale, with hundreds of nodes.

    • The definitions of the various networks, including options like the MTU, can be changed more easily after the initial deployment. This should still be handled with care after OpenStack has been deployed to avoid service outage.

    • In order to reduce CPU overhead related to network traffic, rx/tx offloading is now enabled by default. It can still be disabled if desired.

    • World Wide Name identifiers are now used as the most persistent identifiers for disks, to work around issues with various hardware.

    • In addition to custom A records, custom CNAME records can now be defined in the DNS zone managed by Crowbar.

  • Various improvements to high availability support have been included:

    • The definitions of OpenStack resources in Pacemaker are more fine-grained, with fewer groups and interleaved clones, to avoid excessive and undesired service restarts on manual interventions or on service check failures. This leads to better availability of services.

    • The volume service of OpenStack Block Storage (Cinder) can be deployed with high availability in an active/passive mode. This is however not supported for the Local File and Raw Devices backends due to the use of local storage with those backends.

    • The high availability of OpenStack Networking (Neutron) routers has seen many changes: it is compatible with Distributed Virtual Routers (DVR), routers are migrated to the least busy agents by default, and synchronous migrations greatly enhance reliability.

    • The setup for DRBD devices has been made more compatible with the Pacemaker handling of DRBD, to avoid unexpected interactions with systemd.
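
Several of the tuning options listed above map to standard nova.conf settings. As an illustration, disk overcommitting and host memory reservation for OpenStack Compute (Nova) could look as follows; the values are hypothetical examples, not recommendations:

```ini
# nova.conf snippet (example values only; in SUSE OpenStack Cloud these
# settings are normally managed through the Nova barclamp, not edited by hand)
[DEFAULT]
# Allow the combined virtual disk size of instances on a host to exceed
# the physical disk capacity by 50% (1.0 disables overcommitting).
disk_allocation_ratio = 1.5
# Reserve 4 GiB of RAM on each compute host for the host OS and
# hypervisor; this memory is never handed out to instances.
reserved_host_memory_mb = 4096
```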

4 Technology Previews

Technology previews are packages, stacks, or features delivered by SUSE. These features are not supported. They may be functionally incomplete, unstable or in other ways not suitable for production use. They are mainly included for customer convenience and give customers a chance to test new technologies within an enterprise environment.

Whether a technology preview will be moved to a fully supported package later depends on customer and market feedback. A technology preview does not automatically result in support at a later point in time. Technology previews can be dropped at any time, and SUSE is not committed to providing a technology preview later in the product cycle.

Please give your SUSE representative feedback, including your experience and use case.

SUSE OpenStack Cloud 7 ships with the following technology previews:

  • Key Manager Module for OpenStack (Barbican), and the respective Crowbar barclamp for deploying it.

  • Data Processing Module for OpenStack (Sahara), and the respective Crowbar barclamp for deploying it.

  • Database-as-a-Service for OpenStack (Trove), and the respective Crowbar barclamp for deploying it.

  • Application Catalog for OpenStack (Murano).

  • OpenStack Bare Metal (Ironic).

  • DNS-as-a-Service for OpenStack (Designate).

  • EqualLogic driver for Cinder.

  • MongoDB, as database for Ceilometer.

5 Deprecated and Removed Features

The following features are deprecated as of SUSE OpenStack Cloud 7:

  • Following the upstream deprecation, the v1 API for OpenStack Image (Glance) is deprecated and will be removed in the next version of SUSE OpenStack Cloud.

  • Following the upstream deprecation that started in OpenStack 2014.1 (Icehouse), the XML format for OpenStack APIs is deprecated and unsupported. Migrating to the JSON format for the APIs is highly recommended. Most clients should not be impacted, as the most widely used client libraries are already using the JSON format.

  • The crowbar command line utility is deprecated in favor of the crowbarctl command line utility.

  • The PostgreSQL OpenStack database backend will be deprecated and removed in the next SUSE OpenStack Cloud release.

The following features have been removed in SUSE OpenStack Cloud 7:

  • The command line client for Keystone (/usr/bin/keystone) was removed. Please use /usr/bin/openstack to interact with the identity service.

  • Support for PKI token in OpenStack Identity (Keystone) has been removed, following the recommendation to not use it in SUSE OpenStack Cloud 6 due to Keystone PKI Token Revocation Bypass (CVE-2015-7546).

  • The ability to convert images on import has been removed from OpenStack Images (Glance).

  • The Load-Balancer-as-a-Service v1 API has been removed from OpenStack Networking (Neutron).

  • The Docker driver in OpenStack Compute (Nova) has been removed, in favor of the Container Module for OpenStack (Magnum).

  • Support for using the Hyper-V hypervisor in OpenStack Compute (Nova) has been removed; in case this feature is important to you, please contact your SUSE representative.
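
For users migrating away from the removed Keystone command line client mentioned above, the common commands translate directly to the unified client. The mapping below is illustrative; both clients require OpenStack credentials to be sourced in the shell:

```shell
# The removed keystone client commands map to the unified client as follows:
#
#   keystone user-list    ->  openstack user list
#   keystone tenant-list  ->  openstack project list
#   keystone token-get    ->  openstack token issue
openstack user list    # example replacement invocation
```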

6 Upgrading to SUSE OpenStack Cloud 7

Upgrading to SUSE OpenStack Cloud 7 is supported from SUSE OpenStack Cloud 6, and requires all the latest maintenance updates to be applied, as well as access to maintenance updates from SUSE OpenStack Cloud 7. If running a previous version, please first upgrade to SUSE OpenStack Cloud 6.

The upgrade will be non-disruptive for the workloads if all prerequisites are met: high availability setup, enough compute resources, etc. This means that the instances running in OpenStack will keep running, will still have network connectivity and access to OpenStack resources such as volumes during the whole upgrade process. However, the OpenStack APIs and the OpenStack Dashboard will be turned off during the upgrade process, which may impact end users of the cloud.

If a non-disruptive upgrade is not possible due to unmet prerequisites, the disruptive process can be used instead. In this mode, the whole OpenStack infrastructure is turned off for the upgrade, and it is important to suspend all running instances during the upgrade. However, it is not necessary to do so at the beginning of the upgrade procedure: this step can be postponed until after the Administration Server has been upgraded to SUSE OpenStack Cloud 7, keeping the downtime as short as possible.
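
The suspension step mentioned above can be scripted with the unified OpenStack client. This is a sketch under the assumption that admin credentials are sourced and the installed client supports the listed flags:

```shell
# Suspend every ACTIVE instance before the disruptive upgrade
# (sketch only; run with admin credentials sourced in the shell)
for id in $(openstack server list --all-projects --status ACTIVE -f value -c ID); do
    openstack server suspend "$id"
done
```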

The upgrade is done via a Web interface that guides you through the process. The process generates a backup of the Administration Server as well as a dump of the OpenStack database. It is highly recommended to save this data to allow recovery, should the upgrade process go wrong.

The complete upgrade process is documented in the Deployment Guide.

7 Documentation and Other Information

  • Read the READMEs on the DVDs.

  • Get the detailed changelog information about a particular package from the RPM (with filename <FILENAME>):

    rpm --changelog -qp <FILENAME>.rpm

  • Check the ChangeLog file in the top level of DVD1 for a chronological log of all changes made to the updated packages.

  • Find more information in the docu directory of DVD1 of the SUSE OpenStack Cloud 7 DVDs. This directory includes PDF versions of the SUSE OpenStack Cloud documentation.

  • Additional or updated documentation for SUSE OpenStack Cloud is available online.

  • Visit the SUSE website for the latest product news from SUSE and for additional information on the source code of SUSE Linux Enterprise products.

8 Limitations

  • The SLES 12 SP2 nodes deployed through SUSE OpenStack Cloud are not compatible with the Public Cloud Module for SLES 12 SP2, because SUSE OpenStack Cloud provides more recent versions of the OpenStack client tools.

  • The x86_64 architecture is the only supported architecture for the Administration Server and the nodes managed by SUSE OpenStack Cloud. Please note that the IBM z Systems integration relies on the OpenStack Compute (Nova) driver that translates commands to z/VM, and that this driver runs on an x86_64 node. More details about how to set up the IBM z Systems integration are available in the Deployment Guide.

    Full support for s390x will be delivered in a future maintenance update.

9 Known Issues

  • In some cases, using High Availability with multicast transport on Neutron L3 nodes causes issues due to conflicts with the networking configuration required by Neutron. In the worst case, this can break the High Availability cluster. It is advised to use the unicast transport for High Availability to avoid this.

  • Live migration of instances only works between homogeneous compute nodes: the nodes need to have the same CPU features.

  • Removing barclamps from a node does not necessarily shut down the associated services or remove the associated packages. This means that you may run into problems when moving barclamp roles from one node to another. Manual remediation may be required in these cases.

  • No pre-built image for Heat or Trove is shipped with SUSE OpenStack Cloud; cloud administrators are responsible for creating such images.

  • Enablement for the vCenter hypervisor in SUSE OpenStack Cloud is limited: starting with Newton, OpenStack no longer supports vCenter integration without an NSX setup, and the latter is not yet supported in SUSE OpenStack Cloud. This will be improved in a future maintenance update.
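
A common way to satisfy the homogeneous-CPU requirement for live migration noted above is to pin the guest CPU model to a baseline that every compute node supports. The libvirt options below are standard nova.conf settings; the model name is a hypothetical example and must match your hardware:

```ini
# nova.conf snippet for every compute node (example values only)
[libvirt]
# Expose a fixed CPU model to guests instead of passing through the
# host CPU, so instances can migrate between non-identical hosts.
cpu_mode = custom
# Choose the lowest common denominator of the CPU models in the cluster.
cpu_model = SandyBridge
```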

10 How to Obtain Source Code

This SUSE product includes materials licensed to SUSE under the GNU General Public License (GPL). The GPL requires SUSE to provide the source code that corresponds to the GPL-licensed material. The source code is available for download. Also, for up to three years after distribution of the SUSE product, upon request, SUSE will mail a copy of the source code. Requests should be sent by e-mail or as otherwise instructed. SUSE may charge a reasonable fee to recover distribution costs.
