
Upgrading your running on demand instances in the Public Cloud


December 15, 2015 was the First Customer Ship (FCS) date for SUSE Linux Enterprise Server 12 SP1, and images were published in AWS EC2, Google Compute Engine, and Microsoft Azure. With this release the transition phase for SLES 12 images/instances has started. One of the features of SLES 12 SP1 is that it provides an upgrade path from SUSE Linux Enterprise Server 11 SP4, which was not available in SLES 12. While the SUSE Linux Enterprise Server 11 release stream still has a lot of life left (until March 31st, 2019), waiting that long for an upgrade to SLES 12 SP1 will accumulate a longer string of upgrades that will need to be stacked and performed later. By the way, information about SUSE product support dates is found here.

SUSE Linux Enterprise Server 12

As the 6 month clock for SLES 12 is ticking, we’ll start with the upgrade process for SUSE Linux Enterprise Server 12. Note that this does not yet apply to HPC instances running in Microsoft Azure. Due to changes to the host configuration for HPC in Microsoft Azure, other changes at the image level are needed, and a separate upgrade process will be documented when the time comes and all the parts fit together.

First double check the version of SLES you are running in your instance as follows:

zypper products -i | grep Server

This will produce a one line output as follows for a SLES 12 system:

i | @System | SLES | SUSE Linux Enterprise Server 12 | 12-0 | x86_64 | Yes

Once you have confirmed that you are on SLES 12 the upgrade path to SLES 12 SP1 is as follows:

1.) zypper up

2.) cd /etc/zypp/repos.d

3.) for i in *SLES12*;do newname=${i/12/12-SP1}; mv $i $newname; done

4.) for i in *SDK12*;do newname=${i/12/12-SP1}; mv $i $newname; done

5.) sed -i s/12/12-SP1/ *SP1*.repo

6.) zypper refresh

7.) zypper in zypper

This will produce a request to resolve a package dependency issue, indicated by a message starting as follows:

Problem: zypper-1.12.23-1.3.x86_64 requires libzypp.so.1519()(64bit), but this requirement cannot be provided

There is nothing to worry about: the package that provides the necessary library was renamed and the proper replacement (libyui-ncurses-pkg7) will automatically be pulled in. Therefore the solution is to select

Solution 1: deinstallation of libyui-ncurses-pkg6-2.46.1-3.4.x86_64

by entering “1” at the prompt. Then enter “y” at the next prompt.

The final step is to do the distribution upgrade.

8.) zypper dup

Now if you run ‘zypper products -i | grep Server‘ again you will see that you have successfully upgraded your running instance to SLES 12 SP1. All that’s left to do is reboot the instance at your earliest convenience.
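For reference, the products output should now look something like the following (the exact version and release strings may vary slightly):

i | @System | SLES | SUSE Linux Enterprise Server 12 SP1 | 12.1-0 | x86_64 | Yes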

If you have the default motd (message of the day file, /etc/motd) you will be greeted with a message that announces the system as SLES 12 the next time you log in, so you may want to change it to avoid any confusion. A simple

9.) sed -i s/Server\ 12/Server\ 12-SP1/ /etc/motd

will do the trick.

If all your running instances are SLES 12 based you are done and can skip the rest of this blog, unless of course you are curious what comes along with a major version upgrade.

SUSE Linux Enterprise Server 11 SPx

Update 2019-12-16

This process will stop working after May 31, 2020; see Step 2 Toward Enhanced Update Infrastructure Access for details. Also note that the process is quite old and things have drifted since the original post. We have made an effort to keep it working until May 31, 2020.

End update 2019-12-16

If, way back when you started your cloud endeavor, you began on SUSE Linux Enterprise Server 11 SP3 or even earlier and followed the upgrade procedures described in the Life Cycle Blog and/or the SLES 11 SP4 announcement blog, then you should be running SUSE Linux Enterprise Server 11 SP4 and the following instructions apply just as if your instance originated from a SLES 11 SP4 image. If ‘zypper products -i | grep Server‘ does not produce a line that contains “SUSE Linux Enterprise Server 11 SP4” please upgrade to SLES 11 SP4 first, as outlined in the previous blogs, before moving to SLES 12 SP1.

As this is a major distribution upgrade things are a bit more complicated than for a service pack upgrade. Therefore this also requires a disclaimer, of course.

First you should back up your data, just in case something goes terribly wrong. Further, there are many packages you may have installed on your running system in addition to the packages that are part of the base image. These packages may have an effect on the way the dependency solver charts its course through the package changes between the two distributions. Thus, you want to evaluate the solutions the solver proposes and take appropriate action. The solution steps shown during the procedure apply to the migration of a SLES 11 SP4 system as released by SUSE with no additional packages installed; it is not feasible for us to test all possible combinations of package installations. In AWS, where some may still run 32-bit systems, please note that it is not possible to migrate from a 32-bit system to a 64-bit system and, as announced in the Life Cycle blog, 32-bit images are no longer maintained. There are also changes to applications: MariaDB is shipped with SLES 12 while MySQL was shipped with SLES 11, and thus appropriate actions need to be taken. PostgreSQL has been updated to a newer version and manual intervention is required for DB migration as well. The Upgrade Preparation Guide contains helpful information for the DB migrations, sections 14.1.5 and 14.1.6.
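If you run MySQL on the instance, a minimal pre-upgrade dump sketch, assuming a default SLES 11 MySQL setup, could look like this; adjust credentials and target path to your environment and follow the Upgrade Preparation Guide for the full procedure:

# dump all databases so they can be imported into MariaDB after the upgrade
mysqldump --all-databases --user=root --password > /root/mysql-pre-upgrade.sql
# stop the database before the distribution upgrade
rcmysql stop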

Many things have changed since the first release of SLES 11. YaST has been completely rewritten in Ruby, networking is now handled by wicked, system registration uses a different protocol, and there are upgrades to libraries, the kernel, etc. The changes around YaST, networking, system registration, and system initialization need to be handled more or less manually. For a more complete guide to the feature changes please consult the SLES 12 SP1 Release Notes. The following step-by-step guide is expected to render a fully functional SLES 12 SP1 system at the end.

We start out as usual by first updating the running instance to the latest state of the art:

1.) zypper up

The repository structure also has undergone major changes, and thus rather than trying to bend existing configurations into shape it is much easier to simply get new pre-configured repositories.
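If you would like a fallback copy of the old repository configuration before it is deleted in the next steps, something like the following works (a precaution, not part of the original procedure):

cp -a /etc/zypp/repos.d /root/repos.d.sles11.bak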

2.) cd /etc/zypp/repos.d

3.) rm *.repo

The pre-configured repositories differ between the cloud frameworks and are hosted in different locations. Pull the repository configuration as follows:

for AWS EC2:

4.) wget http://54.197.240.216/sles12sp1-ondemand-repos.tar.bz2
4 a.) sha1sum sles12sp1-ondemand-repos.tar.bz2
ba144d80107d265135b38b78d30175493c9ca63b sles12sp1-ondemand-repos.tar.bz2

for GCE:

4.) wget http://108.59.80.221/sles12sp1-ondemand-repos.tar.bz2
4 a.) sha1sum sles12sp1-ondemand-repos.tar.bz2
6243e12096b1c0782883a624c31261f5a4ce54ab sles12sp1-ondemand-repos.tar.bz2

for Microsoft Azure:

4.) wget http://52.147.176.11/sles12sp1-ondemand-repos.tar.bz2
4 a.) sha1sum sles12sp1-ondemand-repos.tar.bz2
716dec1db99590e2f275d5caca90f2edb2c556ea sles12sp1-ondemand-repos.tar.bz2
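Regardless of framework, you can let sha1sum do the comparison for you; replace the hypothetical EXPECTED_SHA1SUM placeholder with the checksum listed above for your cloud (the two spaces between hash and file name are required):

echo "EXPECTED_SHA1SUM  sles12sp1-ondemand-repos.tar.bz2" | sha1sum -c -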

Unpack the repos and dispose of the tarball:

5.) tar -xjf sles12sp1-ondemand-repos.tar.bz2
6.) rm sles12sp1-ondemand-repos.tar.bz2

With the new repositories configured we need to refresh the data for zypper

7.) zypper refresh

On AWS EC2 PV instances a new repository will be added that, strictly speaking, is not necessary. You can choose to add the repo by trusting the key (enter “a” at the prompt) or reject it by entering “r” at the prompt. If you do not want the repository simply remove the file at your convenience with

rm /etc/zypp/repos.d/nVidia-Driver-SLE12.repo

after you have entered “r“.

Now it is time to deal with the changes that would otherwise cause trouble if we just updated zypper and then upgraded the system. First let’s get rid of YaST completely; we’ll install the new modules again later.

8.) zypper rm yast2 yast2-core yast2-libyui

Another big change for SLES 12 was the move to systemd as the initialization system, and this is where we will start with the upgrade process.

9.) zypper in systemd

This install request requires manual interaction to help zypper cope with the package and dependency changes. For each transaction that requires manual interaction zypper displays a message of the form:

Problem: * PACKAGE_NAME requires ….

The “*” is a placeholder and may be empty depending on the condition zypper encounters. For example a message may be

Problem: cloud-regionsrv-client-6.4.3-26.1.noarch requires systemd, but this requirement cannot be provided

or

Problem: solvable libffi4-5.2……

The tables below list in the left-hand column the package name zypper will show in the messages, and in the middle column the solution number to select. The right-hand column shows the action and package name of the first entry in the solution proposed by zypper. The solution path differs between the cloud platforms because the packages installed in the published SLES 11 SP4 images differ. Simply type the number at the prompt and hit “Enter”.

for AWS EC2:

Package in zypper message      | Solution | First entry of proposed solution
systemd-210                    | 1        | deinstallation of sysvinit
gvfs-backends-1.4.3            | 2        | deinstallation of gvfs-backends
cloud-regionsrv-client-6.4.0 * | 1        | deinstallation of cloud-regionsrv-client
apache2-mod_php53-5.3.17       | 3        | deinstallation of syslog-ng
libsnmp15-5.4.2.1              | 2        | deinstallation of libsnmp15
perl-satsolver-0.44.5          | 2        | deinstallation of perl-satsolver
blt-2.4z                       | 2        | deinstallation of OpenIPMI
python-m2crypto-0.21.1         | 2        | deinstallation of scout
python-ply-3.6                 | 4        | deinstallation of rpm-python
libffi4-5.2.1                  | **       | deinstallation of command-not-found

* Only for HVM instances
** Solution 2 on PV instances, Solution 3 on HVM instances

for GCE:

Package in zypper message               | Solution | First entry of proposed solution
systemd-210                             | 1        | deinstallation of sysvinit
cloud-regionsrv-client-plugin-gce-1.0.0 | 2        | deinstallation of cloud-regionsrv-client-plugin-gce
gvfs-backends-1.4.3                     | 2        | deinstallation of gvfs-backends
libblocxx6-2.1.0.342                    | 3        | deinstallation of syslog-ng
libsnmp15-5.4.2.1                       | 2        | deinstallation of libsnmp15
perl-satsolver-0.44.5                   | 2        | deinstallation of perl-satsolver
python-satsolver-0.44.5                 | 2        | deinstallation of python-satsolver
python-crcmod-1.7                       | 2        | deinstallation of scout
python-m2crypto-0.21.1                  | 4        | deinstallation of rpm-python
python-python-gflags-2.0                | 4        | deinstallation of command-not-found

for Microsoft Azure:

Package in zypper message    | Solution | First entry of proposed solution
systemd-210                  | 1        | deinstallation of sysvinit
gvfs-backends-1.4.3          | 2        | deinstallation of gvfs-backends
cloud-regionsrv-client-6.4.0 | 1        | deinstallation of cloud-regionsrv-client
libblocxx6-2.1.0.342         | 3        | deinstallation of syslog-ng
libsnmp15-5.4.2.1            | 2        | deinstallation of libsnmp15
perl-satsolver-0.44.5        | 2        | deinstallation of perl-satsolver
limal-ca-mgm-perl-1.5.24     | 2        | deinstallation of OpenIPMI
python-m2crypto-0.21.1       | 1        | deinstallation of scout
libffi4-5.2.1+r226025        | 2        | deinstallation of rpm-python

After all the questions are answered it is time to let zypper do its magic and switch the system over from SysVinit to systemd. This update will also bring with it the new version of zypper and a few other goodies, including wicked for network management.

With systemd installed we are one step closer to a SLES 12 SP1 system.

Another change in SLES 12 is the way zypper handles credentials, and thus we need to make some changes to handle this and preserve our repositories.

10.) cd /etc/zypp/credentials.d
11.) cp NCCcredentials SCCcredentials

for AWS EC2:

12.) mv NCCcredentials SMT-http_smt-ec2_susecloud_net

for GCE:

12.) mv NCCcredentials SMT-http_smt-gce_susecloud_net

for Microsoft Azure:

12.) mv NCCcredentials SMT-http_smt-azure_susecloud_net
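Regardless of framework, the credentials directory should now contain both the SCCcredentials copy and the renamed SMT file; a quick listing confirms it (output shown for the EC2 case):

ls /etc/zypp/credentials.d
SCCcredentials  SMT-http_smt-ec2_susecloud_net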

Now we need to modify the repository configuration to pick up the new credentials file.

13.) cd /etc/zypp/repos.d

for AWS EC2:

14.) sed -i s/NCCcredentials/SMT-http_smt-ec2_susecloud_net/ *.repo

for GCE:

14.) sed -i s/NCCcredentials/SMT-http_smt-gce_susecloud_net/ *.repo

for Microsoft Azure:

14.) sed -i s/NCCcredentials/SMT-http_smt-azure_susecloud_net/ *.repo
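As a quick sanity check, not part of the original procedure, the following should produce no output once all repositories reference the new credentials file:

grep -l NCCcredentials /etc/zypp/repos.d/*.repo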

Almost there. Next we need to refresh the repository data and with it re-import all the signing keys.

15.) zypper refresh

For each repository zypper will ask to confirm that the signing key should be imported. Answer each question with “yes“; sorry, this involves a lot of typing.
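If you would rather skip the repeated prompts, zypper can import the signing keys automatically; note that this bypasses the manual key review, so use your own judgment:

zypper --gpg-auto-import-keys refresh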

Now we need one of the packages back that was removed earlier.

16.) zypper in --replacefiles cloud-regionsrv-client

Select solution “1“. In this install we added the “--replacefiles” option to instruct zypper to accept the new packages even if potential file conflicts arise.

For the next zypper operation do not worry about the failure to refresh the “cloud_update” service at the beginning of the zypper output. It is caused by a mismatch in the PYTHONPATH as we are currently in a system state somewhere between SLES 11 SP4 and SLES 12 SP1. Once the process is complete and the migration to SLES 12 SP1 is done this will correct itself.

On GCE we need an extra package:

16.a) zypper in cloud-regionsrv-client-plugin-gce

Finally we are ready for the big switch over and the full system upgrade.

17.) zypper dup --replacefiles

Once you answer “y“, zypper will do its magic and you have some time for a beverage. The performance of the upgrade process heavily depends on whether your instance is SSD backed or runs on a spinning media backend.

With the system upgrade complete we need to perform a few more steps to make certain everything is consistent. For the following installations zypper will produce the “Installation has completed with error.” message at the end of each install step. This is caused by the intermediate system state we find ourselves in; at this point there is nothing to worry about.

Start out by installing YaST and the YaST modules that we removed in the early part of this migration process:

18.) zypper in --replacefiles yast2-slp-server yast2-country yast2-sudo yast2-trans-stats yast2-squid yast2-nfs-client yast2-iscsi-client yast2-nfs-common yast2-dns-server yast2-pam yast2-update yast2-storage yast2-network yast2-slp yast2-nis-server yast2-transfer yast2-sysconfig yast2-packager yast2-audit-laf yast2-http-server yast2-dbus-server yast2-installation yast2-add-on yast2-ruby-bindings yast2-hardware-detection yast2 yast2-support yast2-services-manager yast2-ntp-client yast2-iscsi-lio-server yast2-core yast2-online-update yast2-bootloader yast2-tune yast2-pkg-bindings yast2-ycp-ui-bindings yast2-security yast2-users yast2-samba-server yast2-ftp-server yast2-trans-en_US yast2-xml yast2-proxy yast2-firewall yast2-online-update-frontend yast2-docker yast2-samba-client yast2-mail yast2-isns yast2-dhcp-server yast2-schema autoyast2-installation yast2-perl-bindings yast2-tftp-server yast2-printer yast2-ca-management yast2-ldap yast2-nis-client yast2-inetd yast2-theme-SLE yast2-country-data yast2-nfs-server yast2-kdump

At some point the command-not-found utility was removed; let’s add it back as well:

19.) zypper in command-not-found

There are also some packages that fall between the cracks in this migration, and a few packages that are useful but did not yet exist when the SLES 11 SP4 images were released. Let’s get those installed next:

20.) zypper in cracklib-dict-small dhcp dhcp-client fonts-config grub2-x86_64-xen mozilla-nss-certs patterns-sles-Minimal sle-module-adv-systems-management-release sle-module-containers-release sle-module-legacy-release sle-module-public-cloud-release sle-module-web-scripting-release

For AWS EC2 we developed some tools that help with image management, and you may install those if you are interested:

20 a.) zypper in python-ec2deprecateimg python-ec2publishimg python-ec2uploadimg python-ec2utilsbase

For Microsoft Azure we started work on command line tools a while back, and a package that provides a good set of functionality, including the Azure Files feature, is available:

20 a.) zypper in python-azurectl

As the init system has changed, services need to be re-enabled. The following lists the minimum set of services required to get back into your system after reboot.

21.) systemctl enable guestregister.service
systemctl enable haveged.service
systemctl enable irqbalance.service
systemctl enable iscsi.service
systemctl enable iscsid.socket
systemctl enable lvm2-lvmetad.socket
systemctl enable ntpd.service
systemctl enable purge-kernels.service
systemctl enable rsyslog.service
systemctl enable sshd.service
systemctl enable wicked.service
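To spot check that the services were registered, systemctl prints one state per unit (a sanity check, not part of the original procedure):

systemctl is-enabled sshd.service wicked.service rsyslog.service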

In addition there are some framework specific services that need to be re-enabled.

for AWS EC2:
22.) systemctl enable cloud-init-local.service
23.) systemctl enable cloud-init.service
24.) systemctl enable cloud-config.service
25.) systemctl enable cloud-final.service

for GCE:
22.) systemctl enable google.service
23.) systemctl enable google-accounts-manager.service
24.) systemctl enable google-address-manager.service
25.) systemctl enable google-startup-scripts.service

for Microsoft Azure:
22.) systemctl enable hv_fcopy_daemon.service
23.) systemctl enable hv_kvp_daemon.service
24.) systemctl enable hv_vss_daemon.service
25.) systemctl enable waagent.service

If you had other services enabled, such as Apache, you will want to enable those services at this time as well.

With the system initialization changed to systemd the information in inittab is no longer useful; let’s clear out the file.

26.) echo "" > /etc/inittab

The implementation of ssh has gained some new ciphers and thus we want to generate any keys that might be missing.

27.) /usr/sbin/sshd-gen-keys-start
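Afterwards a full set of host keys, including the newer key types, should be present; a quick listing (key names may vary with the installed openssh version) confirms it:

ls -l /etc/ssh/ssh_host_*_key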

There were also some changes in PAM (Pluggable Authentication Modules), and fiddling with the existing configuration files would be rather cumbersome. Thus a tarball with the new configuration files needs to be downloaded and installed. If you have custom files please copy those to a “safe” place and restore them after the next few operations:

28.) cd /etc/pam.d
29.) rm *

for AWS EC2:

30.) wget http://54.197.240.216/pam_sles12_sp1_config.tar.bz2

for GCE:

30.) wget http://108.59.80.221/pam_sles12_sp1_config.tar.bz2

for Microsoft Azure:

30.) wget http://23.101.123.131/pam_sles12_sp1_config.tar.bz2

The checksum is framework independent in this case:

sha1sum pam_sles12_sp1_config.tar.bz2
6cd0fde2be66b7e996c571bec1b823ba0eb8d9a0 pam_sles12_sp1_config.tar.bz2

31.) tar -xjf pam_sles12_sp1_config.tar.bz2
32.) rm pam_sles12_sp1_config.tar.bz2

If you had any custom configuration files now is the time to restore them. Note that there were some significant changes in PAM and thus you should review your configuration files for compatibility. If you had modifications to existing “standard” files then you want to merge those modifications rather than clobber the files that were just installed.

The default syslog handling has also changed from syslog-ng in the SLES 11 release series to rsyslog in SLES 12. If you are dependent on syslog-ng for your setup you will need to install the syslog-ng package (zypper in syslog-ng) which will remove the installed rsyslog package.

As with the SLES 12 to SLES 12 SP1 migration described at the beginning you will probably want to change the motd file ‘/etc/motd‘ to avoid version confusion for later logins.

33.) sed -i s/11/12/ /etc/motd
34.) sed -i s/SP4/SP1/ /etc/motd

If you are not upgrading an AWS EC2 ParaVirtual (PV) instance your system is now ready for reboot. If you started with a PV instance some other changes need to be made to be able to boot after shutdown. If you are not certain whether you are on a PV instance you can run

35.) ec2metadata | grep aki

If the return is empty you are running an HVM instance and you are good to go, i.e. reboot and run on your freshly migrated SLES 12 SP1 system. If the return is not empty you have a few more steps to complete. The additional steps are necessary to handle the bootloader change that occurred between SLES 11 and SLES 12: in SLES 12 the bootloader is GRUB2, while in SLES 11 it was GRUB. This change requires that a new boot kernel (aki) be associated with the instance.
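If ec2metadata is not available, the same information can be read straight from the EC2 metadata service; on an HVM instance the kernel-id entry simply does not exist (an alternative check, assuming the standard metadata endpoint):

curl -s http://169.254.169.254/latest/meta-data/kernel-id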

First we want to stop the running instance that was just migrated to SLES 12 SP1:

36.) aws ec2 stop-instances --region THE_REGION_YOU_ARE_WORKING_IN --instance-ids THE_INSTANCE_ID_OF_THE_UPGRADED_INSTANCE

Monitor the instance state with:

37.) aws ec2 describe-instances --region THE_REGION_YOU_ARE_WORKING_IN --instance-ids THE_INSTANCE_ID_OF_THE_UPGRADED_INSTANCE

Unfortunately there is no simple unique state identifier, so you need to look through the returned JSON structure for the “State” element and then at its “Name” key to determine the state of the instance. When it has the value “stopped” you can proceed with the next steps.
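If your version of the aws CLI supports the --query option, you can extract the state directly instead of scanning the JSON by eye (a convenience sketch; adjust the indices if your instance is not the first in the reservation):

aws ec2 describe-instances --region THE_REGION_YOU_ARE_WORKING_IN --instance-ids THE_INSTANCE_ID_OF_THE_UPGRADED_INSTANCE --query 'Reservations[0].Instances[0].State.Name' --output text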

The following table contains the aki-ID for each region that needs to be associated with the stopped instance.

Region         | aki ID
ap-northeast-1 | aki-f1ad9bf0
ap-southeast-1 | aki-ca755498
ap-southeast-2 | aki-8faec3b5
cn-north-1     | aki-9c4ad8a5
eu-central-1   | aki-e23f09ff
eu-west-1      | aki-fce8478b
sa-east-1      | aki-b99024a4
us-east-1      | aki-f4bc469c
us-gov-west-1  | aki-07026424
us-west-1      | aki-f9786dbc
us-west-2      | aki-5f125d6f

Note that the new region, ap-northeast-2, is not listed in the table. The new region does not support PV instances.

38.) aws ec2 modify-instance-attribute --region THE_REGION_YOU_ARE_WORKING_IN --instance-id THE_INSTANCE_ID_OF_THE_UPGRADED_INSTANCE --kernel VALUE_FROM_TABLE_ABOVE

If you now re-execute the “describe-instances” command from step 37 you will notice that the aki ID in the output has changed. With the instance associated with an aki that can handle GRUB2 you can now start the instance.

39.) aws ec2 start-instances --region THE_REGION_YOU_ARE_WORKING_IN --instance-ids THE_INSTANCE_ID_OF_THE_UPGRADED_INSTANCE

After all this, what did we end up with? You now have a fully functional SLES 12 SP1 instance running. One of the differences from a SLES 12 SP1 instance started from scratch is that the upgraded instance contains more system packages. This is because our released SLES 11 images contained more packages than the SLES 12 images. The reduced package count is based on the thought that it is fast and easy to install any package one might need. Additionally, many users connect instances to a configuration management system, and “check if something exists” takes almost the same time as “install whatever is missing”, especially with the SUSE supported region-local update infrastructure. If you want a detailed comparison between the systems you can use machinery to compare the two systems. If the systems are not on the same network the guide Comparing 2 Systems Using Machinery may be useful.
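A hypothetical machinery session, assuming the tool is installed and both systems are reachable over ssh, might look like this (host names are placeholders):

# inspect both systems and store the descriptions under short names
machinery inspect upgraded-instance.example.com --name upgraded
machinery inspect fresh-instance.example.com --name fresh
# show the differences between the two descriptions
machinery compare upgraded fresh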

Update 2019-12-12

As the final step, and related to the update infrastructure upgrade, you want to re-register your instances:

40.) registercloudguest --force-new

End update

With this we hope you will enjoy the improvements made in SUSE Linux Enterprise Server 12 SP1 and take advantage of the flexibility that the Module Architecture provides. Also if you want to share information don’t forget the SUSE Forums.


Comments

  • monoclesys says:

    Terrific blog. For the SLES 12 repositories for an AWS instance, is that URL still valid? I’m getting a 404 error at http://54.197.240.216/pam_sles12_sp1_config.tar.bz2

  • anshulsomani says:

    I am facing the same error. Has the location moved or is this method no longer valid?

  • rjschwei says:

    Sorry about the trouble. The file has been restored.

  • watsonrp2 says:

    Hi rjschwei,

    Just to make you aware that the link http://54.197.240.216/sles12sp1-ondemand-repos.tar.bz2 in step 4 for AWS EC2 is also 404’ing. Can this please be corrected as well?

    Cheers,

    Ricky.

  • pedelweiss says:

    Hi,
    Same problem:
    on one instance I have: 404 not found
    on the other instance, I can download something but the checksum isn’t ok:
    sha1sum sles12sp1-ondemand-repos.tar.bz2
    2491c167775d44920e5559f20a06d384c1a742e8 sles12sp1-ondemand-repos.tar.bz2

  • rjschwei says:

    The missing file in AWS EC2, sles12sp1-ondemand-repos.tar.bz2, has also been restored. I updated the blog to show the new checksum of the file.

    Note that due to changes in zypper since the blog has been written the solution path is different than in the blog post. Thus carefully evaluate each proposed solution. Do not choose any option that suggests the removal of libzypp.

  • anshulsomani says:

    Hi Robert,
    Installation of Systemd will give you two solutions:
    1. Deinstall libzypp along with a whole bunch of packages
    2. Do not install systemd at all.

    Please advise.

  • monoclesys says:

    Did anyone else find that they had to replace /etc/mtab with a symbolic link to /proc/self/mounts, otherwise they would face a corrupted mtab?

  • salvatorejanssen says:

    Is there a URL for sles12sp0-ondemand? My use case is to match an existing sles12sp0 (GA) installation. It would be great to have a link to this version as well.

  • godfreygerona says:

    Are the URLs/Files still valid?
    Can you please share to us how to upgrade to SLES 12 SP2 from SLES 11?

  • godfreygerona says:

    All of the wget links above have an error below:

    HTTP request sent, awaiting response… 404 Not Found

    Are the files missing or have been moved to another host?

  • rjschwei says:

    All the files have been restored; they got lost during the recent infrastructure server upgrade, sorry.

    And we were already happy that no one noticed the server upgrade, oh well we forgot these pieces, it will not happen again.

  • kyrill says:

    Is there a similar instruction for upgrading SLES 12 to SLES 15?

  • rjschwei says:

    Hi,

    For the SLES 12 to SLES 15 transition we are working on a new migration system that is currently in private beta. The migration will be automated as much as possible. It requires more down time than the above process but is much less error prone.

  • kyrill says:

    Hello,
    Thank you very much for the fast reply!
    Looking forward to the release.

  • Þanda says:

    Hello, I am currently trying to upgrade from SLES 11 SP4 to SLES 12 SP1. Is there any instruction for SLES 11 SP4 to SLES 15?

    Cheers,
    Þanda

  • rjschwei says:

    There is no direct upgrade path from SLES 11 SP4 to SLES 15. You have to move from one major distribution to the next and then make another jump. As stated in a previous comment, we are still working on a SLES 12 to SLES 15 transition.
