
Well, SUSE Linux Enterprise Server 15 has been available for a while and SLES 15 SP1 is on the horizon. While there will be one more service pack in the SLES 12 series, SLES 12 SP5, many people have expressed interest in moving from SLES 12 to SLES 15. After providing a “follow this long, tedious manual process” procedure for the SLES 11 to SLES 12 migration that was certainly not for the faint of heart, we wanted to provide an easier way to migrate an instance while avoiding some of the pitfalls inherent in migrating a running system. This gave birth to the suse-migration-services project. We are happy to announce the availability of the migration in the SLES 12 Public Cloud module.

Before we get to the details, here is how it works. As root, run:

zypper in SLE15-Migration suse-migration-sle15-activation
reboot

That was easy! The only precondition is that the system is registered: on-demand instances are registered automatically, while BYOS instances must be registered with SCC, SMT, or RMT.
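As a quick sanity check before installing the migration packages, you can verify the registration status with SUSEConnect. This is a sketch; the exact wording of the status output may vary, so adjust the pattern if needed:

```shell
# Sketch: confirm the system is registered before starting the migration.
# SUSEConnect --status-text lists each product and its registration state.
if SUSEConnect --status-text 2>/dev/null | grep -q "Registered"; then
    echo "System is registered; ready to install the migration packages."
else
    echo "System is not registered; register it (SCC, SMT, or RMT) first." >&2
fi
```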

That’s it.

——————-

Update 2019-06-27:

At least that’s what we thought… It turns out that despite extensive testing we never tripped over a corner case involving the HPC module. One of the bigger changes between SLE 12 and SLE 15 is that HPC became a standalone product. As a result, the HPC module is no longer available to systems registered as SUSE Linux Enterprise Server (SLES). This means that in some cases the process that calculates the migration target fails to find one, and the migration fails. We therefore recommend removing the HPC module components prior to the migration.

zypper rm sle-module-hpc-release-POOL sle-module-hpc-release

will do the trick and remove the HPC components from the system. This makes the HPC repositories disappear and reduces the chance that the migration path calculator trips up. If you need HPC-specific packages, this is obviously not what you are after; in that case, the best course of action we can recommend is to start with the SUSE Linux Enterprise Server For HPC images.

——————

The process takes a while, so you can do other things or take a well-deserved break. How do you know when it’s done?

Once you can log in with your regular user setup, the system has been migrated; at least, that is the expected result. More on that a bit further below.

Let’s take a look at what happens under the covers. The SLE15-Migration package delivers a live image, and the suse-migration-sle15-activation package modifies the grub configuration so that on reboot the system boots into the live image. The live image is configured to automatically upgrade the system. All of this happens while the system disk of the server being migrated is not actively used, so there is no opportunity for the system to end up in an inconsistent state. Once the migration is finished, the system reboots and your instance will be running SLES 15. The migration is supported with SLES 12 SP4 as the origin and SLES 15 as the target; the same versions apply to SLES For SAP. Please also consult the documentation.
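If you want to confirm that the activation step worked before rebooting, you can check that the grub configuration now contains an entry for the live migration image. A sketch; the exact entry title in grub.cfg is an assumption, so inspect the file yourself if the pattern does not match:

```shell
# Sketch: look for the migration boot entry added by the activation package.
grep -i "migration" /boot/grub2/grub.cfg && echo "Migration boot entry found."
```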

If things go wrong, the system will, when possible, be rolled back to its original state. To debug a migration, place the file /etc/sle-migration-service on the system to be migrated. This prevents the reboot at the end of the process, and the migration can then be inspected interactively by logging into the live system. It is possible to ssh into the system during the migration, or afterwards if the debug file exists, with
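The debug marker is just an empty file; creating it before the reboot is enough:

```shell
# Create the debug marker before rebooting into the migration system.
# With this file present, the live system does not reboot automatically,
# so the migration can be inspected interactively.
touch /etc/sle-migration-service
```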

ssh migration@IP_OF_INSTANCE

Existing ssh keys are copied into the migration system, but the host keys will be different. For those who would rather watch than take a break, you can log in to the migration system and run ‘top‘; you’ll find ‘zypper’ running and doing its upgrade magic. If you interrupt the process, your system is most likely left in a state where it cannot be recovered, so to prevent accidents we recommend not logging in while the migration is in progress. The ‘migration‘ user has sudo access. The migration log can be found in /var/log/distro_migration.log.
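If you do log in to watch, following the log is safer than interacting with the running processes:

```shell
# Follow the migration log from within the live migration system.
sudo tail -f /var/log/distro_migration.log
```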

There are some caveats to consider:

  • Files marked as configuration files in RPM packages that you have modified will have a corresponding ‘.rpmnew’ version in the same location.
    • Public Cloud instances built from SUSE images have a custom ‘/etc/motd‘ file that references the distribution version; this needs to be updated manually.
  • Repositories not registered via SUSEConnect and added to the system manually will remain untouched.
  • For Public Cloud instances the metadata will not change: as far as the cloud framework is concerned you will still be running a SLES 12 SP4 instance, even after migrating the instance to SLES 15. This cannot be changed.
  • Migration is only possible for systems where the boot loader has direct access to the root file system.
  • Migration is only possible for systems with a root file system that is unencrypted at the OS level. Encrypting the root device via a cloud framework encryption mechanism happens at a different level and is not affected.
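For the first caveat above, it is easy to list the configuration files that need a manual merge after the migration:

```shell
# List .rpmnew files left behind by the upgrade so they can be merged by hand.
find /etc -name '*.rpmnew'
```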

We’d also like to thank our beta testers, who provided valuable feedback during development.



Category: Announcements, Cloud Computing, CSP, Server, SUSE in the Cloud, SUSE Linux Enterprise Server, SUSE Linux Enterprise Server for SAP Applications
This entry was posted Friday, 24 May, 2019 at 7:44 pm

Comments

  • Dean Obry says:

After fully patching and upgrading my AWS SUSE 12 SP3 instance to SUSE 12 SP4 using the online tool, I verified my instance is validly registered… I ran the commands below in zypper. Note the “No provider of ‘SLE15-Migration’ found” and “Installation has completed with error.”
    root@kbssuse12-15a:~# zypper in SLE15-Migration suse-migration-sle15-activation
    Refreshing service ‘SMT-http_smt-ec2_susecloud_net’.
    Loading repository data…
    Reading installed packages…
    ‘SLE15-Migration’ not found in package names. Trying capabilities.
    No provider of ‘SLE15-Migration’ found.
    Resolving package dependencies…

    The following 2 NEW packages are going to be installed:
    SLES15-Migration suse-migration-sle15-activation

    2 new packages to install.
    Overall download size: 192.2 MiB. Already cached: 0 B. After the operation, additional 233.0 MiB will be used.
    Continue? [y/n/…? shows all options] (y): y
    Retrieving package SLES15-Migration-1.15.0-6.x86_64 (1/2), 192.2 MiB (233.0 MiB unpacked)
    Retrieving: SLES15-Migration-1.15.0-6.x86_64.rpm ………………………………………………..[done (18.3 MiB/s)]
    Retrieving package suse-migration-sle15-activation-1.2.0-6.5.1.noarch (2/2), 5.6 KiB ( 1.5 KiB unpacked)
    Retrieving: suse-migration-sle15-activation-1.2.0-6.5.1.noarch.rpm ……………………………………………[done]
    Checking for file conflicts: ……………………………………………………………………………..[done]
    (1/2) Installing: SLES15-Migration-1.15.0-6.x86_64 ………………………………………………………….[done]
    (2/2) Installing: suse-migration-sle15-activation-1.2.0-6.5.1.noarch ………………………………………….[done]
    Additional rpm output:
    Generating grub configuration file …
    Found linux image: /boot/vmlinuz-4.12.14-95.16-default
    Found initrd image: /boot/initrd-4.12.14-95.16-default
    Found linux image: /boot/vmlinuz-4.4.178-94.91-default
    Found initrd image: /boot/initrd-4.4.178-94.91-default
    done

    Installation has completed with error.
    **********************
    I have rebooted, but this instance is in a perpetual reboot loop between grub and other things. Since I have no interactive console in AWS, I have no alternative but to terminate this instance. I took some console screenshots. The boot fails with “failed to start mount system to upgrade”, “see systemctl status suse-migration-mount-system.service for details” … of course I can’t do this since I can’t log in. I also see “failed to start recreate grub configuration file from migrated version” and “see systemctl status suse-migration-grub-setup.service for details.”

    Again, I can’t log in, so I can’t do the debug steps…
    Please send me an email with advice if there is something I can do differently. Thanks.

  • rjschwei says:

    Hi,

    Sorry for the trouble. Please do not terminate the instance, as that would destroy all evidence and we would be unable to find the issue and help you get the instance back on its feet.

    The endless boot loop occurred because the code was unable to generate a new grub configuration file pointing to the new system setup. So you are stuck booting into the migration system; but there is nothing to migrate, so it reboots back into the system, which of course points to the migration system again.

    If you stop the instance and create a snapshot of the root device, you can then create a volume from that snapshot. Attach that volume to a running system, and we can most likely recover your system by making changes to that volume.
