I have been working with ZENworks Linux Management (ZLM) while deploying our new IBM BladeCenter with a number of SUSE Linux Enterprise Server 9 (SLES9) and Open Enterprise Server (OES) servers. While figuring the process out for myself, I wrote two short articles about deploying SLES and OES patches with ZLM.
Using the information in these two articles, it is possible to use ZLM to keep your SLES9 and OES servers up to date.
- How to Mirror OES and SLES9 patches using ZENworks 7
- Using ZLM Mirrors of OES and SLES to Update Local Servers
My last article discussed how to mirror SLES9 and OES server update services from Novell’s sites to your local ZENworks Linux Management server. This tip discusses how to use those mirrored repositories to update all your local servers from your ZENworks mirrors.
I have tested this with SLES9-i586, SLES9-x86_64, and OES Linux. Presumably the process is similar for other architectures of SLES9, and it should also work for SUSE Linux and Novell Linux Desktop.
First, some assumptions:
- Our ZLM server is on the local network, and is called zlmserver.company.com.
- We have access to the ZLM installation CD, or have a ZLM agent install CD.
- We have installed our SLES9 servers using the latest service pack CD set.
In order to use ZLM to deploy updates to SLES9 and OES servers, we need to have catalogs configured on the ZLM server to get the updates from. How to do that was covered in my previous article. If you followed that article, you would have ZLM catalogs configured for OES Linux, SLES9-i586 and SLES9-x86_64. Once that is done, the ZLM agent must be installed on managed servers, and the managed servers must be registered with the ZLM server. Finally, the catalogs for each patch channel must be assigned to each managed server, either directly or via a group or container in the ZLM management interface.
Installing the ZLM Agent
You must be logged into the server you want to manage with ZLM as the root user. Note that the agent installer requires Python; if Python is not installed, use YaST to install it from your original installation media. Insert the ZLM installation CD or a ZLM agent installation CD into the server. At the shell prompt, change to the CD's directory and type ./zlm-install -a to start the agent install. Below is an example of the prompts you will see.
./zlm-install -a
Do you want to install ZENworks now? [Y/n]: Y
(License agreement is displayed)
Do you accept? [Y/n]: Y
Installing RCD
Installing package: rcd |##############################| 100%
Starting RCD with --no-remote --no-modules --no-services
Installing Component: ZENworks Agent
Installing ZENworks Agent |##############################| 100%
Registration Server Address. (Leave blank for none): zlmserver.company.com
Registration Server Key. (Leave blank for none):
You can verify that registration was successful by running ‘/opt/novell/ZENworks/bin/rug sl‘ and looking for a ZENworks service.
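If you want to automate that check, the verification step can be wrapped in a small shell function that scans the service list for a ZENworks entry. This is only a sketch: the function name is my own invention, and the default rug path is simply the standard agent install location mentioned above.

```shell
#!/bin/sh
# check_zlm_registration: print "registered" and return success if
# `rug sl` lists a ZENworks service. The rug path default is the
# standard ZLM agent install location; the function name is
# illustrative, not part of ZLM itself.
check_zlm_registration() {
    rug_bin=${1:-/opt/novell/ZENworks/bin/rug}
    if "$rug_bin" sl 2>/dev/null | grep -q ZENworks; then
        echo "registered"
    else
        echo "NOT registered"
        return 1
    fi
}
```

Because the function returns a proper exit status, it is easy to call from monitoring or provisioning scripts.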
Once this has been completed, your server should appear in the Devices tab of the ZENworks management interface. Find the device in the Devices tab and click on it to view its detailed information. One of the fields displayed is called "Effective Catalogs". Click the Advanced button next to "Effective Catalogs" to open the Effective Catalogs editing screen. Choose Add to get to the "Catalogs to be Assigned" screen, and click Add again to select the appropriate catalog. If you followed my previous article, you will have containers called oes_linux, sles9_32 and sles9_64; choose the container appropriate to the server you are trying to manage. In that container, a catalog should exist for the OS and architecture you want to manage. Select it and click OK, followed by Next, Next, Finish. The appropriate catalog should now be shown as an effective catalog for your server.
For reference, the catalogs created automatically when you mirror OES Linux, SLES9-i586, and SLES9-x86_64 are named "oes", "sles-9-i586" and "sles-9-x86_64" respectively.
Once you have assigned the appropriate catalog to each of the servers you wish to manage, the rest of the work is done on the client side. Make sure your mirrored catalogs are up to date by following the mirroring instructions from my previous article. Then, log in as root on the server you wish to patch and use the command-line utility rug to update the system.
First, type "rug sl" to verify that the server has been correctly registered, as follows.
exampleserver:~ # rug sl
# | Status | Type     | Name                      | URI
--+--------+----------+---------------------------+------------------------------
1 | Active | ZENworks | ZENworks Linux Management | https://zlmserver.company.com
This shows that the server has been correctly registered to the management server.
Next, subscribe to the appropriate catalog as follows:
For OES Linux:

exampleserver:~ # rug sub oes
Subscribed to 'oes'
For SLES9-i586:

exampleserver:~ # rug sub sles-9-i586
Subscribed to 'sles-9-i586'
For SLES9-x86_64:

exampleserver:~ # rug sub sles-9-x86_64
Subscribed to 'sles-9-x86_64'
Once you have subscribed to the appropriate catalog, run “rug refresh” to make rug aware of any available patches, and then run “rug up” to install all available patches.
exampleserver:~ # rug refresh
Refreshing Services...
Successfully refreshed.
exampleserver:~ # rug up
The following packages will be upgraded:
procmail 3.22-39.4 (system) -> procmail 3.22-39.7 (oes)
sax2 4.8-103.28 (system) -> sax2 4.8-103.33 (oes)
Proceed with transaction? (y/N) Y
Downloading Packages...
Installing/Removing Packages...
Transaction Finished
The command “rug up -y” may be added to a shell script and run from cron if you wish to automatically install any updates that are available on your ZLM server. If you prefer a more manual approach, you can periodically run “rug up” as root to get things updated. You can also check what patches are available from your catalog by running “rug list-updates”. Rug has other interesting features which are listed by just typing rug with no arguments, and which are explained in detail in the rug manpage.
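For the cron approach, a minimal script might look like the sketch below: it refreshes the catalog metadata and then runs "rug up -y", appending all output to a log file. The function name and log path are my own inventions, and the default rug path is simply the standard agent install location; adjust both for your site.

```shell
#!/bin/sh
# zlm_autoupdate: refresh catalog metadata and install all available
# updates non-interactively, logging everything. Intended to be called
# from a root cron job. The rug path and log file are assumptions.
zlm_autoupdate() {
    rug_bin=${1:-/opt/novell/ZENworks/bin/rug}
    log=${2:-/var/log/zlm-autoupdate.log}
    {
        echo "=== ZLM auto-update run: $(date) ==="
        "$rug_bin" refresh   # re-read the mirrored catalogs from the ZLM server
        "$rug_bin" up -y     # install all available updates without prompting
    } >>"$log" 2>&1
}
```

A root crontab entry such as "0 3 * * * /usr/local/sbin/zlm-autoupdate.sh" (with the script ending in a call to zlm_autoupdate) would then apply updates nightly; the path and schedule here are only examples.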