SUSE Conversations


Novell OES Clustering with NSS on vSphere 4.x



By: paulsenj

June 1, 2012 1:02 pm


By: Victor Gehring, CNE, CCDA, ITIL, VTSP
Updated: 5/24/2012

For those IT shops wanting to enjoy the advantages of Novell clustering in their VMware environments, this article pulls together information from various sources, along with personal experience, into what is intended to be a complete configuration guide for the platform discussed. The target audience is assumed to have a working knowledge of SLES/OES, SANs and VMware.

This article will likely apply to you if you are running SLES 10.x/OES 2 on vSphere 4 with Fibre Channel or iSCSI SANs and want a CAB (cluster across boxes) architecture in a production environment. SLES 11/OES 11 clustering should also work the same way using this architecture, but the examples used herein are based on SLES 10/OES 2. Using VMware to create a CIB (cluster in a box) also works, but isn't generally recommended for high-availability applications.

There are some things to consider before heading down this path to determine whether this solution will work for your needs. Currently, VMware allows you to create two types of storage disks: VMware Virtual Disks and RDM (Raw Device Mapped) disks. This article requires BOTH disk types to create a stable solution. As such, be aware that VMware has a 2TB minus 512 bytes limit on disk sizes in vSphere 4.x. Also understand that VMware only allows one virtual machine to host an RDM disk per physical vSphere host. While both disks will be configured for sharing, the traditional VMware Virtual Disk is only needed to store the RDM configuration file(s), and therefore does not require a large disk. The RDM disk is intended for general data storage. The SLES VMs themselves can run on a VMware virtual disk or, if you are using physical or blade servers, on local disk storage, but typically not in the RDM storage area.

Finally, allow me to apologize in advance for redacting the screenshots to mask the identity of the systems. This was the most expeditious way of providing illustrations without compromising client information.


Figure 1. Topology

1. LUN Configuration:

This article assumes that you or your SAN administrator will configure the two dedicated LUNs for presentation to your vSphere hosts. This LUN configuration is typically performed using the SAN manufacturer’s disk array management software. Recall from above that one LUN will be used for a VMware Virtual Disk and the other for the RDM disk. From the host’s Configuration tab, navigate to the storage adapters to verify and/or rescan for the new LUNs and make note of which LUN is provisioned for which disk type.
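If you prefer the service console to the vSphere Client, the same rescan can be done from the ESX command line. A minimal sketch, assuming the new LUNs are presented on vmhba1 (the adapter name will differ per host):

    esxcfg-rescan vmhba1    # rescan the adapter the SAN presents the new LUNs on
    esxcfg-scsidevs -c      # list detected devices to confirm both LUNs are visible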

2. VMware Virtual Disk Configuration:


Figure 2. Node 2 shared VMDK on LUN 1

Make sure SCSI bus sharing for SCSI Controller 1 is set to physical. Again, this disk is set up just like any typical SAN disk resource that provides shared access to VMs.
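For reference, the equivalent lines in the VM's .vmx file look roughly like the sketch below; the controller number matches the article, but the datastore path and file name are placeholders only:

    scsi1.present = "TRUE"
    scsi1.virtualDev = "lsilogic"          # controller on its own SCSI bus
    scsi1.sharedBus = "physical"           # physical bus sharing for cross-host access
    scsi1:0.present = "TRUE"
    scsi1:0.fileName = "/vmfs/volumes/shared_vmfs/cluster/shared.vmdk"   # shared VMDK on LUN 1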

3. VMware RDM Disk Configuration:


Figure 3. Node 1 RDM disk on LUN 2.

When configuring this disk, it is critical to add a new, dedicated SCSI bus to the VM's configuration. To function properly, the SCSI bus hosting the RDM disk should not host any other disks. Also be sure to note the SCSI bus address of the added hard drive (ex. 2:1). When completing the disk-add wizard, be sure to “Save Configuration” to the SHARED virtual disk created in #2 (do NOT store with the VM) by using the browse function. Don't worry about setting up a folder ahead of time to store the RDM map file on the shared virtual disk, since VMware will create its own; all you need to do is browse to the root of the disk.
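For those who prefer the command line, the same RDM mapping file can be created on the shared VMFS volume with vmkfstools. This is only a sketch, with a placeholder device identifier and datastore path, and it assumes physical compatibility mode; the vSphere Client wizard accomplishes the same thing:

    # create an RDM pointer file on the shared VMFS (LUN 1) that maps the raw data LUN (LUN 2)
    vmkfstools -z /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx \
        /vmfs/volumes/shared_vmfs/cluster/data_rdm.vmdk

The resulting data_rdm.vmdk pointer is the file both cluster nodes will attach at the dedicated SCSI address (ex. 2:1).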

4. NSSMU Configuration:


Figure 4. NSSMU showing LUN 2 data partition enabled for sharing.


Figure 5. NSSMU showing sdc partition detail.

Depending on the state of your OES2 cluster configuration, your SBD partition may or may not have been created. If you have already installed the cluster option from the OES2 installation and were not able to get the SBD set up, one way to complete that is with the sbdutil command-line utility. Please refer to the Novell documentation and/or the command-line help for usage instructions. As shown in Figure 5, it is important to understand that the SBD partition is contained within the RDM disk on LUN 2.
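As a hedged example only (see the Novell documentation for the authoritative syntax), creating and verifying the SBD on the shared RDM device might look like this, where the cluster name and device name are placeholders:

    sbdutil -f                       # report any existing SBD partition
    sbdutil -c -n cluster1 -d sdc    # create an SBD for cluster "cluster1" on the RDM device sdc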

5. Create Master Cluster Node:

If you have already installed the clustering feature on the master node via the OES2 installation process, you may need to reconfigure it following the SAN and VMware disk configuration effort. If the OES2 installer does not allow reconfiguration of the clustering feature, stating that it is already configured, you will likely need to run yast ncs and select Yes to reconfigure NCS. Again, please refer to the Novell documentation for assistance with completing the NCS wizard setup questions. Upon successful configuration of the master node, you should be able to run iManager and view the cluster status, where both the master node and the master IP resource objects should be visible, online and ready for user access.
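In addition to iManager, the cluster state can be checked quickly from the master node's console, for example:

    cluster view      # shows cluster membership and the current node
    cluster status    # lists cluster resources (such as the master IP address resource) and their state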

6. Create 2nd Cluster Node:


Figure 6. Node 2 RDM disk added.

This step assumes you have already installed the second SLES node and now need to join it to Node 1 to complete the cluster. The next step is to add the RDM hard disk to Node 2. When doing so, make sure to add a dedicated SCSI bus with the SAME addressing as the one created for the RDM disk in step #3 (ex. 2:0). Then add the hard drive using the Existing disk option.

Browse to the shared virtual disk from step #3 and locate the RDM map file. Make sure to assign the hard drive the same SCSI address used in step #3 (ex. 2:1).
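After the disk is added, Node 2's .vmx entries should mirror Node 1's for the RDM controller and disk. A sketch reusing the placeholder path from earlier; the physical bus-sharing line is an assumption, matching the shared-VMDK controller, so verify it against your own working configuration:

    scsi2.present = "TRUE"
    scsi2.virtualDev = "lsilogic"
    scsi2.sharedBus = "physical"    # assumption: physical sharing, as on the shared-VMDK controller
    scsi2:1.present = "TRUE"
    scsi2:1.fileName = "/vmfs/volumes/shared_vmfs/cluster/data_rdm.vmdk"   # same RDM pointer, same SCSI address (2:1)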

Execute the OES2 Install and Configuration, add or reconfigure NCS, and select the “add node to an existing cluster” option. While going through the wizard should be rather straightforward, please refer to the Novell cluster installation documentation for assistance with this configuration if necessary. When completed, you should be able to go into iManager and see that the second node has joined the cluster. You should also be able to use iManager to fail over and fail back the nodes seamlessly. There are command-line utilities you can use as well; enter cluster --help at a prompt to view the available choices.
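For example, a typical manual failover test from the command line might look like the following sketch, where the resource and node names are placeholders:

    cluster migrate DATA_SERVER node2   # move a cluster resource to the other node
    cluster leave                       # take the current node out of the cluster
    cluster join                        # rejoin it to test failback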

Before turning your users loose on this new resource, please heed these caveats. Be sure to keep your SLES/OES servers current on patches via Novell's auto-updater and service pack application processes. Just as important, be sure to deploy the most current Novell Client for the platform your shop is running. Particularly in this environment, it is still one of the best ways to avoid trouble.

Why Choose This Topology?

You may be wondering why this LUN arrangement is used. Given VMware's 2TB limit, why wouldn't you simply use one shared virtual disk? While you can successfully configure this, I have found in practice, particularly with larger disks (>400GB), that it will not be stable. Users will lose connectivity to drives mapped to the SAN disk resource. It may only happen occasionally, but it will be enough to drive you nuts. So what about just one RDM disk? This won't work because after the 1st/master node grabs the LUN to create the RDM, it will no longer be accessible by other nodes, since you cannot store the RDM mapping file within the RDM disk itself. This is why a 2nd LUN needs to be created and set up as a VMware shared virtual disk for RDM map file storage, so that all cluster nodes can access the mapping file. Additionally, getting comfortable with this method will likely be useful if you are ever tasked with installing a Windows server cluster, as other blogs and articles discuss this same approach. One of the main differences is that the Windows terminology refers to the two LUNs as a “quorum” LUN and a “data” LUN, where the quorum LUN is made available to all cluster nodes to control ownership of and access to the cluster data. Finally, it's also nice to know (at least when this article was written, April 2012) that Novell allows you to create a two-node SLES cluster without incurring any extra license fees.

Your comments on this article are invited: victor.gehring@pcn-inc.com


Categories: Open Enterprise Server on SLES, SUSE Linux Enterprise Server, Technical Solutions

Disclaimer: As with everything else at SUSE Conversations, this content is definitely not supported by SUSE (so don't even think of calling Support if you try something and it blows up).  It was contributed by a community member and is published "as is." It seems to have worked for at least one person, and might work for you. But please be sure to test, test, test before you do anything drastic with it.

5 Comments

  1. By:robpet

    First, excellent article.

    Regarding “Also understand that VMware only allows one virtual machine to host an RDM disk per physical vSphere host.”

    Do I understand this correctly: if my host has 25 VMs on it, only one of them can have RDM disks?

    Best
    Robert Pettersson

  2. By:skapanen

    More like one RDM per LUN per host, so you can have RDMs on many VMs, but not mapped to the same LUN.

    Also, if you run separate vSphere hosts, they can each map one RDM to the same LUN; i.e., we have three vSphere servers, each running one node of the NCS cluster with a normal/independent RDM mapping to the LUNs, not using existing mappings.

  3. By:vgehring

    Thank you for the compliment robpet – I appreciate it!

    skapanen is correct regarding the single LUN per physical host. If your shop is like a lot of others where more SAN-connected disk space is needed, you ought to check out vSphere 5's new storage capabilities. For instance, you can now have RDM storage LUNs greater than 2TB in size. There is also a new feature known as “DRS/datastore clusters” that improves the provisioning and management of larger datasets. See VMware's “What's New in vS5 Storage” whitepaper on their site for more info.

  4. By:bkeech

    Excellent article indeed! Wish this had existed before we set up our 4-node OES2 cluster on VMware 4.1 back in 2011.

    We have done it without the shared-access VMFS, and while we have been very stable, it's hard to do anything related to snapshots, especially on the node that's holding all the cluster definitions for everyone else…
