
mount.ocfs2: Cluster stack is invalid while trying to join the group

This document (7018352) is provided subject to the disclaimer at the end of this document.

Environment

SUSE Linux Enterprise Server 11 Service Pack 3 (SLES 11 SP3)
SUSE Linux Enterprise Server 11 Service Pack 4 (SLES 11 SP4)
SUSE Linux Enterprise Server 12
SUSE Linux Enterprise Server 12 Service Pack 1 (SLES 12 SP1)
SUSE Linux Enterprise High Availability Extension 12
SUSE Linux Enterprise High Availability Extension 11 Service Pack 4
SUSE Linux Enterprise High Availability Extension 11 Service Pack 3

Situation

This error is seen in /var/log/messages when trying to mount an ocfs2 filesystem:

                mount.ocfs2: Cluster stack is invalid while trying to join the group

e.g.   lrmd[65808]:   notice: operation_finished: ocfs2-1_start_0:66483:stderr [ mount.ocfs2: Cluster stack is invalid while trying to join the group  ]

Resolution

Whilst the dlm and o2cb cluster resources are running/active, try to modify and re-write the ocfs2 filesystem meta-data (i.e. update the cluster stack stamped on disk).
If that fails, delete and recreate the ocfs2 filesystem instead.
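As a sketch of the first approach, the on-disk cluster stack can usually be re-stamped with tunefs.ocfs2 while dlm and o2cb are active. The device path below is a placeholder, and the --update-cluster-stack option assumes a reasonably recent ocfs2-tools (1.6 or later):

```shell
# Run on one cluster node while the dlm and o2cb resources are active.
# /dev/sdX1 is a placeholder for your actual ocfs2 device.

# Re-stamp the running cluster stack into the filesystem meta-data:
tunefs.ocfs2 --update-cluster-stack /dev/sdX1
```

If tunefs.ocfs2 reports an error here, fall back to recreating the filesystem as described below.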

If you cannot keep the dlm and o2cb resources running because a defined filesystem resource is continually failing, temporarily block the filesystem resource (e.g. set its target role to Stopped) or temporarily remove it so that the dlm and o2cb resources can run.
Then, while they are active, recreate the ocfs2 filesystem and finally undo your temporary measures.
Afterwards the ocfs2 filesystem should mount.
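The steps above can be sketched with the crm shell. The resource id "ocfs2-1" is taken from the log example in the Situation section; the device path and cluster name are placeholders for your environment:

```shell
# Temporarily stop the failing filesystem resource so dlm/o2cb can run:
crm resource stop ocfs2-1

# With dlm and o2cb active, recreate the filesystem for the pacemaker
# cluster stack. WARNING: this destroys all data on the device.
# /dev/sdX1 and the cluster name are placeholders.
mkfs.ocfs2 --cluster-stack=pcmk --cluster-name=hacluster /dev/sdX1

# Undo the temporary measure; the filesystem should now mount:
crm resource start ocfs2-1
```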

For information on creating and managing ocfs2 resources, refer to the 'Storage and Data Replication - OCFS2' section of the 'High Availability Guide'.

Cause

The ocfs2 filesystem was created, or its control meta-data was modified, without a cluster dlm/o2cb resource being active at the time. The cluster stack stamped in the filesystem meta-data therefore does not match the stack used by the running cluster, and the mount is refused.
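To confirm the mismatch, the cluster stack stamped on disk can be inspected. A sketch assuming ocfs2-tools 1.8, where mounted.ocfs2 -d lists a Stack column per detected device:

```shell
# List detected ocfs2 devices together with their on-disk cluster
# stack ("o2cb" vs "pcmk"); compare this against the stack the
# running cluster actually uses:
mounted.ocfs2 -d
```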



Additional Information

In versions of SLES prior to 11, although ocfs2 was only supported in a cluster environment with a STONITH device, ocfs2 and its supporting tools could still be installed, and an ocfs2 filesystem created and mounted, without any clustering components having been installed or activated beforehand.

In SLES 11 and later, ocfs2 functionality was moved under the High Availability Extension product (HAE) and as such can only be used once HAE is installed.

As it is often useful to compare the cib contents of a problem cluster with a working example, here is the ocfs2 clone resource from a working cluster as seen in cib.xml (in this case, ocfs2 over iSCSI):


      <clone id="base-clone">
        <meta_attributes id="base-clone-meta_attributes">
          <nvpair id="base-clone-meta_attributes-interleave" name="interleave" value="true"/>
          <nvpair id="base-clone-meta_attributes-target-role" name="target-role" value="Started"/>
        </meta_attributes>
        <group id="base-group">
          <primitive class="ocf" id="dlm" provider="pacemaker" type="controld">
            <meta_attributes id="dlm-meta_attributes"/>
            <operations id="dlm-operations">
              <op id="dlm-op-monitor-10" interval="10" name="monitor" start-delay="0" timeout="20"/>
            </operations>
          </primitive>
          <primitive class="ocf" id="o2cb" provider="ocfs2" type="o2cb">
            <meta_attributes id="o2cb-meta_attributes"/>
            <operations id="o2cb-operations">
              <op id="o2cb-op-monitor-10" interval="10" name="monitor" timeout="20"/>
            </operations>
          </primitive>
          <primitive class="ocf" id="ocfs2-1" provider="heartbeat" type="Filesystem">
            <instance_attributes id="ocfs2-1-instance_attributes">
              <nvpair id="ocfs2-1-instance_attributes-device" name="device" value="/dev/disk/by-path/ip-192.200.2.135:3260-iscsi-iqn.2016-12.com.suse:ocfs2vol-lun-0"/>
              <nvpair id="ocfs2-1-instance_attributes-directory" name="directory" value="/mnt/ocfs2vol"/>
              <nvpair id="ocfs2-1-instance_attributes-fstype" name="fstype" value="ocfs2"/>
              <nvpair id="ocfs2-1-instance_attributes-options" name="options" value="acl"/>
            </instance_attributes>
            <operations>
              <op id="ocfs2-1-monitor-20" interval="20" name="monitor" timeout="40"/>
            </operations>
            <meta_attributes id="ocfs2-1-meta_attributes"/>
          </primitive>
          <meta_attributes id="base-group-meta_attributes"/>
        </group>
      </clone>
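For comparison, the same resources would look roughly as follows in crm shell syntax. This is a hand-translated sketch of the cib.xml above, not verified `crm configure show` output:

```shell
primitive dlm ocf:pacemaker:controld \
        op monitor interval=10 timeout=20 start-delay=0
primitive o2cb ocf:ocfs2:o2cb \
        op monitor interval=10 timeout=20
primitive ocfs2-1 ocf:heartbeat:Filesystem \
        params device="/dev/disk/by-path/ip-192.200.2.135:3260-iscsi-iqn.2016-12.com.suse:ocfs2vol-lun-0" \
        directory="/mnt/ocfs2vol" fstype=ocfs2 options=acl \
        op monitor interval=20 timeout=40
group base-group dlm o2cb ocfs2-1
clone base-clone base-group \
        meta interleave=true target-role=Started
```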

Disclaimer

This Support Knowledgebase provides a valuable tool for NetIQ/Novell/SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.

  • Document ID:7018352
  • Creation Date:03-DEC-16
  • Modified Date:04-JAN-17
    • SUSE Linux Enterprise High Availability Extension
      SUSE Linux Enterprise Server