Getting errors while trying to activate a shared LVM volume group.
This document (000019663) is provided subject to the disclaimer at the end of this document.
SUSE Linux Enterprise High Availability Extension 15
Errors included "unknown error", "not configured", or "not installed".
LVM-activate(vg-shared): ERROR: Volume group[vg-shared] doesn't exist, or not visible on this node!
pacemaker-schedulerd: warning: Processing failed start of vg-shared-activate on sle15-ha1: unknown error
pacemaker-schedulerd: warning: Processing failed monitor of vg-shared-activate on sle15-ha1: not running
pacemaker-schedulerd: warning: Processing failed start of vg-shared on sle15-ha3: not configured
pacemaker-schedulerd: error: Preventing vg-shared from re-starting anywhere: operation start failed 'not configured' (6) sle15-ha1
pacemaker-schedulerd: notice: Preventing c-lvm2 from re-starting on sle15-ha1: operation start failed 'not installed' (5) sle15-ha1
LVM-activate(vg-shared): ERROR: vg-shared: failed to activate.
volume_list = [ "root-vg", "vg-shared" ]

Normally the "volume_list" is not necessary, and you may try commenting it out and cleaning up the failed cluster action, which will attempt to start the resource again. If you are still seeing errors, then please run through the steps outlined below.
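As a sketch of that cleanup step, assuming the failed resource is named vg-shared as in the logs above (these crmsh commands must be run on a live cluster node):

```shell
# After commenting out the volume_list line in /etc/lvm/lvm.conf on each node,
# clear the failed actions so the cluster retries starting the resource:
crm resource cleanup vg-shared

# Equivalently, with the lower-level pacemaker tool:
crm_resource --cleanup --resource vg-shared
```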
Cluster Logical Volume Manager (Cluster LVM)
Configuration of Cluster LVM
How to properly activate a shared LVM volume using lvmlockd on SLES15.
Initial setup in a cluster for using lvmlockd and activating LVM volume group in "shared" mode.
This allows the LVM VG to be active on all nodes of the cluster and is typically used in conjunction with shared/clustered file systems such as OCFS2 or GFS2.
1. Configure hosts to use lvmlockd by setting the following in /etc/lvm/lvm.conf:
locking_type = 1
use_lvmlockd = 1
use_lvmetad = 1

* If you use the "volume_list = [ ]" setting, it must include all local and clustered volume groups. Example:

volume_list = [ "root-vg", "vg-shared" ]

2. Set up a base-clone group in the cluster which includes the "dlm" and "lvmlockd" primitives, and start it.
Sample cluster configuration:

primitive dlm ocf:pacemaker:controld \
        op start timeout=90s interval=0 \
        op stop timeout=100s interval=0
primitive lvmlockd lvmlockd \
        op start timeout=90s interval=0 \
        op stop timeout=100s interval=0
group base-group dlm lvmlockd
clone base-clone base-group \
        meta interleave=true ordered=true target-role=Started

3. Create the shared VG on shared devices.
# vgcreate --shared <vgname> <devices>

4. Create a cluster primitive (LVM-activate) to activate the VG. Use activation_mode=shared (default: exclusive).
primitive vg-shared LVM-activate \
        params vgname=vg-shared vg_access_mode=lvmlockd activation_mode=shared \
        op stop timeout=60s interval=0 \
        op start timeout=60s interval=0

5. Set up order constraints in the cluster to make sure the base-clone group starts before the LVM-activate resource and any other primitives that might depend on it.
Note: cluster groups have built-in "order" and "colocation", so using groups can be helpful to keep resources together and in the proper start order.
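As a sketch, the step 5 constraints could look like the following in crmsh configuration syntax, using the resource names from the samples above (the constraint IDs ord-lvm-after-lock and col-lvm-with-lock are illustrative, not required names):

```
order ord-lvm-after-lock Mandatory: base-clone vg-shared
colocation col-lvm-with-lock inf: vg-shared base-clone
```

The order constraint ensures dlm/lvmlockd are running before activation is attempted; the colocation constraint keeps vg-shared on nodes where base-clone is active.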
Differences between lvmlockd (SLES15) and clvm (SLES12)
(See "man lvmlockd" for a complete list.)
· lvm.conf must be configured to use either lvmlockd (use_lvmlockd=1) or clvmd (locking_type=3), but not both.
· vgcreate --shared creates a shared VG, and vgcreate --clustered y creates a clvm/clustered VG.
· lvmlockd defaults to the exclusive activation mode whenever the activation mode is unspecified, i.e. -ay means -aey, not -asy.
· lvmlockd works with lvmetad.
· In the 'vgs' command's sixth VG attr field, "s" for "shared" is displayed for shared VGs:

  VG        #PV #LV #SN Attr   VSize  VFree
  vg-shared   1   1   0 wz--ns 4.00g 96.00m
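For manual testing outside cluster control (the LVM-activate resource normally handles this for you), a shared VG can be started and activated by hand. This sketch assumes the VG name vg-shared from the examples above and that dlm and lvmlockd are already running on the node:

```shell
vgchange --lock-start vg-shared   # join this VG's lockspace on the local node
vgchange -asy vg-shared           # activate LVs in shared mode (-aey for exclusive)
vgs vg-shared                     # sixth Attr character should show "s" for shared
vgchange -an vg-shared            # deactivate again before handing control back to the cluster
```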
This Support Knowledgebase provides a valuable tool for SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.
- Document ID: 000019663
- Creation Date: 02-Jul-2020
- Modified Date: 05-Jul-2020
- SUSE Linux Enterprise High Availability Extension
For questions or concerns with the SUSE Knowledgebase please contact: tidfeedback[at]suse.com