All LVM volume group logical volumes are missing with inconsistent metadata error

This document (000020552) is provided subject to the disclaimer at the end of this document.

Environment

SUSE Linux Enterprise Server 15 all Service Packs

Situation

A disk was removed from an LVM volume group in an effort to reduce the size of the volume group and retire the disk. After removing the disk, the server failed to boot properly with the following messages:
A start job is running for /dev/data/vol1
The server dropped to emergency mode, requiring the root password. Running journalctl -xb in emergency mode reports:
systemd[1]: dev-data-vol1.device: Job dev-data-vol1.device/start timed out.
systemd[1]: Timed out waiting for device /dev/data/vol1.
The disk was added back to the server and the /dev/data/vol1 mount point was commented out of /etc/fstab so the server would boot.
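Commenting out the fstab entry can also be done non-interactively. A minimal sketch against a scratch copy of fstab (the sample entries are hypothetical; the real target is /etc/fstab, after `mount -o remount,rw /` if the root filesystem is still read-only in emergency mode):

```shell
# Demo runs against a scratch copy of fstab, not the real /etc/fstab.
cat > /tmp/fstab-demo <<'EOF'
/dev/system/root  /      btrfs  defaults  0 0
/dev/data/vol1    /data  ext4   defaults  0 0
EOF
# Prefix the offending entry with '#' so it is skipped at boot:
sed -i 's|^/dev/data/vol1|#&|' /tmp/fstab-demo
grep '^#/dev/data/vol1' /tmp/fstab-demo
```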

The following was observed:
ls-lvm:~ # ls -l /dev/data/vol1
ls: cannot access '/dev/data/vol1': No such file or directory

#==[ Command ]======================================#
# /sbin/vgs
  WARNING: ignoring metadata seqno 2 on /dev/sda for seqno 4 on /dev/sdc for VG data.
  WARNING: Inconsistent metadata found for VG data
  WARNING: outdated PV /dev/sda seqno 2 has been removed in current VG data seqno 4.
  VG     #PV #LV #SN Attr   VSize   VFree
  data     2   0   0 wz--n- 304.00m 304.00m
  system   1   2   0 wz--n-   3.99g      0
No logical volumes (#LV) are listed for the data volume group.
ls-lvm:~ # vgcfgrestore data --list
   
  File:        /etc/lvm/archive/data_00000-2104882251.vg
  VG name:        data
  Description:    Created *before* executing 'vgcreate data /dev/sdb /dev/sdc /dev/sdd'
  Backup Time:    Mon Dec  6 06:48:36 2021

   
  File:        /etc/lvm/archive/data_00001-878565405.vg
  VG name:        data
  Description:    Created *before* executing 'lvcreate -n vol1 -l+100%FREE data'
  Backup Time:    Mon Dec  6 06:48:48 2021

   
  File:        /etc/lvm/archive/data_00002-1140679954.vg
  VG name:        data
  Description:    Created *before* executing 'vgreduce data --removemissing --force'
  Backup Time:    Mon Dec  6 06:53:25 2021

   
  File:        /etc/lvm/backup/data
  VG name:        data
  Description:    Created *after* executing 'vgreduce data --removemissing --force'
  Backup Time:    Mon Dec  6 06:53:25 2021

Resolution

Because the physical disk was removed from the server but never wiped, it still contained the LVM metadata, including the PV UUID. Put the disk back in the server to recover the LVM volume. If the disk has been reprovisioned and is no longer available, an empty disk can stand in for it, but the filesystem on the recovered logical volume will be damaged. Once the physical disk has been restored to the server, restore the volume group from an LVM archive file. Finally, remove the disk properly as shown in the Additional Information section.
ls-lvm:~ # cp /etc/lvm/archive/data_00002-1140679954.vg /etc/lvm/archive/data-fixed
ls-lvm:~ # vi /etc/lvm/archive/data-fixed
Locate the section for the missing physical volume:
pv1 {
        id = "dRYmFZ-1aGC-9YvY-yhKu-eSn8-KeZO-eFnvtQ"
        device = "[unknown]"    # Hint only

        status = ["ALLOCATABLE"]
        flags = ["MISSING"]
        dev_size = 314573       # 153.6 Megabytes
        pe_start = 2048
        pe_count = 38   # 152 Megabytes
}
Remove the flags = ["MISSING"] line and save the file.
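The same edit can be made non-interactively with sed. A sketch against a scratch copy of the pv1 section shown above (the real target is the working copy of the archive file, /etc/lvm/archive/data-fixed):

```shell
# Demo on a scratch copy of the pv1 section from the archive file.
cat > /tmp/pv1-demo.vg <<'EOF'
pv1 {
        id = "dRYmFZ-1aGC-9YvY-yhKu-eSn8-KeZO-eFnvtQ"
        device = "[unknown]"    # Hint only
        status = ["ALLOCATABLE"]
        flags = ["MISSING"]
        dev_size = 314573       # 153.6 Megabytes
        pe_start = 2048
        pe_count = 38   # 152 Megabytes
}
EOF
# Delete the MISSING flag line:
sed -i '/flags = \["MISSING"\]/d' /tmp/pv1-demo.vg
# Verify no MISSING flag remains:
! grep -q 'MISSING' /tmp/pv1-demo.vg
```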
ls-lvm:~ # vgcfgrestore data --file /etc/lvm/archive/data-fixed 
ls-lvm:~ # vgchange -ay
ls-lvm:~ # e2fsck -f /dev/data/vol1
ls-lvm:~ # vi /etc/fstab
Uncomment the /dev/data/vol1 device so it mounts automatically at boot time.
ls-lvm:~ # mount -a
ls-lvm:~ # df -h /data
ls-lvm:~ # reboot
Reboot the server to verify that it boots normally.
 

Cause

A physical LVM drive was removed from the server, and then vgreduce data --removemissing --force was incorrectly used to remove the missing drive from the volume group. Because the missing drive still held allocated extents, this deleted all logical volumes in the volume group.
 

Additional Information

In this example, the /dev/sdc device needs to be retired. The proper way to remove a physical disk from an LVM volume group is as follows:
ls-lvm:~ # df -h /data
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/data-vol1  434M  2.3M  405M   1% /data

ls-lvm:~ # umount /data

ls-lvm:~ # pvdisplay
  --- Physical volume ---
  PV Name               /dev/sdb
  VG Name               data
  PV Size               153.60 MiB / not usable 1.60 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              38
  Free PE               0
  Allocated PE          38
  PV UUID               v00Q8L-1jvJ-NEl2-S9wt-Mg3D-juYe-M0EfuY

  --- Physical volume ---
  PV Name               /dev/sdc
  VG Name               data
  PV Size               153.60 MiB / not usable 1.60 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              38
  Free PE               0
  Allocated PE          38
  PV UUID               dRYmFZ-1aGC-9YvY-yhKu-eSn8-KeZO-eFnvtQ

  --- Physical volume ---
  PV Name               /dev/sdd
  VG Name               data
  PV Size               153.60 MiB / not usable 1.60 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              38
  Free PE               0
  Allocated PE          38
  PV UUID               0fyAtf-22Gh-B7c1-nKYU-BC47-I0pH-NAxzuF

Note the number of allocated physical extents (PE) on the disk being removed; the logical volume must be reduced by that many extents.
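The lvresize step below shrinks the logical volume by 38 extents. The arithmetic, using the PE Size and Allocated PE values reported by pvdisplay above:

```shell
# One PV holds 38 physical extents of 4 MiB each, so the logical
# volume must shrink by 38 extents (152 MiB) to free one full disk.
pe_count=38       # Allocated PE on /dev/sdc (from pvdisplay)
pe_size_mib=4     # PE Size (from pvdisplay)
echo "$((pe_count * pe_size_mib)) MiB"   # prints "152 MiB"
```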

ls-lvm:~ # pvs
  PV         VG     Fmt  Attr PSize   PFree
  /dev/sda2  system lvm2 a--    3.99g    0 
  /dev/sdb   data   lvm2 a--  152.00m    0 
  /dev/sdc   data   lvm2 a--  152.00m    0 
  /dev/sdd   data   lvm2 a--  152.00m    0 

ls-lvm:~ # lvresize --resizefs -l-38 data/vol1
fsck from util-linux 2.36.2
/dev/mapper/data-vol1: 11/116736 files (0.0% non-contiguous), 25262/466944 blocks
resize2fs 1.43.8 (1-Jan-2018)
Resizing the filesystem on /dev/mapper/data-vol1 to 311296 (1k) blocks.
The filesystem on /dev/mapper/data-vol1 is now 311296 (1k) blocks long.

  Size of logical volume data/vol1 changed from 456.00 MiB (114 extents) to 304.00 MiB (76 extents).
  Logical volume data/vol1 successfully resized.

ls-lvm:~ # pvmove /dev/sdc
  /dev/sdc: Moved: 0.00%
  /dev/sdc: Moved: 100.00%

ls-lvm:~ # pvs
  PV         VG     Fmt  Attr PSize   PFree
  /dev/sda2  system lvm2 a--    3.99g      0
  /dev/sdb   data   lvm2 a--  152.00m      0
  /dev/sdc   data   lvm2 a--  152.00m 152.00m
  /dev/sdd   data   lvm2 a--  152.00m      0
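Before running vgreduce, it is worth confirming that the physical volume is completely free, i.e. its PFree equals its PSize. A sketch that parses sample pvs output saved to a file (on a live system you would pipe `pvs --noheadings` output instead):

```shell
# Hypothetical safety check: a PV is safe to remove from the VG only
# when all of its extents have been moved off (PFree == PSize).
cat > /tmp/pvs-sample.txt <<'EOF'
  /dev/sda2  system lvm2 a--    3.99g      0
  /dev/sdb   data   lvm2 a--  152.00m      0
  /dev/sdc   data   lvm2 a--  152.00m 152.00m
  /dev/sdd   data   lvm2 a--  152.00m      0
EOF
# Fields: PV, VG, Fmt, Attr, PSize, PFree
awk '$1 == "/dev/sdc" {
    if ($5 == $6) print "safe to remove"; else print "still allocated"
}' /tmp/pvs-sample.txt
```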

ls-lvm:~ # vgreduce data /dev/sdc
  Removed "/dev/sdc" from volume group "data"

ls-lvm:~ # pvs
  PV         VG     Fmt  Attr PSize   PFree
  /dev/sda2  system lvm2 a--    3.99g      0
  /dev/sdb   data   lvm2 a--  152.00m      0
  /dev/sdc          lvm2 ---  153.60m 153.60m
  /dev/sdd   data   lvm2 a--  152.00m      0

ls-lvm:~ # pvremove /dev/sdc
  Labels on physical volume "/dev/sdc" successfully wiped.

ls-lvm:~ # pvs
  PV         VG     Fmt  Attr PSize   PFree
  /dev/sda2  system lvm2 a--    3.99g    0
  /dev/sdb   data   lvm2 a--  152.00m    0
  /dev/sdd   data   lvm2 a--  152.00m    0

ls-lvm:~ # fsck -f /dev/data/vol1
fsck from util-linux 2.36.2
e2fsck 1.43.8 (1-Jan-2018)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/mapper/data-vol1: 11/77824 files (0.0% non-contiguous), 20091/311296 blocks

ls-lvm:~ # mount /data

ls-lvm:~ # df -h /data
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/data-vol1  287M  2.1M  266M   1% /data
The /dev/sdc device can now be removed from the server.
 

Disclaimer

This Support Knowledgebase provides a valuable tool for SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.

  • Document ID:000020552
  • Creation Date: 20-Jan-2022
  • Modified Date:20-Jan-2022
    • SUSE Linux Enterprise Server
