ceph.stage.deploy hanging

This document (7023193) is provided subject to the disclaimer at the end of this document.

Environment

SUSE Enterprise Storage 5

Situation

After a disk failure, the new disk cannot be deployed because 'deepsea stage run ceph.stage.deploy' hangs.
The command "salt '*' cephprocesses.check results=True" reports no OSDs on the affected node, even though multiple OSD processes are running there.
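To confirm the mismatch, compare the output of cephprocesses.check with the OSD daemons actually running on the minion. A minimal check, assuming the affected node has the minion ID 'osd-node1' (substitute your own minion name):

   # Query DeepSea's process check for the affected minion only
   salt 'osd-node1' cephprocesses.check results=True

   # On the minion itself, list the ceph-osd daemons that are actually running
   ps -ef | grep [c]eph-osd

If cephprocesses.check returns no OSDs while ceph-osd processes appear in the process list, the minion most likely carries the leftovers described below.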

Resolution

1. Remove the empty OSD directory /var/lib/ceph/osd/ceph-$OSD on the minion (example commands for all three steps follow the list).

2. Remove the leftover OSD details and partition entries from /etc/salt/grains on the minion.

3. Run ceph.stage.deploy again.
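The following commands illustrate the procedure. They assume the stale OSD has ID 12 and the affected minion is 'osd-node1'; substitute the actual OSD ID and minion name, and back up /etc/salt/grains before editing it:

   # 1. On the minion: verify the OSD directory is empty, then remove it
   ls -la /var/lib/ceph/osd/ceph-12
   rmdir /var/lib/ceph/osd/ceph-12

   # 2. On the minion: back up the grains file, then delete the stale OSD and partition entries
   cp /etc/salt/grains /etc/salt/grains.bak
   vi /etc/salt/grains

   # Refresh the minion's grains so the master sees the change
   # (saltutil.refresh_grains is assumed to be available in this Salt version)
   salt 'osd-node1' saltutil.refresh_grains

   # 3. On the admin node: rerun the deployment stage
   deepsea stage run ceph.stage.deploy

The exact layout of the OSD entries in /etc/salt/grains varies between deployments, so remove only the entries that refer to the failed OSD and its partitions.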

Cause

The issue is caused by leftovers on the minion node after a failed OSD removal procedure. Engineering has been informed, and the OSD removal process is being reworked to avoid similar situations in the future.

Disclaimer

This Support Knowledgebase provides a valuable tool for SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.

  • Document ID: 7023193
  • Creation Date: 23-Jul-2018
  • Modified Date: 03-Mar-2020
  • SUSE Enterprise Storage
