Ceph health objects per Placement Group warning

This document (7018414) is provided subject to the disclaimer at the end of this document.

Environment

SUSE Enterprise Storage 3
SUSE Enterprise Storage 4

Situation

The cluster status commands, for example "ceph status", show the following warning:

HEALTH_WARN pool $pool_name has too few pgs
pool $pool_name objects per pg (XX) is more than XX times cluster average (1)

Resolution

Verify that the proper number of Placement Groups (PGs) is configured according to the number of OSDs and the average expected percentage of data usage of each pool. Depending on the cluster configuration, and after verifying the number of PGs currently configured for the existing pools:

1. If the currently configured number of PGs per pool is determined to be optimal, the warning can be ignored.
2. Alternatively, increase the maximum object skew warning threshold using:

ceph tell mon.* injectargs '--mon_pg_warn_max_object_skew XX'

The default setting is 10.

3. Increase the configured number of PGs for the pools if it is determined that this value is not set optimally; see the sketch after this list. Caution should be taken here, as PGs can currently only be increased and not decreased, and a non-optimal setting can have a negative impact on cluster performance.
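A minimal command sketch for step 3, assuming a pool named $pool_name and using 128 purely as an example value (the correct target depends on the cluster):

# Show how many OSDs the cluster has
ceph osd stat

# Show the PG count currently configured for the pool
ceph osd pool get $pool_name pg_num

# Increase pg_num first, then pgp_num so the data is actually rebalanced
ceph osd pool set $pool_name pg_num 128
ceph osd pool set $pool_name pgp_num 128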

Since determining the correct number of PGs per pool is non-trivial, please see the upstream Ceph documentation for more information.
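Note that "injectargs" only changes the value in the running monitor daemons. As a sketch of how to make the threshold from step 2 persistent, assuming the default configuration file location and using 20 purely as an example value, add the option to the [mon] section of /etc/ceph/ceph.conf on the monitor nodes and restart the monitors:

[mon]
    mon pg warn max object skew = 20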

Cause

Differences in the data usage of the existing pools can cause one pool to store many more objects per PG than the cluster average, which triggers this warning.
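As a hypothetical illustration: a pool containing 100000 objects spread over 64 PGs holds roughly 1563 objects per PG; if the cluster-wide average is only 100 objects per PG, the resulting ratio of about 15.6 exceeds the default skew threshold of 10 and the warning is reported for that pool.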

Additional Information

To verify the data usage of the configured pools and the number of objects per pool, execute "rados df", which will return something like the following:

:~ # rados df
pool name     KB        objects    clones    degraded    unfound    rd       rd KB    wr      wr KB
cache_ssd     16534     143        5         0           0          3450282  3473477  111101  46530918
data_spin     46622555  11452      69        0           0          140      479      11852   47714909

total used    70964936  11595
total avail   154320928
total space   225285864

To see the current configuration settings for the monitor (MON), query the daemon via its admin socket by executing the following on the node hosting the MON:

ceph daemon /var/run/ceph/ceph-mon.$ceph_node.asok config show
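For example, to display only the skew threshold discussed above (assuming the same admin socket path as used above), the output can be filtered with grep:

ceph daemon /var/run/ceph/ceph-mon.$ceph_node.asok config show | grep mon_pg_warn_max_object_skew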

Disclaimer

This Support Knowledgebase provides a valuable tool for SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.

  • Document ID: 7018414
  • Creation Date: 22-Dec-2016
  • Modified Date: 03-Mar-2020
    • SUSE Enterprise Storage
