SUSE Support


With the pg_autoscaler Manager module enabled the cluster shows a "many more objects per pg than average" warning

This document (000020361) is provided subject to the disclaimer at the end of this document.


Environment

SUSE Enterprise Storage 7
SUSE Enterprise Storage 6


Situation

The cluster is configured with RBD (RADOS Block Device) images that use an EC (Erasure Coded) data pool and, because RBD image metadata cannot be stored in an EC pool, a replicated metadata pool.

The cluster health status reports a warning similar to the following example:
MANY_OBJECTS_PER_PG <$X> pools have many more objects per pg than average
    pool <$pool_name> objects per pg (209000) is more than 10.7800 times cluster average (19400)
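As a rough illustration of when this warning fires, the sketch below (not Ceph's actual code) applies the check that the numbers in the example imply: a pool warns when its objects-per-PG count exceeds the cluster average by more than a skew factor (the "mon_pg_warn_max_object_skew" option, which defaults to 10).

```python
# Hedged sketch of the MANY_OBJECTS_PER_PG check, not Ceph's real code.
# A pool triggers the warning when its objects-per-PG exceeds the cluster
# average by more than a skew factor (default assumed to be 10).

def exceeds_skew(pool_opg: float, cluster_avg_opg: float, skew: float = 10.0) -> bool:
    """True when the pool's objects-per-PG exceeds `skew` times the cluster average."""
    return pool_opg > skew * cluster_avg_opg

# Numbers from the warning example: 209000 objects per PG in the pool
# versus a cluster average of 19400.
print(209000 / 19400)               # ~10.77, above the default skew of 10
print(exceeds_skew(209000, 19400))  # True -> the health warning fires
```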


Resolution

Consider setting the "target_size_ratio" for the metadata pools to 0 using the command:
ceph osd pool set <$ins_metadatapool_name> target_size_ratio 0


Cause

The metadata pools were configured with the same "target_size_ratio" as the EC data pools.

Additional Information

Because the metadata pools have the same "target_size_ratio" set as the data pools, the pg_autoscaler MGR (Manager) module expects the cluster's data to be distributed evenly across the metadata pools as well, and therefore assigns a high PG (Placement Group) count to the metadata pools too. However, since these metadata pools normally contain very little actual data and very few objects compared to the data pools, the result is the health warning.
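The effect of the ratio can be sketched as follows (a simplified model, not the module's real implementation; the pool names are made-up illustrations): with "target_size_ratio" set, the autoscaler expects each pool to eventually hold a share of the cluster's data proportional to its ratio relative to the sum of all ratios.

```python
# Simplified model of how target_size_ratio values translate into an
# expected data share per pool. Pool names below are hypothetical.

def expected_share(ratios):
    """Expected fraction of cluster data per pool, from target_size_ratio values."""
    total = sum(ratios.values())
    return {pool: r / total for pool, r in ratios.items()}

# Metadata pool configured with the same ratio as the EC data pool:
print(expected_share({"rbd_data_ec": 1.0, "rbd_metadata": 1.0}))
# Both pools are expected to hold half the data, so the near-empty
# metadata pool is assigned a similarly high PG count.

# With the metadata pool's target_size_ratio set to 0:
print(expected_share({"rbd_data_ec": 1.0, "rbd_metadata": 0.0}))
# The metadata pool's expected share drops to 0 and its PG count can
# shrink to match its actual (small) usage.
```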

The current pg_autoscaler values for the pools, including the current ratios, can be seen with:
ceph osd pool autoscale-status
To first test what will happen without any actual changes occurring, set one of the metadata pools to "warn" mode:
ceph osd pool set <$ins_metadatapool_name> pg_autoscale_mode warn

After adjusting the "target_size_ratio" for this metadata pool, the PG count that the pool will receive once the mode is set back to "on" is shown by the status command under the "NEW PG_NUM" column heading. If the value looks sane (32 PGs, for example), implement the setting by switching the autoscale mode back to "on".
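To give a feel for where a value like 32 comes from, the sketch below is a hedged approximation (not the pg_autoscaler's exact algorithm, which also accounts for replication and EC overhead): the autoscaler aims for roughly "mon_target_pg_per_osd" PGs per OSD (100 by default), splits that budget by each pool's expected data share, and rounds to a power of two. The pool share and OSD count are made-up illustration values.

```python
# Approximation of how a "NEW PG_NUM" value might be derived; not the
# module's real code. Share and OSD count below are illustrative only.

def nearest_power_of_two(n: float) -> int:
    """Round n to the nearest power of two, with a floor of 1."""
    if n <= 1:
        return 1
    lower = 1 << (int(n).bit_length() - 1)
    upper = lower * 2
    return lower if n - lower < upper - n else upper

def suggested_pg_num(data_share: float, num_osds: int, target_pg_per_osd: int = 100) -> int:
    """PG budget for one pool: its data share of ~target_pg_per_osd PGs per OSD."""
    return nearest_power_of_two(data_share * num_osds * target_pg_per_osd)

# A pool expected to hold 3% of the data on a 10-OSD cluster:
print(suggested_pg_num(0.03, 10))  # 32 -- a sane small value, as in the text
```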

For additional information regarding the pg_autoscaler MGR module, see the SES online documentation as well as the upstream Ceph documentation.


Disclaimer

This Support Knowledgebase provides a valuable tool for SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.

  • Document ID: 000020361
  • Creation Date: 10-Aug-2021
  • Modified Date: 10-Aug-2021
  • SUSE Enterprise Storage
