Chapter 10. Updating noautoscale flag


If you want to enable or disable the autoscaler for all the pools at the same time, you can use the noautoscale global flag. This global flag is useful during an upgrade of the storage cluster when some OSDs are bounced or when the cluster is under maintenance. You can set the flag before any activity and unset it once the activity is complete.

By default, the noautoscale flag is set to off. When this flag is set, all the pools have pg_autoscale_mode set to off and the autoscaler is disabled for all the pools.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to all the nodes.

Procedure

  1. Get the value of the noautoscale flag:

    Example

    [ceph: root@host01 /]# ceph osd pool get noautoscale

  2. Set the noautoscale flag before any activity:

    Example

    [ceph: root@host01 /]# ceph osd pool set noautoscale

  3. Unset the noautoscale flag on completion of the activity:

    Example

    [ceph: root@host01 /]# ceph osd pool unset noautoscale
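
To verify the effect on individual pools, you can review the autoscaler state that the cluster reports for each pool. This is a verification sketch; the command is standard, but the columns in its output vary by release:

Example

[ceph: root@host01 /]# ceph osd pool autoscale-status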

10.1. Specifying target pool size

A newly created pool consumes a small fraction of the total cluster capacity, so it appears to the system that it will need only a small number of PGs. However, in most cases, cluster administrators know which pools are expected to consume most of the system capacity over time. If you provide this information, known as the target size, to Red Hat Ceph Storage, such pools can use a more appropriate number of PGs (pg_num) from the beginning. This approach prevents subsequent changes in pg_num and the overhead associated with moving data around when making those adjustments.

You can specify the target size of a pool in either of these ways:

Procedure

  1. Set the target size using the absolute size of the pool in bytes:

    Syntax

    ceph osd pool set POOL_NAME target_size_bytes VALUE

    For example, to instruct the system that mypool is expected to consume 100T of space:

    [ceph: root@host01 /]# ceph osd pool set mypool target_size_bytes 100T

You can also set the target size of a pool at creation time by adding the optional --target-size-bytes <bytes> argument to the ceph osd pool create command.
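
For example, a minimal sketch of doing this at creation time, assuming a new pool named newpool and a release where pg_num can be omitted so that the autoscaler chooses it:

Example

[ceph: root@host01 /]# ceph osd pool create newpool --target-size-bytes 100T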

Procedure

  1. Set the target size using the ratio of the total cluster capacity:

    Syntax

    ceph osd pool set POOL_NAME target_size_ratio RATIO

    Example

    [ceph: root@host01 /]# ceph osd pool set mypool target_size_ratio 1.0

    This tells the system that the pool mypool is expected to consume storage in a 1.0 ratio relative to the other pools that have target_size_ratio set. If mypool is the only pool in the cluster, this means an expected use of 100% of the total capacity. If a second pool also has target_size_ratio set to 1.0, both pools are expected to use 50% of the cluster capacity.

You can also set the target size of a pool at creation time by adding the optional --target-size-ratio <ratio> argument to the ceph osd pool create command.
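
Similarly, a sketch of setting the ratio at creation time, again using the hypothetical pool name newpool:

Example

[ceph: root@host01 /]# ceph osd pool create newpool --target-size-ratio 1.0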

Note

If you specify impossible target size values, for example, a capacity larger than the total cluster capacity, or ratios that sum to more than 1.0, the cluster raises a POOL_TARGET_SIZE_RATIO_OVERCOMMITTED or POOL_TARGET_SIZE_BYTES_OVERCOMMITTED health warning.

If you specify both target_size_ratio and target_size_bytes for a pool, the cluster considers only the ratio, and raises a POOL_HAS_TARGET_SIZE_BYTES_AND_RATIO health warning.
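
If either warning is raised, you can inspect it with the standard health command; the exact wording of the warning text differs between releases:

Example

[ceph: root@host01 /]# ceph health detail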

10.2. Placement group command line interface

The ceph CLI allows you to set and get the number of placement groups for a pool, view the PG map and retrieve PG statistics.

To set the number of placement groups in a pool, you must specify the number of placement groups at the time you create the pool. See Creating a Pool for details. Once you set placement groups for a pool, you can increase the number of placement groups, but you cannot decrease it. To increase the number of placement groups, execute the following:

Syntax

ceph osd pool set POOL_NAME pg_num PG_NUM
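
For example, assuming a pool named mypool (a hypothetical name) that should grow to 128 placement groups:

Example

[ceph: root@host01 /]# ceph osd pool set mypool pg_num 128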

Once you increase the number of placement groups, you must also increase the number of placement groups for placement (pgp_num) before your cluster will rebalance. The pgp_num should be equal to the pg_num. To increase the number of placement groups for placement, execute the following:

Syntax

ceph osd pool set POOL_NAME pgp_num PGP_NUM
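
Continuing the hypothetical mypool example, match pgp_num to the new pg_num:

Example

[ceph: root@host01 /]# ceph osd pool set mypool pgp_num 128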

To get the number of placement groups in a pool, execute the following:

Syntax

ceph osd pool get POOL_NAME pg_num
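
For example, with the hypothetical mypool:

Example

[ceph: root@host01 /]# ceph osd pool get mypool pg_num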

10.2.3. Getting statistics for placement groups

To get the statistics for the placement groups in your storage cluster, execute the following:

Syntax

ceph pg dump [--format FORMAT]

Valid formats are plain (default) and json.
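
For example, to dump the statistics in JSON, which is convenient for further processing with scripts:

Example

[ceph: root@host01 /]# ceph pg dump --format json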

To get the statistics for all placement groups stuck in a specified state, execute the following:

Syntax

ceph pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]} INTERVAL

  • Inactive: Placement groups cannot process reads or writes because they are waiting for an OSD with the most up-to-date data to come up and in.

  • Unclean: Placement groups contain objects that are not replicated the desired number of times. They should be recovering.

  • Stale: Placement groups are in an unknown state because the OSDs that host them have not reported to the monitor cluster in a while (configured by mon_osd_report_timeout).

Valid formats are plain (default) and json. The INTERVAL value is a threshold that defines the minimum number of seconds a placement group must be stuck before it is included in the returned statistics (default 300 seconds).
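
For example, to list placement groups that have been stuck in the stale state for at least 300 seconds:

Example

[ceph: root@host01 /]# ceph pg dump_stuck stale 300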

10.2.5. Getting placement group maps

To get the placement group map for a particular placement group, execute the following:

Syntax

ceph pg map PG_ID

Example

[ceph: root@host01 /]# ceph pg map 1.6c

Ceph returns the placement group map, the placement group, and the OSD status:

osdmap e13 pg 1.6c (1.6c) -> up [1,0] acting [1,0]

10.2.6. Scrubbing placement groups

To scrub a placement group, execute the following:

Syntax

ceph pg scrub PG_ID

Ceph checks the primary and any replica nodes, generates a catalog of all objects in the placement group and compares them to ensure that no objects are missing or mismatched, and their contents are consistent. Assuming the replicas all match, a final semantic sweep ensures that all of the snapshot-related object metadata is consistent. Errors are reported via logs.
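
For example, to scrub the placement group shown in the previous section:

Example

[ceph: root@host01 /]# ceph pg scrub 1.6c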

10.2.7. Marking unfound objects

If the cluster has lost one or more objects, and you have decided to abandon the search for the lost data, you must mark the unfound objects as lost.

If all possible locations have been queried and objects are still lost, you might have to give up on the lost objects. This is possible given unusual combinations of failures that allow the cluster to learn about writes that were performed before the writes themselves are recovered.

Currently the only supported option is "revert", which will either roll back to a previous version of the object or (if it was a new object) forget about it entirely. To mark the "unfound" objects as "lost", execute the following:

Syntax

ceph pg PG_ID mark_unfound_lost revert|delete
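
For example, assuming the unfound objects are in placement group 1.6c, the placement group used in the earlier examples:

Example

[ceph: root@host01 /]# ceph pg 1.6c mark_unfound_lost revert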

Important

Use this feature with caution, because it might confuse applications that expect the object(s) to exist.
