Chapter 7. Setting placement group auto-scaling modes
Each pool in the Red Hat Ceph Storage cluster has a pg_autoscale_mode property for PGs that you can set to off, on, or warn.
- off: Disables auto-scaling for the pool. It is up to the administrator to choose an appropriate PG number for each pool. Refer to the Placement group count section for more information.
- on: Enables automated adjustments of the PG count for the given pool.
- warn: Raises health alerts when the PG count needs adjustment.
In Red Hat Ceph Storage 5 and later releases, pg_autoscale_mode is on by default. Upgraded storage clusters retain their existing pg_autoscale_mode setting, while newly created pools have pg_autoscale_mode set to on. The PG count is adjusted automatically, and ceph status might display a recovering state during PG count adjustment.
The autoscaler uses the bulk flag to determine which pools should start with a full complement of PGs; such pools scale down only when the usage ratio across the pool is uneven. If a pool does not have the bulk flag, it starts with minimal PGs, and the PG count is increased only when there is more usage in the pool.
The autoscaler identifies any overlapping roots and prevents the pools with such roots from scaling because overlapping roots can cause problems with the scaling process.
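To see how these modes and flags apply across the cluster, you can inspect the autoscaler's view of each pool. The following is a sketch that assumes a running cluster; the exact columns shown can vary by release:

```shell
# Show the autoscaler's status for every pool, including the current
# PG_NUM, the suggested NEW PG_NUM, the AUTOSCALE mode, and the BULK flag.
ceph osd pool autoscale-status
```

Pools whose roots overlap are listed here as well, which can help explain why a given pool is not being scaled.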
Procedure
Enable auto-scaling on an existing pool:
Syntax
ceph osd pool set POOL_NAME pg_autoscale_mode on
Example
[ceph: root@host01 /]# ceph osd pool set testpool pg_autoscale_mode on
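To confirm the mode took effect, you can read it back from the pool. This sketch reuses the testpool example above and assumes a running cluster:

```shell
# Read back the auto-scaling mode that was just set on the pool.
ceph osd pool get testpool pg_autoscale_mode
```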
Enable auto-scaling on a newly created pool:
Syntax
ceph config set global osd_pool_default_pg_autoscale_mode MODE
Example
[ceph: root@host01 /]# ceph config set global osd_pool_default_pg_autoscale_mode on
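The cluster-wide default can be read back as well. This is a sketch assuming a running cluster; querying the osd section reflects the global value set above:

```shell
# Read back the default auto-scaling mode applied to newly created pools.
ceph config get osd osd_pool_default_pg_autoscale_mode
```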
Create a pool with the bulk flag:
Syntax
ceph osd pool create POOL_NAME --bulk
Example
[ceph: root@host01 /]# ceph osd pool create testpool --bulk
Set or unset the bulk flag for an existing pool:
Important
The values must be written as true, false, 1, or 0. 1 is equivalent to true and 0 is equivalent to false. If written with different capitalization, or with other content, an error is emitted. The following is an example of the command written with the wrong syntax:
[ceph: root@host01 /]# ceph osd pool set ec_pool_overwrite bulk True
Error EINVAL: expecting value 'true', 'false', '0', or '1'
Syntax
ceph osd pool set POOL_NAME bulk true/false/1/0
Example
[ceph: root@host01 /]# ceph osd pool set testpool bulk true
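Because the bulk value is validated strictly, scripts can check it before calling ceph. The helper below is a hypothetical sketch (not part of Ceph) that mirrors the accepted values from the Important note above:

```shell
#!/bin/sh
# Hypothetical helper: accept only the exact lowercase values that
# `ceph osd pool set POOL_NAME bulk` accepts (true, false, 1, 0).
validate_bulk_value() {
  case "$1" in
    true|false|1|0)
      return 0 ;;
    *)
      echo "invalid bulk value: $1 (expecting 'true', 'false', '0', or '1')" >&2
      return 1 ;;
  esac
}

# Example guard around the real command (POOL and VALUE are placeholders):
# validate_bulk_value "$VALUE" && ceph osd pool set "$POOL" bulk "$VALUE"
```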
Get the bulk flag of an existing pool:
Syntax
ceph osd pool get POOL_NAME bulk
Example
[ceph: root@host01 /]# ceph osd pool get testpool bulk
bulk: true