Chapter 10. Updating the noautoscale flag
If you want to enable or disable the autoscaler for all the pools at the same time, you can use the noautoscale global flag. This global flag is useful during an upgrade of the storage cluster when some OSDs are bounced, or when the cluster is under maintenance. You can set the flag before any activity and unset it once the activity is complete.
By default, the noautoscale flag is set to off. When this flag is set, all the pools have pg_autoscale_mode set to off and the autoscaler is disabled for all the pools.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to all the nodes.
Procedure
1. Get the value of the noautoscale flag:
Example
[ceph: root@host01 /]# ceph osd pool get noautoscale
2. Set the noautoscale flag before any activity:
Example
[ceph: root@host01 /]# ceph osd pool set noautoscale
3. Unset the noautoscale flag on completion of the activity:
Example
[ceph: root@host01 /]# ceph osd pool unset noautoscale
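To confirm the per-pool autoscaler mode after changing the flag, you can review the autoscaler status. The following command assumes that the autoscale-status subcommand is available in your release; its output is cluster-specific and is omitted here:
Example
[ceph: root@host01 /]# ceph osd pool autoscale-status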
10.1. Specifying target pool size
A newly created pool consumes a small fraction of the total cluster capacity and appears to the system as though it will need only a small number of PGs. However, in most cases, cluster administrators know which pools are expected to consume most of the system capacity over time. If you provide this information, known as the target size, to Red Hat Ceph Storage, such pools can use a more appropriate number of PGs (pg_num) from the beginning. This approach prevents subsequent changes in pg_num and the overhead associated with moving data around when making those adjustments.
You can specify the target size of a pool in these ways:
10.1.1. Specifying target size using the absolute size of the pool
Procedure
Set the target size using the absolute size of the pool in bytes:
Syntax
ceph osd pool set POOL_NAME target_size_bytes VALUE
For example, to instruct the system that mypool is expected to consume 100T of space:
Example
[ceph: root@host01 /]# ceph osd pool set mypool target_size_bytes 100T
You can also set the target size of a pool at creation time by adding the optional --target-size-bytes <bytes> argument to the ceph osd pool create command.
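For example, the following command creates a hypothetical pool named mypool with an expected size of 100T at creation time; the pool name and size are illustrative:
Example
[ceph: root@host01 /]# ceph osd pool create mypool --target-size-bytes 100T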
10.1.2. Specifying target size using the total cluster capacity
Procedure
Set the target size using the ratio of the total cluster capacity:
Syntax
ceph osd pool set POOL_NAME target_size_ratio RATIO
Example
[ceph: root@host01 /]# ceph osd pool set mypool target_size_ratio 1.0
This example tells the system that the pool mypool is expected to consume a share of 1.0 relative to the other pools with target_size_ratio set. If mypool is the only pool in the cluster, this means an expected use of 100% of the total capacity. If there is a second pool with target_size_ratio as 1.0, both pools would expect to use 50% of the cluster capacity.
You can also set the target size of a pool at creation time by adding the optional --target-size-ratio <ratio> argument to the ceph osd pool create command.
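For example, the following command creates a hypothetical pool named mypool with a target ratio of 1.0 at creation time; the pool name and ratio are illustrative:
Example
[ceph: root@host01 /]# ceph osd pool create mypool --target-size-ratio 1.0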
If you specify impossible target size values, for example, a capacity larger than the total cluster capacity, or ratios that sum to more than 1.0, the cluster raises a POOL_TARGET_SIZE_RATIO_OVERCOMMITTED or POOL_TARGET_SIZE_BYTES_OVERCOMMITTED health warning.
If you specify both target_size_ratio and target_size_bytes for a pool, the cluster considers only the ratio, and raises a POOL_HAS_TARGET_SIZE_BYTES_AND_RATIO health warning.
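If either warning is raised, you can review the details with the standard health command; its output is cluster-specific and is omitted here:
Example
[ceph: root@host01 /]# ceph health detail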
10.2. Placement group command line interface
The ceph CLI allows you to set and get the number of placement groups for a pool, view the PG map, and retrieve PG statistics.
10.2.1. Setting number of placement groups in a pool
To set the number of placement groups in a pool, you must specify the number of placement groups at the time you create the pool. See Creating a Pool for details. Once you set placement groups for a pool, you can increase the number of placement groups, but you cannot decrease it. To increase the number of placement groups, execute the following:
Syntax
ceph osd pool set POOL_NAME pg_num PG_NUM
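For example, to increase the placement group count of a hypothetical pool named mypool to 128; the pool name and value are illustrative:
Example
[ceph: root@host01 /]# ceph osd pool set mypool pg_num 128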
Once you increase the number of placement groups, you must also increase the number of placement groups for placement (pgp_num) before your cluster will rebalance. The pgp_num should be equal to the pg_num. To increase the number of placement groups for placement, execute the following:
Syntax
ceph osd pool set POOL_NAME pgp_num PGP_NUM
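For example, to match the pg_num value used for the hypothetical pool mypool above:
Example
[ceph: root@host01 /]# ceph osd pool set mypool pgp_num 128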
10.2.2. Getting number of placement groups in a pool
To get the number of placement groups in a pool, execute the following:
Syntax
ceph osd pool get POOL_NAME pg_num
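For example, for a hypothetical pool named mypool; the returned value depends on the pool:
Example
[ceph: root@host01 /]# ceph osd pool get mypool pg_num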
10.2.3. Getting statistics for placement groups
To get the statistics for the placement groups in your storage cluster, execute the following:
Syntax
ceph pg dump [--format FORMAT]
Valid formats are plain (default) and json.
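For example, to dump placement group statistics in JSON format; the output is cluster-specific and is omitted here:
Example
[ceph: root@host01 /]# ceph pg dump --format json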
10.2.4. Getting statistics for stuck placement groups
To get the statistics for all placement groups stuck in a specified state, execute the following:
Syntax
ceph pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]} INTERVAL
Inactive: Placement groups cannot process reads or writes because they are waiting for an OSD with the most up-to-date data to come up and in.
Unclean: Placement groups contain objects that are not replicated the desired number of times. They should be recovering.
Stale: Placement groups are in an unknown state; the OSDs that host them have not reported to the monitor cluster in a while (configured by mon_osd_report_timeout).
Valid formats are plain (default) and json. The INTERVAL value defines the minimum number of seconds the placement group must be stuck before it is included in the returned statistics (default 300 seconds).
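For example, to list placement groups that have been stuck in the stale state for at least 300 seconds; the state and interval are illustrative:
Example
[ceph: root@host01 /]# ceph pg dump_stuck stale 300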
10.2.5. Getting placement group maps
To get the placement group map for a particular placement group, execute the following:
Syntax
ceph pg map PG_ID
Example
[ceph: root@host01 /]# ceph pg map 1.6c
Ceph returns the placement group map, the placement group, and the OSD status:
osdmap e13 pg 1.6c (1.6c) -> up [1,0] acting [1,0]
10.2.6. Scrubbing placement groups
To scrub a placement group, execute the following:
Syntax
ceph pg scrub PG_ID
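For example, to scrub the placement group from the earlier map example; the placement group ID is illustrative:
Example
[ceph: root@host01 /]# ceph pg scrub 1.6c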
Ceph checks the primary and any replica nodes, generates a catalog of all objects in the placement group and compares them to ensure that no objects are missing or mismatched, and their contents are consistent. Assuming the replicas all match, a final semantic sweep ensures that all of the snapshot-related object metadata is consistent. Errors are reported via logs.
10.2.7. Marking unfound objects
If the cluster has lost one or more objects, and you have decided to abandon the search for the lost data, you must mark the unfound objects as lost.
If all possible locations have been queried and objects are still lost, you might have to give up on the lost objects. This is possible given unusual combinations of failures that allow the cluster to learn about writes that were performed before the writes themselves are recovered.
The supported options are revert and delete. The revert option either rolls back to a previous version of the object or, if it was a new object, forgets about it entirely; the delete option discards the object entirely. To mark the "unfound" objects as "lost", execute the following:
Syntax
ceph pg PG_ID mark_unfound_lost revert|delete
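For example, to revert the unfound objects in the placement group used in the earlier examples; the placement group ID is illustrative:
Example
[ceph: root@host01 /]# ceph pg 1.6c mark_unfound_lost revert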
Use this feature with caution, because it might confuse applications that expect the object(s) to exist.