Chapter 5. Pools, placement groups, and CRUSH configuration
As a storage administrator, you can choose to use the Red Hat Ceph Storage default options for pools, placement groups, and the CRUSH algorithm or customize them for the intended workload.
Prerequisites
- Installation of the Red Hat Ceph Storage software.
5.1. Pools, placement groups, and CRUSH
When you create pools and set the number of placement groups for the pool, Ceph uses default values when you do not specifically override the defaults.
Red Hat recommends overriding some of the defaults. Specifically, set a pool’s replica size and override the default number of placement groups.
You can set these values when running pool commands.
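For instance, the following is a minimal sketch of setting these values on an individual pool rather than globally; the pool name pool1 and the values shown are illustrative placeholders:
[ceph: root@host01 /]# ceph osd pool create pool1 128              # Create a pool with 128 placement groups.
[ceph: root@host01 /]# ceph osd pool set pool1 size 4              # Keep 4 copies of each object in this pool.
[ceph: root@host01 /]# ceph osd pool set pool1 min_size 2          # Allow I/O with as few as 2 copies available.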
By default, Ceph makes 3 replicas of objects. If you want to set 4 copies of an object as the default value, a primary copy and three replica copies, reset the default value of osd_pool_default_size as shown in the following example. If you want to allow Ceph to write fewer copies while in a degraded state, set osd_pool_default_min_size to a number less than the osd_pool_default_size value.
Example
[ceph: root@host01 /]# ceph config set global osd_pool_default_size 4        # Write an object 4 times.
[ceph: root@host01 /]# ceph config set global osd_pool_default_min_size 1    # Allow writing one copy in a degraded state.
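To confirm that the new defaults are in effect, you can query the configuration database; this is an optional check, and osd is one example of a valid target:
[ceph: root@host01 /]# ceph config get osd osd_pool_default_size
[ceph: root@host01 /]# ceph config get osd osd_pool_default_min_size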
Ensure you have a realistic number of placement groups. Red Hat recommends approximately 100 per OSD. For example, calculate the total number of OSDs multiplied by 100, divided by the number of replicas, that is, osd_pool_default_size. For 10 OSDs and osd_pool_default_size = 4, we would recommend approximately (100 * 10) / 4 = 250 placement groups.
Example
[ceph: root@host01 /]# ceph config set global osd_pool_default_pg_num 250
[ceph: root@host01 /]# ceph config set global osd_pool_default_pgp_num 250
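These defaults apply to pools created afterwards. For an existing pool, you can adjust the placement group count per pool; the pool name pool1 below is an illustrative placeholder:
[ceph: root@host01 /]# ceph osd pool set pool1 pg_num 250
[ceph: root@host01 /]# ceph osd pool set pool1 pgp_num 250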
Additional resources
- See all the Red Hat Ceph Storage pool, placement group, and CRUSH configuration options in Appendix E for specific option descriptions and usage.