Chapter 6. Defining performance tiers for varying workloads in a Ceph Storage cluster with director
You can use Red Hat OpenStack Platform (RHOSP) director to deploy different Red Hat Ceph Storage performance tiers. You can combine Ceph CRUSH rules and the CephPools director parameter to use the device classes feature and build different tiers to accommodate workloads with different performance requirements. For example, you can define an HDD class for normal workloads and an SSD class that distributes data only over SSDs for high-performance workloads. In this scenario, when you create a new Block Storage volume, you can choose the performance tier, either HDDs or SSDs.
WARNING
Defining performance tiers in an existing environment might result in massive data movement in the Ceph cluster. ceph-ansible, which director triggers during the stack update, does not have logic to check whether a pool is already defined in the cluster and whether it contains data. This means that defining performance tiers in an existing environment can be dangerous, because changing the default CRUSH rule that is associated with a pool results in data movement. If you require assistance or recommendations for adding or removing nodes, contact Red Hat support.
Ceph autodetects the disk type and assigns it to the corresponding device class, either HDD, SSD, or NVMe based on the hardware properties exposed by the Linux kernel. However, you can also customize the category according to your needs.
Prerequisites
- For new deployments, Red Hat Ceph Storage (RHCS) version 4.1 or later.
- For existing deployments, Red Hat Ceph Storage (RHCS) version 4.2 or later.
To deploy different Red Hat Ceph Storage performance tiers, create a new environment file that contains the CRUSH map details and then include it in the deployment command.
In the following procedures, each Ceph Storage node contains three OSDs: sdb and sdc are spinning disks, and sdd is an SSD. Ceph automatically detects the correct disk type. You then configure two CRUSH rules, HDD and SSD, to map to the two respective device classes. The HDD rule is the default and applies to all pools unless you configure pools with a different rule.
Finally, you create an extra pool called fastpool and map it to the SSD rule. This pool is ultimately exposed through a Block Storage (cinder) back end. Any workload that consumes this Block Storage back end is backed by SSDs only, for fast performance. You can leverage this for either data volumes or booting from volume.
6.1. Configuring the performance tiers
Director does not expose specific parameters to cover this feature; however, you can generate the variables that ceph-ansible expects by completing the following steps.
Procedure
1. Log in to the undercloud node as the stack user.
2. Create an environment file, such as /home/stack/templates/ceph-config.yaml, to contain the Ceph config parameters and the device classes variables. Alternatively, you can add the following configurations to an existing environment file.
3. In the environment file, use the CephAnsibleDisksConfig parameter to list the block devices that you want to use as Ceph OSDs:
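   A minimal sketch of this list, assuming the three example devices used in this chapter (/dev/sdb, /dev/sdc, and /dev/sdd) and a BlueStore LVM deployment; the osd_scenario and osd_objectstore settings are assumptions for that layout, so adjust the device paths and OSD settings to match your hardware:

   CephAnsibleDisksConfig:
     devices:
       - /dev/sdb
       - /dev/sdc
       - /dev/sdd
     osd_scenario: lvm          # BlueStore OSDs backed by LVM
     osd_objectstore: bluestore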
4. Optional: Ceph automatically detects the type of disk and assigns it to the corresponding device class. However, you can also use the crush_device_class property to force a specific device to belong to a specific class, or create your own custom classes. The following example contains the same list of OSDs with specified classes:
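   A sketch of the same devices with explicit classes, assuming the lvm_volumes format that ceph-ansible accepts for per-OSD settings; the hdd and ssd class names match the rules defined in the next step:

   CephAnsibleDisksConfig:
     lvm_volumes:
       - data: '/dev/sdb'
         crush_device_class: 'hdd'
       - data: '/dev/sdc'
         crush_device_class: 'hdd'
       - data: '/dev/sdd'
         crush_device_class: 'ssd'   # force the SSD into the ssd class
     osd_scenario: lvm
     osd_objectstore: bluestore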
5. Add the CephAnsibleExtraVars parameters. The crush_rules parameter must contain a rule for each class that you define or that Ceph detects automatically. When you create a new pool without specifying a rule, Ceph uses the rule that you mark as the default:
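   A sketch of these variables, assuming an HDD rule as the default and an SSD rule for the fast tier; the crush_rule_config and create_crush_tree settings are assumed ceph-ansible variables that enable rule and tree creation:

   CephAnsibleExtraVars:
     crush_rule_config: true     # let ceph-ansible create the rules
     create_crush_tree: true
     crush_rules:
       - name: HDD
         root: default
         type: host
         class: hdd
         default: true           # pools with no explicit rule use HDD
       - name: SSD
         root: default
         type: host
         class: ssd
         default: false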
6. Add the CephPools parameter:
   - Use the rule_name parameter to specify the tier for each pool that does not use the default rule. In the following example, the fastpool pool uses the SSD device class, which is configured as a fast tier, to manage Block Storage volumes.
   - Replace <appropriate_PG_num> with the appropriate number of placement groups (PGs). Alternatively, use the placement group auto-scaler to calculate the number of PGs for the Ceph pools. For more information, see Assigning custom attributes to different Ceph pools.
   - Use the CinderRbdExtraPools parameter to configure fastpool as a Block Storage back end, as shown in the example after this list.
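   A minimal sketch that combines both parameters, assuming the fastpool pool and the SSD rule defined in the previous step; adjust the values to your environment:

   CephPools:
     - name: fastpool
       pg_num: <appropriate_PG_num>
       rule_name: SSD             # place this pool on the SSD tier
       application: rbd
   CinderRbdExtraPools: fastpool   # expose fastpool as a Block Storage back end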
7. Use the following example to ensure that your environment file contains the correct values:
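   A sketch of the complete environment file, combining the snippets above under the same assumptions (example devices /dev/sdb, /dev/sdc, and /dev/sdd, an HDD default rule, and an SSD rule for fastpool):

   parameter_defaults:
     CephAnsibleDisksConfig:
       lvm_volumes:
         - data: '/dev/sdb'
           crush_device_class: 'hdd'
         - data: '/dev/sdc'
           crush_device_class: 'hdd'
         - data: '/dev/sdd'
           crush_device_class: 'ssd'
       osd_scenario: lvm
       osd_objectstore: bluestore
     CephAnsibleExtraVars:
       crush_rule_config: true
       create_crush_tree: true
       crush_rules:
         - name: HDD
           root: default
           type: host
           class: hdd
           default: true
         - name: SSD
           root: default
           type: host
           class: ssd
           default: false
     CinderRbdExtraPools: fastpool
     CephPools:
       - name: fastpool
         pg_num: <appropriate_PG_num>
         rule_name: SSD
         application: rbd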
8. Include the new environment file in the openstack overcloud deploy command, for example:
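   A sketch of the command, assuming the /home/stack/templates/ceph-config.yaml path used earlier in this procedure:

   $ openstack overcloud deploy --templates \
       -e <other_overcloud_environment_files> \
       -e /home/stack/templates/ceph-config.yaml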
   Replace <other_overcloud_environment_files> with the list of environment files that are part of your deployment.
If you apply the environment file to an existing Ceph cluster, the pre-existing Ceph pools are not updated with the new rules. For this reason, you must enter the following command after the deployment completes to set the rules on the specified pools.
$ ceph osd pool set <pool> crush_rule <rule>
- Replace <pool> with the name of the pool that you want to apply the new rule to.
- Replace <rule> with one of the rule names that you specified with the crush_rules parameter.
- Replace <appropriate_PG_num> with the appropriate number of placement groups, or with a target_size_ratio, and set pg_autoscale_mode to true.
For every rule that you change with this command, update the existing entry or add a new entry in the CephPools parameter in your existing templates:
CephPools:
  - name: <pool>
    pg_num: <appropriate_PG_num>
    rule_name: <rule>
    application: rbd
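If you use the placement group auto-scaler instead of a fixed pg_num, the following is a minimal sketch of the equivalent entry, assuming the target_size_ratio and pg_autoscale_mode pool keys that ceph-ansible accepts; the 0.3 ratio is only an illustrative value for the share of cluster capacity that you expect the pool to consume:

CephPools:
  - name: <pool>
    target_size_ratio: 0.3      # illustrative share of cluster capacity
    pg_autoscale_mode: true
    rule_name: <rule>
    application: rbd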
6.2. Mapping a Block Storage (cinder) type to your new Ceph pool
After you complete the configuration steps, make the performance tiers feature available to RHOSP tenants by using Block Storage (cinder) to create a type that is mapped to the fastpool tier that you created.
Procedure
1. Log in to the undercloud node as the stack user.
2. Source the overcloudrc file:
   $ source overcloudrc
3. Check the existing Block Storage volume types:
   $ cinder type-list
4. Create the new Block Storage volume type fast_tier:
   $ cinder type-create fast_tier
5. Check that the Block Storage type is created:
   $ cinder type-list
6. When the fast_tier Block Storage type is available, set fastpool as the Block Storage volume back end for the new tier that you created:
   $ cinder type-key fast_tier set volume_backend_name=tripleo_ceph_fastpool
7. Use the new tier to create new volumes:
   $ cinder create 1 --volume-type fast_tier --name fastdisk
6.3. Verifying that the CRUSH rules are created and that your pools are set to the correct CRUSH rule
Procedure
1. Log in to the overcloud Controller node as the heat-admin user.
2. To verify that your OSD tiers are successfully set, enter the following command. Replace <controller_hostname> with the name of your host Controller node.
   $ sudo podman exec -it ceph-mon-<controller_hostname> ceph osd tree
   In the resulting tree view, verify that the CLASS column displays the correct device class for each OSD that you set.
3. Also verify that the OSDs are correctly assigned to the device classes with the following command. Replace <controller_hostname> with the name of your host Controller node.
   $ sudo podman exec -it ceph-mon-<controller_hostname> ceph osd crush tree --show-shadow
   Compare the resulting hierarchy with the results of the following command to ensure that the same values apply for each rule. Replace <controller_hostname> with the name of your host Controller node. Replace <rule_name> with the name of the rule that you want to check.
   $ sudo podman exec <controller_hostname> ceph osd crush rule dump <rule_name>
4. Verify that the rule names and IDs that you created are correct according to the crush_rules parameter that you used during deployment. Replace <controller_hostname> with the name of your host Controller node.
   $ sudo podman exec -it ceph-mon-<controller_hostname> ceph osd crush rule dump | grep -E "rule_(id|name)"
5. Verify that the Ceph pools are tied to the correct CRUSH rule ID that you retrieved in Step 3. Replace <controller_hostname> with the name of your host Controller node.
   $ sudo podman exec -it ceph-mon-<controller_hostname> ceph osd dump | grep pool
   For each pool, ensure that the rule ID matches the rule name that you expect.