Chapter 9. Using director to define performance tiers for varying workloads
Red Hat OpenStack Platform (RHOSP) director can deploy different Red Hat Ceph Storage performance tiers. Ceph Storage CRUSH rules, combined with the CephPools parameter, use the device classes feature to build tiers that accommodate workloads with different performance requirements.
For example, you can define an HDD class for normal workloads and an SSD class that distributes data only over SSDs for high-performance loads. In this scenario, when you create a new Block Storage volume, you can choose the performance tier, either HDDs or SSDs.
For more information on CRUSH rule creation, see Configuring CRUSH hierarchies.
Defining performance tiers in an existing environment can result in data movement in the Ceph Storage cluster. Director uses cephadm during the stack update, and the cephadm application does not have logic to verify whether a pool already exists and contains data. Changing the default CRUSH rule that is associated with a pool results in data movement. If the pool contains a large amount of data, that data is moved.
If you require assistance or recommendations for adding or removing nodes, contact Red Hat support.
Ceph Storage automatically detects the disk type and assigns it to the corresponding device class, either HDD, SSD, or NVMe, based on the hardware properties exposed by the Linux kernel.
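If you want to inspect or override this automatic classification on an already deployed cluster, you can use the standard Ceph device class commands. The following is a minimal sketch; osd.2 is only an example OSD ID, and you run the commands from a node that has access to the Ceph cluster:

    # List the device classes that Ceph currently knows about
    $ sudo cephadm shell -- ceph osd crush class ls
    # Reassign an OSD whose class was detected incorrectly (clear the old class first)
    $ sudo cephadm shell -- ceph osd crush rm-device-class osd.2
    $ sudo cephadm shell -- ceph osd crush set-device-class ssd osd.2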
Prerequisites
- For new deployments, use Red Hat Ceph Storage (RHCS) version 5.2 or later.
9.1. Configuring performance tiers
To deploy different Red Hat Ceph Storage performance tiers, create a new environment file that contains the CRUSH map details and include it in the deployment command. Director does not expose specific parameters for this feature, but you can generate the variables that tripleo-ansible expects.
Performance tier configuration can be combined with CRUSH hierarchies. See Configuring CRUSH hierarchies for information on CRUSH rule creation.
In the example procedure, each Ceph Storage node contains three OSDs: sdb and sdc are spinning disks and sdd is an SSD. Ceph automatically detects the correct disk type. You then configure two CRUSH rules, HDD and SSD, that map to the two respective device classes.
The HDD rule is the default and applies to all pools unless you configure pools with a different rule.
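For reference, the HDD and SSD rules that this procedure defines are roughly equivalent to replicated CRUSH rules that you could create manually with commands like the following. This is a sketch for illustration only; in this chapter, director creates the rules for you from the CephCrushRules configuration:

    # Replicated rules restricted to one device class, with host as the failure domain
    $ sudo cephadm shell -- ceph osd crush rule create-replicated HDD default host hdd
    $ sudo cephadm shell -- ceph osd crush rule create-replicated SSD default host ssd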
Finally, you create an extra pool called fastpool and map it to the SSD rule. This pool is ultimately exposed through a Block Storage (cinder) back end. Any workload that consumes this Block Storage back end is backed by SSDs only, for fast performance. You can use it either for data volumes or to boot from volume.
WARNING
Defining performance tiers in an existing environment might result in massive data movement in the Ceph cluster. cephadm, which director triggers during the stack update, does not have logic to verify whether a pool is already defined in the Ceph cluster and whether it contains data. This means that defining performance tiers in an existing environment can be dangerous, because changing the default CRUSH rule that is associated with a pool results in data movement. If you require assistance or recommendations for adding or removing nodes, contact Red Hat support.
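Before you change the rule of an existing pool, you can check how much data the pool contains and which CRUSH rule it currently uses. This is a minimal sketch; <pool> is a placeholder for one of your existing pool names:

    # Show per-pool usage so you can estimate how much data a rule change would move
    $ sudo cephadm shell -- ceph df
    # Show the CRUSH rule that the pool currently uses
    $ sudo cephadm shell -- ceph osd pool get <pool> crush_rule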
Procedure
- Log in to the undercloud node as the stack user.
- Create an environment file, such as /home/stack/templates/ceph-config.yaml, to contain the Ceph config parameters and the device classes variables. Alternatively, you can add the following configurations to an existing environment file.
- Add the CephCrushRules parameter. The crush_rules parameter must contain a rule for each class that you define or that Ceph detects automatically. When you create a new pool, if no rule is specified, the rule that you want Ceph to use as the default is selected.

    CephCrushRules:
      crush_rules:
        - name: HDD
          root: default
          type: host
          class: hdd
          default: true
        - name: SSD
          root: default
          type: host
          class: ssd
          default: false
- Add the CephPools parameter:
  - Use the rule_name parameter to specify the tier for each pool that does not use the default rule. In the following example, the fastpool pool uses the SSD device class, which is configured as a fast tier, to manage Block Storage volumes.
  - Use the CinderRbdExtraPools parameter to configure fastpool as a Block Storage back end, as shown in the example after this step.

    CephPools:
      - name: fastpool
        rule_name: SSD
        application: rbd
    CinderRbdExtraPools: fastpool
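After the deployment, one way to let users select the fast tier is to create a Block Storage volume type that points at the back end that CinderRbdExtraPools adds for fastpool. The following is a minimal sketch, not the definitive procedure: fast_tier and fast_volume are example names, and tripleo_ceph_fastpool is only an assumed volume_backend_name value, so check the back-end name that your deployment actually generates in the Block Storage configuration before you use it:

    # Create a volume type bound to the SSD-backed back end (example back-end name)
    $ openstack volume type create fast_tier
    $ openstack volume type set fast_tier --property volume_backend_name=tripleo_ceph_fastpool
    # Create a volume on the fast tier
    $ openstack volume create --type fast_tier --size 10 fast_volume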
- Use the following example to ensure that your environment file contains the correct values:

    parameter_defaults:
      CephCrushRules:
        crush_rules:
          - name: HDD
            root: default
            type: host
            class: hdd
            default: true
          - name: SSD
            root: default
            type: host
            class: ssd
            default: false
      CinderRbdExtraPools: fastpool
      CephPools:
        - name: fastpool
          rule_name: SSD
          application: rbd
- Include the new environment file in the openstack overcloud deploy command.

    $ openstack overcloud deploy \
      --templates \
      …
      -e <other_overcloud_environment_files> \
      -e /home/stack/templates/ceph-config.yaml \
      …
- Replace <other_overcloud_environment_files> with the list of other environment files that are part of your deployment.
If you apply the environment file to an existing Ceph cluster, the pre-existing Ceph pools are not updated with the new rules. For this reason, you must enter the following command after the deployment completes to set the rules on the specified pools.

    $ ceph osd pool set <pool> crush_rule <rule>
- Replace <pool> with the name of the pool that you want to apply the new rule to.
- Replace <rule> with one of the rule names that you specified with the crush_rules parameter.
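For example, to move a hypothetical pre-existing pool named volumes to the SSD tier that this chapter defines, you would run a command similar to the following; the pool and rule names are illustrative:

    $ ceph osd pool set volumes crush_rule SSD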
For every rule that you change with this command, update the existing entry or add a new entry in the CephPools parameter in your existing templates:

    CephPools:
      - name: <pool>
        rule_name: <rule>
        application: rbd
9.2. Verifying CRUSH rules and pools
Verify your CRUSH rule and pool settings.
WARNING
Defining performance tiers in an existing environment might result in massive data movement in the Ceph cluster. tripleo-ansible, which director triggers during the stack update, does not have logic to check whether a pool is already defined in the Ceph cluster and whether it contains data. This means that defining performance tiers in an existing environment can be dangerous, because changing the default CRUSH rule that is associated with a pool results in data movement. If you require assistance or recommendations for adding or removing nodes, contact Red Hat support.
Procedure
- Log in to the overcloud Controller node as the tripleo-admin user.
- To verify that your OSD tiers are successfully set, enter the following command.

    $ sudo cephadm shell ceph osd tree
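Output similar to the following illustrative example is expected for the scenario in this chapter, where each node has two HDD OSDs and one SSD OSD. The IDs, weights, and host names are examples only and differ in your environment:

    ID  CLASS  WEIGHT   TYPE NAME                STATUS  REWEIGHT  PRI-AFF
    -1         2.72899  root default
    -3         0.90970      host ceph-storage-0
     0    hdd  0.45479          osd.0                up   1.00000  1.00000
     1    hdd  0.45479          osd.1                up   1.00000  1.00000
     2    ssd  0.00980          osd.2                up   1.00000  1.00000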
- In the resulting tree view, verify that the CLASS column displays the correct device class for each OSD that you set.
- Also verify that the OSDs are correctly assigned to the device classes with the following command.

    $ sudo cephadm shell ceph osd crush tree --show-shadow
- Compare the resulting hierarchy with the results of the following command to ensure that the same values apply for each rule.

    $ sudo cephadm shell ceph osd crush rule dump <rule_name>
- Replace <rule_name> with the name of the rule you want to check.
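For a device-class rule, the dump takes the shadow root for that class, for example default~ssd for the SSD rule. The following trimmed output is illustrative only; rule IDs and item numbers differ in your cluster:

    {
        "rule_id": 2,
        "rule_name": "SSD",
        "type": 1,
        "steps": [
            { "op": "take", "item": -6, "item_name": "default~ssd" },
            { "op": "chooseleaf_firstn", "num": 0, "type": "host" },
            { "op": "emit" }
        ]
    }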
- Verify that the rule names and IDs that you created are correct according to the crush_rules parameter that you used during deployment.

    $ sudo cephadm shell ceph osd crush rule dump | grep -E "rule_(id|name)"
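The output lists one rule_id and rule_name pair per rule. The following example is illustrative only; your rule IDs can differ, and additional default rules might also appear:

    "rule_id": 1,
    "rule_name": "HDD",
    "rule_id": 2,
    "rule_name": "SSD",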
- Verify that the Ceph pools are tied to the correct CRUSH rule IDs that you retrieved in the previous step.

    $ sudo cephadm shell -- ceph osd dump | grep pool
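Each pool line in the output includes a crush_rule field. For example, an illustrative line for fastpool might look like the following, where the crush_rule value must match the ID of the SSD rule; the numbers shown are examples only:

    pool 6 'fastpool' replicated size 3 min_size 2 crush_rule 2 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on …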
- For each pool, ensure that the rule ID matches the rule name that you expect.