Chapter 5. Customizing the storage service
The director heat template collection contains the necessary templates and environment files to enable a basic Ceph Storage configuration.
Director uses the /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml
environment file to add configuration to the Ceph Storage cluster deployed by openstack overcloud ceph deploy
and integrate it with your overcloud during deployment.
5.1. Configuring a custom environment file
Director applies basic, default settings to the deployed Red Hat Ceph Storage cluster. You must define additional configuration in a custom environment file.
Procedure
- Log in to the undercloud as the stack user and create a file to define the custom configuration:

  vi /home/stack/templates/storage-config.yaml
- Add a parameter_defaults section to the file and add the custom configuration parameters. For more information about parameter definitions, see Overcloud parameters.

  parameter_defaults:
    CinderEnableIscsiBackend: false
    CinderEnableRbdBackend: true
    CinderBackupBackend: ceph
    NovaEnableRbdBackend: true
    GlanceBackend: rbd
  Note: Parameters defined in a custom configuration file override any corresponding default settings in /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml.
- Save the file.
The custom configuration is applied when you include the environment file with the -e option in the openstack overcloud deploy command.
5.2. Red Hat Ceph Storage placement groups
Placement groups (PGs) facilitate dynamic and efficient object tracking at scale. In the event of OSD failure or Ceph Storage cluster rebalancing, Ceph can move or replicate a placement group and the contents of the placement group. This allows a Ceph Storage cluster to rebalance and recover efficiently.
The placement group and replica count settings are not changed from the defaults unless the following parameters are included in a Ceph configuration file:
- osd_pool_default_size
- osd_pool_default_pg_num
- osd_pool_default_pgp_num
When the overcloud is deployed with the openstack overcloud deploy command, a pool is created for every enabled Red Hat OpenStack Platform service. For example, the following command creates pools for the Compute service (nova), the Block Storage service (cinder), and the Image service (glance):

openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm-rbd-only.yaml
Adding -e environments/cinder-backup.yaml to the command creates a pool called backups:

openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm-rbd-only.yaml \
  -e environments/cinder-backup.yaml
It is not necessary to configure a placement group number per pool because the pg_autoscale_mode attribute is enabled by default. However, it is recommended that you configure the target_size_ratio or pg_num attribute to minimize data rebalancing.
To set the target_size_ratio attribute per pool, use a configuration file entry similar to the following example:

parameter_defaults:
  CephPools:
    - name: volumes
      target_size_ratio: 0.4
      application: rbd
    - name: images
      target_size_ratio: 0.1
      application: rbd
    - name: vms
      target_size_ratio: 0.3
      application: rbd
In this example, the percentage of data used per service will be:
- Cinder volumes - 40%
- Glance images - 10%
- Nova vms - 30%
- Free space for other pools - 20%
Set these values based on your expected usage. If you do not override the CephPools parameter, each pool uses the default placement group number. Although the autoscaler adjusts this number automatically over time based on usage, the resulting data movement within the Ceph cluster consumes computational resources.
If you prefer to set a placement group number instead of a target size ratio, replace target_size_ratio in the example with pg_num, and use a different integer per pool based on your expected usage.
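The following is a minimal sketch of that variant. The pg_num values shown are illustrative assumptions, not recommendations from this guide; size them for your own expected usage.

parameter_defaults:
  CephPools:
    - name: volumes
      pg_num: 128    # illustrative value only
      application: rbd
    - name: images
      pg_num: 32     # illustrative value only
      application: rbd
    - name: vms
      pg_num: 64     # illustrative value only
      application: rbd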
See the Red Hat Ceph Storage Hardware Guide for Red Hat Ceph Storage processor, network interface card, and power management interface recommendations.
5.3. Enabling Ceph Metadata Server
The Ceph Metadata Server (MDS) runs the ceph-mds
daemon. This daemon manages metadata related to files stored on CephFS. CephFS can be consumed natively or through the NFS protocol.
Red Hat supports deploying Ceph MDS with the native CephFS and CephFS NFS back ends for the Shared File Systems service (manila).
Procedure
To enable Ceph MDS, use the following environment file when you deploy the overcloud:
/usr/share/openstack-tripleo-heat-templates/environments/cephadm/ceph-mds.yaml
By default, Ceph MDS is deployed on the Controller node. You can deploy Ceph MDS on its own dedicated node.
5.4. Ceph Object Gateway object storage
The Ceph Object Gateway (RGW) provides an interface to access object storage capabilities within a Red Hat Ceph Storage cluster.
When you use director to deploy Ceph, director automatically enables RGW. This is a direct replacement for the Object Storage service (swift). Services that normally use the Object Storage service can use RGW instead without additional configuration. The Object Storage service remains available as an object storage option for upgraded Ceph clusters.
You do not need a separate RGW environment file to enable it. For more information about environment files for other object storage options, see Section 5.5, “Deployment options for Red Hat OpenStack Platform object storage”.
By default, Ceph Storage allows 250 placement groups per Object Storage Daemon (OSD). When you enable RGW, Ceph Storage creates the following six additional pools required by RGW:
- .rgw.root
- <zone_name>.rgw.control
- <zone_name>.rgw.meta
- <zone_name>.rgw.log
- <zone_name>.rgw.buckets.index
- <zone_name>.rgw.buckets.data
In your deployment, <zone_name>
is replaced with the name of the zone to which the pools belong.
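For example, in a deployment that uses the default zone name, default, the RGW pools would be similar to the following list (shown for illustration only):

.rgw.root
default.rgw.control
default.rgw.meta
default.rgw.log
default.rgw.buckets.index
default.rgw.buckets.data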
Additional resources
- For more information about RGW, see the Red Hat Ceph Storage Object Gateway Guide.
- For more information about using RGW instead of Swift, see the Backing up Block Storage volumes guide.
5.5. Deployment options for Red Hat OpenStack Platform object storage
There are three options for deploying overcloud object storage:
Ceph Object Gateway (RGW)
To deploy RGW as described in Section 5.4, “Ceph Object Gateway object storage”, include the following environment file during overcloud deployment:
-e environments/cephadm/cephadm.yaml
This environment file configures both Ceph block storage (RBD) and RGW.
Object Storage service (swift)
To deploy the Object Storage service (swift) instead of RGW, include the following environment file during overcloud deployment:
-e environments/cephadm/cephadm-rbd-only.yaml
The cephadm-rbd-only.yaml file configures Ceph RBD but not RGW.
Note: If you used the Object Storage service (swift) before upgrading your Red Hat Ceph Storage cluster, you can continue to use the Object Storage service (swift) instead of RGW by replacing the environments/ceph-ansible/ceph-ansible.yaml file with the environments/cephadm/cephadm-rbd-only.yaml file during the upgrade. For more information, see Performing a minor update of Red Hat OpenStack Platform.
Red Hat OpenStack Platform does not support migration from the Object Storage service (swift) to Ceph Object Gateway (RGW).
No object storage
To deploy Ceph with RBD but not with RGW or the Object Storage service (swift), include the following environment files during overcloud deployment:
-e environments/cephadm/cephadm-rbd-only.yaml
-e environments/disable-swift.yaml
The cephadm-rbd-only.yaml file configures RBD but not RGW. The disable-swift.yaml file ensures that the Object Storage service (swift) does not deploy.
5.6. Configuring the Block Storage Backup Service to use Ceph
The Block Storage Backup service (cinder-backup) is disabled by default. You must enable it to use it with Ceph.
Procedure
To enable the Block Storage Backup service (cinder-backup), use the following environment file when you deploy the overcloud:
/usr/share/openstack-tripleo-heat-templates/environments/cinder-backup.yaml
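If Ceph is also the intended backup target, note that the custom environment file example in Section 5.1, “Configuring a custom environment file” already selects Ceph for this service. A minimal sketch of only that setting is shown below:

parameter_defaults:
  # Use Ceph as the back end for the Block Storage Backup service
  CinderBackupBackend: ceph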
5.7. Configuring multiple bonded interfaces for Ceph nodes
Use a bonded interface to combine multiple NICs and add redundancy to a network connection. If you have enough NICs on your Ceph nodes, you can create multiple bonded interfaces on each node to increase redundancy.
Use a bonded interface for each network connection the node requires. This provides both redundancy and a dedicated connection for each network.
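The following is a minimal sketch of two bonds in the os-net-config style used by director NIC configuration templates. The NIC names (nic2 through nic5), bond names, and bonding options are assumptions for illustration; adapt them by using the procedures in the guide referenced below.

network_config:
  # Bond for one storage-related network, built from two assumed NICs
  - type: linux_bond
    name: bond_storage
    bonding_options: "mode=active-backup"
    members:
      - type: interface
        name: nic2
      - type: interface
        name: nic3
    # VLAN and IP configuration for this network goes here
  # Second bond for another network that the node requires
  - type: linux_bond
    name: bond_storage_mgmt
    bonding_options: "mode=active-backup"
    members:
      - type: interface
        name: nic4
      - type: interface
        name: nic5
    # VLAN and IP configuration for this network goes here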
See Provisioning the overcloud networks in the Installing and managing Red Hat OpenStack Platform with director guide for information and procedures.