Chapter 5. Customizing the storage service for HCI
Red Hat OpenStack Platform (RHOSP) director provides the necessary heat templates and environment files to enable a basic Ceph Storage configuration.
Director uses the /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml environment file to add additional configuration to the Ceph cluster deployed by openstack overcloud ceph deploy.
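The following is a minimal sketch of how the two commands relate. The node provisioning environment file name and the output path are illustrative assumptions; adjust them to match your environment:

# Deploy the Ceph cluster first and capture its settings in an environment file.
$ openstack overcloud ceph deploy \
    /home/stack/templates/overcloud-baremetal-deployed.yaml \
    --output /home/stack/templates/deployed-ceph.yaml

# Finalize the cluster during overcloud deployment by including cephadm.yaml
# together with the generated environment file.
$ openstack overcloud deploy --templates \
    -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml \
    -e /home/stack/templates/deployed-ceph.yaml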
For more information about containerized services in RHOSP, see Configuring a basic overcloud with the CLI tools in Director Installation and Usage.
5.1. Configuring Compute service resources for HCI
Colocating Ceph OSD and Compute services on hyperconverged nodes risks resource contention between Red Hat Ceph Storage and the Compute service, because neither service is aware of the colocation. Resource contention can result in service degradation, which offsets the benefits of hyperconvergence.
Configuring the resources used by the Compute service mitigates resource contention and improves HCI performance.
Procedure
- Log in to the undercloud host as the stack user.
- Source the stackrc undercloud credentials file:

  $ source ~/stackrc

- Add the NovaReservedHostMemory parameter to the ceph-overrides.yaml file. The following is a usage example:

  parameter_defaults:
    ComputeHCIParameters:
      NovaReservedHostMemory: 75000
The NovaReservedHostMemory parameter overrides the default value of reserved_host_memory_mb in /etc/nova/nova.conf. Set this parameter to stop the Compute (nova) scheduler from giving memory that a Ceph OSD needs to a virtual machine.
The example above reserves 5 GB per OSD for 10 OSDs per host in addition to the default reserved memory for the hypervisor. In an IOPS-optimized cluster, you can improve performance by reserving more memory per OSD. The 5 GB number is provided as a starting point that you can further refine as necessary.
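One way to arrive at a figure in this range is to add the per-OSD reservation to an allowance for hypervisor overhead per virtual machine. The following is a rough sketch only; the guest count and per-guest overhead are assumptions for illustration, not recommendations:

  # 10 OSDs x 5000 MB per OSD, plus 50 guests x 500 MB of overhead per guest.
  $ osds=10; mb_per_osd=5000; guests=50; mb_overhead_per_guest=500
  $ echo $(( osds * mb_per_osd + guests * mb_overhead_per_guest ))
  75000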
Include this file when you use the openstack overcloud deploy command.
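For example, assuming the file is saved as /home/stack/templates/ceph-overrides.yaml (the path is an assumption), the inclusion looks similar to the following sketch. Additional role and environment files from your deployment, such as those shown in Section 5.8, “Initiating overcloud deployment for HCI”, are omitted here:

  $ openstack overcloud deploy --templates \
      -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml \
      -e /home/stack/templates/ceph-overrides.yaml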
5.2. Configuring a custom environment file
Director applies basic, default settings to the deployed Red Hat Ceph Storage cluster. You must define additional configuration in a custom environment file.
Procedure
- Log in to the undercloud as the stack user.
- Create a file to define the custom configuration:

  vi /home/stack/templates/storage-config.yaml
- Add a parameter_defaults section to the file and add the custom configuration parameters. For more information about parameter definitions, see Overcloud Parameters.

  parameter_defaults:
    CinderEnableIscsiBackend: false
    CinderEnableRbdBackend: true
    CinderBackupBackend: ceph
    NovaEnableRbdBackend: true
    GlanceBackend: rbd

  Note: Parameters defined in a custom configuration file override any corresponding default settings in /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml.

- Save the file.
The custom configuration is applied during overcloud deployment.
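Because environment files passed later on the command line override settings from earlier files, pass the custom file after cephadm.yaml so that your values take precedence. A minimal sketch, with other deployment arguments omitted:

  $ openstack overcloud deploy --templates \
      -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml \
      -e /home/stack/templates/storage-config.yaml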
5.3. Enabling Ceph Metadata Server
The Ceph Metadata Server (MDS) runs the ceph-mds daemon. This daemon manages metadata related to files stored on CephFS. CephFS can be consumed natively or through the NFS protocol.
Red Hat supports deploying Ceph MDS with the native CephFS and CephFS NFS back ends for the Shared File Systems service (manila).
Procedure
To enable Ceph MDS, use the following environment file when you deploy the overcloud:
/usr/share/openstack-tripleo-heat-templates/environments/cephadm/ceph-mds.yaml
By default, Ceph MDS is deployed on the Controller node. You can deploy Ceph MDS on its own dedicated node.
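If you deploy Ceph MDS on a dedicated node, the role that you assign to that node must include the Ceph MDS service. The following excerpt from a custom roles definition is a sketch only; the role name and the supporting services listed with it are illustrative assumptions, not a complete role definition:

  # Illustrative excerpt from a custom roles_data file.
  - name: CephMDS
    description: Dedicated Ceph Metadata Server node
    ServicesDefault:
      - OS::TripleO::Services::CephMds
      - OS::TripleO::Services::Podman
      - OS::TripleO::Services::Sshd
      - OS::TripleO::Services::Timesync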
5.4. Ceph Object Gateway object storage
The Ceph Object Gateway (RGW) provides an interface to access object storage capabilities within a Red Hat Ceph Storage cluster.
When you use director to deploy Ceph, director automatically enables RGW. This is a direct replacement for the Object Storage service (swift). Services that normally use the Object Storage service can use RGW instead without additional configuration. The Object Storage service remains available as an object storage option for upgraded Ceph clusters.
There is no requirement for a separate RGW environment file to enable it. For more information about environment files for other object storage options, see Section 5.5, “Deployment options for Red Hat OpenStack Platform object storage”.
By default, Ceph Storage allows 250 placement groups per Object Storage Daemon (OSD). When you enable RGW, Ceph Storage creates the following six additional pools required by RGW:
- .rgw.root
- <zone_name>.rgw.control
- <zone_name>.rgw.meta
- <zone_name>.rgw.log
- <zone_name>.rgw.buckets.index
- <zone_name>.rgw.buckets.data

In your deployment, <zone_name> is replaced with the name of the zone to which the pools belong.
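For example, if the zone name is default, listing the pools on the Ceph cluster shows entries similar to the following. The output is an abbreviated illustration; pools for other services, such as images and volumes, are omitted:

  $ ceph osd pool ls
  .rgw.root
  default.rgw.control
  default.rgw.meta
  default.rgw.log
  default.rgw.buckets.index
  default.rgw.buckets.data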
Additional resources
- For more information about RGW, see the Red Hat Ceph Storage Object Gateway Guide.
- For more information about using RGW instead of Swift, see the Block Storage Backup Guide.
5.5. Deployment options for Red Hat OpenStack Platform object storage
There are three options for deploying overcloud object storage:
Ceph Object Gateway (RGW)
To deploy RGW as described in Section 5.4, “Ceph Object Gateway object storage”, include the following environment file during overcloud deployment:
-e environments/cephadm/cephadm.yaml

This environment file configures both Ceph block storage (RBD) and RGW.
Object Storage service (swift)
To deploy the Object Storage service (swift) instead of RGW, include the following environment file during overcloud deployment:
-e environments/cephadm/cephadm-rbd-only.yaml

The cephadm-rbd-only.yaml file configures Ceph RBD but not RGW.

Note: If you used the Object Storage service (swift) before upgrading your Red Hat Ceph Storage cluster, you can continue to use the Object Storage service (swift) instead of RGW by replacing the environments/ceph-ansible/ceph-ansible.yaml file with the environments/cephadm/cephadm-rbd-only.yaml file during the upgrade. For more information, see Keeping Red Hat OpenStack Platform Updated.

Red Hat OpenStack Platform does not support migration from the Object Storage service (swift) to Ceph Object Gateway (RGW).
No object storage
To deploy Ceph with RBD but not with RGW or the Object Storage service (swift), include the following environment files during overcloud deployment:
-e environments/cephadm/cephadm-rbd-only.yaml
-e environments/disable-swift.yaml

The cephadm-rbd-only.yaml file configures RBD but not RGW. The disable-swift.yaml file ensures that the Object Storage service (swift) does not deploy.
5.6. Configuring the Block Storage Backup Service to use Ceph
The Block Storage Backup service (cinder-backup) is disabled by default. You must enable it to use it with Ceph.
Procedure
To enable the Block Storage Backup service (cinder-backup), use the following environment file when you deploy the overcloud:
/usr/share/openstack-tripleo-heat-templates/environments/cinder-backup.yaml
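After deployment, you can confirm that the backup service is running. A minimal check, assuming that the overcloud credentials file is /home/stack/overcloudrc: source it and list the Block Storage services; the cinder-backup binary appears in the output when the service is enabled:

  $ source ~/overcloudrc
  $ openstack volume service list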
5.7. Configuring multiple bonded interfaces for Ceph nodes
Use a bonded interface to combine multiple NICs and add redundancy to a network connection. If you have enough NICs on your Ceph nodes, you can create multiple bonded interfaces on each node to expand redundancy capability.
Use a bonded interface for each network connection the node requires. This provides both redundancy and a dedicated connection for each network.
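As an illustration, a bonded interface in a NIC configuration template looks similar to the following sketch. The bond name, member NICs, bonding options, VLAN ID, and IP address are assumptions that depend on your environment:

  # Illustrative NIC configuration excerpt: a bond dedicated to one network.
  - type: linux_bond
    name: bond1
    bonding_options: "mode=802.3ad lacp_rate=1 updelay=1000 miimon=100"
    members:
      - type: interface
        name: nic3
        primary: true
      - type: interface
        name: nic4
  - type: vlan
    device: bond1
    vlan_id: 30
    addresses:
      - ip_netmask: 172.16.1.10/24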
See Provisioning the overcloud networks in the Director Installation and Usage guide for information and procedures.
5.8. Initiating overcloud deployment for HCI
To implement the changes you made to your Red Hat OpenStack Platform (RHOSP) environment, you must deploy the overcloud.
Prerequisites
- Before undercloud installation, set generate_service_certificate=false in the undercloud.conf file. Otherwise, you must configure SSL/TLS on the overcloud as described in Enabling SSL/TLS on overcloud public endpoints in the Security and Hardening Guide.
- If you want to add Ceph Dashboard during your overcloud deployment, see Adding the Red Hat Ceph Storage Dashboard to an overcloud deployment in Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director.
Procedure
Deploy the overcloud. The deployment command requires additional arguments, for example:
$ openstack overcloud deploy --templates -r /home/stack/templates/roles_data_custom.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/ceph-mds.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/cinder-backup.yaml \
  -e /home/stack/templates/storage-config.yaml \
  -e /home/stack/templates/deployed-ceph.yaml \
  --ntp-server pool.ntp.org

The example command uses the following options:
- --templates - Creates the overcloud from the default heat template collection, /usr/share/openstack-tripleo-heat-templates/.
- -r /home/stack/templates/roles_data_custom.yaml - Specifies a customized roles definition file.
- -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml - Sets the director to finalize the previously deployed Ceph Storage cluster. This environment file deploys RGW by default. It also creates pools, keys, and daemons.
- -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/ceph-mds.yaml - Enables the Ceph Metadata Server.
- -e /usr/share/openstack-tripleo-heat-templates/environments/cinder-backup.yaml - Enables the Block Storage Backup service.
- -e /home/stack/templates/storage-config.yaml - Adds the environment file that contains your custom Ceph Storage configuration.
- -e /home/stack/templates/deployed-ceph.yaml - Adds the environment file that contains your Ceph cluster settings, as output by the openstack overcloud ceph deploy command run earlier.
- --ntp-server pool.ntp.org - Sets the NTP server.

Note: For a full list of options, run the openstack help overcloud deploy command.