Chapter 5. Customizing the storage service for HCI


Red Hat OpenStack Platform (RHOSP) director provides the necessary heat templates and environment files to enable a basic Ceph Storage configuration.

Director uses the /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml environment file to add additional configuration to the Ceph cluster deployed by openstack overcloud ceph deploy.

For more information about containerized services in RHOSP, see Configuring a basic overcloud with the CLI tools in Installing and managing Red Hat OpenStack Platform with director.

5.1. Configuring Compute service resources for HCI

Colocating Ceph OSD and Compute services on hyperconverged nodes risks resource contention between Red Hat Ceph Storage and the Compute service, because neither service is aware that it shares the node with the other. Resource contention can result in service degradation, which offsets the benefits of hyperconvergence.

Configuring the resources used by the Compute service mitigates resource contention and improves HCI performance.


  1. Log in to the undercloud host as the stack user.
  2. Source the stackrc undercloud credentials file:

    $ source ~/stackrc
  3. Add the NovaReservedHostMemory parameter to the ceph-overrides.yaml file. The following is a usage example.

        NovaReservedHostMemory: 75000

The NovaReservedHostMemory parameter overrides the default value of reserved_host_memory_mb in /etc/nova/nova.conf. Setting this parameter stops the Compute (nova) scheduler from giving memory that the Ceph OSDs need to virtual machines.

The example above reserves 5 GB per OSD for 10 OSDs per host in addition to the default reserved memory for the hypervisor. In an IOPS-optimized cluster, you can improve performance by reserving more memory per OSD. The 5 GB number is provided as a starting point that you can further refine as necessary.
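The example value of 75000 can be reproduced with quick arithmetic. The sketch below assumes 10 OSDs at 5 GB each plus an illustrative 25 GB hypervisor baseline chosen so the sum matches the example; the variable names and the baseline are assumptions to tune for your hardware, not defaults read from any configuration file:

```shell
# Rough sizing helper for NovaReservedHostMemory (value is in MB).
# Assumed inputs: 10 OSDs, 5 GB reserved per OSD, and an illustrative
# 25 GB kept for the hypervisor itself -- tune all three as needed.
OSD_COUNT=10
GB_PER_OSD=5
HYPERVISOR_GB=25
RESERVED_MB=$(( (OSD_COUNT * GB_PER_OSD + HYPERVISOR_GB) * 1000 ))
echo "NovaReservedHostMemory: ${RESERVED_MB}"
```

Rerun the arithmetic whenever you change the OSD count or the per-OSD reservation, and put the result in ceph-overrides.yaml.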


Include this file when you use the openstack overcloud deploy command.

5.2. Configuring a custom environment file

Director applies basic, default settings to the deployed Red Hat Ceph Storage cluster. You must define additional configuration in a custom environment file.


  1. Log in to the undercloud as the stack user.
  2. Create a file to define the custom configuration.

    vi /home/stack/templates/storage-config.yaml

  3. Add a parameter_defaults section to the file.
  4. Add the custom configuration parameters. For more information about parameter definitions, see Overcloud parameters.

        CinderEnableIscsiBackend: false
        CinderEnableRbdBackend: true
        CinderBackupBackend: ceph
        NovaEnableRbdBackend: true
        GlanceBackend: rbd

    Parameters defined in a custom configuration file override any corresponding default settings in /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml.

  5. Save the file.
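Assembled from the steps above, the finished custom environment file would look like the following minimal sketch, combining the parameter_defaults section from step 3 with the parameters from step 4:

```yaml
# /home/stack/templates/storage-config.yaml
parameter_defaults:
  CinderEnableIscsiBackend: false
  CinderEnableRbdBackend: true
  CinderBackupBackend: ceph
  NovaEnableRbdBackend: true
  GlanceBackend: rbd
```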


The custom configuration is applied during overcloud deployment.

5.3. Enabling Ceph Metadata Server

The Ceph Metadata Server (MDS) runs the ceph-mds daemon. This daemon manages metadata related to files stored on CephFS. CephFS can be consumed natively or through the NFS protocol.


Red Hat supports deploying Ceph MDS with the native CephFS and CephFS NFS back ends for the Shared File Systems service (manila).


  • To enable Ceph MDS, include the following environment file when you deploy the overcloud:

    /usr/share/openstack-tripleo-heat-templates/environments/cephadm/ceph-mds.yaml


By default, Ceph MDS is deployed on the Controller node. You can deploy Ceph MDS on its own dedicated node.
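For the dedicated-node case, a custom role that carries the MDS service can be defined in the roles data file. The following is an illustrative fragment only; the role name is hypothetical, and a real role definition also needs the standard common services and the network and host mappings for your environment:

```yaml
# Hypothetical dedicated MDS role fragment for a custom roles_data file.
# A complete role also requires the usual common services plus
# network/host mappings for your environment.
- name: CephMDS
  description: Dedicated Ceph Metadata Server node
  ServicesDefault:
    - OS::TripleO::Services::CephMds
```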

5.4. Ceph Object Gateway object storage

The Ceph Object Gateway (RGW) provides an interface to access object storage capabilities within a Red Hat Ceph Storage cluster.

When you use director to deploy Ceph, director automatically enables RGW. This is a direct replacement for the Object Storage service (swift). Services that normally use the Object Storage service can use RGW instead without additional configuration. The Object Storage service remains available as an object storage option for upgraded Ceph clusters.

There is no requirement for a separate RGW environment file to enable it. For more information about environment files for other object storage options, see Section 5.5, “Deployment options for Red Hat OpenStack Platform object storage”.

By default, Ceph Storage allows 250 placement groups per Object Storage Daemon (OSD). When you enable RGW, Ceph Storage creates the following six additional pools required by RGW:

  • .rgw.root
  • <zone_name>.rgw.control
  • <zone_name>.rgw.meta
  • <zone_name>.rgw.log
  • <zone_name>.rgw.buckets.index
  • <zone_name>.rgw.buckets.data

In your deployment, <zone_name> is replaced with the name of the zone to which the pools belong.
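The 250-PG-per-OSD ceiling is worth checking before the six pools above are added. The following is a rough sketch of the budget arithmetic; the OSD count is an assumed example value, not taken from any real cluster:

```shell
# Rough PG-budget check before enabling RGW. The default limit of 250
# placement groups per OSD caps the cluster-wide PG count; the six RGW
# pools draw from this budget. OSD_COUNT is an assumed example.
OSD_COUNT=30
PG_PER_OSD_LIMIT=250
PG_BUDGET=$(( OSD_COUNT * PG_PER_OSD_LIMIT ))
echo "Cluster PG budget: ${PG_BUDGET}"
```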


5.5. Deployment options for Red Hat OpenStack Platform object storage

There are three options for deploying overcloud object storage:

  • Ceph Object Gateway (RGW)

    To deploy RGW as described in Section 5.4, “Ceph Object Gateway object storage”, include the following environment file during overcloud deployment:

    -e environments/cephadm/cephadm.yaml

    This environment file configures both Ceph block storage (RBD) and RGW.

  • Object Storage service (swift)

    To deploy the Object Storage service (swift) instead of RGW, include the following environment file during overcloud deployment:

    -e environments/cephadm/cephadm-rbd-only.yaml

    The cephadm-rbd-only.yaml file configures Ceph RBD but not RGW.


    If you used the Object Storage service (swift) before upgrading your Red Hat Ceph Storage cluster, you can continue to use the Object Storage service (swift) instead of RGW by replacing the environments/ceph-ansible/ceph-ansible.yaml file with the environments/cephadm/cephadm-rbd-only.yaml file during the upgrade. For more information, see Performing a minor update of Red Hat OpenStack Platform.

    Red Hat OpenStack Platform does not support migration from the Object Storage service (swift) to Ceph Object Gateway (RGW).

  • No object storage

    To deploy Ceph with RBD but not with RGW or the Object Storage service (swift), include the following environment files during overcloud deployment:

    -e environments/cephadm/cephadm-rbd-only.yaml
    -e environments/disable-swift.yaml

    The cephadm-rbd-only.yaml file configures RBD but not RGW. The disable-swift.yaml file ensures that the Object Storage service (swift) does not deploy.

5.6. Configuring the Block Storage Backup Service to use Ceph

The Block Storage Backup service (cinder-backup) is disabled by default. You must enable it to use it with Ceph.


To enable the Block Storage Backup service (cinder-backup), include the following environment file when you deploy the overcloud:

    /usr/share/openstack-tripleo-heat-templates/environments/cinder-backup.yaml


5.7. Configuring multiple bonded interfaces for Ceph nodes

Use a bonded interface to combine multiple NICs and add redundancy to a network connection. If you have enough NICs on your Ceph nodes, you can create multiple bonded interfaces on each node to expand redundancy capability.

Use a bonded interface for each network connection the node requires. This provides both redundancy and a dedicated connection for each network.
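As an illustration, one bonded interface in an os-net-config NIC template might look like the following sketch. The NIC names, bond name, and bonding options are assumed example values to adapt to your hardware and switch configuration:

```yaml
# Illustrative bonded interface for an os-net-config NIC template.
# nic2/nic3, the bond name, and the bonding options are example values.
- type: linux_bond
  name: bond1
  bonding_options: "mode=802.3ad lacp_rate=fast"
  members:
    - type: interface
      name: nic2
      primary: true
    - type: interface
      name: nic3
```

Repeat a block like this, with different member NICs, for each network connection the node requires.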

See Provisioning the overcloud networks in the Installing and managing Red Hat OpenStack Platform with director guide for information and procedures.

5.8. Initiating overcloud deployment for HCI

To implement the changes you made to your Red Hat OpenStack Platform (RHOSP) environment, you must deploy the overcloud.


  • Before undercloud installation, set generate_service_certificate=false in the undercloud.conf file. Otherwise, you must configure SSL/TLS on the overcloud as described in Enabling SSL/TLS on overcloud public endpoints in Hardening Red Hat OpenStack Platform.

If you want to add Ceph Dashboard during your overcloud deployment, see Adding the Red Hat Ceph Storage Dashboard to an overcloud deployment in Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director.


  • Deploy the overcloud. The deployment command requires additional arguments, for example:

    $ openstack overcloud deploy --templates -r /home/stack/templates/roles_data_custom.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/ceph-mds.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/cinder-backup.yaml \
      -e /home/stack/templates/storage-config.yaml \
      -e /home/stack/templates/deployed-ceph.yaml \
      --ntp-server <ntp_server>

    The example command uses the following options:

    • --templates - Creates the overcloud from the default heat template collection, /usr/share/openstack-tripleo-heat-templates/.
    • -r /home/stack/templates/roles_data_custom.yaml - Specifies a customized roles definition file.
    • -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml - Sets the director to finalize the previously deployed Ceph Storage cluster. This environment file deploys RGW by default. It also creates pools, keys, and daemons.
    • -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/ceph-mds.yaml - Enables the Ceph Metadata Server.
    • -e /usr/share/openstack-tripleo-heat-templates/environments/cinder-backup.yaml - Enables the Block Storage Backup service.
    • -e /home/stack/templates/storage-config.yaml - Adds the environment file that contains your custom Ceph Storage configuration.
    • -e /home/stack/templates/deployed-ceph.yaml - Adds the environment file that contains your Ceph cluster settings, as output by the openstack overcloud ceph deploy command run earlier.
    • --ntp-server - Sets the NTP server.


      For a full list of options, run the openstack help overcloud deploy command.



© 2024 Red Hat, Inc.