Chapter 11. Post-deployment operations to manage the Red Hat Ceph Storage cluster


After you deploy your Red Hat OpenStack Platform (RHOSP) environment with containerized Red Hat Ceph Storage, you can perform the following operations to manage the Ceph Storage cluster.

11.1. Disabling configuration overrides

When the Ceph Storage cluster is initially deployed, it is configured to allow services such as RGW to be set up during the overcloud deployment. After the overcloud deployment is complete, do not use director to change the cluster configuration unless you are scaling up the cluster. Make all other cluster configuration changes with Ceph commands.
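
For example, to change a cluster configuration option directly with Ceph commands, you can run ceph config from a cephadm shell on a node that has access to the cluster. The option and value shown here are illustrative only; substitute the setting that you actually need to change:

    $ sudo cephadm shell -- ceph config set global osd_pool_default_size 3
    $ sudo cephadm shell -- ceph config get global osd_pool_default_size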

Procedure

  1. Log in to the undercloud node as the stack user.
  2. Open the file deployed_ceph.yaml or the file you use in your environment to define the Ceph Storage cluster configuration.
  3. Locate the ApplyCephConfigOverridesOnUpdate parameter.
  4. Change the ApplyCephConfigOverridesOnUpdate parameter value to false. For a sketch of the resulting file content, see the example after this procedure.
  5. Save the file.
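
The following is a minimal sketch of how the parameter can appear in deployed_ceph.yaml. The parameter_defaults section is the standard top-level key in director environment files; any other parameters in your file remain unchanged:

    parameter_defaults:
      ApplyCephConfigOverridesOnUpdate: false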

Additional resources

For more information on the ApplyCephConfigOverridesOnUpdate and CephConfigOverrides parameters, see Overcloud parameters.

11.2. Accessing the overcloud

Director generates a script, overcloudrc, that configures and helps authenticate interactions with your overcloud from the undercloud. Director saves this file in the home directory of the stack user.

Procedure

  1. Run the following command to source the file:

    $ source ~/overcloudrc

    This loads the necessary environment variables to interact with your overcloud from the undercloud CLI.

  2. To return to interacting with the undercloud, run the following command:

    $ source ~/stackrc
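
To confirm which environment is currently active, you can inspect the standard OS_AUTH_URL variable that both credential files export; the exact set of exported variables can vary by release:

    $ echo $OS_AUTH_URL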

11.3. Monitoring Red Hat Ceph Storage nodes

After you create the overcloud, check the status of the Ceph cluster to confirm that it works correctly.

Procedure

  1. From the undercloud, list the overcloud nodes to find the IP address of a Controller node, and then log in to that node as the tripleo-admin user:

    $ nova list
    $ ssh tripleo-admin@192.168.0.25
  2. Check the health of the Ceph cluster:

    $ sudo cephadm shell -- ceph health

    If the Ceph cluster has no issues, the command reports HEALTH_OK, which means the cluster is safe to use. If it reports HEALTH_WARN or HEALTH_ERR, see the commands in the sketch after this procedure.

  3. Log in to an overcloud node that runs the Ceph monitor service and check the status of all OSDs in the Ceph cluster:

    $ sudo cephadm shell -- ceph osd tree
  4. Check the status of the Ceph Monitor quorum:

    $ sudo cephadm shell -- ceph quorum_status

    This shows the monitors participating in the quorum and which one is the leader.

  5. Verify that all Ceph OSDs are running:

    $ sudo cephadm shell -- ceph osd stat
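
If the health check reports HEALTH_WARN or HEALTH_ERR, the ceph health detail command lists the individual issues, and ceph -s prints a combined summary of the health, monitor quorum, and OSD status that the previous steps checked separately. Both are standard Ceph commands:

    $ sudo cephadm shell -- ceph health detail
    $ sudo cephadm shell -- ceph -s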

For more information on monitoring Ceph clusters, see Monitoring a Ceph Storage cluster in the Red Hat Ceph Storage Administration Guide.

11.4. Mapping a Block Storage (cinder) type to your new Ceph pool

After you complete the configuration steps, make the performance tiers feature available to RHOSP tenants by creating a Block Storage (cinder) volume type that maps to the fastpool tier that you created.

Procedure

  1. Log in to the undercloud node as the stack user.
  2. Source the overcloudrc file:

    $ source overcloudrc
  3. Check the existing Block Storage volume types:

    $ cinder type-list
  4. Create the new Block Storage volume type fast_tier:

    $ cinder type-create fast_tier
  5. Check that the Block Storage type is created:

    $ cinder type-list
  6. When the fast_tier Block Storage type is available, set fastpool as the Block Storage back end for the new tier that you created:

    $ cinder type-key fast_tier set volume_backend_name=tripleo_ceph_fastpool
  7. Use the new tier to create new volumes:

    $ cinder create 1 --volume-type fast_tier --name fastdisk
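
If you use the unified openstack client instead of the cinder client, the following commands are a sketch of the equivalent workflow. They assume the same fast_tier type name and tripleo_ceph_fastpool back end name that this procedure uses:

    $ openstack volume type create fast_tier
    $ openstack volume type set --property volume_backend_name=tripleo_ceph_fastpool fast_tier
    $ openstack volume create --type fast_tier --size 1 fastdisk
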
Note

The Red Hat Ceph Storage documentation provides additional information and procedures for the ongoing maintenance and operation of the Ceph Storage cluster. See Product Documentation for Red Hat Ceph Storage.
