Chapter 11. Post-deployment operations to manage the Red Hat Ceph Storage cluster


After you deploy your Red Hat OpenStack Platform (RHOSP) environment with containerized Red Hat Ceph Storage, you can use the operations in this chapter to manage the Ceph Storage cluster.

11.1. Disabling configuration overrides

After the Ceph Storage cluster is initially deployed, the cluster is configured to allow the setup of services such as RGW during overcloud deployment. After overcloud deployment is complete, do not use director to change the cluster configuration unless you are scaling up the cluster. Make cluster configuration changes with Ceph commands.
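For example, to adjust a setting after deployment, run the ceph config commands from a cephadm shell instead of redeploying the overcloud. The following is a minimal sketch; the osd_memory_target value is a hypothetical example, not a recommendation:

    $ sudo cephadm shell -- ceph config set osd osd_memory_target 4294967296   # set a hypothetical 4 GiB OSD memory target
    $ sudo cephadm shell -- ceph config get osd osd_memory_target              # confirm that the value was applied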

Procedure

  1. Log in to the undercloud node as the stack user.
  2. Open the file deployed_ceph.yaml or the file you use in your environment to define the Ceph Storage cluster configuration.
  3. Locate the ApplyCephConfigOverridesOnUpdate parameter.
  4. Change the ApplyCephConfigOverridesOnUpdate parameter value to false, as shown in the example after this procedure.
  5. Save the file.
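
After the change, the parameter_defaults section of the file might look like the following minimal sketch. The CephConfigOverrides entry is a hypothetical placeholder for whatever overrides your file already contains:

    parameter_defaults:
      ApplyCephConfigOverridesOnUpdate: false
      CephConfigOverrides:
        global:
          osd_pool_default_size: 3   # hypothetical existing override; keep your own values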

Additional resources

For more information on the ApplyCephConfigOverridesOnUpdate and CephConfigOverrides parameters, see Overcloud parameters.

11.2. Accessing the overcloud

Director generates a script, overcloudrc, that configures and authenticates interactions with your overcloud from the undercloud. Director saves this file in the home directory of the stack user.

Procedure

  1. Run the following command to source the file:

    $ source ~/overcloudrc

    This loads the necessary environment variables to interact with your overcloud from the undercloud CLI.

  2. To return to interacting with the undercloud, run the following command:

    $ source ~/stackrc
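
To confirm which environment a shell is currently pointed at, you can inspect the OS_AUTH_URL variable that both files set. The following is a minimal sketch, assuming the overcloud API is reachable from the undercloud:

    $ source ~/overcloudrc
    $ echo $OS_AUTH_URL                 # overcloud endpoint
    $ openstack volume service list     # any read-only command confirms the credentials work
    $ source ~/stackrc
    $ echo $OS_AUTH_URL                 # undercloud endpoint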

11.3. Monitoring Red Hat Ceph Storage nodes

After you create the overcloud, check the status of the Ceph cluster to confirm that it works correctly.

Procedure

  1. From the undercloud, list the overcloud nodes to find the IP address of a Controller node, and log in to the node as the tripleo-admin user:

    $ nova list
    $ ssh tripleo-admin@192.168.0.25
  2. Check the health of the Ceph cluster:

    $ sudo cephadm shell -- ceph health

    If the Ceph cluster has no issues, the command returns HEALTH_OK, which means that the Ceph cluster is safe to use.

  3. From an overcloud node that runs the Ceph Monitor service, such as the Controller node, check the status of all OSDs in the Ceph cluster:

    $ sudo cephadm shell -- ceph osd tree
  4. Check the status of the Ceph Monitor quorum:

    $ sudo cephadm shell -- ceph quorum_status

    This shows the monitors participating in the quorum and which one is the leader.

  5. Verify that all Ceph OSDs are running:

    $ sudo cephadm shell -- ceph osd stat
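
You can combine these checks into a single script that fails if the cluster is not healthy. The following is a minimal sketch, assuming the tripleo-admin user has passwordless sudo and that cephadm is available on the node:

    #!/bin/bash
    # Minimal Ceph health sweep; run on a node that hosts a Ceph Monitor,
    # such as a Controller node.
    set -euo pipefail

    # Overall cluster health: HEALTH_OK, HEALTH_WARN, or HEALTH_ERR.
    sudo cephadm shell -- ceph health

    # OSD topology with per-OSD up/down status.
    sudo cephadm shell -- ceph osd tree

    # Monitor quorum membership and the current leader.
    sudo cephadm shell -- ceph quorum_status

    # Aggregate OSD counts: total, up, and in.
    sudo cephadm shell -- ceph osd stat

    # Exit non-zero unless the cluster reports HEALTH_OK.
    sudo cephadm shell -- ceph health | grep -q HEALTH_OK || {
        echo "Ceph cluster is not HEALTH_OK" >&2
        exit 1
    }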

For more information on monitoring Ceph clusters, see Monitoring a Ceph Storage cluster in the Red Hat Ceph Storage Administration Guide.

11.4. Mapping a Block Storage (cinder) type to your new Ceph pool

After you complete the configuration steps, make the performance tiers feature available to RHOSP tenants: use Block Storage (cinder) to create a volume type that is mapped to the fastpool tier that you created.

Procedure

  1. Log in to the undercloud node as the stack user.
  2. Source the overcloudrc file:

    $ source overcloudrc
  3. Check the existing Block Storage volume types:

    $ cinder type-list
  4. Create the new Block Storage volume type fast_tier:

    $ cinder type-create fast_tier
  5. Verify that the fast_tier Block Storage type was created:

    $ cinder type-list
  6. When the fast_tier Block Storage type is available, map it to the tripleo_ceph_fastpool back end, which serves the fastpool tier that you created:

    $ cinder type-key fast_tier set volume_backend_name=tripleo_ceph_fastpool
  7. Use the new tier to create new volumes:

    $ cinder create 1 --volume-type fast_tier --name fastdisk
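
To confirm that the new volume is backed by the intended pool, check the volume status from the undercloud, then list the RBD images in the pool from a Ceph node. The following is a minimal sketch, assuming the backing Ceph pool is named fastpool, as in the preceding configuration chapters:

    $ cinder show fastdisk                        # wait until the status field reports "available"
    $ sudo cephadm shell -- rbd ls -p fastpool    # run on a Ceph node; the volume appears as a volume-<UUID> image
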
Note

The Red Hat Ceph Storage documentation provides additional information and procedures for the ongoing maintenance and operation of the Ceph Storage cluster. See Product Documentation for Red Hat Ceph Storage.
