Chapter 11. Post-deployment operations to manage the Red Hat Ceph Storage cluster
After you deploy your Red Hat OpenStack Platform (RHOSP) environment with containerized Red Hat Ceph Storage, there are some operations you can use to manage the Ceph Storage cluster.
11.1. Disabling configuration overrides
After the Ceph Storage cluster is initially deployed, the cluster is configured to allow the setup of services such as RGW during the overcloud deployment. Once overcloud deployment is complete, director should not be used to make changes to the cluster configuration unless you are scaling up the cluster. Cluster configuration changes should be performed using Ceph commands.
Procedure
1. Log in to the undercloud node as the stack user.
2. Open deployed_ceph.yaml, or the file you use in your environment to define the Ceph Storage cluster configuration.
3. Locate the ApplyCephConfigOverridesOnUpdate parameter.
4. Change the ApplyCephConfigOverridesOnUpdate parameter value to false.
5. Save the file.
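With this change in place, the relevant part of the environment file looks like the following. This is a minimal fragment; your file contains other cluster parameters as well:

```yaml
parameter_defaults:
  # Prevent director from reapplying Ceph configuration overrides
  # on stack updates after the initial deployment.
  ApplyCephConfigOverridesOnUpdate: false
```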
Additional resources
For more information on the ApplyCephConfigOverridesOnUpdate and CephConfigOverrides parameters, see Overcloud parameters.
11.2. Accessing the overcloud
Director generates a script to configure and help authenticate interactions with your overcloud from the undercloud. Director saves this file, overcloudrc, in the home directory of the stack user.
Procedure
Run the following command to source the file:
$ source ~/overcloudrc
This loads the necessary environment variables to interact with your overcloud from the undercloud CLI.
To return to interacting with the undercloud, run the following command:
$ source ~/stackrc
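The rc files work by exporting OpenStack client environment variables into the current shell. The following sketch shows the mechanism only; the file path and values are placeholders, not the real credentials files, although the variable names mirror those the real files export:

```shell
#!/bin/bash
# Sketch of how sourcing an rc file switches the CLI target.
# Real overcloudrc/stackrc files also export OS_USERNAME, OS_PASSWORD,
# and related authentication variables.
cat > /tmp/example_overcloudrc <<'EOF'
export OS_CLOUDNAME=overcloud
export OS_AUTH_URL=http://10.0.0.5:5000
EOF

# Sourcing loads the exports into the current shell session.
source /tmp/example_overcloudrc
echo "Active cloud: ${OS_CLOUDNAME}"
```

Sourcing a different rc file simply overwrites these variables, which is why switching between the undercloud and overcloud is a single source command.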
11.3. Monitoring Red Hat Ceph Storage nodes
After you create the overcloud, check the status of the Ceph cluster to confirm that it works correctly.
Procedure
1. Log in to a Controller node as the tripleo-admin user:

   $ nova list
   $ ssh tripleo-admin@192.168.0.25

2. Check the health of the Ceph cluster:

   $ sudo cephadm shell -- ceph health

   If the Ceph cluster has no issues, the command reports HEALTH_OK. This means the Ceph cluster is safe to use.

3. Log in to an overcloud node that runs the Ceph Monitor service and check the status of all OSDs in the Ceph cluster:

   $ sudo cephadm shell -- ceph osd tree

4. Check the status of the Ceph Monitor quorum:

   $ sudo cephadm shell -- ceph quorum_status

   This command shows the monitors participating in the quorum and which one is the leader.

5. Verify that all Ceph OSDs are running:

   $ sudo cephadm shell -- ceph osd stat
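In scripts or periodic checks, the health output from the procedure above can be interpreted programmatically. The following is a minimal sketch: check_health is a hypothetical helper, and the status strings it matches are the standard Ceph health codes:

```shell
#!/bin/bash
# Hypothetical wrapper around the output of `ceph health`.
# Pass it a status string, for example captured with:
#   status=$(sudo cephadm shell -- ceph health)
check_health() {
    case "$1" in
        HEALTH_OK)
            echo "cluster healthy"
            return 0 ;;
        HEALTH_WARN*)
            echo "cluster degraded: review ceph health detail" >&2
            return 1 ;;
        *)
            echo "cluster unhealthy: $1" >&2
            return 2 ;;
    esac
}

# Example usage with a captured status string:
check_health "HEALTH_OK"
```

Distinguishing HEALTH_WARN from other states lets a monitoring job page on hard failures while only logging warnings.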
For more information on monitoring Ceph clusters, see Monitoring a Ceph Storage cluster in the Red Hat Ceph Storage Administration Guide.
11.4. Mapping a Block Storage (cinder) type to your new Ceph pool
After you complete the configuration steps, make the performance tiers feature available to RHOSP tenants by using Block Storage (cinder) to create a type that is mapped to the fastpool tier that you created.
Procedure
1. Log in to the undercloud node as the stack user.
2. Source the overcloudrc file:

   $ source overcloudrc

3. Check the existing Block Storage volume types:

   $ cinder type-list

4. Create the new Block Storage volume type, fast_tier:

   $ cinder type-create fast_tier

5. Check that the Block Storage type is created:

   $ cinder type-list

6. When the fast_tier Block Storage type is available, set fastpool as the Block Storage volume back end for the new tier that you created:

   $ cinder type-key fast_tier set volume_backend_name=tripleo_ceph_fastpool

7. Use the new tier to create new volumes:

   $ cinder create 1 --volume-type fast_tier --name fastdisk
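The volume_backend_name value in the type-key command must match a back end defined in the cinder configuration. As an illustrative sketch only (director generates the real configuration; the section name and option values here are assumptions for this example), the fastpool back end could look like this in cinder.conf:

```ini
# Hypothetical cinder.conf fragment for the fastpool tier.
[tripleo_ceph_fastpool]
volume_backend_name = tripleo_ceph_fastpool
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = fastpool
rbd_ceph_conf = /etc/ceph/ceph.conf
```

Creating a volume with --volume-type fast_tier then schedules it to this back end, so its data is stored in the fastpool Ceph pool.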
The Red Hat Ceph Storage documentation provides additional information and procedures for the ongoing maintenance and operation of the Ceph Storage cluster. See Product Documentation for Red Hat Ceph Storage.