Chapter 11. Post-deployment operations to manage the Red Hat Ceph Storage cluster
After you deploy your Red Hat OpenStack Platform (RHOSP) environment with containerized Red Hat Ceph Storage, there are some operations you can use to manage the Ceph Storage cluster.
11.1. Disabling configuration overrides
After the Ceph Storage cluster is initially deployed, the cluster is configured to allow the setup of services such as RGW during the overcloud deployment. Once overcloud deployment is complete, director should not be used to make changes to the cluster configuration unless you are scaling up the cluster. Cluster configuration changes should be performed using Ceph commands.
Procedure
- Log in to the undercloud node as the stack user.
- Open the deployed_ceph.yaml file, or the file that you use in your environment to define the Ceph Storage cluster configuration.
- Locate the ApplyCephConfigOverridesOnUpdate parameter.
- Change the ApplyCephConfigOverridesOnUpdate parameter value to false.
- Save the file.
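For reference, ApplyCephConfigOverridesOnUpdate is a standard heat parameter set under parameter_defaults. The following excerpt is only an illustration of how the setting might look in deployed_ceph.yaml or your equivalent environment file; the surrounding contents depend on your deployment:

parameter_defaults:
  # Stop director from reapplying configuration overrides on stack updates.
  # Make further cluster configuration changes with Ceph commands instead.
  ApplyCephConfigOverridesOnUpdate: false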
Additional resources
For more information about the ApplyCephConfigOverridesOnUpdate and CephConfigOverrides parameters, see Overcloud parameters.
11.2. Accessing the overcloud
Director generates a script to configure and help authenticate interactions with your overcloud from the undercloud. Director saves this script, overcloudrc, in the home directory of the stack user.
Procedure
- Run the following command to source the file:
$ source ~/overcloudrc
This loads the necessary environment variables to interact with your overcloud from the undercloud CLI.
- To return to interacting with the undercloud, run the following command:
$ source ~/stackrc
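As an optional check, you can confirm which credentials are currently loaded before you run commands against the wrong environment. These commands are not part of the procedure; they assume the standard OS_* variables that the generated rc files export:

$ env | grep OS_AUTH_URL
$ openstack server list

The first command shows which Identity service endpoint you are authenticated against, and the second lists the servers visible to whichever cloud you are currently pointed at.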
11.3. Monitoring Red Hat Ceph Storage nodes
After you create the overcloud, check the status of the Ceph cluster to confirm that it works correctly.
Procedure
- Log in to a Controller node as the tripleo-admin user:
$ nova list
$ ssh tripleo-admin@192.168.0.25
- Check the health of the Ceph cluster:
$ sudo cephadm shell -- ceph health
If the Ceph cluster has no issues, the command reports back HEALTH_OK. This means the Ceph cluster is safe to use.
- Log in to an overcloud node that runs the Ceph monitor service and check the status of all OSDs in the Ceph cluster:
$ sudo cephadm shell -- ceph osd tree
- Check the status of the Ceph Monitor quorum:
$ sudo cephadm shell -- ceph quorum_status
This shows the monitors participating in the quorum and which one is the leader.
- Verify that all Ceph OSDs are running:
$ sudo cephadm shell -- ceph osd stat
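If you want to repeat these checks regularly, you can wrap the health check in a small script. The following is a minimal sketch, not part of the official procedure; it assumes it runs on a node where the cephadm shell is available and that the user has sudo access:

#!/bin/bash
# Minimal Ceph health check: exit non-zero unless the cluster reports HEALTH_OK.
set -euo pipefail

status=$(sudo cephadm shell -- ceph health 2>/dev/null | head -n 1)

if [ "${status}" = "HEALTH_OK" ]; then
    echo "Ceph cluster is healthy"
else
    echo "Ceph cluster reports: ${status}" >&2
    # Print the detailed reason for the warning or error state.
    sudo cephadm shell -- ceph health detail >&2
    exit 1
fi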
For more information on monitoring Ceph clusters, see Monitoring a Ceph Storage cluster in the Red Hat Ceph Storage Administration Guide.
11.4. Mapping a Block Storage (cinder) type to your new Ceph pool
After you complete the configuration steps, make the performance tiers feature available to RHOSP tenants by using Block Storage (cinder) to create a type that is mapped to the fastpool tier that you created.
Procedure
- Log in to the undercloud node as the stack user.
- Source the overcloudrc file:
$ source overcloudrc
- Check the existing Block Storage volume types:
$ cinder type-list
- Create the new Block Storage volume type fast_tier:
$ cinder type-create fast_tier
- Check that the Block Storage type is created:
$ cinder type-list
- When the fast_tier Block Storage type is available, set fastpool as the Block Storage volume back end for the new tier that you created:
$ cinder type-key fast_tier set volume_backend_name=tripleo_ceph_fastpool
- Use the new tier to create new volumes:
$ cinder create 1 --volume-type fast_tier --name fastdisk
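To confirm the mapping, you can inspect the new volume type and the test volume. These checks are optional and assume the fast_tier type and fastdisk volume created in the previous steps:

$ cinder type-show fast_tier
$ cinder show fastdisk

The extra_specs field of fast_tier should include volume_backend_name=tripleo_ceph_fastpool, and the fastdisk volume should reach the available status with its volume_type set to fast_tier.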
The Red Hat Ceph Storage documentation provides additional information and procedures for the ongoing maintenance and operation of the Ceph Storage cluster. See Product Documentation for Red Hat Ceph Storage.