Chapter 4. Verifying external Red Hat Ceph Storage cluster integration
After you deploy the overcloud, confirm that Red Hat OpenStack Platform (RHOSP) services can write to the Red Hat Ceph Storage cluster.
RHOSP does not support the use of Ceph clone format v2 or later. Deleting images or volumes from a Ceph Storage cluster that has Ceph clone format v2 enabled might cause unpredictable behavior and potential loss of data. Therefore, do not use either of the following methods that enable Ceph clone format v2:
- Setting rbd default clone format = 2
- Running ceph osd set-require-min-compat-client mimic
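To confirm that neither setting is in effect, you can run the following optional checks from a node that has Ceph admin credentials. This is a sketch, not part of the official procedure; the second command assumes a Ceph release that supports the centralized configuration database (ceph config), and an option that has not been explicitly set returns its default value.
Confirm that the cluster does not require a minimum client release of mimic or later:
# ceph osd dump | grep require_min_compat_client
Confirm that the default RBD clone format has not been explicitly set to 2:
# ceph config get client rbd_default_clone_format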
4.1. Gathering IDs
To verify that you integrated a Red Hat Ceph Storage cluster, you must first create an image, a Compute instance, a Block Storage volume, and a file share and gather their respective IDs.
Procedure
- Create an image with the Image service (glance). For more information about how to create an image, see Import an image in the Creating and Managing Images guide.
- Record the image ID for later use.
- Create a Compute (nova) instance. For more information about how to create an instance, see Creating an instance in the Creating and Managing Instances guide.
- Record the instance ID for later use.
- Create a Block Storage (cinder) volume. For more information about how to create a Block Storage volume, see Create a volume in the Storage Guide.
- Record the volume ID for later use.
- Create a file share by using the Shared File Systems service (manila). For more information about how to create a file share, see Creating a share in the Storage Guide.
- List the export path of the share and record the UUID in the suffix for later use. For more information about how to list the export path of the share, see Listing shares and exporting information in the Storage Guide.
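For example, with the overcloud credentials sourced, resources and their IDs can be gathered with commands similar to the following. This is a sketch only: the image file, flavor, network, and share names are placeholders, and the share commands assume the python-manilaclient CLI and a default CephFS share type are available in your environment.
$ openstack image create --disk-format qcow2 --container-format bare --file cirros-0.5.2-x86_64-disk.img test-image
$ openstack server create --image test-image --flavor m1.small --network internal test-instance
$ openstack volume create --size 1 test-volume
$ manila create CEPHFS 1 --name test-share
$ manila share-export-location-list test-share
Record the ID field from the output of each create command. The UUID in the suffix of the share export path is the value to record for the share.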
4.2. Verifying the Red Hat Ceph Storage cluster
When you configure an external Red Hat Ceph Storage cluster, you create pools and a client.openstack user to access those pools. After you deploy the overcloud, you can use the file that contains the credentials of the client.openstack user to list the contents of Red Hat OpenStack Platform (RHOSP) pools.
List the contents of the pools and confirm that the IDs of the Image service (glance) image, the Compute (nova) instance, the Block Storage (cinder) volume, and the Shared File Systems service (manila) file share exist on the Ceph Storage cluster.
Procedure
Log in to the undercloud as the stack user and source the stackrc credentials file:
$ source ~/stackrc
List the available servers to retrieve the IP addresses of nodes on the system:
$ openstack server list
+--------------------------------------+--------------+--------+------------------------+----------------+------------+
| ID                                   | Name         | Status | Networks               | Image          | Flavor     |
+--------------------------------------+--------------+--------+------------------------+----------------+------------+
| d5a621bd-d109-41ae-a381-a42414397802 | compute-0    | ACTIVE | ctlplane=192.168.24.31 | overcloud-full | compute    |
| 496ab196-d6cb-447d-a118-5bafc5166cf2 | controller-0 | ACTIVE | ctlplane=192.168.24.37 | overcloud-full | controller |
| c01e730d-62f2-426a-a964-b31448f250b3 | controller-2 | ACTIVE | ctlplane=192.168.24.55 | overcloud-full | controller |
| 36df59b3-66f3-452e-9aec-b7e7f7c54b86 | controller-1 | ACTIVE | ctlplane=192.168.24.39 | overcloud-full | controller |
| f8f00497-246d-4e40-8a6a-b5a60fa66483 | compute-1    | ACTIVE | ctlplane=192.168.24.10 | overcloud-full | compute    |
+--------------------------------------+--------------+--------+------------------------+----------------+------------+
Use SSH to log in to any Compute node:
$ ssh heat-admin@192.168.24.31
Switch to the root user:
[heat-admin@compute-0 ~]$ sudo su -
Confirm that the files /etc/ceph/ceph.conf and /etc/ceph/ceph.client.openstack.keyring exist:
[root@compute-0 ~]# ls -l /etc/ceph/ceph.conf
-rw-r--r--. 1 root root 1170 Sep 29 23:25 /etc/ceph/ceph.conf
[root@compute-0 ~]# ls -l /etc/ceph/ceph.client.openstack.keyring
-rw-------. 1 ceph ceph 253 Sep 29 23:25 /etc/ceph/ceph.client.openstack.keyring
Enter the following command to force the nova_compute container to use the rbd command to list the contents of the appropriate pool:
# podman exec nova_compute /usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack ls vms
The pool name must match the pool names of the images, VMs, volumes, and shares that you created when you configured the Ceph Storage cluster. The IDs of the image, Compute instance, volume, and share must match the IDs that you recorded in Gathering IDs.
Note: The example command is prefixed with podman exec nova_compute because /usr/bin/rbd, which is provided by the ceph-common package, is not installed on overcloud nodes by default. However, it is available in the nova_compute container. The command lists block device images. For more information about listing block device images, see Listing the block device images in the Ceph Storage Block Device Guide.
The following examples show how to confirm whether an ID for each service is present for each pool by using the IDs from Gathering IDs.
# podman exec nova_compute /usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack ls images | grep 4485d4c0-24c3-42ec-a158-4d3950fa020b
# podman exec nova_compute /usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack ls vms | grep 64bcb731-e7a4-4dd5-a807-ee26c669482f
# podman exec nova_compute /usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack ls volumes | grep aeac15e8-b67f-454f-9486-46b3d75daff4
To verify the existence of the Shared File Systems service share, you must log in to a Controller node:
# podman exec openstack-manila-share-podman-0 ceph -n client.manila fs subvolume ls cephfs | grep ec99db3c-0077-40b7-b09e-8a110e3f73c1
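The share check above runs on a Controller node. If you are still logged in to a Compute node, connect to a Controller first by using the same pattern as before, with a Controller IP address taken from the earlier openstack server list output (controller-0 in the example):
$ ssh heat-admin@192.168.24.37
[heat-admin@controller-0 ~]$ sudo su -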
4.3. Troubleshooting failed verification
If the verification procedures fail, verify that the Ceph key for the client.openstack user and the Red Hat Ceph Storage monitor IPs or hostnames can be used together to read, write, and delete from the Ceph Storage pools that you created for Red Hat OpenStack Platform (RHOSP).
Procedure
To reduce the amount of typing in this procedure, log in to a Compute node and create an alias for the rbd command:
$ alias rbd="podman exec nova_compute /usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack"
Confirm that you can write test data to the pool as a new object:
$ rbd create --size 1024 vms/foo
Confirm that you can see the test data:
$ rbd ls vms | grep foo
Delete the test data:
$ rbd rm vms/foo
If this procedure fails, contact your Ceph Storage administrator for assistance. If this procedure succeeds, but you cannot create Compute (nova) instances, Image service (glance) images, Block Storage (cinder) volumes, or Shared File Systems service (manila) shares, contact Red Hat Support.
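If the rbd commands in this procedure hang or return permission errors, one quick check, assuming you have administrative access to the external Ceph Storage cluster, is to confirm that the key the overcloud uses matches the key stored in the cluster and that the monitor addresses in ceph.conf are correct.
On the external Ceph Storage cluster, display the key and capabilities of the client.openstack user (requires admin credentials):
# ceph auth get client.openstack
On the Compute node, compare the key with the one that the overcloud uses, and review the monitor addresses (typically the mon_host setting):
# cat /etc/ceph/ceph.client.openstack.keyring
# grep mon_host /etc/ceph/ceph.conf
If the keys differ, or the monitor addresses are wrong or unreachable, correct the keyring or ceph.conf that is distributed to the overcloud nodes before you retest.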