Chapter 4. Verify external Ceph Storage cluster integration


After you deploy the overcloud, confirm that Red Hat OpenStack Platform (RHOSP) services can write to the Ceph Storage cluster.

Warning

RHOSP does not support the use of Ceph clone format v2 or later. Deleting images or volumes from a Ceph cluster that has Ceph clone format v2 enabled might cause unpredictable behavior and potential loss of data. Therefore, do not use either of the following methods that enable Ceph clone format v2:

  • Setting rbd default clone format = 2
  • Running ceph osd set-require-min-compat-client mimic
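
You can check whether either setting is in effect by querying the external cluster. The following is a minimal sketch, assuming admin access to a node on the external Ceph Storage cluster and a Ceph release that supports the ceph config get command; it is not part of the overcloud procedure:

```shell
# Run on a node with admin access to the external Ceph Storage cluster.
# The value returned here should not be 2:
ceph config get client rbd_default_clone_format

# The value returned here should be lower than "mimic":
ceph osd get-require-min-compat-client
```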

4.1. Gathering IDs

To verify that you integrated a Ceph Storage cluster, you must first create an image, a Compute instance, and a volume and gather their respective IDs.

Procedure

  1. Create an image with the Image service (glance).

    For more information about how to create an image, see Import an image in the Creating and Managing Images guide.

  2. Record the glance image ID for later use.
  3. Create a Compute (nova) instance.

    For more information about how to create an instance, see Launch an instance in the Creating and Managing Instances guide.

  4. Record the nova instance ID for later use.
  5. Create a Block Storage (cinder) volume.

    For more information about how to create a Block Storage volume, see Create a volume in the Storage Guide.

  6. Record the cinder volume ID for later use.
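
The steps above can be sketched as a single command sequence. The image file, instance name, flavor, and network below are placeholder examples, not values from this guide; the -f value -c id options print only the ID of each resource so that you can record it:

```shell
# Placeholder names and files; substitute values that exist in your environment.
IMAGE_ID=$(openstack image create cirros \
    --disk-format qcow2 --container-format bare \
    --file cirros-0.5.2-x86_64-disk.img -f value -c id)

INSTANCE_ID=$(openstack server create test-instance \
    --image "$IMAGE_ID" --flavor m1.small --network private \
    --wait -f value -c id)

VOLUME_ID=$(openstack volume create test-volume --size 1 -f value -c id)

# Keep the three IDs for the verification steps that follow.
echo "image:    $IMAGE_ID"
echo "instance: $INSTANCE_ID"
echo "volume:   $VOLUME_ID"
```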

4.2. Verifying the Ceph Storage cluster

When you configure an external Ceph Storage cluster, you create pools and a client.openstack user to access those pools. After you deploy the overcloud, you can use the file that contains the credentials of the client.openstack user to list the contents of Red Hat OpenStack Platform (RHOSP) pools.

List the contents of the pools and confirm that the IDs of the glance image, Compute instance, and cinder volume exist on the Ceph Storage cluster.

Procedure

  1. Source the undercloud credentials:

    [stack@undercloud-0 ~]$ source stackrc
  2. List the available servers to retrieve the IP addresses of nodes on the system:

    (undercloud) [stack@undercloud-0 ~]$ openstack server list
    
    +--------------------------------------+--------------+--------+------------------------+----------------+------------+
    | ID                                   | Name         | Status | Networks               | Image          | Flavor     |
    +--------------------------------------+--------------+--------+------------------------+----------------+------------+
    | d5a621bd-d109-41ae-a381-a42414397802 | compute-0    | ACTIVE | ctlplane=192.168.24.31 | overcloud-full | compute    |
    | 496ab196-d6cb-447d-a118-5bafc5166cf2 | controller-0 | ACTIVE | ctlplane=192.168.24.37 | overcloud-full | controller |
    | c01e730d-62f2-426a-a964-b31448f250b3 | controller-2 | ACTIVE | ctlplane=192.168.24.55 | overcloud-full | controller |
    | 36df59b3-66f3-452e-9aec-b7e7f7c54b86 | controller-1 | ACTIVE | ctlplane=192.168.24.39 | overcloud-full | controller |
    | f8f00497-246d-4e40-8a6a-b5a60fa66483 | compute-1    | ACTIVE | ctlplane=192.168.24.10 | overcloud-full | compute    |
    +--------------------------------------+--------------+--------+------------------------+----------------+------------+
  3. Use SSH to log in to any Compute node:

    (undercloud) [stack@undercloud-0 ~]$ ssh heat-admin@192.168.24.31
  4. Switch to the root user:

    [heat-admin@compute-0 ~]$ sudo su -
  5. Confirm that the files /etc/ceph/ceph.conf and /etc/ceph/ceph.client.openstack.keyring exist:

    [root@compute-0 ~]# ls -l /etc/ceph/ceph.conf
    
    -rw-r--r--. 1 root root 1170 Sep 29 23:25 /etc/ceph/ceph.conf
    [root@compute-0 ~]# ls -l /etc/ceph/ceph.client.openstack.keyring
    
    -rw-------. 1 ceph ceph 253 Sep 29 23:25 /etc/ceph/ceph.client.openstack.keyring
  6. Run the rbd command inside the nova_compute container to list the contents of the appropriate pool:

    # podman exec nova_compute /usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack ls vms

    The pool name must match the pool names of the images, VMs, and volumes that you created when you configured the Ceph Storage cluster. For more information, see Configuring the existing Ceph Storage cluster. The IDs of the image, Compute instance, and volume must match the IDs that you recorded in Section 4.1, “Gathering IDs”.

    Note

    The example command is prefixed with podman exec nova_compute because /usr/bin/rbd, which is provided by the ceph-common package, is not installed on overcloud nodes by default. However, it is available in the nova_compute container. The command lists block device images. For more information, see Listing the block device images in the Ceph Storage Block Device Guide.

    The following examples show how to confirm whether an ID for each service is present for each pool by using the IDs from Section 4.1, “Gathering IDs”.

    # podman exec nova_compute /usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack ls images | grep 4485d4c0-24c3-42ec-a158-4d3950fa020b
    
    # podman exec nova_compute /usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack ls vms | grep 64bcb731-e7a4-4dd5-a807-ee26c669482f
    
    # podman exec nova_compute /usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack ls volumes | grep aeac15e8-b67f-454f-9486-46b3d75daff4
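
The three checks can also be run as one loop. The following is a sketch that reuses the example IDs above; substitute the IDs that you recorded in Section 4.1:

```shell
# Example IDs from this chapter; replace them with your recorded IDs.
RBD="podman exec nova_compute /usr/bin/rbd --conf /etc/ceph/ceph.conf \
    --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack"

for entry in \
    images:4485d4c0-24c3-42ec-a158-4d3950fa020b \
    vms:64bcb731-e7a4-4dd5-a807-ee26c669482f \
    volumes:aeac15e8-b67f-454f-9486-46b3d75daff4
do
    pool=${entry%%:*}
    id=${entry#*:}
    # Cinder and nova prefix or suffix the ID in the RBD image name,
    # so match on the ID substring rather than the full name.
    if $RBD ls "$pool" | grep -q "$id"; then
        echo "$pool: found $id"
    else
        echo "$pool: MISSING $id"
    fi
done
```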

4.3. Troubleshooting failed verification

If the verification procedure fails, verify that the Ceph key for the client.openstack user and the Ceph Storage monitor IPs or hostnames can be used together to read from, write to, and delete from the Ceph Storage pools that you created for Red Hat OpenStack Platform (RHOSP).

Procedure

  1. To shorten the amount of typing you must do in this procedure, log in to a Compute node and create an alias for the rbd command:

    # alias rbd="podman exec nova_compute /usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack"
  2. Confirm that you can write test data to the pool as a new object:

    # rbd create --size 1024 vms/foo
  3. Confirm that you can see the test data:

    # rbd ls vms | grep foo
  4. Delete the test data:

    # rbd rm vms/foo
Note

If this procedure fails, contact your Ceph Storage administrator for assistance. If this procedure succeeds, but you cannot create Compute instances, glance images, or cinder volumes, contact Red Hat Support.
