Chapter 4. Verifying external Ceph Storage cluster integration

After you deploy the overcloud, confirm that Red Hat OpenStack Platform (RHOSP) services can write to the Red Hat Ceph Storage cluster.

4.1. Gathering IDs

To verify that you integrated a Red Hat Ceph Storage cluster, you must first create an image, a Compute instance, a Block Storage volume, and a file share, and then gather their respective IDs. Example commands for these steps are sketched after the procedure.

Procedure

  1. Create an image by using the Image service (glance). For more information about how to create an image, see Importing an image in Creating and managing images.
  2. Record the image ID for later use.
  3. Create a Compute (nova) instance. For more information about how to create an instance, see Creating an instance in the Creating and managing instances guide.
  4. Record the instance ID for later use.
  5. Create a Block Storage (cinder) volume. For more information about how to create a Block Storage volume, see Creating Block Storage volumes in the Configuring persistent storage guide.
  6. Record the volume ID for later use.
  7. Create a file share by using the Shared File Systems service (manila).
  8. List the export path of the share and record the UUID in the suffix for later use.

For more information about how to create file shares and list the export path of a share, see Performing operations with the Shared File Systems service (manila) in the Configuring persistent storage guide.
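
As an example, you can create the four resources and display their IDs from the undercloud with commands similar to the following sketch, after sourcing your overcloud credentials file (overcloudrc by default). The names test-image, test-instance, test-volume, and test-share, the image file, the flavor, and the network are placeholders, and your environment might require additional options such as a share type:

    $ source ~/overcloudrc
    $ openstack image create --disk-format qcow2 --container-format bare --file cirros.qcow2 test-image
    $ openstack image show test-image -f value -c id
    $ openstack server create --image test-image --flavor m1.small --network private test-instance
    $ openstack server show test-instance -f value -c id
    $ openstack volume create --size 1 test-volume
    $ openstack volume show test-volume -f value -c id
    $ manila create cephfs 1 --name test-share
    $ manila share-export-location-list test-share

Record each ID, and the UUID in the suffix of the share export location, for use in the verification steps that follow.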

4.2. Verifying the Red Hat Ceph Storage cluster

When you configure an external Red Hat Ceph Storage cluster, you create pools and a client.openstack user to access those pools. After you deploy the overcloud, you can use the file that contains the credentials of the client.openstack user to list the contents of Red Hat OpenStack Platform (RHOSP) pools.
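
The credentials file resides on the overcloud nodes; step 4 of the following procedure shows its default location. Its contents look similar to the following representative example, in which the key is a placeholder and the capabilities depend on how the client.openstack user was created on the external cluster:

    [tripleo-admin@compute-0 ~]$ sudo cat /var/lib/tripleo-config/ceph/ceph.client.openstack.keyring

    [client.openstack]
        key = <base64-encoded-key>
        caps mgr = "allow *"
        caps mon = "profile rbd"
        caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=images"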

List the contents of the pools and confirm that the IDs of the Image service (glance) image, the Compute (nova) instance, the Block Storage (cinder) volume, and the Shared File Systems service (manila) file share exist on the Ceph Storage cluster.

Procedure

  1. Log in to the undercloud as the stack user and source the stackrc credentials file:

    $ source ~/stackrc
  2. List the available servers to retrieve the IP addresses of nodes on the system:

    $ metalsmith list
    
    +--------------------------------------+--------------+--------+------------------------+----------------+------------+
    | ID                                   | Name         | Status | Networks               | Image          | Flavor     |
    +--------------------------------------+--------------+--------+------------------------+----------------+------------+
    | d5a621bd-d109-41ae-a381-a42414397802 | compute-0    | ACTIVE | ctlplane=192.168.24.31 | overcloud-full | compute    |
    | 496ab196-d6cb-447d-a118-5bafc5166cf2 | controller-0 | ACTIVE | ctlplane=192.168.24.37 | overcloud-full | controller |
    | c01e730d-62f2-426a-a964-b31448f250b3 | controller-2 | ACTIVE | ctlplane=192.168.24.55 | overcloud-full | controller |
    | 36df59b3-66f3-452e-9aec-b7e7f7c54b86 | controller-1 | ACTIVE | ctlplane=192.168.24.39 | overcloud-full | controller |
    | f8f00497-246d-4e40-8a6a-b5a60fa66483 | compute-1    | ACTIVE | ctlplane=192.168.24.10 | overcloud-full | compute    |
    +--------------------------------------+--------------+--------+------------------------+----------------+------------+
  3. Use SSH to log in to any Compute node:

    $ ssh tripleo-admin@192.168.24.31
  4. Confirm that the files ceph.conf and ceph.client.openstack.keyring exist in the CephConfigPath provided by director. This path is /var/lib/tripleo-config/ceph by default, but an override might exist.

    [tripleo-admin@compute-0 ~]$ sudo ls -l /var/lib/tripleo-config/ceph/ceph.conf
    
    -rw-r--r--. 1 root root 1170 Sep 29 23:25 /var/lib/tripleo-config/ceph/ceph.conf
    
    [tripleo-admin@compute-0 ~]$ sudo ls -l /var/lib/tripleo-config/ceph/ceph.client.openstack.keyring
    
    -rw-------. 1 ceph ceph 253 Sep 29 23:25 /var/lib/tripleo-config/ceph/ceph.client.openstack.keyring
  5. Run the rbd command inside the nova_compute container to list the contents of the appropriate pool:

    $ sudo podman exec nova_compute /usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack ls vms

    The pool name must match one of the image, VM, volume, and share pool names that you created when you configured the Ceph Storage cluster. The IDs of the image, Compute instance, volume, and share must match the IDs that you recorded in Gathering IDs.

    Note

    The example command is prefixed with podman exec nova_compute because /usr/bin/rbd, which is provided by the ceph-common package, is not installed on overcloud nodes by default. However, it is available in the nova_compute container. The command lists block device images. For more information about listing block device images, see Listing the block device images in the Red Hat Ceph Storage Block Device Guide.

    The following examples use the IDs from Gathering IDs to confirm that the ID for each service is present in its pool. A consolidated version of these checks appears after this procedure.

    $ sudo podman exec nova_compute /usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack ls images | grep 4485d4c0-24c3-42ec-a158-4d3950fa020b
    $ sudo podman exec nova_compute /usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack ls vms | grep 64bcb731-e7a4-4dd5-a807-ee26c669482f
    $ sudo podman exec nova_compute /usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack ls volumes | grep aeac15e8-b67f-454f-9486-46b3d75daff4
  6. To verify the existence of the Shared File Systems service share, log in to a Controller node and enter the following command:

    $ sudo podman exec openstack-manila-share-podman-0 ceph -n client.manila fs subvolume ls cephfs | grep ec99db3c-0077-40b7-b09e-8a110e3f73c1
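
If you prefer to run the rbd checks from step 5 in a single pass, you can wrap the command in a shell variable and test each ID against its pool, as in the following sketch. The IDs are the example values shown above and the pool names are the defaults (images, vms, volumes); replace them with the IDs that you recorded and the pool names that you configured:

    $ IMAGE_ID=4485d4c0-24c3-42ec-a158-4d3950fa020b
    $ INSTANCE_ID=64bcb731-e7a4-4dd5-a807-ee26c669482f
    $ VOLUME_ID=aeac15e8-b67f-454f-9486-46b3d75daff4
    $ RBD="sudo podman exec nova_compute /usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack"
    $ $RBD ls images | grep -q $IMAGE_ID && echo "image: found" || echo "image: MISSING"
    $ $RBD ls vms | grep -q $INSTANCE_ID && echo "instance: found" || echo "instance: MISSING"
    $ $RBD ls volumes | grep -q $VOLUME_ID && echo "volume: found" || echo "volume: MISSING"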

4.3. Troubleshooting failed verification

If the verification procedures fail, verify that the Ceph key for the client.openstack user and the Red Hat Ceph Storage monitor IPs or hostnames can be used together to read from, write to, and delete from the Ceph Storage pools that you created for Red Hat OpenStack Platform (RHOSP).

Procedure

  1. To shorten the amount of typing you must do in this procedure, log in to a Compute node and create an alias for the rbd command:

    $ alias rbd="sudo podman exec nova_compute /usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack"
  2. Confirm that you can write test data to the pool as a new object:

    $ rbd create --size 1024 vms/foo
  3. Confirm that you can see the test data:

    $ rbd ls vms | grep foo
  4. Delete the test data:

    $ rbd rm vms/foo
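
To run the same write, list, and delete test against the other RHOSP pools, not only vms, you can loop over the pool names by reusing the rbd alias from step 1. The following sketch assumes the default pool names images, vms, and volumes, and uses an arbitrary temporary object name:

    $ for pool in images vms volumes; do
          rbd create --size 1 ${pool}/rhosp-write-test &&
          rbd ls ${pool} | grep -q rhosp-write-test &&
          rbd rm ${pool}/rhosp-write-test &&
          echo "${pool}: OK" || echo "${pool}: FAILED"
      done
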
Note

If this procedure fails, contact your Ceph Storage administrator for assistance. If this procedure succeeds, but you cannot create Compute (nova) instances, Image service (glance) images, Block Storage (cinder) volumes, or Shared File Systems service (manila) shares, contact Red Hat Support.
