Integrating an Overcloud with an Existing Red Hat Ceph Storage Cluster
Configuring an overcloud to use standalone Red Hat Ceph Storage
Abstract
Providing feedback on Red Hat documentation
We appreciate your input on our documentation. Tell us how we can make it better.
Providing documentation feedback in Jira
Use the Create Issue form to provide feedback on the documentation. The Jira issue will be created in the Red Hat OpenStack Platform Jira project, where you can track the progress of your feedback.
- Ensure that you are logged in to Jira. If you do not have a Jira account, create an account to submit feedback.
- Click the following link to open the Create Issue page: Create Issue
- Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form.
- Click Create.
Chapter 1. Integrating an overcloud with Ceph Storage
Red Hat OpenStack Platform director creates a cloud environment called the overcloud. You can use director to configure extra features for an overcloud, such as integration with Red Hat Ceph Storage. You can integrate your overcloud with Ceph Storage clusters created with director or with existing Ceph Storage clusters.
For more information about Red Hat Ceph Storage, see the Red Hat Ceph Storage Architecture Guide.
1.1. Red Hat Ceph Storage compatibility
RHOSP 16.2 supports connection to external Red Hat Ceph Storage 4 and Red Hat Ceph Storage 5 clusters.
1.2. Deploying the Shared File Systems service with external CephFS
You can deploy the Shared File Systems service (manila) with CephFS by using Red Hat OpenStack Platform (RHOSP) director. You can use the Shared File Systems service with the NFS protocol or the native CephFS protocol.
You cannot use the Shared File Systems service with the CephFS native driver to serve shares to Red Hat OpenShift Container Platform through Manila CSI. Red Hat does not support this type of deployment. For more information, contact Red Hat Support.
The Shared File Systems service with CephFS through NFS fully supports serving shares to Red Hat OpenShift Container Platform through Manila CSI. This solution is not intended for large scale deployments. For more information about CSI workload recommendations, see https://access.redhat.com/articles/6667651.
To use native CephFS shared file systems, clients require access to the Ceph public network. When you integrate an overcloud with an existing Ceph Storage cluster, director does not create an isolated storage network to designate as the Ceph public network. This network is assumed to already exist. Do not provide direct access to the Ceph public network; instead, allow tenants to create a router to connect to the Ceph public network.
NFS-Ganesha gateway
When you use CephFS through the NFS protocol, director deploys the NFS-Ganesha gateway on Controller nodes managed by Pacemaker (PCS). PCS manages cluster availability by using an active-passive configuration.
The NFS-Ganesha gateway is supported with Red Hat Ceph Storage 4.x (Ceph package 14.x) and Red Hat Ceph Storage 5.x (Ceph package 16.x). For information about how to determine the Ceph Storage release installed on your system, see Red Hat Ceph Storage releases and corresponding Ceph package versions.
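The version pairing above can be expressed as a small lookup. This is a sketch with an assumed version value; on a real node, the major version comes from the output of `ceph --version`.

```shell
# Sketch: map the installed Ceph package major version to its Red Hat Ceph
# Storage release, per the pairing above (14.x -> RHCS 4.x, 16.x -> RHCS 5.x).
# The value below is an assumed example, not read from a live system.
ceph_major=16
case "$ceph_major" in
  14) rhcs_release="Red Hat Ceph Storage 4.x" ;;
  16) rhcs_release="Red Hat Ceph Storage 5.x" ;;
  *)  rhcs_release="unknown" ;;
esac
echo "$rhcs_release"
```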
You must install the latest version of the ceph-ansible package on the undercloud, as described in Installing the ceph-ansible package.
Prerequisites
Before you configure the Shared File Systems service with an external Ceph Storage cluster, complete the following prerequisites:
Verify that your external Ceph Storage cluster has an active Metadata Server (MDS):
$ ceph -s
The external Ceph Storage cluster must have a CephFS file system that is backed by CephFS data and metadata pools.
Verify the pools in the CephFS file system:
$ ceph fs ls
Note the names of these pools to configure the director parameters, ManilaCephFSDataPoolName and ManilaCephFSMetadataPoolName. For more information about this configuration, see Creating a custom environment file.
The external Ceph Storage cluster must have a cephx client name and key for the Shared File Systems service. Verify the keyring:
$ ceph auth get client.<client name>
Replace <client name> with your cephx client name.
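If you save the `ceph auth get` output to a file, the key field can be pulled out with a short filter. This is a sketch; the keyring content below is a sample using the example key from this guide, and the path is an assumption.

```shell
# Sketch: extract the key field from saved `ceph auth get` output.
# The keyring content is a sample; substitute the output for your cephx client.
cat > /tmp/client.keyring <<'EOF'
[client.manila]
	key = AQDQ991cAAAAABAA0aXFrTnjH9aO39P0iVvYyg==
EOF
# In keyring files the key line has the form: key = <value>
client_key=$(awk '$1 == "key" {print $3}' /tmp/client.keyring)
echo "$client_key"
```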
1.3. Configuring Ceph Object Store to use external Ceph Object Gateway
Red Hat OpenStack Platform (RHOSP) director supports configuring an external Ceph Object Gateway (RGW) as an Object Store service. To authenticate with the external RGW service, you must configure RGW to verify users and their roles in the Identity service (keystone).
For more information about how to configure an external Ceph Object Gateway, see Configuring the Ceph Object Gateway to use Keystone authentication in the Using Keystone with the Ceph Object Gateway Guide.
Chapter 2. Preparing overcloud nodes
The overcloud deployment that is used to demonstrate how to integrate with a Red Hat Ceph Storage cluster consists of Controller nodes with high availability and Compute nodes to host workloads. The Red Hat Ceph Storage cluster has its own nodes that you manage independently from the overcloud by using the Ceph management tools, not through director. For more information about Red Hat Ceph Storage, see Red Hat Ceph Storage.
2.1. Verifying available Red Hat Ceph Storage packages
To help avoid overcloud deployment failures, verify that the required packages exist on your servers.
2.1.1. Verifying the ceph-ansible package version
The undercloud contains Ansible-based validations that you can run to identify potential problems before you deploy the overcloud. These validations can help you avoid overcloud deployment failures by identifying common problems before they happen.
Procedure
Verify that the ceph-ansible package version you want is installed:
$ ansible-playbook -i /usr/bin/tripleo-ansible-inventory /usr/share/ansible/validation-playbooks/ceph-ansible-installed.yaml
2.1.2. Verifying packages for pre-provisioned nodes
Red Hat Ceph Storage (RHCS) can service only overcloud nodes that have a certain set of packages. When you use pre-provisioned nodes, you can verify the presence of these packages.
For more information about pre-provisioned nodes, see Configuring a basic overcloud with pre-provisioned nodes.
Procedure
Verify that the pre-provisioned nodes contain the required packages:
$ ansible-playbook -i /usr/bin/tripleo-ansible-inventory /usr/share/ansible/validation-playbooks/ceph-dependencies-installed.yaml
2.2. Configuring the existing Red Hat Ceph Storage cluster
To configure your Red Hat Ceph Storage cluster, you create object storage daemon (OSD) pools, define capabilities, and create keys and IDs directly on the Ceph Storage cluster. You can execute commands from any machine that can reach the Ceph Storage cluster and has the Ceph command line client installed.
Procedure
Create the following pools in your Ceph Storage cluster, relevant to your environment:
Storage for OpenStack Block Storage (cinder):
[root@ceph ~]# ceph osd pool create volumes <pgnum>
Storage for OpenStack Image Storage (glance):
[root@ceph ~]# ceph osd pool create images <pgnum>
Storage for instances:
[root@ceph ~]# ceph osd pool create vms <pgnum>
Storage for OpenStack Block Storage Backup (cinder-backup):
[root@ceph ~]# ceph osd pool create backups <pgnum>
Optional: Storage for OpenStack Telemetry Metrics (gnocchi):
[root@ceph ~]# ceph osd pool create metrics <pgnum>
Use this storage option only if metrics are enabled through OpenStack.
If your overcloud deploys the Shared File Systems service (manila) with Red Hat Ceph 4 (Ceph package 14) or earlier, create CephFS data and metadata pools:
[root@ceph ~]# ceph osd pool create manila_data <pgnum>
[root@ceph ~]# ceph osd pool create manila_metadata <pgnum>
Replace <pgnum> with the number of placement groups. Red Hat recommends approximately 100 placement groups per OSD in the cluster, divided by the number of replicas (osd pool default size). For example, if there are 10 OSDs, and the cluster has osd pool default size set to 3, use 333 placement groups. You can also use the Ceph Placement Groups (PGs) per Pool Calculator to determine a suitable value.
If your overcloud deploys the Shared File Systems service (manila) with Red Hat Ceph 5 (Ceph package 16) or later, you do not need to create data and metadata pools for CephFS. Instead, you can create a filesystem volume. For more information, see Management of MDS service using the Ceph Orchestrator in the Red Hat Ceph Storage Operations Guide.
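The sizing guideline above can be sketched as arithmetic. The OSD and replica counts below are assumed example values matching the text; use the Ceph PGs per Pool Calculator for production sizing.

```shell
# Sketch: estimate <pgnum> from the guideline of ~100 placement groups per OSD,
# divided by the replica count (osd pool default size). Assumed example values.
osd_count=10
replica_count=3
pgnum=$(( osd_count * 100 / replica_count ))
echo "$pgnum"
```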
Create a client.openstack user in your Ceph cluster with the following capabilities:
- cap_mgr: allow *
- cap_mon: profile rbd
- cap_osd: profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups
[root@ceph ~]# ceph auth add client.openstack mgr 'allow *' mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups'
Note the Ceph client key created for the client.openstack user:
[root@ceph ~]# ceph auth get client.openstack
[client.openstack]
    key = AQC+vYNXgDAgAhAAc8UoYt+OTz5uhV7ItLdwUw==
    caps mgr = "allow *"
    caps mon = "profile rbd"
    caps osd = "profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups"
The key value in the example, AQC+vYNXgDAgAhAAc8UoYt+OTz5uhV7ItLdwUw==, is your Ceph client key.
If your overcloud deploys the Shared File Systems service with CephFS, create the client.manila user in your Ceph Storage cluster with the following capabilities:
- cap_mds: allow *
- cap_mgr: allow *
- cap_mon: allow r, allow command "auth del", allow command "auth caps", allow command "auth get", allow command "auth get-or-create"
- cap_osd: allow rw
[root@ceph ~]# ceph auth add client.manila mon 'allow r, allow command "auth del", allow command "auth caps", allow command "auth get", allow command "auth get-or-create"' osd 'allow rw' mds 'allow *' mgr 'allow *'
Note the manila client name and the key value to use in overcloud deployment templates:
[root@ceph ~]# ceph auth get-key client.manila
<AQDQ991cAAAAABAA0aXFrTnjH9aO39P0iVvYyg==>
Note the file system ID of your Ceph Storage cluster. This value is specified in the fsid field, under the [global] section of the configuration file for your cluster:
[global]
fsid = <4b5c8c0a-ff60-454b-a1b4-9747aa737d19>
...
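The fsid can be read from the configuration file with a short filter. This is a sketch: the file content is a sample matching the example above, and the path is an assumption.

```shell
# Sketch: read the fsid from the [global] section of a cluster configuration
# file. The sample content mirrors the example above; substitute your own file.
cat > /tmp/sample-ceph.conf <<'EOF'
[global]
fsid = 4b5c8c0a-ff60-454b-a1b4-9747aa737d19
EOF
# The fsid line has the form: fsid = <value>
fsid=$(awk -F' = ' '$1 == "fsid" {print $2}' /tmp/sample-ceph.conf)
echo "$fsid"
```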
Use the Ceph client key and file system ID, and the Shared File Systems service client IDs and key when you create the custom environment file.
Additional resources
- Creating a custom environment file
- Red Hat Ceph Storage releases and corresponding Ceph package versions
- Ceph configuration in the Red Hat Ceph Storage Configuration Guide.
Chapter 3. Integrating with an existing Ceph Storage cluster
To integrate Red Hat OpenStack Platform (RHOSP) with an existing Red Hat Ceph Storage cluster, you must install the ceph-ansible package. After that, you can create custom environment files to override and provide values for configuration options within OpenStack components.
3.1. Installing the ceph-ansible package
The Red Hat OpenStack Platform director uses ceph-ansible to integrate with an existing Ceph Storage cluster, but ceph-ansible is not installed by default on the undercloud.
Procedure
Enter the following command to install the ceph-ansible package on the undercloud:
$ sudo dnf install -y ceph-ansible
3.2. Creating a custom environment file
Director supplies parameters to ceph-ansible to integrate with an external Red Hat Ceph Storage cluster through the environment file:
/usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible-external.yaml
If you deploy the Shared File Systems service (manila) with external CephFS, separate environment files supply additional parameters:
- For native CephFS, the environment file is /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsnative-config.yaml.
- For CephFS through NFS, the environment file is /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml.
To configure integration of an existing Ceph Storage cluster with the overcloud, you must supply the details of your Ceph Storage cluster to director by using a custom environment file. Director invokes these environment files during deployment.
Procedure
Create a custom environment file:
/home/stack/templates/ceph-config.yaml
Add a parameter_defaults: section to the file:
parameter_defaults:
Use parameter_defaults to set all of the parameters that you want to override in /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible-external.yaml. You must set the following parameters at a minimum:
- CephClientKey: The Ceph client key for the client.openstack user in your Ceph Storage cluster. This is the value of key that you retrieved in Configuring the existing Ceph Storage cluster. For example, AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ==.
- CephClusterFSID: The file system ID of your Ceph Storage cluster. This is the value of fsid in your Ceph Storage cluster configuration file, which you retrieved in Configuring the existing Ceph Storage cluster. For example, 4b5c8c0a-ff60-454b-a1b4-9747aa737d19.
- CephExternalMonHost: A comma-delimited list of the IPs of all MON hosts in your Ceph Storage cluster, for example, 172.16.1.7, 172.16.1.8.
For example:
parameter_defaults:
  CephClientKey: <AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ==>
  CephClusterFSID: <4b5c8c0a-ff60-454b-a1b4-9747aa737d19>
  CephExternalMonHost: <172.16.1.7, 172.16.1.8>
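The minimal file can be generated from the shell. This is a sketch: the output path and the three values are the examples from this guide, not values from a live cluster.

```shell
# Sketch: write the minimal custom environment file with the example values
# from this guide. /tmp is used here only for illustration; the guide places
# the file at /home/stack/templates/ceph-config.yaml.
cat > /tmp/ceph-config.yaml <<'EOF'
parameter_defaults:
  CephClientKey: AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ==
  CephClusterFSID: 4b5c8c0a-ff60-454b-a1b4-9747aa737d19
  CephExternalMonHost: 172.16.1.7,172.16.1.8
EOF
# Quick sanity check: all three required Ceph parameters are present.
grep -c '^  Ceph' /tmp/ceph-config.yaml
```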
Optional: You can override the Red Hat OpenStack Platform (RHOSP) client username and the following default pool names to match your Ceph Storage cluster:
- CephClientUserName: <openstack>
- NovaRbdPoolName: <vms>
- CinderRbdPoolName: <volumes>
- GlanceRbdPoolName: <images>
- CinderBackupRbdPoolName: <backups>
- GnocchiRbdPoolName: <metrics>
Optional: If you are deploying the Shared File Systems service with CephFS, you can override the following default data and metadata pool names:
ManilaCephFSDataPoolName: <manila_data>
ManilaCephFSMetadataPoolName: <manila_metadata>
Note: Ensure that these names match the names of the pools you created.
Set the client key that you created for the Shared File Systems service. You can override the default Ceph client username for that key:
ManilaCephFSCephFSAuthId: <manila>
CephManilaClientKey: <AQDQ991cAAAAABAA0aXFrTnjH9aO39P0iVvYyg==>
Note: The default client username ManilaCephFSCephFSAuthId is manila, unless you override it. CephManilaClientKey is always required.
After you create the custom environment file, you must include it when you deploy the overcloud.
Additional resources
3.3. Ceph containers for Red Hat OpenStack Platform with Ceph Storage
To configure Red Hat OpenStack Platform (RHOSP) to use Red Hat Ceph Storage with NFS Ganesha, you must have a Ceph container.
To be compatible with Red Hat Enterprise Linux 8, RHOSP 16 requires Red Hat Ceph Storage 4 or 5 (Ceph package 14.x or Ceph package 16.x). The Ceph Storage 4 and 5 containers are hosted at registry.redhat.io, a registry that requires authentication. For more information, see Container image preparation parameters.
3.4. Deploying the overcloud
Deploy the overcloud with the environment file that you created.
Procedure
The creation of the overcloud requires additional arguments for the openstack overcloud deploy command:
$ openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible-external.yaml \
  -e /home/stack/templates/ceph-config.yaml \
  --ntp-server pool.ntp.org \
  ...
This example command uses the following options:
- --templates - Creates the overcloud from the default heat template collection, /usr/share/openstack-tripleo-heat-templates/.
- -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible-external.yaml - Sets the director to integrate an existing Ceph cluster to the overcloud.
- -e /home/stack/templates/ceph-config.yaml - Adds the custom environment file to override the defaults set by ceph-ansible-external.yaml. In this case, it is the custom environment file that you created in Creating a custom environment file.
- --ntp-server pool.ntp.org - Sets the NTP server.
3.4.1. Adding environment files for the Shared File Systems service with CephFS
If you deploy an overcloud that uses the Shared File Systems service (manila) with CephFS, you must add additional environment files.
Procedure
Create and add additional environment files:
- If you deploy an overcloud that uses the native CephFS back-end driver, add /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsnative-config.yaml.
- If you deploy an overcloud that uses CephFS through NFS, add /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml.
Red Hat recommends that you deploy the CephFS-through-NFS driver with an isolated StorageNFS network where shares are exported. You must deploy the isolated network to overcloud Controller nodes. To enable this deployment, director includes the following file and role:
- An example custom network configuration file that includes the StorageNFS network (/usr/share/openstack-tripleo-heat-templates/network_data_ganesha.yaml). Review and customize this file as necessary.
- A ControllerStorageNFS role.
Modify the openstack overcloud deploy command depending on the CephFS back end that you use.
For native CephFS:
$ openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible-external.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsnative-config.yaml \
  -e /home/stack/templates/ceph-config.yaml \
  ...
For CephFS through NFS:
$ openstack overcloud deploy --templates \
  -n /usr/share/openstack-tripleo-heat-templates/network_data_ganesha.yaml \
  -r /home/stack/custom_roles.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible-external.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml \
  -e /home/stack/templates/ceph-config.yaml \
  ...
Note: The custom ceph-config.yaml environment file overrides parameters in the ceph-ansible-external.yaml file and either the manila-cephfsnative-config.yaml file or the manila-cephfsganesha-config.yaml file. Therefore, include the custom ceph-config.yaml environment file in the deployment command after ceph-ansible-external.yaml and either manila-cephfsnative-config.yaml or manila-cephfsganesha-config.yaml.
Example environment file:
parameter_defaults:
  CephClusterFSID: <cluster_ID>
  CephExternalMonHost: <IP_address>
  CephClientKey: <client_key>
Replace <cluster_ID>, <IP_address>, and <client_key> with values that are suitable for your environment.
Additional resources
- For more information about generating a custom roles file, see Deploying the Shared File Systems service with CephFS through NFS.
3.4.2. Adding an additional environment file for external Ceph Object Gateway (RGW) for Object storage
If you deploy an overcloud that uses an already existing RGW service for Object storage, you must add an additional environment file.
Procedure
Add the following parameter_defaults to a custom environment file, for example, swift-external-params.yaml, and adjust the values to suit your deployment:
parameter_defaults:
  ExternalSwiftPublicUrl: 'http://<Public_RGW_endpoint_or_loadbalancer>:8080/swift/v1/AUTH_%(project_id)s'
  ExternalSwiftInternalUrl: 'http://<Internal_RGW_endpoint>:8080/swift/v1/AUTH_%(project_id)s'
  ExternalSwiftAdminUrl: 'http://<Admin_RGW_endpoint>:8080/swift/v1/AUTH_%(project_id)s'
  ExternalSwiftUserTenant: 'service'
  SwiftPassword: 'choose_a_random_password'
Note: The example code snippet contains parameter values that might differ from values that you use in your environment:
- The default port where the remote RGW instance listens is 8080. The port might be different depending on how the external RGW is configured.
- The swift user created in the overcloud uses the password defined by the SwiftPassword parameter. You must configure the external RGW instance to use the same password to authenticate with the Identity service by using the rgw_keystone_admin_password setting.
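The endpoint URLs above follow one pattern, which can be assembled from the RGW host and port. This is a sketch: the hostname is an assumption, and 8080 is the default RGW listen port noted above.

```shell
# Sketch: assemble the external object-store endpoint base URL. The host is a
# placeholder assumption; 8080 is the default RGW listen port.
rgw_host=rgw.example.com
rgw_port=8080
swift_public_url="http://${rgw_host}:${rgw_port}/swift/v1"
echo "$swift_public_url"
```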
Add the following code to the Ceph config file to configure RGW to use the Identity service. Replace the variable values to suit your environment:
rgw_keystone_api_version = 3
rgw_keystone_url = http://<public_keystone_endpoint>:5000/
rgw_keystone_accepted_roles = 'member, Member, admin'
rgw_keystone_accepted_admin_roles = ResellerAdmin, swiftoperator
rgw_keystone_admin_domain = default
rgw_keystone_admin_project = service
rgw_keystone_admin_user = swift
rgw_keystone_admin_password = <password_as_defined_by_the_SwiftPassword_parameter>
rgw_keystone_implicit_tenants = 'true'
rgw_s3_auth_use_keystone = 'true'
rgw_swift_account_in_url = 'true'
Note: Director creates the following roles and users in the Identity service by default:
- rgw_keystone_accepted_admin_roles: ResellerAdmin, swiftoperator
- rgw_keystone_admin_domain: default
- rgw_keystone_admin_project: service
- rgw_keystone_admin_user: swift
Deploy the overcloud with the additional environment files and any other environment files that are relevant to your deployment:
$ openstack overcloud deploy --templates \
  -e <your_environment_files> \
  -e /usr/share/openstack-tripleo-heat-templates/environments/swift-external.yaml \
  -e swift-external-params.yaml
Chapter 4. Verifying external Red Hat Ceph Storage cluster integration
After you deploy the overcloud, confirm that Red Hat OpenStack Platform (RHOSP) services can write to the Red Hat Ceph Storage cluster.
RHOSP does not support the use of Ceph clone format v2 or later. Deleting images or volumes from a Ceph Storage cluster that has Ceph clone format v2 enabled might cause unpredictable behavior and potential loss of data. Therefore, do not use either of the following methods that enable Ceph clone format v2:
- Setting rbd default clone format = 2
- Running ceph osd set-require-min-compat-client mimic
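A configuration file can be scanned for the first of the two settings above before deployment. This is a sketch: the sample file and its path are assumptions, and the sample deliberately omits the setting.

```shell
# Sketch: scan a Ceph configuration file for the unsupported clone-format
# setting described above. The sample file is an assumption and omits it.
cat > /tmp/check-ceph.conf <<'EOF'
[global]
fsid = 4b5c8c0a-ff60-454b-a1b4-9747aa737d19
EOF
if grep -q 'rbd default clone format[[:space:]]*=[[:space:]]*2' /tmp/check-ceph.conf; then
  result="clone format v2 set: unsupported with RHOSP"
else
  result="clone format v2 not set"
fi
echo "$result"
```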
4.1. Gathering IDs
To verify that you integrated a Red Hat Ceph Storage cluster, you must first create an image, a Compute instance, a Block Storage volume, and a file share and gather their respective IDs.
Procedure
- Create an image with the Image service (glance). For more information about how to create an image, see Import an image in the Creating and Managing Images guide.
- Record the image ID for later use.
- Create a Compute (nova) instance. For more information about how to create an instance, see Creating an instance in the Creating and Managing Instances guide.
- Record the instance ID for later use.
- Create a Block Storage (cinder) volume. For more information about how to create a Block Storage volume, see Create a volume in the Storage Guide.
- Record the volume ID for later use.
- Create a file share by using the Shared File Systems service (manila). For more information about how to create a file share, see Creating a share in the Storage Guide.
- List the export path of the share and record the UUID in the suffix for later use. For more information about how to list the export path of the share, see Listing shares and exporting information in the Storage Guide.
4.2. Verifying the Red Hat Ceph Storage cluster
When you configure an external Red Hat Ceph Storage cluster, you create pools and a client.openstack user to access those pools. After you deploy the overcloud, you can use the file that contains the credentials of the client.openstack user to list the contents of Red Hat OpenStack Platform (RHOSP) pools.
List the contents of the pools and confirm that the IDs of the Image service (glance) image, the Compute (nova) instance, the Block Storage (cinder) volume, and the Shared File Systems service (manila) file share exist on the Ceph Storage cluster.
Procedure
Log in to the undercloud as the stack user and source the stackrc credentials file:
$ source ~/stackrc
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Use SSH to log in to any Compute node:
$ ssh heat-admin@192.168.24.31
Switch to the root user:
[heat-admin@compute-0 ~]$ sudo su -
Confirm that the files /etc/ceph/ceph.conf and /etc/ceph/ceph.client.openstack.keyring exist:
[root@compute-0 ~]# ls -l /etc/ceph/ceph.conf /etc/ceph/ceph.client.openstack.keyring
Enter the following command to force the nova_compute container to use the rbd command to list the contents of the appropriate pool:
# podman exec nova_compute /usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack ls vms
The pool name must match the pool names of the images, VMs, volumes, and shares that you created when you configured the Ceph Storage cluster. The IDs of the image, Compute instance, volume, and share must match the IDs that you recorded in Gathering IDs.
Note: The example command is prefixed with podman exec nova_compute because /usr/bin/rbd, which is provided by the ceph-common package, is not installed on overcloud nodes by default. However, it is available in the nova_compute container. The command lists block device images. For more information about listing block device images, see Listing the block device images in the Ceph Storage Block Device Guide.
The following examples show how to confirm whether an ID for each service is present for each pool by using the IDs from Gathering IDs.
# podman exec nova_compute /usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack ls images | grep 4485d4c0-24c3-42ec-a158-4d3950fa020b
# podman exec nova_compute /usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack ls vms | grep 64bcb731-e7a4-4dd5-a807-ee26c669482f
# podman exec nova_compute /usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack ls volumes | grep aeac15e8-b67f-454f-9486-46b3d75daff4
To verify the existence of the Shared File Systems service share, you must log in to a Controller node:
# podman exec openstack-manila-share-podman-0 ceph -n client.manila fs subvolume ls cephfs | grep ec99db3c-0077-40b7-b09e-8a110e3f73c1
4.3. Troubleshooting failed verification
If the verification procedures fail, verify that the Ceph key for the client.openstack user and the Red Hat Ceph Storage monitor IPs or hostnames can be used together to read, write, and delete from the Ceph Storage pools that you created for Red Hat OpenStack Platform (RHOSP).
Procedure
To shorten the amount of typing you must do in this procedure, log in to a Compute node and create an alias for the rbd command:
$ alias rbd="podman exec nova_compute /usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack"
Confirm that you can write test data to the pool as a new object:
$ rbd create --size 1024 vms/foo
Confirm that you can see the test data:
$ rbd ls vms | grep foo
Delete the test data:
$ rbd rm vms/foo
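The three steps above can be sketched as one sequence. In this sketch RBD is set to `echo rbd` (an assumption) so the commands are printed rather than executed against a cluster; on a real Compute node, use the podman-based alias instead.

```shell
# Sketch: the write/list/delete verification steps as one dry-run sequence.
# RBD expands to `echo rbd`, so each command is printed, not executed.
RBD="echo rbd"
out=$($RBD create --size 1024 vms/foo; $RBD ls vms; $RBD rm vms/foo)
printf '%s\n' "$out"
```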
If this procedure fails, contact your Ceph Storage administrator for assistance. If this procedure succeeds, but you cannot create Compute (nova) instances, Image service (glance) images, Block Storage (cinder) volumes, or Shared File Systems service (manila) shares, contact Red Hat Support.