Integrating an Overcloud with an Existing Red Hat Ceph Storage Cluster
Configuring an overcloud to use standalone Red Hat Ceph Storage
Abstract
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Chapter 1. Integrating an overcloud with Ceph Storage
Red Hat OpenStack Platform director creates a cloud environment called the overcloud. You can use director to configure extra features for an overcloud, such as integration with Red Hat Ceph Storage. You can integrate your overcloud with Ceph Storage clusters created with director or with existing Ceph Storage clusters. The default integration with Ceph configures the Image service (glance), the Block Storage service (cinder), and the Compute service (nova) to use block storage over the Rados Block Device (RBD) protocol. Additional integration options for File and Object storage might also be included.
For more information about Red Hat Ceph Storage, see the Red Hat Ceph Storage Architecture Guide.
1.1. Deploying the Shared File Systems service with external CephFS
You can deploy the Shared File Systems service (manila) with CephFS by using Red Hat OpenStack Platform (RHOSP) director. You can use the Shared File Systems service with the NFS protocol or the native CephFS protocol.
You cannot use the Shared File Systems service with the CephFS native driver to serve shares to Red Hat OpenShift Container Platform through Manila CSI. Red Hat does not support this type of deployment. For more information, contact Red Hat Support.
The Shared File Systems service with CephFS through NFS fully supports serving shares to Red Hat OpenShift Container Platform through Manila CSI. This solution is not intended for large scale deployments. For more information about CSI workload recommendations, see https://access.redhat.com/articles/6667651.
To use native CephFS shared file systems, clients require access to the Ceph public network. When you integrate an overcloud with an existing Ceph Storage cluster, director does not create an isolated storage network to designate as the Ceph public network. This network is assumed to already exist. Do not provide direct access to the Ceph public network; instead, allow tenants to create a router to connect to the Ceph public network.
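For illustration only, a project user might connect a project network to a shared provider network that maps to the Ceph public network with commands similar to the following sketch; the router, network, and subnet names are placeholders, and your network design and RBAC policies determine whether this is permitted:
$ openstack router create ceph-access-router
$ openstack router add subnet ceph-access-router project-subnet
$ openstack router add subnet ceph-access-router ceph-public-subnet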
NFS-Ganesha gateway
When you use CephFS through the NFS protocol, director deploys the NFS-Ganesha gateway on Controller nodes managed by Pacemaker (PCS). PCS manages cluster availability by using an active-passive configuration.
The NFS-Ganesha gateway is supported with Red Hat Ceph Storage 4.x (Ceph package 14.x) and Red Hat Ceph Storage 5.x (Ceph package 16.x). For information about how to determine the Ceph Storage release installed on your system, see Red Hat Ceph Storage releases and corresponding Ceph package versions.
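As a quick check, you can also query the running cluster. For example, the following command, run from a node with the Ceph client installed or from a cephadm shell, reports the Ceph package versions of the cluster daemons:
$ ceph versions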
Prerequisites
Before you configure the Shared File Systems service with an external Ceph Storage cluster, complete the following prerequisites:
Verify that your external Ceph Storage cluster has an active Metadata Server (MDS):
$ ceph -s
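In the output, check the services section for an active MDS. The exact wording varies between Ceph releases; on a Red Hat Ceph Storage 5 (Ceph package 16.x) cluster it is similar to the following illustrative snippet:
  services:
    ...
    mds: 1/1 daemons up, 1 standby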
The external Ceph Storage cluster must have a CephFS file system that is backed by CephFS data and metadata pools.

Verify the pools in the CephFS file system:
$ ceph fs ls
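The output lists each file system with its metadata and data pools, similar to the following illustrative example; the file system and pool names in your cluster might differ:
name: cephfs, metadata pool: manila_metadata, data pools: [manila_data ]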
Note the names of these pools to configure the director parameters ManilaCephFSDataPoolName and ManilaCephFSMetadataPoolName. For more information about this configuration, see Creating a custom environment file.
The external Ceph Storage cluster must have a cephx client name and key for the Shared File Systems service. Verify the keyring:
$ ceph auth get client.<client name>
Replace <client name> with your cephx client name.
1.2. Configuring Ceph Object Store to use external Ceph Object Gateway
Red Hat OpenStack Platform (RHOSP) director supports configuring an external Ceph Object Gateway (RGW) as an Object Store service. To authenticate with the external RGW service, you must configure RGW to verify users and their roles in the Identity service (keystone).
For more information about how to configure an external Ceph Object Gateway, see Configuring the Ceph Object Gateway to use Keystone authentication in the Using Keystone with the Ceph Object Gateway Guide.
Chapter 2. Preparing overcloud nodes
The overcloud deployment that is used to demonstrate how to integrate with a Red Hat Ceph Storage cluster consists of Controller nodes with high availability and Compute nodes to host workloads. The Red Hat Ceph Storage cluster has its own nodes that you manage independently from the overcloud by using the Ceph management tools, not through director. For more information about Red Hat Ceph Storage, see the product documentation for Red Hat Ceph Storage.
2.1. Configuring the existing Red Hat Ceph Storage cluster
To configure your Red Hat Ceph Storage cluster, you create object storage daemon (OSD) pools, define capabilities, and create keys and IDs directly on the Ceph Storage cluster. You can execute commands from any machine that can reach the Ceph Storage cluster and has the Ceph command line client installed.
Procedure
- Log in to the external Ceph admin node.
Open an interactive shell to access Ceph commands:
[user@ceph ~]$ sudo cephadm shell
Create the following RADOS Block Device (RBD) pools in your Ceph Storage cluster, relevant to your environment:
Storage for OpenStack Block Storage (cinder):
$ ceph osd pool create volumes <pgnum>
Storage for OpenStack Image Storage (glance):
$ ceph osd pool create images <pgnum>
Storage for instances:
$ ceph osd pool create vms <pgnum>
Storage for OpenStack Block Storage Backup (cinder-backup):
$ ceph osd pool create backups <pgnum>
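As an illustrative sketch only, the following commands create the volumes pool with a placement group count of 128 and associate it with the RBD application; the placement group count is a placeholder that you must size for your own cluster, and you repeat both commands for the images, vms, and backups pools:
$ ceph osd pool create volumes 128
$ ceph osd pool application enable volumes rbd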
- If your overcloud deploys the Shared File Systems service (manila) with Red Hat Ceph 5 (Ceph package 16) or later, you do not need to create data and metadata pools for CephFS. You can create a filesystem volume. For more information, see Management of MDS service using the Ceph Orchestrator in the Red Hat Ceph Storage Operations Guide.
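A minimal sketch of creating a CephFS volume from the cephadm shell; the volume name cephfs is a placeholder, and the orchestrator creates the backing data and metadata pools and MDS daemons for it:
$ ceph fs volume create cephfs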
Create a client.openstack user in your Ceph Storage cluster with the following capabilities:
- cap_mgr: allow *
- cap_mon: profile rbd
- cap_osd: profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups
$ ceph auth add client.openstack mgr 'allow *' mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups'
Note the Ceph client key created for the client.openstack user:
$ ceph auth list
...
[client.openstack]
    key = <AQC+vYNXgDAgAhAAc8UoYt+OTz5uhV7ItLdwUw==>
    caps mgr = "allow *"
    caps mon = "profile rbd"
    caps osd = "profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups"
...
The key value in the example, AQC+vYNXgDAgAhAAc8UoYt+OTz5uhV7ItLdwUw==, is your Ceph client key.
If your overcloud deploys the Shared File Systems service with CephFS, create the client.manila user in your Ceph Storage cluster. The capabilities required for the client.manila user depend on whether your deployment exposes CephFS shares through the native CephFS protocol or the NFS protocol.
If you expose CephFS shares through the native CephFS protocol, the following capabilities are required:
- cap_mgr: allow rw
- cap_mon: allow r
$ ceph auth add client.manila mgr 'allow rw' mon 'allow r'
If you expose CephFS shares through the NFS protocol, the following capabilities are required:
- cap_mgr: allow rw
- cap_mon: allow r
- cap_osd: allow rw pool=manila_data
The specified pool name must be the value set for the ManilaCephFSDataPoolName parameter, which defaults to manila_data.
$ ceph auth add client.manila mgr 'allow rw' mon 'allow r' osd 'allow rw pool=manila_data'
Note the manila client name and the key value to use in overcloud deployment templates:
$ ceph auth get-key client.manila
<AQDQ991cAAAAABAA0aXFrTnjH9aO39P0iVvYyg==>
Note the file system ID of your Ceph Storage cluster. This value is specified in the fsid field, under the [global] section of the configuration file for your cluster:
[global]
fsid = 4b5c8c0a-ff60-454b-a1b4-9747aa737d19
...
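If you do not have the configuration file at hand, you can also retrieve this value from the cephadm shell, for example:
$ ceph fsid
4b5c8c0a-ff60-454b-a1b4-9747aa737d19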
Use the Ceph client key and file system ID, and the Shared File Systems service client IDs and key when you create the custom environment file.
Chapter 3. Integrating with an existing Red Hat Ceph Storage cluster
Use the procedures and information in this section to integrate Red Hat OpenStack Platform (RHOSP) with an existing Red Hat Ceph Storage cluster. You can create custom environment files to override and provide values for configuration options within OpenStack components.
3.1. Creating a custom environment file
Director supplies parameters to tripleo-ansible to integrate with an external Red Hat Ceph Storage cluster through the environment file:
- /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml
If you deploy the Shared File Systems service (manila) with external CephFS, separate environment files supply additional parameters:
- For native CephFS, the environment file is /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsnative-config.yaml.
- For CephFS through NFS, the environment file is /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml.
To configure integration of an existing Ceph Storage cluster with the overcloud, you must supply the details of your Ceph Storage cluster to director by using a custom environment file. Director invokes these environment files during deployment.
Procedure
Create a custom environment file: /home/stack/templates/ceph-config.yaml
Add a parameter_defaults: section to the file:
parameter_defaults:
Use parameter_defaults to set all of the parameters that you want to override in /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml. You must set the following parameters at a minimum:
- CephClientKey: The Ceph client key for the client.openstack user in your Ceph Storage cluster. This is the value of key that you retrieved in Configuring the existing Ceph Storage cluster. For example, AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ==.
- CephClusterFSID: The file system ID of your Ceph Storage cluster. This is the value of fsid in your Ceph Storage cluster configuration file, which you retrieved in Configuring the existing Ceph Storage cluster. For example, 4b5c8c0a-ff60-454b-a1b4-9747aa737d19.
- CephExternalMonHost: A comma-delimited list of the IPs of all MON hosts in your Ceph Storage cluster, for example, 172.16.1.7, 172.16.1.8.
For example:
parameter_defaults:
  CephClientKey: <AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ==>
  CephClusterFSID: <4b5c8c0a-ff60-454b-a1b4-9747aa737d19>
  CephExternalMonHost: <172.16.1.7, 172.16.1.8, 172.16.1.9>
Optional: You can override the Red Hat OpenStack Platform (RHOSP) client username and the following default pool names to match your Ceph Storage cluster:
- CephClientUserName: <openstack>
- NovaRbdPoolName: <vms>
- CinderRbdPoolName: <volumes>
- GlanceRbdPoolName: <images>
- CinderBackupRbdPoolName: <backups>
Optional: If you are deploying the Shared File Systems service with CephFS, you can override the following default data and metadata pool names:
ManilaCephFSDataPoolName: <manila_data>
ManilaCephFSMetadataPoolName: <manila_metadata>
Note: Ensure that these names match the names of the pools you created.
Set the client key that you created for the Shared File Systems service. You can override the default Ceph client username for that key:
ManilaCephFSCephFSAuthId: <manila>
CephManilaClientKey: <AQDQ991cAAAAABAA0aXFrTnjH9aO39P0iVvYyg==>
Note: The default client username ManilaCephFSCephFSAuthId is manila, unless you override it. CephManilaClientKey is always required.
After you create the custom environment file, you must include it when you deploy the overcloud.
3.2. Ceph containers for Red Hat OpenStack Platform with Red Hat Ceph Storage
You must have a Ceph Storage container to configure Red Hat OpenStack Platform (RHOSP) to use Red Hat Ceph Storage with NFS Ganesha. You do not require a Ceph Storage container if the external Ceph Storage cluster only provides Block (through RBD), Object (through RGW), or File (through native CephFS) storage.
RHOSP 17.0 requires Red Hat Ceph Storage 5.x (Ceph package 16.x) or later to be compatible with Red Hat Enterprise Linux 9. The Ceph Storage 5.x containers are hosted at registry.redhat.io, a registry that requires authentication. For more information, see Container image preparation parameters.
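A minimal sketch of supplying registry.redhat.io credentials in your container image preparation environment file, for example containers-prepare-parameter.yaml; the service account name and token are placeholders, and the full set of options is described in the linked documentation:
parameter_defaults:
  ContainerImageRegistryLogin: true
  ContainerImageRegistryCredentials:
    registry.redhat.io:
      <service_account_username>: '<service_account_token>'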
3.3. Deploying the overcloud
Deploy the overcloud with the environment file that you created.
Procedure
The creation of the overcloud requires additional arguments for the openstack overcloud deploy command:
$ openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml \
  -e /home/stack/templates/ceph-config.yaml \
  --ntp-server pool.ntp.org \
  ...
This example command uses the following options:
- --templates - Creates the overcloud from the default heat template collection, /usr/share/openstack-tripleo-heat-templates/.
- -e /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml - Configures director to integrate an existing Ceph Storage cluster with the overcloud.
- -e /home/stack/templates/ceph-config.yaml - Adds a custom environment file to override the defaults set by /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml.
- --ntp-server pool.ntp.org - Sets the NTP server.
3.3.1. Adding environment files for the Shared File Systems service with CephFS
If you deploy an overcloud that uses the Shared File Systems service (manila) with CephFS, you must add additional environment files.
Procedure
Create and add additional environment files:
- If you deploy an overcloud that uses the native CephFS back-end driver, add /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsnative-config.yaml.
- If you deploy an overcloud that uses CephFS through NFS, add /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml.
Red Hat recommends that you deploy the CephFS-through-NFS driver with an isolated StorageNFS network where shares are exported. You must deploy the isolated network to overcloud Controller nodes. To enable this deployment, director includes the following file and role:
- An example custom network configuration file that includes the StorageNFS network (/usr/share/openstack-tripleo-heat-templates/network_data_ganesha.yaml). Review and customize this file as necessary.
- A ControllerStorageNFS role. A sketch of generating a roles file that includes this role follows this list.
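For reference, a custom roles file that includes this role can be generated with a command similar to the following sketch; verify the exact role name against the files in /usr/share/openstack-tripleo-heat-templates/roles, because the spelling in the templates tree (for example, ControllerStorageNfs) can differ from the prose name:
$ openstack overcloud roles generate --roles-path /usr/share/openstack-tripleo-heat-templates/roles \
  -o /home/stack/custom_roles.yaml ControllerStorageNfs Compute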
Modify the openstack overcloud deploy command depending on the CephFS back end that you use.
For native CephFS:
$ openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsnative-config.yaml \
  -e /home/stack/templates/ceph-config.yaml \
  --ntp-server pool.ntp.org \
  ...
For CephFS through NFS:
$ openstack overcloud deploy --templates \
  -n /usr/share/openstack-tripleo-heat-templates/network_data_ganesha.yaml \
  -r /home/stack/custom_roles.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml \
  -e /home/stack/templates/ceph-config.yaml \
  --ntp-server pool.ntp.org \
  ...
Note: The custom ceph-config.yaml environment file overrides parameters in the external-ceph.yaml file and either the manila-cephfsnative-config.yaml file or the manila-cephfsganesha-config.yaml file. Therefore, include the custom ceph-config.yaml environment file in the deployment command after external-ceph.yaml and either manila-cephfsnative-config.yaml or manila-cephfsganesha-config.yaml.
Example environment file
parameter_defaults:
  CinderEnableIscsiBackend: false
  CinderEnableRbdBackend: true
  CinderEnableNfsBackend: false
  NovaEnableRbdBackend: true
  GlanceBackend: rbd
  CinderRbdPoolName: "volumes"
  NovaRbdPoolName: "vms"
  GlanceRbdPoolName: "images"
  CinderBackupRbdPoolName: "backups"
  CephClusterFSID: <cluster_ID>
  CephExternalMonHost: <IP_address>,<IP_address>,<IP_address>
  CephClientKey: "<client_key>"
  CephClientUserName: "openstack"
  ManilaCephFSDataPoolName: manila_data
  ManilaCephFSMetadataPoolName: manila_metadata
  ManilaCephFSCephFSAuthId: 'manila'
  CephManilaClientKey: '<client_key>'
  ExtraConfig:
Replace <cluster_ID>, <IP_address>, and <client_key> with values that are suitable for your environment.
Additional resources
- For more information about generating a custom roles file, see Deploying the Shared File Systems service with CephFS through NFS.
3.3.2. Adding an additional environment file for external Ceph Object Gateway (RGW) for Object storage
If you deploy an overcloud that uses an already existing RGW service for Object storage, you must add an additional environment file.
Procedure
Add the following parameter_defaults to a custom environment file, for example, swift-external-params.yaml, and adjust the values to suit your deployment:
parameter_defaults:
  ExternalSwiftPublicUrl: 'http://<Public RGW endpoint or loadbalancer>:8080/swift/v1/AUTH_%(project_id)s'
  ExternalSwiftInternalUrl: 'http://<Internal RGW endpoint>:8080/swift/v1/AUTH_%(project_id)s'
  ExternalSwiftAdminUrl: 'http://<Admin RGW endpoint>:8080/swift/v1/AUTH_%(project_id)s'
  ExternalSwiftUserTenant: 'service'
  SwiftPassword: 'choose_a_random_password'
Note: The example code snippet contains parameter values that might differ from values that you use in your environment:
- The default port where the remote RGW instance listens is 8080. The port might be different depending on how the external RGW is configured.
- The swift user created in the overcloud uses the password defined by the SwiftPassword parameter. You must configure the external RGW instance to use the same password to authenticate with the Identity service by using the rgw_keystone_admin_password parameter.
Add the following code to the Ceph config file to configure RGW to use the Identity service. Replace the variable values to suit your environment:
rgw_keystone_api_version = 3
rgw_keystone_url = http://<public Keystone endpoint>:5000/
rgw_keystone_accepted_roles = member, Member, admin
rgw_keystone_accepted_admin_roles = ResellerAdmin, swiftoperator
rgw_keystone_admin_domain = default
rgw_keystone_admin_project = service
rgw_keystone_admin_user = swift
rgw_keystone_admin_password = <password_as_defined_in_the_environment_parameters>
rgw_keystone_implicit_tenants = true
rgw_keystone_revocation_interval = 0
rgw_s3_auth_use_keystone = true
rgw_swift_versioning_enabled = true
rgw_swift_account_in_url = true
rgw_max_attr_name_len = 128
rgw_max_attrs_num_in_req = 90
rgw_max_attr_size = 256
rgw_keystone_verify_ssl = false
Note: Director creates the following roles and users in the Identity service by default:
- rgw_keystone_accepted_admin_roles: ResellerAdmin, swiftoperator
- rgw_keystone_admin_domain: default
- rgw_keystone_admin_project: service
- rgw_keystone_admin_user: swift
Deploy the overcloud with the additional environment files and any other environment files that are relevant to your deployment:
$ openstack overcloud deploy --templates \
  -e <your_environment_files> \
  -e /usr/share/openstack-tripleo-heat-templates/environments/swift-external.yaml \
  -e swift-external-params.yaml
Chapter 4. Verifying external Ceph Storage cluster integration
After you deploy the overcloud, confirm that Red Hat OpenStack Platform (RHOSP) services can write to the Red Hat Ceph Storage cluster.
4.1. Gathering IDs
To verify that you integrated a Red Hat Ceph Storage cluster, you must first create an image, a Compute instance, a Block Storage volume, and a file share and gather their respective IDs.
Procedure
- Create an image by using the Image service (glance). For more information about how to create an image, see Importing an image in Creating and Managing Images.
- Record the image ID for later use.
- Create a Compute (nova) instance. For more information about how to create an instance, see Creating an instance in the Creating and Managing Instances guide.
- Record the instance ID for later use.
- Create a Block Storage (cinder) volume. For more information about how to create a Block Storage volume, see Creating Block Storage volumes in the Storage Guide.
- Record the volume ID for later use.
- Create a file share by using the Shared File Systems service (manila).
- List the export path of the share and record the UUID in the suffix for later use.
For more information about how to create file shares and list the export path of a share, see Performing operations with the Shared File Systems service (manila) in the Storage Guide.
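A minimal sketch of recording these IDs from the command line; the resource names (cirros, test_instance, test_volume, test_share) are placeholders for the resources that you created:
$ openstack image show cirros -f value -c id
$ openstack server show test_instance -f value -c id
$ openstack volume show test_volume -f value -c id
$ manila share-export-location-list test_share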
4.2. Verifying the Red Hat Ceph Storage cluster
When you configure an external Red Hat Ceph Storage cluster, you create pools and a client.openstack
user to access those pools. After you deploy the overcloud, you can use the file that contains the credentials of the client.openstack
user to list the contents of Red Hat OpenStack Platform (RHOSP) pools.
List the contents of the pools and confirm that the IDs of the Image service (glance) image, the Compute (nova) instance, the Block Storage (cinder) volume, and the Shared File Systems service (manila) file share exist on the Ceph Storage cluster.
Procedure
Log in to the undercloud as the stack user and source the stackrc credentials file:
$ source ~/stackrc
List the available servers to retrieve the IP addresses of nodes on the system:
$ metalsmith list
+--------------------------------------+--------------+--------+------------------------+----------------+------------+
| ID                                   | Name         | Status | Networks               | Image          | Flavor     |
+--------------------------------------+--------------+--------+------------------------+----------------+------------+
| d5a621bd-d109-41ae-a381-a42414397802 | compute-0    | ACTIVE | ctlplane=192.168.24.31 | overcloud-full | compute    |
| 496ab196-d6cb-447d-a118-5bafc5166cf2 | controller-0 | ACTIVE | ctlplane=192.168.24.37 | overcloud-full | controller |
| c01e730d-62f2-426a-a964-b31448f250b3 | controller-2 | ACTIVE | ctlplane=192.168.24.55 | overcloud-full | controller |
| 36df59b3-66f3-452e-9aec-b7e7f7c54b86 | controller-1 | ACTIVE | ctlplane=192.168.24.39 | overcloud-full | controller |
| f8f00497-246d-4e40-8a6a-b5a60fa66483 | compute-1    | ACTIVE | ctlplane=192.168.24.10 | overcloud-full | compute    |
+--------------------------------------+--------------+--------+------------------------+----------------+------------+
Use SSH to log in to any Compute node:
$ ssh tripleo-admin@192.168.24.31
Confirm that the files ceph.conf and ceph.client.openstack.keyring exist in the CephConfigPath provided by director. This path is /var/lib/tripleo-config/ceph by default, but an override might exist.
[tripleo-admin@compute-0 ~]$ sudo ls -l /var/lib/tripleo-config/ceph/ceph.conf
-rw-r--r--. 1 root root 1170 Sep 29 23:25 /var/lib/tripleo-config/ceph/ceph.conf
[tripleo-admin@compute-0 ~]$ sudo ls -l /var/lib/tripleo-config/ceph/ceph.client.openstack.keyring
-rw-------. 1 ceph ceph 253 Sep 29 23:25 /var/lib/tripleo-config/ceph/ceph.client.openstack.keyring
Enter the following command to force the nova_compute container to use the rbd command to list the contents of the appropriate pool:
$ sudo podman exec nova_compute /usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack ls vms
The pool name must match the pool names of the images, VMs, volumes, and shares that you created when you configured the Ceph Storage cluster. The IDs of the image, Compute instance, volume, and share must match the IDs that you recorded in Gathering IDs.
Note: The example command is prefixed with podman exec nova_compute because /usr/bin/rbd, which is provided by the ceph-common package, is not installed on overcloud nodes by default. However, it is available in the nova_compute container. The command lists block device images. For more information about listing block device images, see Listing the block device images in the Red Hat Ceph Storage Block Device Guide.
The following examples show how to confirm whether an ID for each service is present for each pool by using the IDs from Gathering IDs.
$ sudo podman exec nova_compute /usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack ls images | grep 4485d4c0-24c3-42ec-a158-4d3950fa020b
$ sudo podman exec nova_compute /usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack ls vms | grep 64bcb731-e7a4-4dd5-a807-ee26c669482f
$ sudo podman exec nova_compute /usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack ls volumes | grep aeac15e8-b67f-454f-9486-46b3d75daff4
To verify the existence of the Shared File Systems service share, you must log in to a Controller node:
$ sudo podman exec openstack-manila-share-podman-0 ceph -n client.manila fs subvolume ls cephfs | grep ec99db3c-0077-40b7-b09e-8a110e3f73c1
4.3. Troubleshooting failed verification
If the verification procedures fail, verify that the Ceph key for the client.openstack
user and the Red Hat Ceph Storage monitor IPs or hostnames can be used together to read, write, and delete from the Ceph Storage pools that you created for the Red Hat OpenStack Platform (RHOSP).
Procedure
To shorten the amount of typing you must do in this procedure, log in to a Compute node and create an alias for the rbd command:
$ alias rbd="podman exec nova_compute /usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack"
Confirm that you can write test data to the pool as a new object:
$ rbd create --size 1024 vms/foo
Confirm that you can see the test data:
$ rbd ls vms | grep foo
Delete the test data:
$ rbd rm vms/foo
If this procedure fails, contact your Ceph Storage administrator for assistance. If this procedure succeeds, but you cannot create Compute (nova) instances, Image service (glance) images, Block Storage (cinder) volumes, or Shared File Systems service (manila) shares, contact Red Hat Support.