Integrating an Overcloud with an Existing Red Hat Ceph Storage Cluster


Red Hat OpenStack Platform 17.0

Configuring an overcloud to use standalone Red Hat Ceph Storage

OpenStack Documentation Team

Abstract

You can use Red Hat OpenStack Platform (RHOSP) director to integrate an overcloud with an existing, standalone Red Hat Ceph Storage cluster.

Making open source more inclusive

Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.

Chapter 1. Integrating an overcloud with Ceph Storage

Red Hat OpenStack Platform director creates a cloud environment called the overcloud. You can use director to configure extra features for an overcloud, such as integration with Red Hat Ceph Storage. You can integrate your overcloud with Ceph Storage clusters created with director or with existing Ceph Storage clusters. The default integration with Ceph configures the Image service (glance), the Block Storage service (cinder), and the Compute service (nova) to use block storage over the Rados Block Device (RBD) protocol. You can also integrate File storage through the Shared File Systems service (manila) with CephFS and Object storage through the Ceph Object Gateway (RGW), as described later in this guide.

For more information about Red Hat Ceph Storage, see the Red Hat Ceph Storage Architecture Guide.

1.1. Deploying the Shared File Systems service with external CephFS

You can deploy the Shared File Systems service (manila) with CephFS by using Red Hat OpenStack Platform (RHOSP) director. You can use the Shared File Systems service with the NFS protocol or the native CephFS protocol.

Important

You cannot use the Shared File Systems service with the CephFS native driver to serve shares to Red Hat OpenShift Container Platform through Manila CSI. Red Hat does not support this type of deployment. For more information, contact Red Hat Support.

The Shared File Systems service with CephFS through NFS fully supports serving shares to Red Hat OpenShift Container Platform through Manila CSI. This solution is not intended for large scale deployments. For more information about CSI workload recommendations, see https://access.redhat.com/articles/6667651.

To use native CephFS shared file systems, clients require access to the Ceph public network. When you integrate an overcloud with an existing Ceph Storage cluster, director does not create an isolated storage network to designate as the Ceph public network. This network is assumed to already exist. Do not provide direct access to the Ceph public network; instead, allow tenants to create a router to connect to the Ceph public network.
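
For example, if the Ceph public network is reachable as a shared provider network, a tenant might attach a router to it with commands similar to the following. This is a hedged sketch: the router, network, and subnet names are placeholders, and the exact networking model depends on how your Ceph public network is exposed.

    $ openstack router create ceph-public-router
    $ openstack router add subnet ceph-public-router <tenant_subnet>
    $ openstack router add subnet ceph-public-router <ceph_public_subnet>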

NFS-Ganesha gateway

When you use CephFS through the NFS protocol, director deploys the NFS-Ganesha gateway on Controller nodes managed by Pacemaker (PCS). PCS manages cluster availability by using an active-passive configuration.
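
After deployment, you can confirm that Pacemaker manages the gateway by checking the cluster resources on a Controller node. This is a hedged check: the pcs command is standard, but the exact resource names depend on your deployment.

    [tripleo-admin@controller-0 ~]$ sudo pcs status | grep -i -e ganesha -e ceph-nfs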

The NFS-Ganesha gateway is supported with Red Hat Ceph Storage 4.x (Ceph package 14.x) and Red Hat Ceph Storage 5.x (Ceph package 16.x). For information about how to determine the Ceph Storage release installed on your system, see Red Hat Ceph Storage releases and corresponding Ceph package versions.
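
For a quick check of the Ceph release that your cluster reports, you can run the ceph versions command from a node with client access. The output below is only illustrative; your version strings will differ.

    $ ceph versions
    {
        ...
        "overall": {
            "ceph version 16.2.10-... pacific (stable)": 7
        }
    }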

Prerequisites

Before you configure the Shared File Systems service with an external Ceph Storage cluster, complete the following prerequisites:

  • Verify that your external Ceph Storage cluster has an active Metadata Server (MDS):

    $ ceph -s
  • The external Ceph Storage cluster must have a CephFS file system that is backed by CephFS data and metadata pools.

    • Verify the pools in the CephFS file system:

      $ ceph fs ls
    • Note the names of these pools to configure the director parameters, ManilaCephFSDataPoolName and ManilaCephFSMetadataPoolName. For more information about this configuration, see Creating a custom environment file.
  • The external Ceph Storage cluster must have a cephx client name and key for the Shared File Systems service.

    • Verify the keyring:

      $ ceph auth get client.<client name>
      • Replace <client name> with your cephx client name.
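
The following is an illustrative sketch of the kind of output that indicates these prerequisites are met; pool, file system, and client names will differ in your environment.

    $ ceph -s
      ...
      services:
        mon: 3 daemons, quorum ...
        mgr: ...
        mds: 1/1 daemons up, 1 standby
      ...

    $ ceph fs ls
    name: cephfs, metadata pool: manila_metadata, data pools: [manila_data ]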

1.2. Configuring Ceph Object Store to use external Ceph Object Gateway

Red Hat OpenStack Platform (RHOSP) director supports configuring an external Ceph Object Gateway (RGW) as an Object Store service. To authenticate with the external RGW service, you must configure RGW to verify users and their roles in the Identity service (keystone).

For more information about how to configure an external Ceph Object Gateway, see Configuring the Ceph Object Gateway to use Keystone authentication in the Using Keystone with the Ceph Object Gateway Guide.

Chapter 2. Preparing overcloud nodes

The overcloud deployment that is used to demonstrate how to integrate with a Red Hat Ceph Storage cluster consists of Controller nodes with high availability and Compute nodes to host workloads. The Red Hat Ceph Storage cluster has its own nodes that you manage independently from the overcloud by using the Ceph management tools, not through director. For more information about Red Hat Ceph Storage, see the product documentation for Red Hat Ceph Storage.

2.1. Configuring the existing Red Hat Ceph Storage cluster

To configure your Red Hat Ceph Storage cluster, you create object storage daemon (OSD) pools, define capabilities, and create keys and IDs directly on the Ceph Storage cluster. You can execute commands from any machine that can reach the Ceph Storage cluster and has the Ceph command line client installed.

Procedure

  1. Log in to the external Ceph admin node.
  2. Open an interactive shell to access Ceph commands:

    [user@ceph ~]$ sudo cephadm shell
  3. Create the following RADOS Block Device (RBD) pools in your Ceph Storage cluster, relevant to your environment:

    • Storage for OpenStack Block Storage (cinder):

      $ ceph osd pool create volumes <pgnum>
    • Storage for OpenStack Image Storage (glance):

      $ ceph osd pool create images <pgnum>
    • Storage for instances:

      $ ceph osd pool create vms <pgnum>
    • Storage for OpenStack Block Storage Backup (cinder-backup):

      $ ceph osd pool create backups <pgnum>
  4. If your overcloud deploys the Shared File Systems service (manila) with Red Hat Ceph Storage 5 (Ceph package 16) or later, you do not need to create data and metadata pools for CephFS. Instead, you can create a CephFS file system volume. For more information, see Management of MDS service using the Ceph Orchestrator in the Red Hat Ceph Storage Operations Guide.
  5. Create a client.openstack user in your Ceph Storage cluster with the following capabilities:

    • cap_mgr: allow *
    • cap_mon: profile rbd
    • cap_osd: profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups

      $ ceph auth add client.openstack mgr 'allow *' mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups'
  6. Note the Ceph client key created for the client.openstack user:

    $ ceph auth list
    ...
    [client.openstack]
    	key = <AQC+vYNXgDAgAhAAc8UoYt+OTz5uhV7ItLdwUw==>
    	caps mgr = "allow *"
    	caps mon = "profile rbd"
    	caps osd = "profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups"
    ...
    • The key value in the example, AQC+vYNXgDAgAhAAc8UoYt+OTz5uhV7ItLdwUw==, is your Ceph client key.
  7. If your overcloud deploys the Shared File Systems service with CephFS, create the client.manila user in your Ceph Storage cluster. The capabilities required for the client.manila user depend on whether your deployment exposes CephFS shares through the native CephFS protocol or the NFS protocol.

    • If you expose CephFS shares through the native CephFS protocol, the following capabilities are required:

      • cap_mgr: allow rw
      • cap_mon: allow r

        $ ceph auth add client.manila mgr 'allow rw' mon 'allow r'
    • If you expose CephFS shares through the NFS protocol, the following capabilities are required:

      • cap_mgr: allow rw
      • cap_mon: allow r
      • cap_osd: allow rw pool=manila_data

        The specified pool name must be the value set for the ManilaCephFSDataPoolName parameter, which defaults to manila_data.

        $ ceph auth add client.manila  mgr 'allow rw' mon 'allow r' osd 'allow rw pool=manila_data'
  8. Note the manila client name and the key value to use in overcloud deployment templates:

    $ ceph auth get-key client.manila
         <AQDQ991cAAAAABAA0aXFrTnjH9aO39P0iVvYyg==>
  9. Note the file system ID of your Ceph Storage cluster. This value is specified in the fsid field, under the [global] section of the configuration file for your cluster:

    [global]
    fsid = 4b5c8c0a-ff60-454b-a1b4-9747aa737d19
    ...
Note

Use the Ceph client key and file system ID, and the Shared File Systems service client IDs and key when you create the custom environment file.
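
If you prefer single commands for gathering these values, the following standard Ceph commands print the file system ID and the client keys directly. Run them from the cephadm shell that you opened earlier.

    $ ceph fsid
    4b5c8c0a-ff60-454b-a1b4-9747aa737d19
    $ ceph auth get-key client.openstack
    $ ceph auth get-key client.manila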

Chapter 3. Integrating with an existing Red Hat Ceph Storage cluster

Use the procedures and information in this section to integrate Red Hat OpenStack Platform (RHOSP) with an existing Red Hat Ceph Storage cluster. You can create custom environment files to override and provide values for configuration options within OpenStack components.

3.1. Creating a custom environment file

Director supplies parameters to tripleo-ansible to integrate with an external Red Hat Ceph Storage cluster through the environment file:

  • /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml

If you deploy the Shared File Systems service (manila) with external CephFS, separate environment files supply additional parameters.

  • For native CephFS, the environment file is /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsnative-config.yaml.
  • For CephFS through NFS, the environment file is /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml.

To configure integration of an existing Ceph Storage cluster with the overcloud, you must supply the details of your Ceph Storage cluster to director by using a custom environment file. Director invokes these environment files during deployment.

Procedure

  1. Create a custom environment file:

    /home/stack/templates/ceph-config.yaml

  2. Add a parameter_defaults: section to the file:

    parameter_defaults:
  3. Use parameter_defaults to set all of the parameters that you want to override in /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml. You must set the following parameters at a minimum:

    • CephClientKey: The Ceph client key for the client.openstack user in your Ceph Storage cluster. This is the value of key that you retrieved in Configuring the existing Ceph Storage cluster. For example, AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ==.
    • CephClusterFSID: The file system ID of your Ceph Storage cluster. This is the value of fsid in your Ceph Storage cluster configuration file, which you retrieved in Configuring the existing Ceph Storage cluster. For example, 4b5c8c0a-ff60-454b-a1b4-9747aa737d19.
    • CephExternalMonHost: A comma-delimited list of the IPs of all MON hosts in your Ceph Storage cluster, for example, 172.16.1.7, 172.16.1.8.

      For example:

      parameter_defaults:
        CephClientKey: <AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ==>
        CephClusterFSID: <4b5c8c0a-ff60-454b-a1b4-9747aa737d19>
        CephExternalMonHost: <172.16.1.7, 172.16.1.8, 172.16.1.9>
  4. Optional: You can override the Red Hat OpenStack Platform (RHOSP) client username and the following default pool names to match your Ceph Storage cluster:

    • CephClientUserName: <openstack>
    • NovaRbdPoolName: <vms>
    • CinderRbdPoolName: <volumes>
    • GlanceRbdPoolName: <images>
    • CinderBackupRbdPoolName: <backups>
  5. Optional: If you are deploying the Shared File Systems service with CephFS, you can override the following default data and metadata pool names:

      ManilaCephFSDataPoolName: <manila_data>
      ManilaCephFSMetadataPoolName: <manila_metadata>
    Note

    Ensure that these names match the names of the pools you created.

  6. Set the client key that you created for the Shared File Systems service. You can override the default Ceph client username for that key:

      ManilaCephFSCephFSAuthId: <manila>
      CephManilaClientKey: <AQDQ991cAAAAABAA0aXFrTnjH9aO39P0iVvYyg==>
    Note

    The ManilaCephFSCephFSAuthId parameter defaults to manila unless you override it. The CephManilaClientKey parameter is always required.

After you create the custom environment file, you must include it when you deploy the overcloud.
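
Putting the steps in this procedure together, a minimal /home/stack/templates/ceph-config.yaml for a deployment that also uses the Shared File Systems service might look similar to the following sketch. The bracketed values are placeholders for the values you gathered from your own cluster, and the optional overrides are only required if your names differ from the defaults.

    parameter_defaults:
      CephClusterFSID: <4b5c8c0a-ff60-454b-a1b4-9747aa737d19>
      CephClientKey: <AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ==>
      CephExternalMonHost: <172.16.1.7, 172.16.1.8, 172.16.1.9>
      # Optional overrides, only if your client or pool names differ from the defaults:
      CephClientUserName: openstack
      NovaRbdPoolName: vms
      CinderRbdPoolName: volumes
      GlanceRbdPoolName: images
      CinderBackupRbdPoolName: backups
      # Shared File Systems service (manila) with CephFS only:
      ManilaCephFSCephFSAuthId: manila
      CephManilaClientKey: <AQDQ991cAAAAABAA0aXFrTnjH9aO39P0iVvYyg==>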


3.2. Ceph containers for Red Hat OpenStack Platform with Red Hat Ceph Storage

You must have a Ceph Storage container to configure Red Hat OpenStack Platform (RHOSP) to use Red Hat Ceph Storage with NFS Ganesha. You do not require a Ceph Storage container if the external Ceph Storage cluster only provides Block (through RBD), Object (through RGW), or File (through native CephFS) storage.

RHOSP 17.0 requires Red Hat Ceph Storage 5.x (Ceph package 16.x) or later to be compatible with Red Hat Enterprise Linux 9. The Ceph Storage 5.x containers are hosted at registry.redhat.io, a registry that requires authentication. For more information, see Container image preparation parameters.
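
For reference, registry authentication and the Ceph container image are typically set in the container image preparation environment file rather than in ceph-config.yaml. The following fragment is a hedged sketch: the parameter names are standard director parameters, but the image name and tag for your release are placeholders that you must confirm against the release documentation.

    parameter_defaults:
      ContainerImagePrepare:
      - push_destination: true
        set:
          ...
          ceph_namespace: registry.redhat.io/rhceph
          ceph_image: <ceph_image_name>
          ceph_tag: <ceph_image_tag>
      ContainerImageRegistryCredentials:
        registry.redhat.io:
          <registry_username>: <registry_password>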

3.3. Deploying the overcloud

Deploy the overcloud with the environment file that you created.

Procedure

  • The creation of the overcloud requires additional arguments for the openstack overcloud deploy command:

    $ openstack overcloud deploy --templates \
      -e /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml \
      -e /home/stack/templates/ceph-config.yaml \
      --ntp-server pool.ntp.org \
      ...

    This example command uses the following options:

  • --templates - Creates the overcloud from the default heat template collection, /usr/share/openstack-tripleo-heat-templates/.
  • -e /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml - Configures director to integrate an existing Ceph Storage cluster with the overcloud.
  • -e /home/stack/templates/ceph-config.yaml - Adds the custom environment file that overrides the defaults set by /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml.
  • --ntp-server pool.ntp.org - Sets the NTP server.

3.3.1. Adding environment files for the Shared File Systems service with CephFS

If you deploy an overcloud that uses the Shared File Systems service (manila) with CephFS, you must add additional environment files.

Procedure

  1. Create and add additional environment files:

    • If you deploy an overcloud that uses the native CephFS back-end driver, add /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsnative-config.yaml.
    • If you deploy an overcloud that uses CephFS through NFS, add /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml.

      Red Hat recommends that you deploy the CephFS-through-NFS driver with an isolated StorageNFS network where shares are exported. You must deploy the isolated network to overcloud Controller nodes. To enable this deployment, director includes the following file and role:

      • An example custom network configuration file that includes the StorageNFS network (/usr/share/openstack-tripleo-heat-templates/network_data_ganesha.yaml). Review and customize this file as necessary.
      • A ControllerStorageNFS role.
  2. Modify the openstack overcloud deploy command depending on the CephFS back end that you use.

    • For native CephFS:

       $ openstack overcloud deploy --templates \
         -e /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml \
         -e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsnative-config.yaml \
         -e /home/stack/templates/ceph-config.yaml \
         --ntp-server pool.ntp.org \
         ...
    • For CephFS through NFS:

       $ openstack overcloud deploy --templates \
           -n /usr/share/openstack-tripleo-heat-templates/network_data_ganesha.yaml \
           -r /home/stack/custom_roles.yaml \
           -e /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml \
           -e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml \
           -e /home/stack/templates/ceph-config.yaml \
           --ntp-server pool.ntp.org \
           ...
      Note

      The custom ceph-config.yaml environment file overrides parameters in the external-ceph.yaml file and either the manila-cephfsnative-config.yaml file or the manila-cephfsganesha-config.yaml file. Therefore, include the custom ceph-config.yaml environment file in the deployment command after external-ceph.yaml and either manila-cephfsnative-config.yaml or manila-cephfsganesha-config.yaml.

      Example environment file

      parameter_defaults:
          CinderEnableIscsiBackend: false
          CinderEnableRbdBackend: true
          CinderEnableNfsBackend: false
          NovaEnableRbdBackend: true
          GlanceBackend: rbd
          CinderRbdPoolName: "volumes"
          NovaRbdPoolName: "vms"
          GlanceRbdPoolName: "images"
          CinderBackupRbdPoolName: "backups"
          CephClusterFSID: <cluster_ID>
          CephExternalMonHost: <IP_address>,<IP_address>,<IP_address>
          CephClientKey: "<client_key>"
          CephClientUserName: "openstack"
          ManilaCephFSDataPoolName: manila_data
          ManilaCephFSMetadataPoolName: manila_metadata
          ManilaCephFSCephFSAuthId: 'manila'
          CephManilaClientKey: '<client_key>'
          ExtraConfig:

      • Replace <cluster_ID>, <IP_address>, and <client_key> with values that are suitable for your environment.
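
The /home/stack/custom_roles.yaml file that the CephFS-through-NFS command references with -r is typically generated from the roles that ship with director. The following is a hedged sketch; confirm the exact role names available in your heat template collection before you run it.

    $ openstack overcloud roles generate \
        --roles-path /usr/share/openstack-tripleo-heat-templates/roles \
        -o /home/stack/custom_roles.yaml \
        ControllerStorageNfs Compute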


3.3.2. Adding an additional environment file for external Ceph Object Gateway (RGW) for Object storage

If you deploy an overcloud that uses an existing RGW service for Object storage, you must add an additional environment file.

Procedure

  1. Add the following parameter_defaults to a custom environment file, for example, swift-external-params.yaml, and adjust the values to suit your deployment:

    parameter_defaults:
       ExternalSwiftPublicUrl: 'http://<Public RGW endpoint or loadbalancer>:8080/swift/v1/AUTH_%(project_id)s'
       ExternalSwiftInternalUrl: 'http://<Internal RGW endpoint>:8080/swift/v1/AUTH_%(project_id)s'
       ExternalSwiftAdminUrl: 'http://<Admin RGW endpoint>:8080/swift/v1/AUTH_%(project_id)s'
       ExternalSwiftUserTenant: 'service'
       SwiftPassword: 'choose_a_random_password'
    Note

    The example code snippet contains parameter values that might differ from values that you use in your environment:

    • The default port where the remote RGW instance listens is 8080. The port might be different depending on how the external RGW is configured.
    • The swift user created in the overcloud uses the password defined by the SwiftPassword parameter. You must configure the external RGW instance to use the same password to authenticate with the Identity service, by setting rgw_keystone_admin_password in the Ceph configuration, as shown in the next step.
  2. Add the following code to the Ceph config file to configure RGW to use the Identity service. Replace the variable values to suit your environment:

        rgw_keystone_api_version = 3
        rgw_keystone_url = http://<public Keystone endpoint>:5000/
        rgw_keystone_accepted_roles = member, Member, admin
        rgw_keystone_accepted_admin_roles = ResellerAdmin, swiftoperator
        rgw_keystone_admin_domain = default
        rgw_keystone_admin_project = service
        rgw_keystone_admin_user = swift
        rgw_keystone_admin_password = <password_as_defined_in_the_environment_parameters>
        rgw_keystone_implicit_tenants = true
        rgw_keystone_revocation_interval = 0
        rgw_s3_auth_use_keystone = true
        rgw_swift_versioning_enabled = true
        rgw_swift_account_in_url = true
        rgw_max_attr_name_len = 128
        rgw_max_attrs_num_in_req = 90
        rgw_max_attr_size = 256
        rgw_keystone_verify_ssl = false
    Note

    Director creates the following roles and users in the Identity service by default:

    • rgw_keystone_accepted_admin_roles: ResellerAdmin, swiftoperator
    • rgw_keystone_admin_domain: default
    • rgw_keystone_admin_project: service
    • rgw_keystone_admin_user: swift
  3. Deploy the overcloud with the additional environment files and any other environment files that are relevant to your deployment:

    $ openstack overcloud deploy --templates \
    -e <your_environment_files> \
    -e /usr/share/openstack-tripleo-heat-templates/environments/swift-external.yaml \
    -e swift-external-params.yaml
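
After the deployment completes, a hedged smoke test of the Object Storage endpoint is to source the overcloud credentials and exercise the standard container commands. The container name here is arbitrary, and the output depends on how your external RGW is configured.

    $ source ~/overcloudrc
    $ openstack catalog show object-store
    $ openstack container create rgw-verify
    $ openstack container list
    $ openstack container delete rgw-verify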

Chapter 4. Verifying external Ceph Storage cluster integration

After you deploy the overcloud, confirm that Red Hat OpenStack Platform (RHOSP) services can write to the Red Hat Ceph Storage cluster.

4.1. Gathering IDs

To verify that you integrated a Red Hat Ceph Storage cluster, you must first create an image, a Compute instance, a Block Storage volume, and a file share and gather their respective IDs.

Procedure

  1. Create an image by using the Image service (glance). For more information about how to create an image, see Importing an image in Creating and Managing Images.
  2. Record the image ID for later use.
  3. Create a Compute (nova) instance. For more information about how to create an instance, see Creating an instance in the Creating and Managing Instances guide.
  4. Record the instance ID for later use.
  5. Create a Block Storage (cinder) volume. For more information about how to create a Block Storage volume, see Creating Block Storage volumes in the Storage Guide.
  6. Record the volume ID for later use.
  7. Create a file share by using the Shared File Systems service (manila).
  8. List the export path of the share and record the UUID in the suffix for later use.

For more information about how to create file shares and list the export path of a share, see Performing operations with the Shared File Systems service (manila) in the Storage Guide.
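
As a hedged illustration of the procedure above, the resources can be created and their IDs gathered with CLI commands similar to the following. The resource names, flavor, network, and share protocol are placeholders; use cephfs or nfs as the share protocol depending on your back end.

    $ source ~/overcloudrc
    $ openstack image create test-image --disk-format qcow2 --container-format bare --file <image_file>
    $ openstack server create test-instance --image test-image --flavor <flavor> --network <network>
    $ openstack volume create test-volume --size 1
    $ manila create <share_protocol> 1 --name test-share
    $ manila share-export-location-list test-share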

4.2. Verifying the Red Hat Ceph Storage cluster

When you configure an external Red Hat Ceph Storage cluster, you create pools and a client.openstack user to access those pools. After you deploy the overcloud, you can use the file that contains the credentials of the client.openstack user to list the contents of Red Hat OpenStack Platform (RHOSP) pools.

List the contents of the pools and confirm that the IDs of the Image service (glance) image, the Compute (nova) instance, the Block Storage (cinder) volume, and the Shared File Systems service (manila) file share exist on the Ceph Storage cluster.

Procedure

  1. Log in to the undercloud as the stack user and source the stackrc credentials file:

    $ source ~/stackrc
  2. List the available servers to retrieve the IP addresses of nodes on the system:

    $ metalsmith list
    
     +--------------------------------------+--------------+--------+------------------------+----------------+------------+
     | ID                                   | Name         | Status | Networks               | Image          | Flavor     |
     +--------------------------------------+--------------+--------+------------------------+----------------+------------+
     | d5a621bd-d109-41ae-a381-a42414397802 | compute-0    | ACTIVE | ctlplane=192.168.24.31 | overcloud-full | compute    |
     | 496ab196-d6cb-447d-a118-5bafc5166cf2 | controller-0 | ACTIVE | ctlplane=192.168.24.37 | overcloud-full | controller |
     | c01e730d-62f2-426a-a964-b31448f250b3 | controller-2 | ACTIVE | ctlplane=192.168.24.55 | overcloud-full | controller |
     | 36df59b3-66f3-452e-9aec-b7e7f7c54b86 | controller-1 | ACTIVE | ctlplane=192.168.24.39 | overcloud-full | controller |
     | f8f00497-246d-4e40-8a6a-b5a60fa66483 | compute-1    | ACTIVE | ctlplane=192.168.24.10 | overcloud-full | compute    |
     +--------------------------------------+--------------+--------+------------------------+----------------+------------+
  3. Use SSH to log in to any Compute node:

    $ ssh tripleo-admin@192.168.24.31
  4. Confirm that the ceph.conf and ceph.client.openstack.keyring files exist in the CephConfigPath provided by director. This path is /var/lib/tripleo-config/ceph by default, but an override might exist.

    [tripleo-admin@compute-0 ~]$ sudo ls -l /var/lib/tripleo-config/ceph/ceph.conf
    
    -rw-r--r--. 1 root root 1170 Sep 29 23:25 /var/lib/tripleo-config/ceph/ceph.conf
    
    [tripleo-admin@compute-0 ~]$ sudo ls -l /var/lib/tripleo-config/ceph/ceph.client.openstack.keyring
    
    -rw-------. 1 ceph ceph 253 Sep 29 23:25 /var/lib/tripleo-config/ceph/ceph.client.openstack.keyring
  5. Enter the following command to force the nova_compute container to use the rbd command to list the contents of the appropriate pool.

    $ sudo podman exec nova_compute /usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack ls vms

    The pool name must match the pool names of the images, VMs, volumes, and shares that you created when you configured the Ceph Storage cluster. The IDs of the image, Compute instance, volume, and share must match the IDs that you recorded in Gathering IDs.

    Note

    The example command is prefixed with podman exec nova_compute because /usr/bin/rbd, which is provided by the ceph-common package, is not installed on overcloud nodes by default. However, it is available in the nova_compute container. The command lists block device images. For more information about listing block device images, see Listing the block device images in the Red Hat Ceph Storage Block Device Guide.

    The following examples show how to confirm whether an ID for each service is present for each pool by using the IDs from Gathering IDs.

    $ sudo podman exec nova_compute /usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack ls images | grep 4485d4c0-24c3-42ec-a158-4d3950fa020b
    $ sudo podman exec nova_compute /usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack ls vms | grep 64bcb731-e7a4-4dd5-a807-ee26c669482f
    $ sudo podman exec nova_compute /usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack ls volumes | grep aeac15e8-b67f-454f-9486-46b3d75daff4
  6. To verify the existence of the Shared File Systems service share, you must log in to a Controller node:

    $ sudo podman exec openstack-manila-share-podman-0 ceph -n client.manila fs subvolume ls cephfs | grep ec99db3c-0077-40b7-b09e-8a110e3f73c1

4.3. Troubleshooting failed verification

If the verification procedures fail, verify that the Ceph key for the client.openstack user and the Red Hat Ceph Storage monitor IPs or hostnames can be used together to read, write, and delete from the Ceph Storage pools that you created for Red Hat OpenStack Platform (RHOSP).

Procedure

  1. To shorten the amount of typing you must do in this procedure, log in to a Compute node and create an alias for the rbd command:

    $ alias rbd="podman exec nova_compute /usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack"
  2. Confirm that you can write test data to the pool as a new object:

    $ rbd create --size 1024 vms/foo
  3. Confirm that you can see the test data:

    $ rbd ls vms | grep foo
  4. Delete the test data:

    $ rbd rm vms/foo
Note

If this procedure fails, contact your Ceph Storage administrator for assistance. If this procedure succeeds, but you cannot create Compute (nova) instances, Image service (glance) images, Block Storage (cinder) volumes, or Shared File Systems service (manila) shares, contact Red Hat Support.

Legal Notice

Copyright © 2024 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.