Configuring persistent storage


Red Hat OpenStack Services on OpenShift 18.0

Configuring storage services for Red Hat OpenStack Services on OpenShift

OpenStack Documentation Team

Abstract

Configure the services for block, image, object, and file storage in your Red Hat OpenStack Services on OpenShift deployment.

Providing feedback on Red Hat documentation

We appreciate your input on our documentation. Tell us how we can make it better.

Providing documentation feedback in Jira

Use the Create Issue form to provide feedback on the documentation for Red Hat OpenStack Services on OpenShift (RHOSO) or earlier releases of Red Hat OpenStack Platform (RHOSP). When you create an issue for RHOSO or RHOSP documents, the issue is recorded in the RHOSO Jira project, where you can track the progress of your feedback.

To complete the Create Issue form, ensure that you are logged in to Jira. If you do not have a Red Hat Jira account, you can create an account at https://issues.redhat.com.

  1. Click the following link to open a Create Issue page: Create Issue
  2. Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form.
  3. Click Create.

Chapter 1. Configuring persistent storage

When you deploy Red Hat OpenStack Services on OpenShift (RHOSO), you can configure your deployment to use Red Hat Ceph Storage as the back end for storage and you can configure RHOSO storage services for block, image, object, and file storage.

Note

Red Hat OpenStack Services on OpenShift (RHOSO) supports integration with Red Hat Ceph Storage 8 with the following known issue:

  • RHCEPH-10845 - [BZ#2351825] RGW tempest failures with RHCS 8 and RHOSO 18

Due to this known issue, the Red Hat Ceph Storage Object Gateway (RGW) is not supported for use with Red Hat Ceph Storage 8. For more information about this known issue, consult the provided link before attempting to integrate with Red Hat Ceph Storage 8.

You can integrate an external Red Hat Ceph Storage cluster with the Compute service (nova) and a combination of one or more RHOSO storage services, or you can create a hyperconverged infrastructure (HCI) environment. RHOSO supports Red Hat Ceph Storage 7 and 8. For information about creating an HCI environment, see Deploying a hyperconverged infrastructure environment.

Note

Red Hat OpenShift Data Foundation (ODF) can be used in external mode to integrate with Red Hat Ceph Storage. The use of ODF in internal mode is not supported. For more information on deploying ODF in external mode, see Deploying OpenShift Data Foundation in external mode.

RHOSO recognizes two types of storage - ephemeral and persistent:

  • Ephemeral storage is associated with a specific Compute instance. When that instance is terminated, so is the associated ephemeral storage. This type of storage is useful for runtime requirements, such as storing the operating system of an instance.
  • Persistent storage is designed to survive (persist) independent of any running instance. This storage is used for any data that needs to be reused, either by different instances or beyond the life of a specific instance.

RHOSO storage services correspond with the following persistent storage types:

  • Block Storage service (cinder): Volumes
  • Image service (glance): Images
  • Object Storage service (swift): Objects
  • Shared File Systems service (manila): Shares

All persistent storage services store data in a storage back end. Red Hat Ceph Storage can serve as a back end for all four services, and the features and functionality of OpenStack services are optimized when you use Red Hat Ceph Storage.

Storage solutions

RHOSO supports the following storage solutions:

  • Configure the Block Storage service with a Ceph RBD back end, iSCSI, FC, or NVMe-TCP storage protocols, or a generic NFS back end.
  • Configure the Image service with a Ceph RBD, Block Storage, Object Storage, or NFS back end.
  • Configure the Object Storage service to use PersistentVolumes (PVs) on OpenShift nodes or disks on external data plane nodes.
  • Configure the Shared File Systems service with a native CephFS, Ceph-NFS, or alternative back end, such as NetApp or Pure Storage.

For information about planning the storage solution and related requirements for your RHOSO deployment, for example, networking and security, see Planning storage and shared file systems in Planning your deployment.

To promote the use of best practices, Red Hat has a certification process for OpenStack back ends. For improved supportability and interoperability, ensure that your storage back end is certified for RHOSO. You can check certification status in the Red Hat Ecosystem Catalog. Ceph RBD is certified as a back end in all RHOSO releases.

Note

Red Hat OpenStack Services on OpenShift (RHOSO) supports external deployments of Red Hat Ceph Storage 7 and 8. Configuration examples that reference Red Hat Ceph Storage use Release 7 information. If you are using Red Hat Ceph Storage 8, adjust the configuration examples accordingly.

Chapter 2. Mounting external files to provide configuration data

Some deployment scenarios require access to external data for configuration or authentication purposes. RHOSO provides the extraMounts parameter to allow access to this external information. This parameter mounts the designated external file for use by the RHOSO deployment. Deployment scenarios that require external information of this type can include:

  • A component needs deployment-specific configuration and credential files for the storage back end to exist at a specific location in the file system. For example, the Red Hat Ceph Storage cluster configuration and keyring files are required by the Block Storage (cinder), Image (glance), and Compute (nova) services. These configuration and keyring files must be distributed to these services by using the extraMounts parameter.
  • A node requires access to an external NFS share to use as a temporary image storage location when the allocated node disk space is fully consumed. You use the extraMounts parameter to configure this access. For example, the Block Storage service can use an external NFS share to perform image conversion.
  • A storage back-end driver must run on a persistent filesystem to preserve stored data between reboots. You must use the extraMounts parameter to configure this runtime location.

The extraMounts parameter can be defined at the following levels:

  • Service - A Red Hat OpenStack Services on OpenShift (RHOSO) service such as Glance, Cinder, or Manila.
  • Component - A component of a service such as GlanceAPI, CinderAPI, CinderScheduler, ManilaShare, CinderBackup.
  • Instance - An individual instance of a particular component. For example, your deployment could have two instances of the component ManilaShare called share1 and share2. An Instance level propagation represents the Pod associated with an instance that is part of the same Component type.

The propagation field is used to describe how the definition is applied. If the propagation field is not used, definitions propagate to every level below the level at which they are defined:

  • Service level definitions propagate to Component and Instance levels.
  • Component level definitions propagate to the Instance level.

The following is the general structure of an extraMounts definition:

extraMounts:
  - name: <extramount-name> 1
    region: <openstack-region> 2
    extraVol:
      - propagation: 3
        - <location>
        extraVolType: <Ceph | Nfs | Undefined> 4
        volumes: 5
        - <pod-volume-structure>
        mounts: 6
        - <pod-mount-structure>
1
The name field is a string that names the extraMounts definition. This is for organizational purposes and cannot be referenced from other parts of the manifest. This is an optional attribute.
2
The region field is a string that defines the RHOSO region of the extraMounts definition. This is an optional attribute.
3
The propagation field describes how the definition is applied. If the propagation field is not used, definitions propagate to every level below the level at which they are defined. This is an optional attribute.
4
The extraVolType field is a string that assists the administrator in categorizing or labeling the group of mounts that belong to the extraVol entry of the list. There are no defined values for this parameter but the values Ceph, Nfs, and Undefined are common. This is an optional attribute.
5
The volumes field is a list that defines Red Hat OpenShift volume sources. This field has the same structure as the volumes section in a Pod. The structure is dependent on the type of volume being defined. The name defined in this section is used as a reference in the mounts section.
6
The mounts field is a list of mountpoints that represent the path where the volumeSource should be mounted in the Pod. The name of a volume from the volumes section is used as a reference as well as the path where it should be mounted. This attribute has the same structure as the volumeMounts attribute for a Pod.
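
For example, the following sketch applies the same structure to the NFS use case described earlier in this chapter, where the Block Storage service uses an external NFS share as temporary space for image conversion. The server address, export path, and mount path are illustrative placeholders only; verify the correct mount path for your back end before you use a definition like this:

extraMounts:
  - name: cinder-conversion-nfs
    region: r1
    extraVol:
      - propagation:
        - CinderVolume
        extraVolType: Nfs
        volumes:
        - name: cinder-conversion
          nfs:
            server: 192.168.122.5
            path: /exports/cinder-conversion
        mounts:
        - name: cinder-conversion
          mountPath: /var/lib/cinder/conversion
          readOnly: false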

2.1. Mounting external files using the extraMounts attribute

Procedure

  1. Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation.
  2. Add the extraMounts attribute to the OpenStackControlPlane CR service definition.

    The following example demonstrates adding the extraMounts attribute:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      extraMounts:
        - name: v1
          region: r1
          extraVol:
            - extraVolType: Ceph
  3. Add the propagation field to specify where in the service definition the extraMount attribute applies.

    The following example adds the propagation field to the previous example:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      glance:
      ...
      extraMounts:
        - name: v1
          region: r1
          extraVol:
            - propagation: 1
              - Glance
              extraVolType: Ceph
    1
    The propagation field can have one of the following values:
    • Service level propagations:

      • Glance
      • Cinder
      • Manila
      • Horizon
      • Neutron
    • Component level propagations:

      • CinderAPI
      • CinderScheduler
      • CinderVolume
      • CinderBackup
      • GlanceAPI
      • ManilaAPI
      • ManilaScheduler
      • ManilaShare
      • NeutronAPI
    • Back-end propagation:

      • Any back end in the CinderVolume, ManilaShare, or GlanceAPI maps.
  4. Define the volume sources:

    The following example demonstrates adding the volumes field to the previous example to provide a Red Hat Ceph Storage secret to the Image service (glance):

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      extraMounts:
        - name: v1
          region: r1
          extraVol:
            - extraVolType: Ceph
              volumes: 1
              - name: ceph
                secret:
                  secretName: ceph-conf-files
    1
    The volumes field with the Red Hat Ceph Storage secret name.
  5. Define where the different volumes are mounted within the Pod.

    The following example demonstrates adding the mounts field to the previous example to provide the location and name of the file that contains the Red Hat Ceph Storage secret:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      extraMounts:
        - name: v1
          region: r1
          extraVol:
            - extraVolType: Ceph
              volumes:
              - name: ceph
                secret:
                  secretName: ceph-conf-files
              mounts: 1
              - name: ceph
                mountPath: "/etc/ceph"
                readOnly: true
    1
    The mounts field with the location of the secrets file.
  6. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  7. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    Tip

    Append the -w option to the end of the oc get command to track deployment progress.

    The OpenStackControlPlane resources are created when the status is "Setup complete".

  8. Confirm that the control plane is deployed by reviewing the pods in the openstack namespace:

    $ oc get pods -n openstack

    The control plane is deployed when all the pods are either completed or running.
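
Optionally, confirm that the content defined in the extraMounts attribute is visible inside an affected pod. The following check is a minimal sketch based on the Image service example in this procedure; <glance_api_pod_name> is a placeholder for an Image service API pod name from your environment:

    $ oc rsh -n openstack <glance_api_pod_name> ls /etc/ceph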

2.2. Mounting external files configuration examples

The following configuration examples demonstrate how the extraMounts attribute is used to mount external files. The extraMounts attribute is defined either at the top level of the custom resource (spec) or in the service definition.

Dashboard service (horizon)

This configuration example demonstrates using an external file to provide configuration to the Dashboard service.

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  horizon:
    enabled: true
    template:
      customServiceConfig: '# add your customization here'
      extraMounts:
      - extraVol:
        - extraVolType: HorizonSettings
          mounts:
          - mountPath: /etc/openstack-dashboard/local_settings.d/_66_help_link.py
            name: horizon-config
            readOnly: true
            subPath: _66_help_link.py
          volumes:
            - name: horizon-config
              configMap:
                name: horizon-config

Red Hat Ceph Storage

This configuration example defines the services that require access to the Red Hat Ceph Storage secret.

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  extraMounts:
    - name: v1
      region: r1
      extraVol:
        - propagation:
          - CinderVolume
          - CinderBackup
          - GlanceAPI
          - ManilaShare
          extraVolType: Ceph
          volumes:
          - name: ceph
            secret:
              secretName: ceph-conf-files
          mounts:
          - name: ceph
            mountPath: "/etc/ceph"
            readOnly: true

Shared File Systems service (manila)

This configuration example provides external configuration files to the Shared File Systems service so that it can connect to a Red Hat Ceph Storage back end.

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  manila:
    template:
      manilaShares:
        share1:
          ...
  extraMounts:
    - name: v1
      region: r1
      extraVol:
        - propagation:
          - share1
          extraVolType: Ceph
          volumes:
          - name: ceph
            secret:
              secretName: ceph-conf-files
          mounts:
          - name: ceph
            mountPath: "/etc/ceph"
            readOnly: true

Image service (glance)

This configuration example connects three GlanceAPI instances, each to a different Red Hat Ceph Storage back end. The instances api0, api1, and api2 are connected to three different Red Hat Ceph Storage clusters named ceph0, ceph1, and ceph2.

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  extraMounts:
    - name: api0
      region: r1
      extraVol:
        - propagation:
          - api0
          volumes:
            - name: ceph0
              secret:
                secretName: <secret_name>
          mounts:
            - name: ceph0
              mountPath: "/etc/ceph"
              readOnly: true
    - name: api1
      region: r1
      extraVol:
        - propagation:
          - api1
          volumes:
            - name: ceph1
              secret:
                secretName: <secret_name>
          mounts:
            - name: ceph1
              mountPath: "/etc/ceph"
              readOnly: true
    - name: api2
      region: r1
      extraVol:
        - propagation:
          - api2
          volumes:
            - name: ceph2
              secret:
                secretName: <secret_name>
          mounts:
            - name: ceph2
              mountPath: "/etc/ceph"
              readOnly: true

Chapter 3. Integrating Red Hat Ceph Storage

You can configure Red Hat OpenStack Services on OpenShift (RHOSO) to integrate with an external Red Hat Ceph Storage cluster. This configuration connects the following services to a Red Hat Ceph Storage cluster:

  • Block Storage service (cinder)
  • Image service (glance)
  • Object Storage service (swift)
  • Compute service (nova)
  • Shared File Systems service (manila)

To configure Red Hat Ceph Storage as the back end for RHOSO storage, complete the following tasks:

  1. Verify that Red Hat Ceph Storage is deployed and all the required services are running.
  2. Create the Red Hat Ceph Storage pools on the Red Hat Ceph Storage cluster.
  3. Create a Red Hat Ceph Storage secret on the Red Hat Ceph Storage cluster to provide RHOSO services access to the Red Hat Ceph Storage cluster.
  4. Obtain the Ceph File System Identifier.
  5. Configure the OpenStackControlPlane CR to use the Red Hat Ceph Storage cluster as the back end.
  6. Configure the OpenStackDataPlane CR to use the Red Hat Ceph Storage cluster as the back end.

Prerequisites

  • Access to a Red Hat Ceph Storage cluster.
  • The RHOSO control plane is installed on an operational RHOCP cluster.

3.1. Creating Red Hat Ceph Storage pools

Create pools on the Red Hat Ceph Storage cluster for each RHOSO service that uses the cluster.

Note

Run the commands in this procedure from the Ceph node.

Procedure

  1. Enter the cephadm container client:

    $ sudo cephadm shell
  2. Create pools for the Compute service (vms), the Block Storage service (volumes), and the Image service (images):

    $ for P in vms volumes images; do
       ceph osd pool create $P;
       ceph osd pool application enable $P rbd;
    done
    Note

    When you create the pools, set the appropriate placement group (PG) number, as described in Placement Groups in the Red Hat Ceph Storage Storage Strategies Guide.
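
    For example, to create a pool with an explicit PG number instead of the default, pass the number to the ceph osd pool create command. The value 128 is for illustration only; choose a value that is appropriate for your cluster:

    $ ceph osd pool create volumes 128
    $ ceph osd pool application enable volumes rbd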

  3. Optional: Create the cephfs volume if the Shared File Systems service (manila) is enabled in the control plane. This automatically enables the CephFS Metadata service (MDS) and creates the necessary data and metadata pools on the Ceph cluster:

    $ ceph fs volume create cephfs
  4. Optional: Deploy an NFS service on the Red Hat Ceph Storage cluster to use CephFS with NFS:

    $ ceph nfs cluster create cephfs \
    --ingress --virtual-ip=<vip> \
    --ingress-mode=haproxy-protocol
    • Replace <vip> with the IP address assigned to the NFS service. The NFS service should be isolated on a network that can be shared with all Red Hat OpenStack users. See NFS cluster and export management for more information about customizing the NFS service.

      Important

      When you deploy an NFS service for the Shared File Systems service, do not select a custom port to expose NFS. Only the default NFS port of 2049 is supported. You must enable the Red Hat Ceph Storage ingress service and set the ingress-mode to haproxy-protocol. Otherwise, you cannot use IP-based access rules with the Shared File Systems service. For security in production environments, do not provide access to 0.0.0.0/0 on shares to mount them on client machines.

  5. Create a cephx key for RHOSO to use to access pools:

    $ ceph auth add client.openstack \
         mgr 'allow *' \
            mon 'profile rbd' \
            osd 'profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images'
    Important

    If the Shared File Systems service is enabled in the control plane, replace osd caps with the following:

    $ ceph auth add client.openstack \
         mgr 'allow *' \
            mon 'profile rbd' \
            osd 'profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.data'
  6. Export the cephx key:

    $ ceph auth get client.openstack > /etc/ceph/ceph.client.openstack.keyring
  7. Export the configuration file:

    $ ceph config generate-minimal-conf > /etc/ceph/ceph.conf

3.2. Creating a Red Hat Ceph Storage secret

Create a secret so that services can access the Red Hat Ceph Storage cluster.

Procedure

  1. Transfer the cephx key and configuration file created in the Creating Red Hat Ceph Storage pools procedure to a host that can create resources in the openstack namespace.
  2. Base64 encode these files and store them in KEY and CONF environment variables:

    $ KEY=$(cat /etc/ceph/ceph.client.openstack.keyring | base64 -w 0)
    $ CONF=$(cat /etc/ceph/ceph.conf | base64 -w 0)
  3. Create a YAML file to create the Secret resource.
  4. Using the environment variables, add the Secret configuration to the YAML file:

    apiVersion: v1
    data:
      ceph.client.openstack.keyring: $KEY
      ceph.conf: $CONF
    kind: Secret
    metadata:
      name: ceph-conf-files
      namespace: openstack
    type: Opaque
  5. Save the YAML file.
  6. Create the Secret resource:

    $ oc create -f <secret_configuration_file>
    • Replace <secret_configuration_file> with the name of the YAML file you created.
Note

The examples in this section use openstack as the name of the Red Hat Ceph Storage user. The file name in the Secret resource must match this user name.

For example, if the file name used for the username openstack2 is /etc/ceph/ceph.client.openstack2.keyring, then the secret data line should be ceph.client.openstack2.keyring: $KEY.
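
Because $KEY and $CONF are shell environment variables, they expand only if the shell generates the YAML content. The following sketch shows one way to do this with a here-document; ceph_secret.yaml is an arbitrary file name used for illustration:

$ cat <<EOF > ceph_secret.yaml
apiVersion: v1
data:
  ceph.client.openstack.keyring: $KEY
  ceph.conf: $CONF
kind: Secret
metadata:
  name: ceph-conf-files
  namespace: openstack
type: Opaque
EOF
$ oc create -f ceph_secret.yaml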

3.3. Obtaining the Red Hat Ceph Storage File System Identifier

The Red Hat Ceph Storage File System Identifier (FSID) is a unique identifier for the cluster. The FSID is used in configuration and verification of cluster interoperability with RHOSO.

Procedure

  • Extract the FSID from the Red Hat Ceph Storage secret:

    $ FSID=$(oc get secret ceph-conf-files -o json | jq -r '.data."ceph.conf"' | base64 -d | grep fsid | sed -e 's/fsid = //')
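
    You can echo the variable to confirm that a value was extracted. The output is a UUID, for example, the fsid value 63bdd226-fbe6-5f31-956e-7028e99f1ee1 that appears in the troubleshooting example later in this guide:

    $ echo $FSID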

3.4. Configuring the control plane to use the Red Hat Ceph Storage cluster

You must configure the OpenStackControlPlane CR to use the Red Hat Ceph Storage cluster. Configuration includes the following tasks:

  1. Confirming the Red Hat Ceph Storage cluster and the associated services have the correct network configuration.
  2. Configuring the control plane to use the Red Hat Ceph Storage secret.
  3. Configuring the Image service (glance) to use the Red Hat Ceph Storage cluster.
  4. Configuring the Block Storage service (cinder) to use the Red Hat Ceph Storage cluster.
  5. Optional: Configuring the Shared File Systems service (manila) to use native CephFS or CephFS-NFS with the Red Hat Ceph Storage cluster.
Note

This example does not include configuring Block Storage backup service (cinder-backup) with Red Hat Ceph Storage.

Procedure

  1. Check the storage interface defined in your NodeNetworkConfigurationPolicy (nncp) custom resource to confirm that the Storage network has the same network configuration as the public_network of the Red Hat Ceph Storage cluster. This configuration is required to enable access to the Red Hat Ceph Storage cluster through the Storage network.

    It is not necessary for RHOSO to access the cluster_network of the Red Hat Ceph Storage cluster.

    Note

    If it does not impact workload performance, the Storage network can be different from the public_network of the external Red Hat Ceph Storage cluster and use routed (L3) connectivity, as long as the appropriate routes are added to the Storage network to reach the external cluster public_network.

  2. Check the networkAttachments for the default Image service instance in the OpenStackControlPlane CR to confirm that the default Image service is configured to access the Storage network:

    glance:
        enabled: true
        template:
          databaseInstance: openstack
          storage:
            storageRequest: 10G
          glanceAPIs:
            default:
              replicas: 3
              override:
                service:
                  internal:
                    metadata:
                      annotations:
                        metallb.universe.tf/address-pool: internalapi
                        metallb.universe.tf/allow-shared-ip: internalapi
                        metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                    spec:
                      type: LoadBalancer
              networkAttachments:
              - storage
  3. Confirm the Block Storage service is configured to access the Storage network through MetalLB.
  4. Optional: Confirm the Shared File Systems service is configured to access the Storage network through ManilaShare.
  5. Confirm the Compute service (nova) is configured to access the Storage network.
  6. Confirm the Red Hat Ceph Storage configuration file, /etc/ceph/ceph.conf, contains the IP addresses of the Red Hat Ceph Storage cluster monitors. These IP addresses must be within the Storage network IP address range.
  7. Open your openstack_control_plane.yaml file to edit the OpenStackControlPlane CR.
  8. Add the extraMounts parameter to define the services that require access to the Red Hat Ceph Storage secret.

    The following is an example of using the extraMounts parameter for this purpose. Only include ManilaShare in the propagation list if you are using the Shared File Systems service (manila):

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
      extraMounts:
        - name: v1
          region: r1
          extraVol:
            - propagation:
              - CinderVolume
              - GlanceAPI
              - ManilaShare
              extraVolType: Ceph
              volumes:
              - name: ceph
                projected:
                  sources:
                  - secret:
                      name: <ceph-conf-files>
              mounts:
              - name: ceph
                mountPath: "/etc/ceph"
                readOnly: true
  9. Add the customServiceConfig parameter to the glance template to configure the Image service to use the Red Hat Ceph Storage cluster:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      glance:
        template:
          customServiceConfig: |
            [DEFAULT]
            enabled_backends = default_backend:rbd
            [glance_store]
            default_backend = default_backend
            [default_backend]
            rbd_store_ceph_conf = /etc/ceph/ceph.conf
            store_description = "RBD backend"
            rbd_store_pool = images
            rbd_store_user = openstack
          databaseInstance: openstack
          databaseAccount: glance
          secret: osp-secret
          storage:
            storageRequest: 10G
      extraMounts:
        - name: v1
          region: r1
          extraVol:
            - propagation:
              - GlanceAPI
              extraVolType: Ceph
              volumes:
              - name: ceph
                secret:
                  secretName: ceph-conf-files
              mounts:
              - name: ceph
                mountPath: "/etc/ceph"
                readOnly: true

    When you use Red Hat Ceph Storage as a back end for the Image service, image-conversion is enabled by default. For more information, see Planning storage and shared file systems in Planning your deployment.

  10. Add the customServiceConfig parameter to the cinder template to configure the Block Storage service to use the Red Hat Ceph Storage cluster. For information about using Block Storage backups, see Configuring the Block Storage backup service.

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
      extraMounts:
        ...
      cinder:
        template:
          cinderVolumes:
            ceph:
              customServiceConfig: |
                [DEFAULT]
                enabled_backends=ceph
                [ceph]
                volume_backend_name=ceph
                volume_driver=cinder.volume.drivers.rbd.RBDDriver
                rbd_ceph_conf=/etc/ceph/ceph.conf
                rbd_user=openstack
                rbd_pool=volumes
                rbd_flatten_volume_from_snapshot=False
                rbd_secret_uuid=$FSID 1
    1
    Replace $FSID with the actual FSID. The FSID itself does not need to be considered secret. For more information, see Obtaining the Red Hat Ceph Storage File System Identifier.
  11. Optional: Add the customServiceConfig parameter to the manila template to configure the Shared File Systems service to use native CephFS or CephFS-NFS with the Red Hat Ceph Storage cluster. For more information, see Configuring the Shared File Systems service (manila).

    The following example exposes native CephFS:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
      extraMounts:
      ...
      manila:
        template:
          manilaAPI:
            customServiceConfig: |
              [DEFAULT]
              enabled_share_protocols=cephfs
          manilaShares:
            share1:
              customServiceConfig: |
                [DEFAULT]
                enabled_share_backends=cephfs
                [cephfs]
                driver_handles_share_servers=False
                share_backend_name=cephfs
                share_driver=manila.share.drivers.cephfs.driver.CephFSDriver
                cephfs_conf_path=/etc/ceph/ceph.conf
                cephfs_auth_id=openstack
                cephfs_cluster_name=ceph
                cephfs_volume_mode=0755
                cephfs_protocol_helper_type=CEPHFS

    The following example exposes CephFS with NFS:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
      extraMounts:
      ...
      manila:
        template:
          manilaAPI:
            customServiceConfig: |
              [DEFAULT]
              enabled_share_protocols=nfs
          manilaShares:
            share1:
              customServiceConfig: |
                [DEFAULT]
                enabled_share_backends=cephfsnfs
                [cephfsnfs]
                driver_handles_share_servers=False
                share_backend_name=cephfsnfs
                share_driver=manila.share.drivers.cephfs.driver.CephFSDriver
                cephfs_conf_path=/etc/ceph/ceph.conf
                cephfs_auth_id=openstack
                cephfs_cluster_name=ceph
                cephfs_volume_mode=0755
                cephfs_protocol_helper_type=NFS
                cephfs_nfs_cluster_id=cephfs
  12. Apply the updates to the OpenStackControlPlane CR:

    $ oc apply -f openstack_control_plane.yaml

3.5. Configuring the data plane to use the Red Hat Ceph Storage cluster

Configure the data plane to use the Red Hat Ceph Storage cluster.

Procedure

  1. Create a ConfigMap with additional content that is installed in the Compute service (nova) configuration directory /etc/nova/nova.conf.d/ inside the nova_compute container. This additional content directs the Compute service to use Red Hat Ceph Storage RBD.

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: ceph-nova
    data:
     03-ceph-nova.conf: | 1
      [libvirt]
      images_type=rbd
      images_rbd_pool=vms
      images_rbd_ceph_conf=/etc/ceph/ceph.conf
      images_rbd_glance_store_name=default_backend
      images_rbd_glance_copy_poll_interval=15
      images_rbd_glance_copy_timeout=600
      rbd_user=openstack
      rbd_secret_uuid=$FSID 2
    1
    This file name must follow the naming convention of ##-<name>-nova.conf. Files are evaluated by the Compute service alphabetically. A filename that starts with 01 will be evaluated by the Compute service before a filename that starts with 02. When the same configuration option occurs in multiple files, the last one read wins.
    2
    The $FSID value should contain the actual FSID as described in Obtaining the Red Hat Ceph Storage File System Identifier. The FSID itself does not need to be considered secret.
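
    Because the oc client does not expand shell variables in the file, replace $FSID with the real value before you apply the ConfigMap. The following sketch assumes that the ConfigMap is saved in the ceph-nova.yaml file that is applied in a later step:

    $ FSID=$(oc get secret ceph-conf-files -o json | jq -r '.data."ceph.conf"' | base64 -d | grep fsid | sed -e 's/fsid = //')
    $ sed -i "s/\$FSID/${FSID}/" ceph-nova.yaml
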
  2. Create a custom version of the default nova service to use the new ConfigMap, which in this case is called ceph-nova.

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneService
    metadata:
      name: nova-custom-ceph 1
    spec:
      caCerts: combined-ca-bundle
      edpmServiceType: nova
      dataSources:
       - configMapRef:
           name: ceph-nova
       - secretRef:
           name: nova-cell1-compute-config
       - secretRef:
           name: nova-migration-ssh-key
      playbook: osp.edpm.nova
    1
    The custom service is named nova-custom-ceph. It cannot be named nova because nova is an unchangeable default service. Any custom service that has the same name as a default service will be overwritten during reconciliation.
  3. Apply the ConfigMap and custom service changes:

    $ oc create -f ceph-nova.yaml
  4. Update the OpenStackDataPlaneNodeSet CR to add the extraMounts parameter, which defines access to the Red Hat Ceph Storage secret, and to modify the services list. In the services list, replace the nova service with the new custom service (in this case, nova-custom-ceph).

    Note

    The following OpenStackDataPlaneNodeSet CR representation is an example and may not list all of the services in your environment. For a default list of services in your environment, use the following command:

    oc get -n openstack crd/openstackdataplanenodesets.dataplane.openstack.org -o yaml | yq -r '.spec.versions.[].schema.openAPIV3Schema.properties.spec.properties.services.default'

    For more information, see Data plane services.

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    spec:
      ...
      roles:
        edpm-compute:
          ...
          services:
            - configure-network
            - validate-network
            - install-os
            - configure-os
            - run-os
            - ceph-client
            - ovn
            - libvirt
            - nova-custom-ceph
            - telemetry
    
      nodeTemplate:
        extraMounts:
        - extraVolType: Ceph
          volumes:
          - name: ceph
            secret:
              secretName: ceph-conf-files
          mounts:
          - name: ceph
            mountPath: "/etc/ceph"
            readOnly: true
    Note

    You must add the ceph-client service before you add the ovn, libvirt, and nova-custom-ceph services. The ceph-client service configures data plane nodes as clients of a Red Hat Ceph Storage server by distributing the Red Hat Ceph Storage client files.

  5. Save the changes to the services list.
  6. Create an OpenStackDataPlaneDeployment CR:

    $ oc create -f <dataplanedeployment_cr_file>
    • Replace <dataplanedeployment_cr_file> with the name of your file.

Result

The nova-custom-ceph service Ansible job copies overrides from the ConfigMaps to the Compute service hosts. The Ansible job also uses virsh secret-* commands so that the libvirt service can retrieve the cephx secret by FSID.

  • Run the following command on a data plane node after the job completes to confirm the job results:

    $ podman exec libvirt_virtsecretd virsh secret-get-value $FSID

3.6. Configuring an external Ceph Object Gateway back end

You can configure an external Ceph Object Gateway (RGW) to act as an Object Storage service (swift) back end by completing the following high-level tasks:

  1. Configure RGW to verify users and their roles with the Identity service (keystone) when they authenticate with the external RGW service.
  2. Deploy and configure an RGW service to handle object storage requests.

You use the openstack client tool to configure the Object Storage service.

3.6.1. Configuring RGW authentication

You must configure RGW to verify users and their roles with the Identity service (keystone) when they authenticate with the external RGW service.

Prerequisites

  • You have deployed an operational OpenStack control plane.

Procedure

  1. Create the Object Storage service on the control plane:

    $ openstack service create --name swift --description "OpenStack Object Storage" object-store
  2. Create a user called swift:

    $ openstack user create --project service --password <swift_password> swift
    • Replace <swift_password> with the password to assign to the swift user.
  3. Create roles for the swift user:

    $ openstack role create swiftoperator
    $ openstack role create ResellerAdmin
  4. Add the swift user to system roles:

    $ openstack role add --user swift --project service member
    $ openstack role add --user swift --project service admin
  5. Export the RGW endpoint IP addresses to variables and create control plane endpoints:

    $ export RGW_ENDPOINT_STORAGE=<rgw_endpoint_ip_address_storage>
    $ export RGW_ENDPOINT_EXTERNAL=<rgw_endpoint_ip_address_external>
    $ openstack endpoint create --region regionOne object-store public http://$RGW_ENDPOINT_EXTERNAL:8080/swift/v1/AUTH_%\(tenant_id\)s;
    $ openstack endpoint create --region regionOne object-store internal http://$RGW_ENDPOINT_STORAGE:8080/swift/v1/AUTH_%\(tenant_id\)s;
    • Replace <rgw_endpoint_ip_address_storage> with the IP address of the RGW endpoint on the storage network. This is how internal services will access RGW.
    • Replace <rgw_endpoint_ip_address_external> with the IP address of the RGW endpoint on the external network. This is how cloud users will write objects to RGW.

      Note

      Both endpoint IP addresses are the virtual IP addresses, owned by haproxy and keepalived, that are used to reach the RGW back ends that you deploy in the Red Hat Ceph Storage cluster in the Configuring and deploying the RGW service procedure.

  6. Add the swiftoperator role to the control plane admin group:

    $ openstack role add --project admin --user admin swiftoperator
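
Optionally, confirm the Identity service resources that you created before you deploy the RGW service. The following commands are a minimal check, not an exhaustive validation:

$ openstack service list | grep object-store
$ openstack endpoint list --service object-store
$ openstack role assignment list --user swift --project service --names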

3.6.2. Configuring and deploying the RGW service

Configure and deploy an RGW service to handle object storage requests.

Procedure

  1. Log in to a Red Hat Ceph Storage Controller node.
  2. Create a file called /tmp/rgw_spec.yaml and add the RGW deployment parameters:

    service_type: rgw
    service_id: rgw
    service_name: rgw.rgw
    placement:
      hosts:
        - <host_1>
        - <host_2>
        ...
        - <host_n>
    networks:
    - <storage_network>
    spec:
      rgw_frontend_port: 8082
      rgw_realm: default
      rgw_zone: default
    ---
    service_type: ingress
    service_id: rgw.default
    service_name: ingress.rgw.default
    placement:
      count: 1
    spec:
      backend_service: rgw.rgw
      frontend_port: 8080
      monitor_port: 8999
      virtual_ips_list:
      - <storage_network_vip>
      - <external_network_vip>
      virtual_interface_networks:
      - <storage_network>
    • Replace <host_1>, <host_2>, …, <host_n> with the names of the Ceph nodes where the RGW instances are deployed.
    • Replace <storage_network> with the network range used to resolve the interfaces where radosgw processes are bound.
    • Replace <storage_network_vip> with the virtual IP (VIP) used as the haproxy front end. This is the same address configured as the Object Storage service endpoint ($RGW_ENDPOINT_STORAGE) in the Configuring RGW authentication procedure.
    • Optional: Replace <external_network_vip> with an additional VIP on an external network to use as the haproxy front end. This address is used to connect to RGW from an external network.
  3. Save the file.
  4. Enter the cephadm shell and mount the rgw_spec.yaml file.

    $ cephadm shell -m /tmp/rgw_spec.yaml
  5. Add RGW related configuration to the cluster:

    $ ceph config set global rgw_keystone_url "https://<keystone_endpoint>"
    $ ceph config set global rgw_keystone_verify_ssl false
    $ ceph config set global rgw_keystone_api_version 3
    $ ceph config set global rgw_keystone_accepted_roles "member, Member, admin"
    $ ceph config set global rgw_keystone_accepted_admin_roles "ResellerAdmin, swiftoperator"
    $ ceph config set global rgw_keystone_admin_domain default
    $ ceph config set global rgw_keystone_admin_project service
    $ ceph config set global rgw_keystone_admin_user swift
    $ ceph config set global rgw_keystone_admin_password "$SWIFT_PASSWORD"
    $ ceph config set global rgw_keystone_implicit_tenants true
    $ ceph config set global rgw_s3_auth_use_keystone true
    $ ceph config set global rgw_swift_versioning_enabled true
    $ ceph config set global rgw_swift_enforce_content_length true
    $ ceph config set global rgw_swift_account_in_url true
    $ ceph config set global rgw_trust_forwarded_https true
    $ ceph config set global rgw_max_attr_name_len 128
    $ ceph config set global rgw_max_attrs_num_in_req 90
    $ ceph config set global rgw_max_attr_size 1024
    • Replace <keystone_endpoint> with the Identity service internal endpoint. The data plane nodes can resolve the internal endpoint but not the public one. Do not omit the URI scheme from the URL; it must be either http:// or https://.
    • Replace $SWIFT_PASSWORD with the password assigned to the swift user in the Configuring RGW authentication procedure.
  6. Deploy the RGW configuration using the Orchestrator:

    $ ceph orch apply -i /mnt/rgw_spec.yaml
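
After the Orchestrator applies the specification, you can optionally verify the deployment. The following checks are a sketch: run the first two commands from the cephadm shell on the Red Hat Ceph Storage cluster, and run the last two commands from the openstackclient pod. The container name test-container is arbitrary:

$ ceph orch ls | grep rgw
$ ceph orch ps | grep rgw
$ openstack container create test-container
$ openstack container list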

3.7. Configuring RGW with TLS for an external Red Hat Ceph Storage cluster

Configure RGW with TLS so the control plane services can resolve the external Red Hat Ceph Storage cluster host names.

This procedure configures Ceph RGW to emulate the Object Storage service (swift). It creates a DNS zone and certificate so that a URL such as https://rgw-external.ceph.local:8080 is registered as an Identity service (keystone) endpoint. This enables Red Hat OpenStack Services on OpenShift (RHOSO) clients to resolve the host and trust the certificate.

Because a RHOSO pod needs to securely access an HTTPS endpoint hosted outside of Red Hat OpenShift Container Platform (RHOCP), this process is used to create a DNS domain and certificate for that endpoint.

During this procedure, a DNSData domain is created, ceph.local in the examples, so that pods can map host names to IP addresses for services that are not hosted on RHOCP. DNS forwarding is then configured for the domain with the CoreDNS service. Lastly, a certificate is created using the RHOSO public root certificate authority.

You must copy the certificate and key file created in RHOCP to the nodes hosting RGW so they can become part of the Ceph Orchestrator RGW specification.

Procedure

  1. Create a DNSData custom resource (CR) for the external Ceph cluster.

    Note

    Creating a DNSData CR creates a new dnsmasq pod that is able to read and resolve the DNS information in the associated DNSData CR.

    The following is an example of a DNSData CR:

    apiVersion: network.openstack.org/v1beta1
    kind: DNSData
    metadata:
      labels:
        component: ceph-storage
        service: ceph
      name: ceph-storage
      namespace: openstack
    spec:
      dnsDataLabelSelectorValue: dnsdata
      hosts:
        - hostnames:
          - ceph-rgw-internal-vip.ceph.local
          ip: 172.18.0.2
        - hostnames:
          - ceph-rgw-external-vip.ceph.local
          ip: 10.10.10.2
    Note

    In this example, it is assumed that the host at the IP address 172.18.0.2 hosts the Ceph RGW endpoint for access on the private storage network. This host is passed in the CR so that DNS A and PTR records are created. This enables the host to be accessed by using the host name ceph-rgw-internal-vip.ceph.local.

    It is also assumed that the host at the IP address 10.10.10.2 hosts the Ceph RGW endpoint for access on the external network. This host is passed in the CR so that DNS A and PTR records are created. This enables the host to be accessed by using the host name ceph-rgw-external-vip.ceph.local.

    The list of hosts in this example is not a definitive list of required hosts. It is provided for demonstration purposes. Substitute the appropriate hosts for your environment.

  2. Apply the CR to your environment:

    $ oc apply -f <ceph_dns_yaml>
    • Replace <ceph_dns_yaml> with the name of the DNSData CR file.
  3. Update the CoreDNS CR with a forwarder to the dnsmasq for requests to the ceph.local domain. For more information about DNS forwarding, see Using DNS forwarding in the RHOCP Networking guide.
  4. List the openstack domain DNS cluster IP:

    $ oc get svc dnsmasq-dns

    The following is an example output for this command:

    $ oc get svc dnsmasq-dns
    dnsmasq-dns     LoadBalancer   10.217.5.130   192.168.122.80    53:30185/UDP     160m
  5. Record the forwarding information from the command output.
  6. List the CoreDNS CR:

    $ oc -n openshift-dns describe dns.operator/default
  7. Edit the CoreDNS CR and update it with the forwarding information.

    The following is an example of a CoreDNS CR updated with forwarding information:

    apiVersion: operator.openshift.io/v1
    kind: DNS
    metadata:
      creationTimestamp: "2024-03-25T02:49:24Z"
      finalizers:
      - dns.operator.openshift.io/dns-controller
      generation: 3
      name: default
      resourceVersion: "164142"
      uid: 860b0e61-a48a-470e-8684-3b23118e6083
    spec:
      cache:
        negativeTTL: 0s
        positiveTTL: 0s
      logLevel: Normal
      nodePlacement: {}
      operatorLogLevel: Normal
      servers:
      - forwardPlugin:
          policy: Random
          upstreams:
          - 10.217.5.130:53
        name: ceph
        zones:
        - ceph.local
      upstreamResolvers:
        policy: Sequential
        upstreams:
        - port: 53
          type: SystemResolvConf

    The following is what has been added to the CR:

    ....
      servers:
      - forwardPlugin:
          policy: Random
          upstreams:
          - 10.217.5.130:53 1
        name: ceph
        zones:
        - ceph.local
    ....
    1
    The forwarding information recorded from the oc get svc dnsmasq-dns command.
  8. Create a Certificate CR with the host names from the DNSData CR.

    The following is an example of a Certificate CR:

    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: cert-ceph-rgw
      namespace: openstack
    spec:
      duration: 43800h0m0s
      issuerRef: {'group': 'cert-manager.io', 'kind': 'Issuer', 'name': 'rootca-public'}
      secretName: cert-ceph-rgw
      dnsNames:
        - ceph-rgw-internal-vip.ceph.local
        - ceph-rgw-external-vip.ceph.local
    Note

    The certificate issuerRef is set to the root certificate authority (CA) of RHOSO. This CA is automatically created when the control plane is deployed. The default name of the CA is rootca-public. The RHOSO pods trust this new certificate because the root CA is used.

  9. Apply the CR to your environment:

    $ oc apply -f <ceph_cert_yaml>
    • Replace <ceph_cert_yaml> with the name of the Certificate CR file.
  10. Extract the certificate and key data from the secret created when the Certificate CR was applied:

    $ oc get secret <ceph_cert_secret_name> -o yaml
    • Replace <ceph_cert_secret_name> with the name used in the secretName field of your Certificate CR.

      Note

      This command outputs YAML with a data section that looks like the following:

      [stack@osp-storage-04 ~]$ oc get secret cert-ceph-rgw -o yaml
      apiVersion: v1
      data:
        ca.crt: <CA>
        tls.crt: <b64cert>
        tls.key: <b64key>
      kind: Secret

      The <b64cert> and <b64key> values are the base64-encoded certificate and key strings that you must use in the next step.

  11. Extract and base64 decode the certificate and key information obtained in the previous step and save a concatenation of them in the Ceph Object Gateway service specification.

    The rgw section of the specification file looks like the following:

      service_type: rgw
      service_id: rgw
      service_name: rgw.rgw
      placement:
        hosts:
        - host1
        - host2
      networks:
        - 172.18.0.0/24
      spec:
        rgw_frontend_port: 8082
        rgw_realm: default
        rgw_zone: default
        ssl: true
        rgw_frontend_ssl_certificate: |
          -----BEGIN CERTIFICATE-----
          MIIDkzCCAfugAwIBAgIRAKNgGd++xV9cBOrwDAeEdQUwDQYJKoZIhvcNAQELBQAw
          <redacted>
          -----BEGIN RSA PRIVATE KEY-----
          MIIEpQIBAAKCAQEAyTL1XRJDcSuaBLpqasAuLsGU2LQdMxuEdw3tE5voKUNnWgjB
          <redacted>
          -----END RSA PRIVATE KEY-----

    The ingress section of the specification file looks like the following:

      service_type: ingress
      service_id: rgw.default
      service_name: ingress.rgw.default
      placement:
        count: 1
      spec:
        backend_service: rgw.rgw
        frontend_port: 8080
        monitor_port: 8999
        virtual_interface_networks:
        - 172.18.0.0/24
        virtual_ip: 172.18.0.2/24
        ssl_cert: |
          -----BEGIN CERTIFICATE-----
          MIIDkzCCAfugAwIBAgIRAKNgGd++xV9cBOrwDAeEdQUwDQYJKoZIhvcNAQELBQAw
          <redacted>
          -----BEGIN RSA PRIVATE KEY-----
          MIIEpQIBAAKCAQEAyTL1XRJDcSuaBLpqasAuLsGU2LQdMxuEdw3tE5voKUNnWgjB
          <redacted>
          -----END RSA PRIVATE KEY-----

    In the above examples, the rgw_frontend_ssl_certificate and ssl_cert values contain the base64-decoded values of both <b64cert> and <b64key> from the previous step, concatenated with no spaces in between.
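
    One way to produce the concatenated value is to extract and decode the two fields from the secret and then join them. The following sketch assumes the secret name cert-ceph-rgw from the earlier Certificate CR example:

    $ oc get secret cert-ceph-rgw -n openstack -o jsonpath='{.data.tls\.crt}' | base64 -d > rgw.crt
    $ oc get secret cert-ceph-rgw -n openstack -o jsonpath='{.data.tls\.key}' | base64 -d > rgw.key
    $ cat rgw.crt rgw.key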

  12. Use the procedure Deploying the Ceph Object Gateway using the service specification to deploy Ceph RGW with SSL.
  13. Connect to the openstackclient pod.
  14. Verify that the forwarding information has been successfully updated.

    $ curl --trace - <host_name>
    • Replace <host_name> with the name of the external host previously added to the DNSData CR.

      Note

      The following is an example output from this command where the openstackclient pod successfully resolved the host name, and no SSL verification errors were encountered.

      sh-5.1$ curl https://rgw-external-vip.ceph.local:8080
      <?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>
      sh-5.1$

3.8. Enabling deferred deletion for volumes or images with dependencies

When you use Ceph RBD as a back end for the Block Storage service (cinder) or the Image service (glance), you can enable deferred deletion in the Ceph RBD Clone v2 API.

With deferred deletion, you can delete a volume from the Block Storage service or an image from the Image service even if Ceph RBD volumes or snapshots depend on them, for example, copy-on-write (COW) clones that the Block Storage service or the Compute service (nova) creates in different storage pools. The volume is deleted from the Block Storage service, or the image is deleted from the Image service, but it is still stored in a trash area in Ceph RBD while dependencies remain. The volume or image is only deleted from Ceph RBD when there are no dependencies.

Note

The trash area maintained by deferred deletion does not provide restoration functionality. When volumes or images are moved to the trash area, they cannot be recovered or restored. The trash area serves only as a holding mechanism for the volume or image until all dependencies have been removed. The volume or image will be permanently deleted once no dependencies exist.

Limitations

  • When you enable Clone v2 deferred deletion in existing environments, the feature only applies to new volumes or images.

Procedure

  1. Verify which Ceph version the clients in your Ceph Storage cluster are running:

    $ cephadm shell -- ceph osd get-require-min-compat-client

    Example output:

    luminous
  2. To set the cluster to use the Clone v2 API and the deferred deletion feature by default, set min-compat-client to mimic. Only clients in the cluster that are running Ceph version 13.2.x (Mimic) or later can access images with dependencies:

    $ cephadm shell -- ceph osd set-require-min-compat-client mimic
  3. Schedule an interval for trash purge in minutes by using the m suffix:

    $ rbd trash purge schedule add --pool <pool> <30m>
    • Replace <pool> with the name of the associated storage pool, for example, volumes in the Block Storage service.
    • Replace <30m> with the interval in minutes that you want to specify for trash purge.
  4. Verify a trash purge schedule has been set for the pool:

    $ rbd trash purge schedule list --pool <pool>
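
Optionally, list the contents of the trash area to see the volumes or images that are currently held there for a pool. Replace <pool> as in the previous steps:

$ rbd trash list --pool <pool>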

3.9. Troubleshooting Red Hat Ceph Storage RBD integration

The Compute (nova), Block Storage (cinder), and Image (glance) services can integrate with Red Hat Ceph Storage RBD to use it as a storage back end. If this integration does not work as expected, you can perform an incremental troubleshooting procedure to progressively eliminate possible causes.

The following example shows how to troubleshoot an Image service integration. You can adapt the same steps to troubleshoot Compute and Block Storage service integrations.

Note

If you discover the cause of your issue before completing this procedure, it is not necessary to do any subsequent steps. You can exit this procedure and resolve the issue.

Procedure

  1. Determine if any parts of the control plane are not properly deployed by assessing whether the Ready condition is not True:

    $ oc get -n openstack OpenStackControlPlane \
      -o jsonpath="{range .items[0].status.conditions[?(@.status!='True')]}{.type} is {.status} due to {.message}{'\n'}{end}"
    1. If you identify a service that is not properly deployed, check the status of the service.

      The following example checks the status of the Compute service:

      $ oc get -n openstack Nova/nova \
        -o jsonpath="{range .status.conditions[?(@.status!='True')]}{.type} is {.status} due to {.message}{'\n'}{end}"
      Note

      You can check the status of all deployed services with the command oc get pods -n openstack and the logs of a specific service with the command oc logs -n openstack <service_pod_name>. Replace <service_pod_name> with the name of the service pod you want to check.

    2. If you identify an operator that is not properly deployed, check the status of the operator:

      $ oc get pods -n openstack-operators -lopenstack.org/operator-name
      Note

      Check the operator logs with the command oc logs -n openstack-operators -lopenstack.org/operator-name=<operator_name>.

  2. Check the Status of the data plane deployment:

    $ oc get -n openstack OpenStackDataPlaneDeployment
    1. If the Status of the data plane deployment is False, check the logs of the associated Ansible job:

      $ oc logs -n openstack job/<ansible_job_name>
      Copy to Clipboard

      Replace <ansible_job_name> with the name of the associated job. The job name is listed in the Message field of the output of the oc get -n openstack OpenStackDataPlaneDeployment command.

  3. Check the Status of the data plane node set deployment:

    $ oc get -n openstack OpenStackDataPlaneNodeSet
    Copy to Clipboard
    1. If the Status of the data plane node set deployment is False, check the logs of the associated Ansible job:

      $ oc logs -n openstack job/<ansible_job_name>
      Copy to Clipboard
      • Replace <ansible_job_name> with the name of the associated job. The job name is listed in the Message field of the output of the oc get -n openstack OpenStackDataPlaneNodeSet command.
  4. If any pods are in the CrashLoopBackOff state, you can duplicate them for troubleshooting purposes with the oc debug command:

    $ oc debug <pod_name>
    Copy to Clipboard

    Replace <pod_name> with the name of the pod to duplicate.

    Tip

    You can also use the oc debug command in the following object debugging activities:

    • To run /bin/sh on a container other than the first one, which is the default behavior of the command, use the command form oc debug --container <container_name> <pod_name>. This is useful for pods like the API pod, where the first container is tailing a file and the second container is the one you want to debug. If you use this command form, you must first use the command oc get pods | grep <search_string> to find the name of the applicable pod.
    • To route traffic to the pod during the debug process, use the command form oc debug <pod_name> --keep-labels=true.
    • To debug any resource that creates pods, such as Deployments, StatefulSets, and Nodes, use the command form oc debug <resource_type>/<resource_name>. For example, oc debug StatefulSet/cinder-scheduler debugs the cinder-scheduler StatefulSet.
  5. Connect to the pod and confirm that the ceph.client.openstack.keyring and ceph.conf files are present in the /etc/ceph directory.

    Note

    If the pod is in a CrashLoopBackOff state, use the oc debug command as described in the previous step to duplicate the pod and route traffic to it.

    $ oc rsh <pod_name>
    Copy to Clipboard
    • Replace <pod_name> with the name of the applicable pod.

      Tip

      If the Ceph configuration files are missing, check the extraMounts parameter in your OpenStackControlPlane CR.

  6. Confirm that the pod has a network connection to the Red Hat Ceph Storage cluster by connecting to the IP and port of a Ceph Monitor from the pod. The IP and port information is located in /etc/ceph/ceph.conf.

    The following is an example of this process:

    $ oc get pods | grep glance | grep external-api-0
    glance-06f7a-default-external-api-0                               3/3     Running     0              2d3h
    $ oc debug --container glance-api glance-06f7a-default-external-api-0
    Starting pod/glance-06f7a-default-external-api-0-debug-p24v9, command was: /usr/bin/dumb-init --single-child -- /bin/bash -c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start
    Pod IP: 192.168.25.50
    If you don't see a command prompt, try pressing enter.
    sh-5.1# cat /etc/ceph/ceph.conf
    # Ansible managed
    
    [global]
    
    fsid = 63bdd226-fbe6-5f31-956e-7028e99f1ee1
    mon host = [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0],[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0]
    
    
    [client.libvirt]
    admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
    log file = /var/log/ceph/qemu-guest-$pid.log
    
    sh-5.1# python3
    Python 3.9.19 (main, Jul 18 2024, 00:00:00)
    [GCC 11.4.1 20231218 (Red Hat 11.4.1-3)] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import socket
    >>> s = socket.socket()
    >>> ip="192.168.122.100"
    >>> port=3300
    >>> s.connect((ip,port))
    >>>
    Copy to Clipboard
    Tip

    Troubleshoot the network connection between the cluster and pod if you cannot connect to a Ceph Monitor. The previous example uses a Python socket to connect to the IP and port of the Red Hat Ceph Storage cluster from the ceph.conf file.

    There are two potential outcomes from the execution of the s.connect((ip,port)) function:

    • If the command executes successfully and returns no error, the network connection between the pod and the cluster is functioning correctly. A successful connection produces no output at all.
    • If the command takes a long time to execute and returns an error similar to the following example, the network connection between the pod and the cluster is not functioning correctly and you must troubleshoot it further:
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TimeoutError: [Errno 110] Connection timed out
    Copy to Clipboard
  7. Examine the cephx key as shown in the following example:

    bash-5.1$ cat /etc/ceph/ceph.client.openstack.keyring
    [client.openstack]
       key = "<redacted>"
       caps mgr = allow *
       caps mon = profile rbd
       caps osd = profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images
    bash-5.1$
    Copy to Clipboard
  8. List the contents of a pool from the caps osd parameter as shown in the following example:

    $ /usr/bin/rbd --conf /etc/ceph/ceph.conf \
    --keyring /etc/ceph/ceph.client.openstack.keyring \
    --cluster ceph --id openstack \
    ls -l -p <pool_name> | wc -l
    Copy to Clipboard
    • Replace <pool_name> with the name of the required Red Hat Ceph Storage pool.

      Tip

      If this command returns the number 0 or greater, the cephx key provides adequate permissions to connect to, and read information from, the Red Hat Ceph Storage cluster.

      If this command does not complete but network connectivity to the cluster was confirmed, work with the Ceph administrator to obtain the correct cephx keyring.

      Additionally, it is possible there is an MTU mismatch on the Storage network. If the network is using jumbo frames (an MTU value of 9000), all switch ports between servers using the interface must be updated to support jumbo frames. If this change is not made on the switch, problems can occur at the Ceph application layer. Verify all hosts using the network can communicate at the desired MTU with a command such as ping -M do -s 8972 <ip_address>.

  9. Send test data to the images pool on the Ceph cluster.

    The following is an example of performing this task:

    # DATA=$(date | md5sum | cut -c-12)
    # POOL=images
    # RBD="/usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack"
    # $RBD create --size 1024 $POOL/$DATA
    Copy to Clipboard
    Tip

    It is possible to be able to read data from the cluster but not to write data to it. If write permission is granted in the cephx keyring but you cannot write data to the cluster, this can indicate that the cluster is overloaded and unable to accept new data.

    In one such case, the rbd command in this example did not complete successfully and was canceled. It was subsequently confirmed that the cluster itself did not have the resources to write new data, and the issue was resolved on the cluster. There was nothing incorrect with the client configuration.
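
    After you confirm that the write succeeded, you can optionally remove the test image so that it does not consume space in the pool. The following is a minimal sketch that reuses the variables from the previous example:

    # $RBD remove $POOL/$DATA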

3.10. Troubleshooting Red Hat Ceph Storage clients

Put Red Hat OpenStack Services on OpenShift (RHOSO) Ceph clients in debug mode to troubleshoot their operation.

Procedure

  1. Locate the Red Hat Ceph Storage configuration file mapped in the Red Hat OpenShift secret created in Creating a Red Hat Ceph Storage secret.
  2. Modify the contents of the configuration file to include troubleshooting-related configuration.

    The following is an example of troubleshooting-related configuration:

    [client.openstack]
    admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
    log file = /var/log/guest-$pid.log
    debug ms = 1
    debug rbd = 20
    log to file = true
    Copy to Clipboard
    Note

    This is not an exhaustive example of troubleshooting-related configuration. For more information, see Troubleshooting Red Hat Ceph Storage.

  3. Update the secret with the new content.
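
    The following is a minimal sketch of one way to update the secret with the modified files. It assumes that the secret is named ceph-conf-files and that the modified ceph.conf and keyring files are in the current directory; the secret name and file names in your deployment might differ:

    $ oc create secret generic ceph-conf-files \
      --from-file=ceph.conf=./ceph.conf \
      --from-file=ceph.client.openstack.keyring=./ceph.client.openstack.keyring \
      --dry-run=client -o yaml | oc apply -n openstack -f -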

3.11. Customizing and managing Red Hat Ceph Storage

Red Hat OpenStack Services on OpenShift (RHOSO) 18.0 supports Red Hat Ceph Storage 7 and 8. For information about the customization and management of Red Hat Ceph Storage 7 and 8, see the applicable Red Hat Ceph Storage documentation set. These documentation sets contain key information and procedures for customization and management tasks.

Chapter 4. Configuring the Block Storage service (cinder)

The Block Storage service (cinder) provides access to remote block storage devices through volumes for persistent storage. The Block Storage service has three mandatory services, api, scheduler, and volume, and one optional service, backup.

Note

As a security hardening measure, the Block Storage services run as the cinder user.

All Block Storage services use the cinder section of the OpenStackControlPlane custom resource (CR) for their configuration:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
Copy to Clipboard

Global configuration options are applied directly under the cinder and template sections. Service specific configuration options appear under their associated sections. The following example demonstrates all of the sections where Block Storage service configuration is applied and what type of configuration is applied in each section:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    <global-options>
    template:
      <global-options>
      cinderAPI:
        <cinder-api-options>
      cinderScheduler:
        <cinder-scheduler-options>
      cinderVolumes:
        <name1>: <cinder-volume-options>
        <name2>: <cinder-volume-options>
      cinderBackup:
        <cinder-backup-options>
Copy to Clipboard

4.1. Terminology

The following terms are important to understanding the Block Storage service (cinder):

  • Storage back end: A physical storage system where volume data is stored.
  • Cinder driver: The part of the Block Storage service that enables communication with the storage back end. It is configured with the volume_driver and backup_driver options.
  • Cinder back end: A logical representation of the grouping of a cinder driver with its configuration. This grouping is used to manage and address the volumes present in a specific storage back end. The name of this logical construct is configured with the volume_backend_name option.
  • Storage pool: A logical grouping of volumes in a given storage back end.
  • Cinder pool: A representation in the Block Storage service of a storage pool.
  • Volume host: The way the Block Storage service addresses volumes. There are two different representations, short (<hostname>@<backend-name>) and full (<hostname>@<backend-name>#<pool-name>).
  • Quota: Limits defined per project to constrain the use of Block Storage specific resources.
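
The following is an illustrative example of inspecting the volume host of an existing volume with the openstack client. The volume ID is a placeholder, and the value shown assumes a Red Hat Ceph Storage back end named ceph; your output differs according to your back ends and pools:

$ openstack volume show <volume_id> -c os-vol-host-attr:host -f value
cinder-volume-ceph-0@ceph#ceph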

4.2. Block Storage service (cinder) enhancements in Red Hat OpenStack Services on OpenShift (RHOSO)

The following functionality enhancements have been integrated into the Block Storage service:

  • Ease of deployment for multiple volume back ends.
  • Back end deployment does not affect running volume back ends.
  • Back end addition and removal does not affect running back ends.
  • Back end configuration changes do not affect other running back ends.
  • Each back end can use its own vendor-specific container image. It is no longer necessary to build a custom image that holds dependencies from two drivers.
  • Pacemaker has been replaced by Red Hat OpenShift Container Platform (RHOCP) functionality.
  • Improved methods for troubleshooting the service code.

4.3. Configuring transport protocols

Deployments use different transport protocols to connect to volumes. The Block Storage service (cinder) supports the following transport protocols:

  • iSCSI
  • Fibre Channel (FC)
  • NVMe over TCP (NVMe-TCP)
  • NFS
  • Red Hat Ceph Storage RBD

Control plane services that use volumes, such as the Block Storage volume and backup services, might require the support of the Red Hat OpenShift Container Platform (RHOCP) cluster to use iscsid and multipathd modules, depending on the storage array in use. These modules must be available on all nodes where these volume-dependent services execute. To use these transport protocols, create a MachineConfig CR to define where these modules execute. For more information on a MachineConfig, see Understanding the Machine Config operator.

Important

Using a MachineConfig CR to change the configuration of a node causes the node to reboot. Consult with your RHOCP administrator before applying a MachineConfig CR to ensure the integrity of RHOCP workloads.

The procedures in this section provide a general configuration of these protocols and are not vendor-specific.

If your deployment requires multipathing, you must configure it separately. See Configuring multipathing.

Note

The Block Storage volume and backup services are automatically started on data plane nodes.

4.3.1. Configuring the iSCSI protocol

Connecting to iSCSI volumes from the RHOCP nodes requires the iSCSI initiator service. A single instance of the iscsid service module is shared by normal RHOCP usage, OpenShift CSI plugin usage, and the RHOSO services. Apply a MachineConfig CR to the applicable nodes to configure them to use the iSCSI protocol.

Note

If the iscsid service module is already running, this procedure is not required.

Procedure

  1. Create a MachineConfig CR to configure the nodes for the iscsid module.

    The following example starts the iscsid service with a default configuration in all RHOCP worker nodes:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
        service: cinder
      name: 99-worker-cinder-enable-iscsid
    spec:
      config:
        ignition:
          version: 3.2.0
        systemd:
          units:
          - enabled: true
            name: iscsid.service
    Copy to Clipboard
  2. Save the file.
  3. Apply the MachineConfig CR file.

    $ oc apply -f <machine_config_file> -n openstack
    Copy to Clipboard
    • Replace <machine_config_file> with the name of your MachineConfig CR file.
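
    After the MachineConfig CR is applied and the affected nodes reboot, you can optionally verify that the iscsid service is active on a node. The following is a minimal sketch that assumes cluster-admin access; replace <node_name> with the name of an applicable worker node:

    $ oc debug node/<node_name> -- chroot /host systemctl is-active iscsid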

4.3.2. Configuring the Fibre Channel protocol

There is no additional node configuration required to use the Fibre Channel protocol to connect to volumes. However, all nodes that use Fibre Channel must have a Host Bus Adapter (HBA) card. Unless all worker nodes in your RHOCP deployment have an HBA card, you must use a nodeSelector in your control plane configuration to select which nodes are used for the volume and backup services, as well as for the Image service instances that use the Block Storage service for their storage back end.

4.3.3. Configuring the NVMe over TCP (NVMe-TCP) protocol

Connecting to NVMe-TCP volumes from the RHOCP nodes requires the nvme kernel modules.

Procedure

  1. Create a MachineConfig CR to configure the nodes for the nvme kernel modules.

    The following example starts the nvme kernel modules with a default configuration in all RHOCP worker nodes:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
        service: cinder
      name: 99-worker-cinder-load-nvme-fabrics
    spec:
      config:
        ignition:
          version: 3.2.0
        storage:
          files:
            - path: /etc/modules-load.d/nvme_fabrics.conf
              overwrite: false
              mode: 420
              user:
                name: root
              group:
                name: root
              contents:
                source: data:,nvme-fabrics%0Anvme-tcp
    Copy to Clipboard
  2. Save the file.
  3. Apply the MachineConfig CR file.

    $ oc apply -f <machine_config_file> -n openstack
    Copy to Clipboard
    • Replace <machine_config_file> with the name of your MachineConfig CR file.
  4. After the nodes have rebooted, verify that the nvme kernel modules are loaded and that ANA support is enabled on the host:

    $ cat /sys/module/nvme_core/parameters/multipath
    Copy to Clipboard
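
    If native NVMe multipathing, which provides ANA support, is enabled on the host, this command typically returns the following output:

    Y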
    Note

    Even though ANA does not use the Linux Multipathing Device Mapper, multipathd must be running for the Compute nodes to be able to use multipathing when connecting volumes to instances.

4.4. Configuring multipathing

You can configure multipathing in Red Hat OpenStack Services on OpenShift (RHOSO) to create redundancy or to improve performance.

  • You must configure multipathing on control plane nodes by creating a MachineConfig CR.

    Note

    In RHOSO deployments, the use_multipath_for_image_xfer configuration option is enabled by default, which affects the control plane only and not the data plane. This setting enables the Block Storage service (cinder) to use multipath, when it is available, for attaching volumes when creating volumes from images and during Block Storage backup and restore procedures.

  • Multipathing on data plane nodes is configured by default in RHOSO, which configures the default multipath parameters. You must add and configure any vendor-specific multipath parameters that your production environment requires.

4.4.1. Configuring multipathing on control plane nodes

Configuring multipathing on Red Hat OpenShift Container Platform (RHOCP) control plane nodes requires a MachineConfig custom resource (CR) that creates a multipath configuration file and starts the service.

In Red Hat OpenStack Services on OpenShift (RHOSO) deployments, the use_multipath_for_image_xfer configuration option is enabled by default, which affects the control plane only and not the data plane. This setting enables the Block Storage service (cinder) to use multipath, when it is available, for attaching volumes when creating volumes from images and during Block Storage backup and restore procedures.

Note

The example provided in this procedure implements a minimal multipath configuration file, which configures the default multipath parameters. However, your production deployment might also require vendor-specific multipath parameters. In this case, you must consult with the appropriate systems administrators to obtain the values required for your deployment.

Procedure

  1. Create a MachineConfig CR to create a multipath configuration file and to start the multipathd module on all control plane nodes.

    The following example creates a MachineConfig CR named 99-worker-cinder-enable-multipathd that implements a multipath configuration file named multipath.conf:

    Important

    When adding vendor-specific multipath parameters to the contents: of this file, ensure that you do not change the specified values of the following default multipath parameters: user_friendly_names, recheck_wwid, skip_kpartx, and find_multipaths.

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
        service: cinder
      name: 99-worker-cinder-enable-multipathd
    spec:
      config:
        ignition:
          version: 3.2.0
        storage:
          files:
            - path: /etc/multipath.conf
              overwrite: false
              mode: 384
              user:
                name: root
              group:
                name: root
              contents:
                source: data:,defaults%20%7B%0A%20%20user_friendly_names%20no%0A%20%20recheck_wwid%20yes%0A%20%20skip_kpartx%20yes%0A%20%20find_multipaths%20yes%0A%7D%0A%0Ablacklist%20%7B%0A%7D
        systemd:
          units:
          - enabled: true
            name: multipathd.service
    Copy to Clipboard
    Note

    The contents: data above represents the following literal multipath.conf file contents:

    defaults {
      user_friendly_names no
      recheck_wwid yes
      skip_kpartx yes
      find_multipaths yes
    }
    
    blacklist {
    }
    Copy to Clipboard
  2. Save the MachineConfig CR file, for example, 99-worker-cinder-enable-multipathd.yaml.
  3. Apply the MachineConfig CR file.

    $ oc apply -f 99-worker-cinder-enable-multipathd.yaml -n openstack
    Copy to Clipboard

4.4.2. Configuring custom multipath parameters on data plane nodes

Default multipath parameters are configured on all data plane nodes. You must add and configure any vendor-specific multipath parameters that your environment requires. Consult with the appropriate systems administrators to obtain the values required for your deployment and create your custom multipath configuration file.

Important

Ensure that you do not add the following default multipath parameters and overwrite their values: user_friendly_names, recheck_wwid, skip_kpartx, and find_multipaths.

You must modify the relevant OpenStackDataPlaneNodeSet custom resource (CR), to update the data plane node configuration to include your vendor-specific multipath parameters. You create an OpenStackDataPlaneDeployment CR that deploys and applies the modified OpenStackDataPlaneNodeSet CR to the data plane.

Prerequisites

  • You have created your custom multipath configuration file that contains only the vendor-specific multipath parameters and your deployment-specific values.

Procedure

  1. Create a secret to store your custom multipath configuration file:

    $ oc create secret generic <secret_name> \
    --from-file=<configuration_file_name>
    Copy to Clipboard
    • Replace <secret_name> with the name that you want to assign to the secret, for example, custom-multipath-file.
    • Replace <configuration_file_name> with the name of the custom multipath configuration file that you created, for example, custom_multipath.conf.
  2. Open the OpenStackDataPlaneNodeSet CR file for the node set that you want to update, for example, openstack_data_plane.yaml.
  3. Add an extraMounts attribute to the OpenStackDataPlaneNodeSet CR file to include your vendor-specific multipath parameters:

    spec:
        ...
        nodeTemplate:
            ...
            extraMounts:
            - extraVolType: <optional_volume_type_description>
              volumes:
              - name: <mounted_volume_name>
                secret:
                  secretName: <secret_name>
              mounts:
              - name: <mounted_volume_name>
                mountPath: "/runner/multipath"
                readOnly: true
    Copy to Clipboard
    • Optional: Replace <optional_volume_type_description> with a description of the type of the mounted volume, for example, multipath-config-file.
    • Replace <mounted_volume_name> with the name of the mounted volume, for example, custom-multipath.

      Note

      Do not change the value of the mountPath: parameter from "/runner/multipath".

  4. Save the OpenStackDataPlaneNodeSet CR file.
  5. Apply the updated OpenStackDataPlaneNodeSet CR configuration:

    $ oc apply -f openstack_data_plane.yaml
    Copy to Clipboard
  6. Verify that the data plane resource has been updated by confirming that the status is SetupReady:

    $ oc wait openstackdataplanenodeset openstack-data-plane --for condition=SetupReady --timeout=10m
    Copy to Clipboard

    When the status is SetupReady, the command returns a condition met message, otherwise it returns a timeout error.

    For information about the data plane conditions and states, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.

  7. Create a file on your workstation to define the OpenStackDataPlaneDeployment CR:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: <node_set_deployment_name>
    Copy to Clipboard
    • Replace <node_set_deployment_name> with the name of the OpenStackDataPlaneDeployment CR. The name must be unique, must consist of lower case alphanumeric characters, - (hyphen) or . (period), and must start and end with an alphanumeric character, for example, openstack_data_plane_deploy.
  8. Add the OpenStackDataPlaneNodeSet CR that you modified:

    spec:
      nodeSets:
        - <nodeSet_name>
    Copy to Clipboard
  9. Save the OpenStackDataPlaneDeployment CR deployment file, for example, openstack_data_plane_deploy.yaml.
  10. Deploy the modified OpenStackDataPlaneNodeSet CR:

    $ oc create -f openstack_data_plane_deploy.yaml -n openstack
    Copy to Clipboard

    You can view the Ansible logs while the deployment executes:

    $ oc get pod -l app=openstackansibleee -w
    $ oc logs -l app=openstackansibleee -f --max-log-requests 10
    Copy to Clipboard

    If the oc logs command returns an error similar to the following error, increase the --max-log-requests value:

    error: you are attempting to follow 19 log streams, but maximum allowed concurrency is 10, use --max-log-requests to increase the limit
    Copy to Clipboard

Verification

  • Verify that the modified OpenStackDataPlaneNodeSet CR is deployed:

    $ oc get openstackdataplanedeployment -n openstack
    NAME             	STATUS   MESSAGE
    openstack-data-plane   True     Setup Complete
    
    
    $ oc get openstackdataplanenodeset -n openstack
    NAME             	STATUS   MESSAGE
    openstack-data-plane   True     NodeSet Ready
    Copy to Clipboard

    For information about the meaning of the returned status, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.

    If the status indicates that the data plane has not been deployed, then troubleshoot the deployment. For information about troubleshooting the deployment, see Troubleshooting the data plane creation and deployment in Deploying Red Hat OpenStack Services on OpenShift.

4.5. Configuring initial defaults

The Block Storage service (cinder) has a set of initial defaults that you should configure when the service is first enabled. Define them in the main customServiceConfig section. After deployment, you modify these defaults by using the openstack client, as shown in the example that follows this paragraph.
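
For example, the following sketch adjusts quota defaults for a project after deployment with the openstack client. The project name and quota values are illustrative:

$ openstack quota set --volumes 30 --snapshots 20 --gigabytes 2000 <project>
$ openstack quota show <project>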

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml.
  2. Edit the CR file and add the Block Storage service global configuration.

    The following example demonstrates a Block Storage service initial configuration:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      cinder:
        enabled: true
        template:
          customServiceConfig: |
            [DEFAULT]
            quota_volumes = 20
            quota_snapshots = 15
    Copy to Clipboard

    For a complete list of all initial default parameters, see Initial default parameters.

  3. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
    Copy to Clipboard
  4. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack
    Copy to Clipboard

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

4.5.1. Initial default parameters

These initial default parameters should be configured when the service is first enabled.

ParameterDescription

default_volume_type

Provides the default volume type for all users. If you set this parameter to a non-default value, that volume type is not created automatically. The default value is __DEFAULT__.

no_snapshot_gb_quota

Determines whether the size of snapshots counts against the gigabyte quota in addition to the size of volumes. The default is false, which means that the size of snapshots is included in the gigabyte quota.

per_volume_size_limit

Provides the maximum size of each volume in gigabytes. The default is -1 (unlimited).

quota_volumes

Provides the number of volumes allowed for each project. The default value is 10.

quota_snapshots

Provides the number of snapshots allowed for each project. The default value is 10.

quota_groups

Provides the number of volume groups allowed for each project, which includes the consistency groups. The default value is 10.

quota_gigabytes

Provides the total amount of storage, in gigabytes, allowed for volumes in each project. Depending on the configuration of the no_snapshot_gb_quota parameter, this might also include the size of snapshots. The default value is 1000, and by default the size of snapshots also counts against this limit.

quota_backups

Provides the number of backups allowed for each project. The default value is 10.

quota_backup_gigabytes

Provides the total amount of storage for each project, in gigabytes, allowed for backups. The default is 1000.

4.6. Configuring the API service

The Block Storage service (cinder) provides an API interface for all external interaction with the service for both users and other RHOSO services.

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml.
  2. Edit the CR file and add the configuration for the internal Red Hat OpenShift Container Platform (RHOCP) load balancer.

    The following example demonstrates a load balancer configuration:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      cinder:
        template:
          cinderAPI:
            override:
              service:
                internal:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi
                      metallb.universe.tf/allow-shared-ip: internalapi
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                  spec:
                    type: LoadBalancer
    Copy to Clipboard
  3. Edit the CR file and add the configuration for the number of API service replicas. Run the cinderAPI service in an Active-Active configuration with three replicas.

    The following example demonstrates configuring the cinderAPI service to use three replicas:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      cinder:
        template:
          cinderAPI:
            replicas: 3
    Copy to Clipboard
  4. Edit the CR file and configure cinderAPI options. These options are configured in the customServiceConfig section under the cinderAPI section.

    The following example demonstrates configuring cinderAPI service options and enabling debugging on all services:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      cinder:
        template:
          customServiceConfig: |
            [DEFAULT]
            debug = true
          cinderAPI:
            customServiceConfig: |
              [DEFAULT]
              osapi_volume_workers = 3
    Copy to Clipboard

    For a listing of commonly used cinderAPI service option parameters, see API service option parameters.

  5. Save the file.
  6. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
    Copy to Clipboard
  7. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack
    Copy to Clipboard

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

4.6.1. API service option parameters

API service option parameters are provided for the configuration of the cinderAPI portions of the Block Storage service.

ParameterDescription

api_rate_limit

Provides a value to determine if the API rate limit is enabled. The default is false.

debug

Provides a value to determine whether the logging level is set to DEBUG instead of the default of INFO. The default is false. The logging level can be dynamically set without restarting.

osapi_max_limit

Provides a value for the maximum number of items a collection resource returns in a single response. The default is 1000.

osapi_volume_workers

Provides a value for the number of workers assigned to the API component. The default is the number of CPUs available.

4.7. Configuring the scheduler service

The Block Storage service (cinder) has a scheduler service (cinderScheduler) that is responsible for making decisions such as selecting which back end receives new volumes, determining whether there is enough free space to perform an operation, and deciding where an existing volume should be moved during specific operations.

Use only a single instance of cinderScheduler for scheduling consistency and ease of troubleshooting. While cinderScheduler can be run with multiple instances, the service default replicas: 1 is the best practice.

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml.
  2. Edit the CR file and add the configuration for the service down detection timeouts.

    The following example demonstrates this configuration:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      cinder:
        template:
          customServiceConfig: |
            [DEFAULT]
            report_interval = 20 
    1
    
            service_down_time = 120 
    2
    Copy to Clipboard
    1
    The number of seconds between Block Storage service components reporting an operational state in the form of a heartbeat through the database. The default is 10.
    2
    The maximum number of seconds since the last heartbeat from the component for it to be considered non-operational. The default is 60.
    Note

    Configure these values at the cinder level of the CR instead of the cinderScheduler so that these values are applied to all components consistently.

  3. Edit the CR file and add the configuration for the statistics reporting interval.

    The following example demonstrates configuring these values at the cinder level to apply them globally to all services:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      cinder:
        template:
          customServiceConfig: |
            [DEFAULT]
            backend_stats_polling_interval = 120 
    1
    
            backup_driver_stats_polling_interval = 120 
    2
    Copy to Clipboard
    1
    The number of seconds between requests from the volume service for usage statistics from the back end. The default is 60.
    2
    The number of seconds between requests from the backup service for usage statistics from the backup driver. The default is 60.

    The following example demonstrates configuring these values at the cinderVolume and cinderBackup level to customize settings at the service level.

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      cinder:
        template:
          cinderBackup:
            customServiceConfig: |
              [DEFAULT]
              backup_driver_stats_polling_interval = 120 
    1
    
              < rest of the config >
          cinderVolumes:
            nfs:
              customServiceConfig: |
                [DEFAULT]
                backend_stats_polling_interval = 120 
    2
    Copy to Clipboard
    1
    The number of seconds between requests from the backup service for usage statistics from the backup driver. The default is 60.
    2
    The number of seconds between requests from the volume service for usage statistics from the back end. The default is 60.
    Note

    The generation of usage statistics can be resource intensive for some back ends. Setting these values too low can affect back end performance. You may need to tune the configuration of these settings to better suit individual back ends.

  4. Perform any additional configuration necessary to customize the cinderScheduler service.

    For more configuration options for the customization of the cinderScheduler service, see Scheduler service parameters.

  5. Save the file.
  6. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
    Copy to Clipboard
  7. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack
    Copy to Clipboard

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

4.7.1. Scheduler service parameters

Scheduler service parameters are provided for the configuration of the cinderScheduler portions of the Block Storage service.

ParameterDescription

debug

Provides a setting for the logging level. When this parameter is true the logging level is set to DEBUG instead of INFO. The default is false.

scheduler_max_attempts

Provides a setting for the maximum number of attempts to schedule a volume. The default is 3.

scheduler_default_filters

Provides a setting for filter class names to use for filtering hosts when not specified in the request. This is a comma separated list. The default is AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter.

scheduler_default_weighers

Provides a setting for weigher class names to use for weighing hosts. This is a comma-separated list. The default is CapacityWeigher.

scheduler_weight_handler

Provides a setting for a handler to use for selecting the host or pool after weighing. The value cinder.scheduler.weights.OrderedHostWeightHandler selects the first host from the list of hosts that passed filtering, and the value cinder.scheduler.weights.stochastic.StochasticHostWeightHandler gives every pool a chance to be chosen, where the probability is proportional to the weight of each pool. The default is cinder.scheduler.weights.OrderedHostWeightHandler.

The following is an explanation of the filter class names from the parameter table:

  • AvailabilityZoneFilter

    • Filters out all back ends that do not meet the availability zone requirements of the requested volume.
  • CapacityFilter

    • Selects only back ends with enough space to accommodate the volume.
  • CapabilitiesFilter

    • Selects only back ends that can support any specified settings in the volume.
  • InstanceLocality

    • Configures clusters to use volumes local to the same node.
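
The following is a minimal sketch of applying scheduler options in the customServiceConfig of the cinderScheduler section of the OpenStackControlPlane CR. The values shown are illustrative:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      cinderScheduler:
        customServiceConfig: |
          [DEFAULT]
          scheduler_max_attempts = 5
          scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter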

4.8. Configuring the volume service

The Block Storage service (cinder) has a volume service (cinderVolumes section) that is responsible for managing operations related to volumes, snapshots, and groups. These operations include creating, deleting, and cloning volumes and making snapshots.

This service requires access to the storage back end (storage) and storage management (storageMgmt) networks in the networkAttachments of the OpenStackControlPlane CR. Some operations, such as creating an empty volume or a snapshot, do not require any data movement between the volume service and the storage back end. Other operations, such as migrating data from one storage back end to another, require the data to pass through the volume service and therefore require this access.

Volume service configuration is performed in the cinderVolumes section with parameters set in the customServiceConfig, customServiceConfigSecrets, networkAttachments, replicas, and the nodeSelector sections.

The volume service cannot have multiple replicas.

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml.
  2. Edit the CR file and add the configuration for your back end.

    The following example demonstrates the service configuration for a Red Hat Ceph Storage back end:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      cinder:
        template:
          customServiceConfig: |
            [DEFAULT]
            debug = true
          cinderVolumes:
            ceph: 
    1
    
              networkAttachments: 
    2
    
              - storage
              customServiceConfig: |
                [ceph]
                volume_backend_name = ceph 
    3
    
                volume_driver = cinder.volume.drivers.rbd.RBDDriver 
    4
    Copy to Clipboard
    1
    The configuration area for the individual back end. Each unique back end requires an individual configuration area. No back end is deployed by default. The Block Storage service volume service will not run unless at least one back end is configured during deployment. For more information about configuring back ends, see Block Storage service (cinder) back ends and Multiple Block Storage service (cinder) back ends.
    2
    The configuration area for the back end network connections.
    3
    The name assigned to this back end.
    4
    The driver used to connect to this back end.

    For a list of commonly used volume service parameters, see Volume service parameters.

  3. Save the file.
  4. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
    Copy to Clipboard
  5. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack
    Copy to Clipboard

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

4.8.1. Volume service parameters

Volume service parameters are provided for the configuration of the cinderVolumes portions of the Block Storage service.

ParameterDescription

backend_availability_zone

Provides a setting for the availability zone of the back end. If this option is not set, the value of the storage_availability_zone option, which is set in the [DEFAULT] section, is used.

volume_backend_name

Provides a setting for the back end name for a given driver implementation. There is no default value.

volume_driver

Provides a setting for the driver to use for volume creation. It is provided in the form of the Python namespace of the specific driver class. There is no default value.

enabled_backends

Provides a setting for a list of back end names to use. These back end names should be backed by a unique [CONFIG] group with its options. This is a comma-separated list of values. The default value is the name of the section that has a volume_backend_name option.

image_conversion_dir

Provides a setting for a directory used for temporary storage during image conversion. The default value is /var/lib/cinder/conversion.

backend_stats_polling_interval

Provides a setting for the number of seconds between the volume requests for usage statistics from the storage back end. The default is 60.

4.8.2. Block Storage service (cinder) back ends

Each Block Storage service back end should have an individual configuration section in the cinderVolumes section. This ensures each back end runs in a dedicated pod. This approach has the following benefits:

  • Increased isolation.
  • Adding and removing back ends is fast and does not affect other running back ends.
  • Configuration changes do not affect other running back ends.
  • Automatically spreads the Volume pods across different nodes.

Each Block Storage service back end uses a storage transport protocol to access data in the volumes. Each storage transport protocol has individual requirements as described in Configuring transport protocols. Storage protocol information should also be provided in individual vendor installation guides.

Note

Configure each back end with an independent pod. In director-based releases of RHOSP, all back ends run in a single cinder-volume container. This is no longer the best practice.

No back end is deployed by default. The Block Storage service volume service will not run unless at least one back end is configured during deployment.

All storage vendors provide an installation guide with best practices, deployment configuration, and configuration options for vendor drivers. These installation guides provide the specific configuration information required to properly configure the volume service for deployment. Installation guides are available in the Red Hat Ecosystem Catalog.

For more information on integrating and certifying vendor drivers, see Integrating partner content.

For information on Red Hat Ceph Storage back end configuration, see Integrating Red Hat Ceph Storage and Deploying a Hyperconverged Infrastructure environment.

For information on configuring a generic (non-vendor specific) NFS back end, see Configuring a generic NFS back end.

Note

Use a certified storage back end and driver. If you use NFS storage that comes from the generic NFS back end, its capabilities are limited compared to a certified storage back end and driver.

4.8.3. Multiple Block Storage service (cinder) back ends

Multiple Block Storage service back ends are deployed by adding multiple, independent entries in the cinderVolumes configuration section. Each back end runs in an independent pod.

The following configuration example deploys two independent back ends, one for iSCSI and another for NFS:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      cinderVolumes:
        nfs:
          networkAttachments:
          - storage
          customServiceConfigSecrets:
          - cinder-volume-nfs-secrets
          customServiceConfig: |
            [nfs]
            volume_backend_name=nfs
        iSCSI:
          networkAttachments:
          - storage
          - storageMgmt
          customServiceConfig: |
            [iscsi]
            volume_backend_name=iscsi
Copy to Clipboard
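
Because each entry in cinderVolumes runs in its own pod, you can confirm that both back ends are running by listing the volume pods. This is an illustrative check; the exact pod names depend on the keys you used in cinderVolumes:

$ oc get pods -n openstack | grep cinder-volume
cinder-volume-iscsi-0    2/2     Running   0          5m
cinder-volume-nfs-0      2/2     Running   0          5m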

4.9. Configuring back end availability zones

Configure back end availability zones (AZs) for Volume service back ends and the Backup service to group cloud infrastructure services for users. AZs are mapped to failure domains and Compute resources for high availability, fault tolerance, and resource scheduling.

For example, you could create an AZ of Compute nodes with specific hardware that users can select when they create an instance that requires that hardware.

Note

Post-deployment, AZs are created using the RESKEY:availability_zones volume type extra specification.

Users can create a volume directly in an AZ as long as the volume type does not restrict the AZ.
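
The following is a minimal sketch of the post-deployment commands that restrict a volume type to an AZ and create a volume in that AZ. The volume type, AZ name, and size are illustrative:

$ openstack volume type set --property RESKEY:availability_zones=zone1 <volume_type>
$ openstack volume create --availability-zone zone1 --size 1 <volume_name>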

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml.
  2. Edit the CR file and add the AZ configuration.

    The following example demonstrates an AZ configuration:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
    name: openstack
    spec:
      cinder:
        template:
          cinderVolumes:
            nfs:
              networkAttachments:
              - storage
              - storageMgmt
              customServiceConfigSecrets:
              - cinder-volume-nfs-secrets
              customServiceConfig: |
                    [nfs]
                    volume_backend_name=nfs
                    backend_availability_zone=zone1 
    1
    
            iSCSI:
              networkAttachments:
              - storage
              - storageMgmt
              customServiceConfig: |
                    [iscsi]
                    volume_backend_name=iscsi
                    backend_availability_zone=zone2
    Copy to Clipboard
    1
    The availability zone associated with the back end.
  3. Save the file.
  4. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
    Copy to Clipboard
  5. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack
    Copy to Clipboard

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

4.10. Configuring a generic NFS back end

The Block Storage service (cinder) can be configured with a generic NFS back end to provide an alternative storage solution for volumes and backups.

The Block Storage service supports a generic NFS solution with the following caveats:

  • Use a certified storage back end and driver. If you use NFS storage that comes from the generic NFS back end, its capabilities are limited compared to a certified storage back end and driver. For example, the generic NFS back end does not support features such as volume encryption and volume multi-attach. For information about supported drivers, see the Red Hat Ecosystem Catalog.
  • For Block Storage (cinder) and Compute (nova) services, you must use NFS version 4.0 or later. RHOSO does not support earlier versions of NFS.
  • RHOSO does not support the NetApp NAS secure feature. It interferes with normal volume operations. This feature must be disabled in the customServiceConfig in the specific back-end configuration with the following parameters:

    nas_secure_file_operations=false
    nas_secure_file_permissions=false
    Copy to Clipboard
  • Do not configure the nfs_mount_options option. The default value provides the best NFS options for RHOSO environments. If you experience issues when you configure multiple services to share the same NFS server, contact Red Hat Support.

Procedure

  1. Create a Secret CR to store the volume connection information.

    The following is an example of a Secret CR:

    apiVersion: v1
    kind: Secret
    metadata:
      name: cinder-volume-nfs-secrets 
    1
    
    type: Opaque
    stringData:
      cinder-volume-nfs-secrets: |
        [nfs]
        nas_host=192.168.130.1
        nas_share_path=/var/nfs/cinder
    Copy to Clipboard
    1
    The name used when including it in the cinderVolumes back end configuration.
  2. Save the file.
  3. Update the control plane:

    $ oc apply -f <secret_file_name> -n openstack
    Copy to Clipboard
    • Replace <secret_file_name> with the name of the file that contains your Secret CR.
  4. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml.
  5. Edit the CR file and add the configuration for the generic NFS back end.

    The following example demonstrates this configuration:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      cinder:
        template:
          cinderVolumes:
            nfs:
              networkAttachments: 
    1
    
              - storage
              customServiceConfig: |
                [nfs]
                volume_backend_name=nfs
                volume_driver=cinder.volume.drivers.nfs.NfsDriver
                nfs_snapshot_support=true
                nas_secure_file_operations=false
                nas_secure_file_permissions=false
              customServiceConfigSecrets:
              - cinder-volume-nfs-secrets 
    2
    Copy to Clipboard
    1
    The storageMgmt network is not listed because generic NFS does not have a management interface.
    2
    The name from the Secret CR.
    Note

    If you are configuring multiple generic NFS back ends, ensure each is in an individual configuration section so that one pod is devoted to each back end.

  6. Save the file.
  7. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
    Copy to Clipboard
  8. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack
    Copy to Clipboard

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

4.11. Configuring an NFS conversion directory

When the Block Storage service (cinder) performs image format conversion and space is limited, the conversion of large Image service (glance) images can completely use the root disk space of the node. You can use an external NFS share for the conversion to prevent the space on the node from being completely filled.

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml.
  2. Edit the CR file and add the configuration for the directory for converting large Image service (glance) images.

    The following example demonstrates how to configure this conversion directory:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
      extraMounts:
          extraVol:
            - propagation:
              - CinderVolume
              volumes:
              - name: cinder-conversion
                nfs:
                    path: <nfs_share_path>
                    server: <nfs_server>
              mounts:
              - name: cinder-conversion
                mountPath: /var/lib/cinder/conversion
                readOnly: true
    Copy to Clipboard
    • Replace <nfs_share_path> with the path to the conversion directory.

      Note

      The Block Storage volume service (cinder-volume) runs as the cinder user. The cinder user requires write permission for <nfs_share_path>. You can configure this by running the following command on the NFS server: $ chown 42407:42407 <nfs_share_path>.

    • Replace <nfs_server> with the IP address of the NFS server that hosts the conversion directory.
    Note

    This example demonstrates how to create a common conversion directory that all the volume service pods use.

    You can also define a conversion directory for each volume service pod:

    • Define each conversion directory by using an extraMounts section, as demonstrated above, in the cinder section of the OpenStackControlPlane CR file.
    • Set the propagation value to the name of the specific Volume section instead of CinderVolume.
  3. Save the file.
  4. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
    Copy to Clipboard
  5. Wait until Red Hat OpenShift Container Platform (RHOCP) creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack
    Copy to Clipboard

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

4.12. Configuring automatic database cleanup

The Block Storage service (cinder) performs a soft-deletion of database entries. This means that database entries are marked for deletion but are not actually deleted from the database. This allows for the auditing of deleted resources.

These database rows marked for deletion grow endlessly and consume resources if they are not purged. RHOSO automatically purges database entries marked for deletion after a set number of days. By default, records that have been marked for deletion for more than 30 days are purged. You can configure a different record age and schedule for purge jobs.

Procedure

  1. Open your openstack_control_plane.yaml file to edit the OpenStackControlPlane CR.
  2. Add the dbPurge parameter to the cinder template to configure database cleanup depending on the service you want to configure.

    The following is an example of using the dbPurge parameter to configure the Block Storage service:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      cinder:
        template:
          dbPurge:
            age: 20 
    1
    
            schedule: 1 0 * * 0 
    2
    Copy to Clipboard
    1
    The number of days a record has been marked for deletion before it is purged. The default value is 30. The minimum value is 1.
    2
    The schedule of when to run the job in a crontab format. The default value is 1 0 * * *. This default value is equivalent to 00:01 daily.
  3. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml
    Copy to Clipboard
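
Verification

  • Optional: Verify that the purge job is scheduled. This is an illustrative check that assumes the openstack operator exposes the purge schedule as a CronJob whose name contains db-purge; the exact resource name in your deployment can differ:

    $ oc get cronjobs -n openstack | grep db-purge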

4.13. Preserving jobs

The Block Storage service (cinder) requires maintenance operations that are run automatically. Some operations are one-off and some are periodic. These operations are run using OpenShift Jobs.

If jobs and their pods are automatically removed on completion, you cannot check the logs of these operations. However, you can use the preserveJob field in your OpenStackControlPlane CR to stop the automatic removal of jobs and preserve them.

Example:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      preserveJobs: true
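
With preserveJobs enabled, completed jobs remain after they finish, so you can review their logs later. The following commands are a minimal sketch; replace <job_name> with the name of the preserved job that you want to inspect:

$ oc get jobs -n openstack
$ oc logs job/<job_name> -n openstack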

4.14. Resolving hostname conflicts

Most storage back ends in the Block Storage service (cinder) require the hosts that connect to them to have unique hostnames. These hostnames are used to identify permissions and addresses, such as iSCSI initiator name, HBA WWN and WWPN.

Because the Block Storage volume and backup services run in OpenShift pods, the hostnames that they report are the pod names, not the OpenShift node hostnames.

These pod names are formed using a predetermined template:

  • For volumes: cinder-volume-<backend_key>-0
  • For backups: cinder-backup-<replica-number>

If you use the same storage back end in multiple deployments, the unique hostname requirement may not be honored, resulting in operational problems. To address this issue, you can request the installer to have unique pod names, and hence unique hostnames, by using the uniquePodNames field.

When you set the uniquePodNames field to true, a short hash is added to the pod names, which addresses hostname conflicts.

Example:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    uniquePodNames: true
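
After you update the control plane, you can confirm the new pod names by listing the Block Storage pods. This check is a minimal sketch; the short hash that is added to the pod names depends on your deployment:

$ oc get pods -n openstack | grep cinder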

4.15. Using other container images

Red Hat OpenStack Services on OpenShift (RHOSO) services are deployed using a container image for a specific release and version. There are times when a deployment requires a container image other than the one produced for that release and version. The most common reasons for this are:

  • Deploying a hotfix.
  • Using a certified, vendor-provided container image.

The container images used by the installer are controlled through the OpenStackVersion CR. The openstack operator automatically creates an OpenStackVersion CR during the deployment of services. Alternatively, you can create the CR manually after the openstack operator is installed but before you apply the OpenStackControlPlane CR. This allows the container image for any service and component to be designated individually.

The granularity of this designation depends on the service. For example, in the Block Storage service (cinder) all the cinderAPI, cinderScheduler, and cinderBackup pods must have the same image. However, for the Volume service, the container image is defined for each of the cinderVolumes.

The following example demonstrates an OpenStackControlPlane configuration with two back ends: one called ceph and one called custom-fc. The custom-fc back end requires a certified, vendor-provided container image. Additionally, the other service images must be configured to use a non-standard image from a hotfix.

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      cinderVolumes:
        ceph:
          networkAttachments:
          - storage
< . . . >
        custom-fc:
          networkAttachments:
          - storage

The following example demonstrates what the corresponding OpenStackVersion CR might look like to set up the container images properly.

apiVersion: core.openstack.org/v1beta1
kind: OpenStackVersion
metadata:
  name: openstack
spec:
  customContainerImages:
    cinderAPIImages: <custom-api-image>
    cinderBackupImages: <custom-backup-image>
    cinderSchedulerImages: <custom-scheduler-image>
    cinderVolumeImages:
      custom-fc: <vendor-volume-volume-image>
  • Replace <custom-api-image> with the name of the API service image to use.
  • Replace <custom-backup-image> with the name of the Backup service image to use.
  • Replace <custom-scheduler-image> with the name of the Scheduler service image to use.
  • Replace <vendor-volume-volume-image> with the name of the certified, vendor-provided image to use.
Note

The name attribute in your OpenStackVersion CR must match the same attribute in your OpenStackControlPlane CR.
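
To review the container images that are currently in use, you can inspect the OpenStackVersion CR. This check is a minimal sketch and assumes that the CR is named openstack, as in the previous example:

$ oc get openstackversion openstack -n openstack -o yaml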

Chapter 5. Configuring the Block Storage backup service

The Block Storage service (cinder) provides an optional backup service that you can deploy in your Red Hat OpenStack Services on OpenShift (RHOSO) environment.

Users can use the Block Storage backup service to create and restore full or incremental backups of their Block Storage volumes.

A volume backup is a persistent copy of the contents of a Block Storage volume that is saved to a backup repository.
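
For example, after the backup service is deployed, users can create and restore backups with the OpenStack client. The following commands are an illustrative sketch; <backup_name>, <volume>, and <backup> are placeholders:

$ openstack volume backup create --name <backup_name> <volume>
$ openstack volume backup restore <backup> <volume>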

You can configure the backup service under the cinderBackup section of the cinder template in your OpenStackControlPlane CR.

Prerequisites

  • You have the oc command line tool installed on your workstation.
  • You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.
  • You have enabled the backup service for the Block Storage service in your OpenStack Control Plane.

5.1. Storage back ends for backups

You can use the following storage back ends for Block Storage backups:

  • Red Hat Ceph Storage RADOS Block Device (RBD)
  • Object Storage service (swift)
  • NFS
  • S3

For information about other back-end options for backups, see OSP18 Cinder Alternative Storage.

You can use the backup service to back up volumes that are on any back end that the Block Storage service (cinder) supports, regardless of which back end you choose to use for backups. You can only configure one back end for backups, whereas you can configure multiple back ends for volumes.

Back ends for backups do not have transport protocol requirements for the RHOCP node. However, the backup pods need to connect to the volumes, and the back ends for volumes have transport protocol requirements.

5.2. Setting the number of replicas for backups

You can run multiple instances of the Block Storage backup component in active-active mode by setting replicas to a value greater than 1. The default value is 0.

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the cinder template to set the number of replicas for the cinderBackup parameter:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
       …
       cinder:
          template:
            cinderBackup:
              replicas: <number_of_replicas>
    ...
    • Replace <number_of_replicas> with a value greater than 1.
  2. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  3. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

5.3. Backup performance considerations

Some features of the Block Storage backup service like incremental backups, the creation of backups from snapshots, and data compression can reduce the performance of backup operations.

By only capturing the periodic changes to volumes, incremental backup operations can minimize resource usage. However, incremental backup operations have a lower performance than full backup operations. When you create an incremental backup, all of the data in the volume must first be read and compared with the data in both the full backup and each subsequent incremental backup.

Some back ends for volumes support the creation of a backup from a snapshot by directly attaching the snapshot to the backup host, which is faster than cloning the snapshot into a volume. If the back end you use for volumes does not support this feature, you can create a volume from a snapshot and use the volume as backup. However, the extra step of creating the volume from a snapshot can affect the performance of the backup operation.

You can configure the Block Storage backup service to enable or disable data compression of the storage back end for your backups. If you enable data compression, backup operations require additional CPU power, but they use less network bandwidth and storage space overall.

Note

You cannot use data compression with a Red Hat Ceph Storage back end.

5.4. Setting options for backups

The cinderBackup parameter inherits the configuration from the top level customServiceConfig section of the cinder template in your OpenStackControlPlane CR. However, the cinderBackup parameter also has its own customServiceConfig section.

The following table describes configuration options that apply to all back-end drivers.

Table 5.1. Configuration options for backup drivers
Option

Description

Value type

Default value

debug

When set to true, the logging level is set to DEBUG instead of the default INFO level. You can also set the debug log level dynamically, without a restart, by using the dynamic log level API functionality.

Boolean

false

backup_service_inithost_offload

Offload pending backup delete during backup service startup. If set to false, the backup service remains down until all pending backups are deleted.

Boolean

true

storage_availability_zone

Availability zone of the backup service.

String

nova

backup_workers

Number of processes to launch in the backup pod. Improves performance with concurrent backups.

Integer

1

backup_max_operations

Maximum number of concurrent memory-heavy, and possibly CPU-heavy, operations (backup and restore) that can be executed on each pod. The limit applies across all workers within a pod but not across pods. A value of 0 means unlimited.

Integer

15

backup_native_threads_pool_size

Size of the native threads pool used for backup data-related operations. Most backup drivers rely heavily on this option, and you can decrease the value for specific drivers that do not rely on it.

Integer

60

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the cinder template to set configuration options. In this example, you enable debug logs, double the number of processes, and increase the maximum number of operations per pod to 20.

    Example:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
       …
       cinder:
          template:
            customServiceConfig: |
              [DEFAULT]
              debug = true
            cinderBackup:
              customServiceConfig: |
               [DEFAULT]
               backup_workers = 2
               backup_max_operations = 20
    
    ...
  2. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  3. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

5.5. Enabling data compression

Backups are compressed by default with the zlib compression algorithm.

Data compression requires additional CPU power but uses less network bandwidth and storage space.

You can change the data compression algorithm of your backups or disable data compression by using the backup_compression_algorithm parameter in your OpenStackControlPlane CR.

The following options are available for data compression.

Table 5.2. Data compression options

Option

Description

none, off, or no

Do not use compression.

zlib or gzip

Use the Deflate compression algorithm.

bz2 or bzip2

Use Burrows-Wheeler transform compression.

zstd

Use the Zstandard compression algorithm.

Note

You cannot specify the data compression algorithm for the Red Hat Ceph Storage back end driver.

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameter to the cinder template to enable data compression. In this example, you enable data compression with an Object Storage service (swift) back end:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
      cinder:
        template:
          cinderBackup:
            customServiceConfig: |
              [DEFAULT]
              backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
              backup_compression_algorithm = zstd
            networkAttachments:
            - storage
  2. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  3. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

5.6. Configuring a Ceph RBD back end for Block Storage backups

You can configure the Block Storage service (cinder) backup service with Red Hat Ceph Storage RADOS Block Device (RBD) as the storage back end.

Note

If you use Ceph RBD as the back end for backups together with Ceph RBD volumes, incremental backups are more efficient.

For more information about Ceph RBD, see Configuring the control plane to use the Red Hat Ceph Storage cluster.

Prerequisites

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the cinder template to configure Ceph RBD as the back end for backups:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
      cinder:
        template:
          cinderBackup:
            customServiceConfig: |
              [DEFAULT]
              backup_driver = cinder.backup.drivers.ceph.CephBackupDriver
              backup_ceph_pool = backups
              backup_ceph_user = openstack
            networkAttachments:
            - storage
            replicas: 1
  2. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  3. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

5.7. Configuring an Object Storage service (swift) back end for backups

You can configure the Block Storage service (cinder) backup service with the Object Storage service (swift) as the storage back end.

Prerequisites

  • You have planned networking for storage to ensure connectivity between the storage back end, the control plane, and the Compute nodes on the data plane. For more information, see Storage networks in Planning your deployment and Preparing networks for Red Hat OpenStack Services on OpenShift in Deploying Red Hat OpenStack Services on OpenShift.
  • Verify that the Object Storage service is active in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment.

The default container for Object Storage service back ends is volumebackups. You can change the default container by using the backup_swift_container configuration option.

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the cinder template to configure the Object Storage service as the back end for backups:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
      cinder:
        template:
          cinderBackup:
            customServiceConfig: |
              [DEFAULT]
              backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
            networkAttachments:
            - storage
            replicas: 1
  2. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  3. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

5.8. Configuring an NFS back end for backups

You can configure the Block Storage service (cinder) backup service with NFS as the storage back end.

Prerequisites

Procedure

  1. Create a secret CR file, for example, cinder-backup-nfs-secrets.yaml, and add the following configuration for your NFS share:

    apiVersion: v1
    kind: Secret
    metadata:
      labels:
        service: cinder
        component: cinder-backup
      name: cinder-backup-nfs-secrets
    type: Opaque
    stringData:
      nfs-secrets.conf: |
        [DEFAULT]
        backup_share = <192.168.1.2:/Backups>
        backup_mount_options = <optional>
    • Replace <192.168.1.2:/Backups> with the IP address and export path of your NFS share.
    • Replace <optional> with the mount options for your NFS share.
  2. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the cinder template to add the secret for the NFS share and configure NFS as the back end for backups:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
      cinder:
        template:
          cinderBackup:
            customServiceConfig: |
              [DEFAULT]
              backup_driver = cinder.backup.drivers.nfs.NFSBackupDriver
            customServiceConfigSecrets:
            - cinder-backup-nfs-secrets
            networkAttachments:
            - storage
            replicas: 1
  3. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  4. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

5.9. Configuring an S3 back end for backups

You can configure the Block Storage service (cinder) backup service with S3 as the storage back end.

Prerequisites

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the cinder template to configure S3 as the back end for backups:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
      cinder:
        template:
          cinderBackup:
            customServiceConfig: |
              [DEFAULT]
              backup_driver = cinder.backup.drivers.s3.S3BackupDriver
              backup_s3_endpoint_url = <user supplied>
              backup_s3_store_access_key = <user supplied>
              backup_s3_store_secret_key = <user supplied>
              backup_s3_store_bucket = volumebackups
              backup_s3_ca_cert_file = /etc/pki/tls/certs/ca-bundle.crt
            networkAttachments:
            - storage
  2. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  3. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

5.10. Block Storage volume backup metadata

When you create a backup of a Block Storage volume, the metadata for this backup is stored in the Block Storage service database. The Block Storage backup service uses this metadata when it restores the volume from the backup.

Important

To ensure that a backup survives a catastrophic loss of the Block Storage service database, you can manually export and store the metadata of this backup. After a catastrophic database loss, you need to create a new Block Storage database and then manually re-import this backup metadata into it.
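
The following commands are a minimal sketch of exporting and re-importing backup metadata with the OpenStack client, assuming that your client version provides the volume backup record commands; <backup_id>, <backup_service>, and <backup_url> are placeholders:

$ openstack volume backup record export <backup_id>
$ openstack volume backup record import <backup_service> <backup_url>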

Chapter 6. Configuring the Image service (glance)

The Image service (glance) provides discovery, registration, and delivery services for disk and server images. It provides the ability to copy or store a snapshot of a server image. You can use stored images as templates to commission new servers quickly and more consistently than installing a server operating system and individually configuring services.
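
For example, you can register a disk image with the Image service by using the OpenStack client. The following command is an illustrative sketch; the file name and image name are placeholders:

$ openstack image create --disk-format qcow2 --container-format bare \
  --file <image_file>.qcow2 <image_name>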

You can configure the following back ends (stores) for the Image service:

  • RADOS Block Device (RBD) is the default back end when you use Red Hat Ceph Storage. For more information, see Configuring the control plane to use the Red Hat Ceph Storage cluster.
  • Block Storage (cinder).
  • Object Storage (swift).
  • S3.
  • NFS.
  • RBD multistore. You can use multiple stores with distributed edge architecture so that you can have an image pool at every edge site.

Prerequisites

  • You have the oc command line tool installed on your workstation.
  • You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.

6.1. Configuring a Block Storage back end for the Image service

You can configure the Image service (glance) with the Block Storage service (cinder) as the storage back end.

Prerequisites

  • You have planned networking for storage to ensure connectivity between the storage back end, the control plane, and the Compute nodes on the data plane. For more information, see Storage networks in Planning your deployment and Preparing networks for Red Hat OpenStack Services on OpenShift in Deploying Red Hat OpenStack Services on OpenShift.
  • Ensure that placement, network, and transport protocol requirements are met. For example, if your Block Storage service back end is Fibre Channel (FC), the nodes on which the Image service API (glanceAPI) is running must have a host bus adapter (HBA). For FC, iSCSI, and NVMe over Fabrics (NVMe-oF), configure the nodes to support the protocol and use multipath. For more information, see Configuring transport protocols.

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the glance template to configure the Block Storage service as the back end:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
      ...
      glance:
        template:
          glanceAPIs:
            default:
              replicas: 3 # Configure back end; set to 3 when deploying service
          ...
          customServiceConfig: |
            [DEFAULT]
            enabled_backends = default_backend:cinder
            [glance_store]
            default_backend = default_backend
            [default_backend]
            description = Default cinder backend
            cinder_store_auth_address = {{ .KeystoneInternalURL }}
            cinder_store_user_name = {{ .ServiceUser }}
            cinder_store_password = {{ .ServicePassword }}
            cinder_store_project_name = service
            cinder_catalog_info = volumev3::internalURL
            cinder_use_multipath = true
    ...
    • Set replicas to 3 for high availability across APIs.
  2. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  3. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

6.1.1. Enabling the creation of multiple instances or volumes from a volume-backed image

When using the Block Storage service (cinder) as the back end for the Image service (glance), each image is stored as a volume (image volume) ideally in the Block Storage service project owned by the glance user.

When a user wants to create multiple instances or volumes from a volume-backed image, the Image service host must attach to the image volume to copy the data multiple times. This causes performance issues, and some of these instances or volumes are not created because, by default, Block Storage volumes cannot be attached multiple times to the same host. However, most Block Storage back ends support the volume multi-attach property, which enables a volume to be attached multiple times to the same host. You can therefore prevent these performance issues by creating a Block Storage volume type for the Image service back end that enables the multi-attach property, and then configuring the Image service to use this multi-attach volume type.

Note

By default, only the Block Storage project administrator can create volume types.

Procedure

  1. Access the remote shell for the OpenStackClient pod from your workstation:

    $ oc rsh -n openstack openstackclient
  2. Create a Block Storage volume type for the Image service back end that enables the multi-attach property, as follows:

    $ openstack volume type create glance-multiattach
    $ openstack volume type set --property multiattach="<is> True"  glance-multiattach

    If you do not specify a back end for this volume type, the Block Storage scheduler service determines which back end to use when it creates each image volume, so these volumes might be saved on different back ends. You can specify the name of the back end by adding the volume_backend_name property to this volume type. You might need to ask your Block Storage administrator for the correct volume_backend_name for your multi-attach volume type. This example uses iscsi as the back-end name.

    $ openstack volume type set glance-multiattach --property volume_backend_name=iscsi
  3. Exit the openstackclient pod:

    $ exit
  4. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml. In the glance template, add the following parameter to the end of the customServiceConfig, [default_backend] section to configure the Image service to use the Block Storage multi-attach volume type:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
      ...
      glance:
        template:
          ...
          customServiceConfig: |
          ...
          [default_backend]
          ...
            cinder_volume_type = glance-multiattach
    ...
  5. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  6. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

6.1.2. Parameters for configuring the Block Storage back end

You can add the following parameters to the end of the customServiceConfig, [default_backend] section of the glance template in your OpenStackControlPlane CR file.

Table 6.1. Block Storage back-end parameters for the Image service
Parameter = Default value

Type

Description of use

cinder_use_multipath = False

boolean value

Set to True when multipath is supported for your deployment.

cinder_enforce_multipath = False

boolean value

Set to True to abort the attachment of volumes for image transfer when multipath is not running.

cinder_mount_point_base = /var/lib/glance/mnt

string value

Specify a string representing the absolute path of the mount point, the directory where the Image service mounts the NFS share.

Note

This parameter is only applicable when using an NFS Block Storage back end for the Image service.

cinder_do_extend_attached = False

boolean value

Set to True when the images are > 1 GB to optimize the Block Storage process of creating the required volume sizes for each image.

The Block Storage service creates an initial 1 GB volume and extends the volume size in 1 GB increments until it contains the data of the entire image. When this parameter is either not added or set to False, the incremental process of extending the volume is very time-consuming, requiring the volume to be subsequently detached, extended by 1 GB if it is still smaller than the image size and then reattached. By setting this parameter to True, this process is optimized by performing these consecutive 1 GB volume extensions while the volume is attached.

Note

This parameter requires your Block Storage back end to support the extension of attached (in-use) volumes. See your back-end driver documentation for information on which features are supported.

cinder_volume_type = __DEFAULT__

string value

Specify the name of the Block Storage volume type that can be optimized for creating volumes for images. For example, you can create a volume type that enables the creation of multiple instances or volumes from a volume-backed image. For more information, see Creating a multi-attach volume type.

When this parameter is not used, volumes are created by using the default Block Storage volume type.

6.2. Configuring an Object Storage back end

You can configure the Image service (glance) with the Object Storage service (swift) as the storage back end.

Prerequisites

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the glance template to configure the Object Storage service as the back end:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
      ...
      glance:
        template:
          glanceAPIs:
            default:
              replicas: 3 # Configure back end; set to 3 when deploying service
          ...
          customServiceConfig: |
            [DEFAULT]
            enabled_backends = default_backend:swift
            [glance_store]
            default_backend = default_backend
            [default_backend]
            swift_store_create_container_on_put = True
            swift_store_auth_version = 3
            swift_store_auth_address = {{ .KeystoneInternalURL }}
            swift_store_key = {{ .ServicePassword }}
            swift_store_user = service:glance
            swift_store_endpoint_type = internalURL
    ...
    • Set replicas to 3 for high availability across APIs.
  2. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  3. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

6.3. Configuring an S3 back end

To configure the Image service (glance) with S3 as the storage back end, you require the following details:

  • S3 access key
  • S3 secret key
  • S3 endpoint

For security, these details are stored in a Kubernetes secret.

Prerequisites

Procedure

  1. Create a configuration file, for example, glance-s3.conf, where you can store the S3 configuration details.
  2. Generate the secret and access keys for your S3 storage.

    • If your S3 storage is provisioned by the Ceph Object Gateway (RGW), run the following command to generate the secret and access keys:

      $ radosgw-admin user create --uid="<user_1>" \
      --display-name="<Jane Doe>"
      • Replace <user_1> with the user ID.
      • Replace <Jane Doe> with a display name for the user.
    • If your S3 storage is provisioned by the Object Storage service (swift), run the following command to generate the secret and access keys:

      $ openstackclient openstack credential create --type ec2 \
      --project admin admin \
      '{"access": "<access_key>", "secret": "<secret_key>"}'
      • Replace <access_key> with the access key to set for the credential.
      • Replace <secret_key> with the secret key to set for the credential.
  3. Add the S3 configuration details to your glance-s3.conf configuration file:

    [default_backend]
    s3_store_host = <_s3_endpoint_>
    s3_store_access_key = <_s3_access_key_>
    s3_store_secret_key = <_s3_secret_key_>
    s3_store_bucket = <_s3_bucket_>
    • Replace <_s3_endpoint_> with the host where the S3 server is listening. This option can contain a DNS name, for example, s3.amazonaws.com, or an IP address.
    • Replace <_s3_access_key_> and <_s3_secret_key_> with the data generated by the S3 back end.
    • Replace <_s3_bucket_> with the name of the bucket where you want to store images in the S3 back end. If you set s3_store_create_bucket_on_put to True in your OpenStackControlPlane CR file, the bucket is created automatically if it does not already exist.
  4. Create a secret from the glance-s3.conf file:

    $ oc create secret generic glances3  \
    --from-file glance-s3.conf
  5. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the glance template to configure S3 as the back end:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
      ...
      glance:
        template:
          ...
          customServiceConfig: |
            [DEFAULT]
            enabled_backends = default_backend:s3
            [glance_store]
            default_backend = default_backend
            [default_backend]
            s3_store_create_bucket_on_put = True
            s3_store_bucket_url_format = "path"
            s3_store_cacert = "/etc/pki/tls/certs/ca-bundle.crt"
          glanceAPIs:
            default:
              customServiceConfigSecrets:
              - glances3
    ...
    • Optional: If your S3 storage is accessed by HTTPS, you must set the s3_store_cacert field and point it to the ca-bundle.crt path. The OpenStack control plane is deployed by default with TLS enabled, and a CA certificate is mounted to the pod in /etc/pki/tls/certs/ca-bundle.crt.
  6. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  7. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

6.4. Configuring an NFS back end

You can configure the Image service (glance) with NFS as the storage back end. NFS is not native to the Image service. When you mount an NFS share to use for the Image service, the Image service writes data to the file system but does not validate the availability of the NFS share.

If you use NFS as a back end for the Image service, refer to the following best practices to mitigate risk:

  • Use a reliable production-grade NFS back end.
  • Make sure the network is available to the Red Hat OpenStack Services on OpenShift (RHOSO) control plane where the Image service is deployed, and that the Image service has a NetworkAttachmentDefinition custom resource (CR) that points to the network. This configuration ensures that the Image service pods can reach the NFS server.
  • Set export permissions. Write permissions must be present in the shared file system that you use as a store.

Limitations

  • In Red Hat OpenStack Services on OpenShift (RHOSO), you cannot set client-side NFS mount options in a pod spec. You can set NFS mount options in one of the following ways:

    • Set server-side mount options.
    • Use /etc/nfsmount.conf.
    • Mount NFS volumes by using PersistentVolumes, which have mount options.

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the extraMounts parameter in the spec section to add the export path and IP address of the NFS share. The path is mapped to /var/lib/glance/images, where the Image service API (glanceAPI) stores and retrieves images:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    ...
    spec:
      extraMounts:
      - extraVol:
        - extraVolType: Nfs
          mounts:
          - mountPath: /var/lib/glance/images
            name: nfs
          propagation:
          - Glance
          volumes:
          - name: nfs
            nfs:
              path: <nfs_export_path>
              server: <nfs_ip_address>
        name: r1
        region: r1
    ...
    • Replace <nfs_export_path> with the export path of your NFS share.
    • Replace <nfs_ip_address> with the IP address of your NFS share. This IP address must be part of the overlay network that is reachable by the Image service.
  2. Add the following parameters to the glance template to configure NFS as the back end:

    ...
    spec:
      extraMounts:
      ...
      glance:
        template:
          glanceAPIs:
            default:
              type: single
              replicas: 3 # Configure back end; set to 3 when deploying service
          ...
          customServiceConfig: |
            [DEFAULT]
            enabled_backends = default_backend:file
            [glance_store]
            default_backend = default_backend
            [default_backend]
            filesystem_store_datadir = /var/lib/glance/images
          databaseInstance: openstack
    ...
    • Set replicas to 3 for high availability across APIs.

      Note

      When you configure an NFS back end, you must set the type to single. By default, the Image service has a split deployment type for an external API service, which is accessible through the public and administrator endpoints for the Identity service (keystone), and an internal API service, which is accessible only through the internal endpoint for the Identity service. The split deployment type is invalid for a file back end because different pods access the same file share.

  3. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  4. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.
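
To confirm that the Image service pods can access the NFS share, you can check the mount from inside a glanceAPI pod. This check is a minimal sketch; replace <glance_api_pod> with the name of one of your glanceAPI pods:

$ oc exec -n openstack <glance_api_pod> -- df -h /var/lib/glance/images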

6.5. Configuring multistore for a single Image service API instance

You can configure the Image service (glance) with multiple storage back ends.

To configure multiple back ends for a single Image service API (glanceAPI) instance, you set the enabled_backends parameter with key-value pairs. The key is the identifier for the store and the value is the type of store. The following values are valid:

  • file
  • http
  • rbd
  • swift
  • cinder
  • s3

Prerequisites

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the parameters to the glance template to configure the back ends. In the following example, there are two Ceph RBD stores and one Object Storage service (swift) store:

    ...
    spec:
      glance:
        template:
          customServiceConfig: |
            [DEFAULT]
            debug=True
            enabled_backends = ceph-0:rbd,ceph-1:rbd,swift-0:swift
    ...
  2. Specify the back end to use as the default back end. In the following example, the default back end is ceph-1:

    ...
          customServiceConfig: |
            [DEFAULT]
            debug=True
            enabled_backends = ceph-0:rbd,ceph-1:rbd,swift-0:swift
            [glance_store]
            default_backend = ceph-1
    ...
  3. Add the configuration for each back end type you want to use:

    • Add the configuration for the first Ceph RBD store, ceph-0:

      ...
            customServiceConfig: |
              [DEFAULT]
              ...
              [ceph-0]
              rbd_store_ceph_conf = /etc/ceph/ceph-0.conf
              store_description = "RBD backend"
              rbd_store_pool = images
              rbd_store_user = openstack
      ...
    • Add the configuration for the second Ceph RBD store, ceph-1:

      ...
            customServiceConfig: |
              [DEFAULT]
              ...
              [ceph-0]
              ...
              [ceph-1]
              rbd_store_ceph_conf = /etc/ceph/ceph-1.conf
              store_description = "RBD backend 1"
              rbd_store_pool = images
              rbd_store_user = openstack
      ...
    • Add the configuration for the Object Storage service store, swift-0:

      ...
            customServiceConfig: |
              [DEFAULT]
              ...
              [ceph-0]
              ...
              [ceph-1]
              ...
              [swift-0]
              swift_store_create_container_on_put = True
              swift_store_auth_version = 3
              swift_store_auth_address = {{ .KeystoneInternalURL }}
              swift_store_key = {{ .ServicePassword }}
              swift_store_user = service:glance
              swift_store_endpoint_type = internalURL
      ...
  4. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  5. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

6.6. Configuring multiple Image service API instances

You can deploy multiple Image service API (glanceAPI) instances to serve different workloads, for example in an edge deployment. When you deploy multiple glanceAPI instances, they are orchestrated by the same glance-operator, but you can connect them to a single back end or to different back ends.

Multiple glanceAPI instances inherit the same configuration from the main customServiceConfig parameter in your OpenStackControlPlane CR file. You use the extraMounts parameter to connect each instance to a back end. For example, you can connect each instance to a single Red Hat Ceph Storage cluster or to different Red Hat Ceph Storage clusters.

You can also deploy multiple glanceAPI instances in an availability zone (AZ) to serve different workloads in that AZ.

Note

You can only register one glanceAPI instance as an endpoint for OpenStack CLI operations in the Keystone catalog, but you can change the default endpoint by updating the keystoneEndpoint parameter in your OpenStackControlPlane CR file.

For information about adding and decommissioning glanceAPIs, see Performing operations with the Image service (glance).

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the glanceAPIs parameter to the glance template to configure multiple glanceAPI instances. In the following example, you create three glanceAPI instances that are named api0, api1, and api2:

    ...
    spec:
      glance:
        template:
          customServiceConfig: |
            [DEFAULT]
            enabled_backends = default_backend:rbd
            [glance_store]
            default_backend = default_backend
            [default_backend]
            rbd_store_ceph_conf = /etc/ceph/ceph.conf
            store_description = "RBD backend"
            rbd_store_pool = images
            rbd_store_user = openstack
          databaseInstance: openstack
          databaseUser: glance
          keystoneEndpoint: api0
          glanceAPIs:
            api0:
              replicas: 1
            api1:
              replicas: 1
            api2:
              replicas: 1
        ...
    • api0 is registered in the Keystone catalog and is the default endpoint for OpenStack CLI operations.
    • api1 and api2 are not default endpoints, but they are active APIs that users can use for image uploads by specifying the --os-image-url parameter when they upload an image.
    • You can update the keystoneEndpoint parameter to change the default endpoint in the Keystone catalog.
  2. Add the extraMounts parameter to connect the three glanceAPI instances to a different back end. In the following example, you connect api0, api1, and api2 to three different Ceph Storage clusters that are named ceph0, ceph1, and ceph2:

    spec:
      glance:
        template:
          customServiceConfig: |
            [DEFAULT]
            ...
          extraMounts:
            - name: api0
              region: r1
              extraVol:
                - propagation:
                  - api0
                  volumes:
                  - name: ceph0
                    secret:
                      secretName: <secret_name>
                  mounts:
                  - name: ceph0
                    mountPath: "/etc/ceph"
                    readOnly: true
            - name: api1
              region: r1
              extraVol:
                - propagation:
                  - api1
                  volumes:
                  - name: ceph1
                    secret:
                      secretName: <secret_name>
                  mounts:
                  - name: ceph1
                    mountPath: "/etc/ceph"
                    readOnly: true
            - name: api2
              region: r1
              extraVol:
                - propagation:
                  - api2
                  volumes:
                  - name: ceph2
                    secret:
                      secretName: <secret_name>
                  mounts:
                  - name: ceph2
                    mountPath: "/etc/ceph"
                    readOnly: true
    ...
    • Replace <secret_name> with the name of the secret associated to the Ceph Storage cluster that you are using as the back end for the specific glanceAPI, for example, ceph-conf-files-0 for the ceph0 cluster.
  3. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  4. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

6.7. Split and single Image service API layouts

By default, the Image service (glance) has a split deployment type:

  • An external API service, which is accessible through the public and administrator endpoints for the Identity service (keystone)
  • An internal API service, which is accessible only through the internal endpoint for the Identity service

The split deployment type is invalid for an NFS or file back end because different pods access the same file share. When you configure an NFS or file back end, you must set the type to single in your OpenStackControlPlane CR.

Split layout example

In the following example of a split layout type in an edge deployment, two glanceAPI instances are deployed in an availability zone (AZ) to serve different workloads in that AZ.

...
spec:
  glance:
    template:
      customServiceConfig: |
        [DEFAULT]
...
      keystoneEndpoint: api0
      glanceAPIs:
        api0:
          customServiceConfig: |
            [DEFAULT]
            enabled_backends = default_backend:rbd
          replicas: 1
          type: split
        api1:
          customServiceConfig: |
            [DEFAULT]
            enabled_backends = default_backend:swift
          replicas: 1
          type: split
        ...
Single layout example

In the following example of a single layout type in an NFS back-end configuration, different pods access the same file share:

...
spec:
  extraMounts:
    ...
  glance:
    template:
      glanceAPIs:
        default:
          type: single
          replicas: 3 # Configure back end; set to 3 when deploying service
      ...
      customServiceConfig: |
        [DEFAULT]
        enabled_backends = default_backend:file
        [glance_store]
        default_backend = default_backend
        [default_backend]
        filesystem_store_datadir = /var/lib/glance/images
      databaseInstance: openstack
...
  • Set replicas to 3 for high availability across APIs.

6.8. Configuring multistore with edge architecture

When you use multiple stores with distributed edge architecture, you can have a Ceph RADOS Block Device (RBD) image pool at every edge site. You can copy images between the central site, which is also known as the hub site, and the edge sites.

The image metadata contains the location of each copy. For example, an image present on two edge sites is exposed as a single UUID with three locations: the central site plus the two edge sites. This means you can have copies of image data that share a single UUID on many stores.

With an RBD image pool at every edge site, you can launch instances quickly by using Ceph RBD copy-on-write (COW) and snapshot layering technology. This means that you can launch instances from volumes and have live migration. For more information about layering with Ceph RBD, see Ceph block device layering in the Red Hat Ceph Storage Block Device Guide.

When you launch an instance at an edge site, the required image is copied to the local Image service (glance) store automatically. However, you can copy images in advance from the central Image service store to edge sites to save time during instance launch.

Refer to the following requirements to use images with edge sites:

  • A copy of each image must exist in the Image service at the central location.
  • You must copy images from an edge site to the central location before you can copy them to other edge sites.
  • You must use raw images when deploying a Distributed Compute Node (DCN) architecture with Red Hat Ceph Storage.
  • RBD must be the storage driver for the Image, Compute, and Block Storage services.

For more information about using images with DCN, see Deploying a Distributed Compute Node (DCN) architecture.

Chapter 7. Configuring the Object Storage service (swift)

You can configure the Object Storage service (swift) to use PersistentVolumes (PVs) on OpenShift nodes or disks on external data plane nodes.

When you use PVs on OpenShift nodes, this configuration is limited to a single PV per node. The Object Storage service requires multiple PVs. To maximize availability and data durability, you create these PVs on different nodes, and only use one PV per node.

You can use external data plane nodes for more flexibility in larger storage deployments, where you can use multiple disks per node to deploy a larger Object Storage cluster.

For information about configuring the Object Storage service as an endpoint for the Red Hat Ceph Storage Object Gateway (RGW), see Configuring an external Ceph Object Gateway back end.

Prerequisites

  • You have the oc command line tool installed on your workstation.
  • You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.

7.1. Deploying the Object Storage service on OpenShift nodes by using PersistentVolumes

A default Object Storage service (swift) deployment uses at least two swiftProxy replicas and three swiftStorage replicas. You can increase these values to distribute storage across more nodes and disks.

The ringReplicas value defines the number of object copies in the cluster. For example, if you set ringReplicas: 3 and swiftStorage/replicas: 5, every object is stored on 3 different PersistentVolumes (PVs), and there are 5 PVs in total.

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the swift template:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack-control-plane
      namespace: openstack
    spec:
      ...
      swift:
        enabled: true
        template:
          swiftProxy:
            replicas: 2
          swiftRing:
            ringReplicas: 3
          swiftStorage:
            replicas: 3
            storageClass: <swift-storage>
            storageRequest: 100Gi
    ...
    • Increase the swiftProxy/replicas: value to distribute proxy instances across more nodes.
    • Set the ringReplicas: value to the number of object copies you want in your cluster.
    • Increase the swiftStorage/replicas: value to define the number of PVs in your cluster.
    • Replace <swift-storage> with the name of the storage class you want the Object Storage service to use.
  2. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  3. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.
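
After the update, you can verify that the expected number of PersistentVolumeClaims are created and bound for the Object Storage pods. This check is a minimal sketch; the exact claim names depend on your deployment:

$ oc get pvc -n openstack | grep swift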

7.2. Object Storage rings

The Object Storage service (swift) uses a data structure called the ring to distribute partition space across the cluster. This partition space is core to the data durability engine in the Object Storage service. With rings, the Object Storage service can quickly and easily synchronize each partition across the cluster.

Rings contain information about Object Storage partitions and how partitions are distributed among the different nodes and disks in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment. When any Object Storage component interacts with data, a quick lookup is performed locally in the ring to determine the possible partitions for each object.

The Object Storage service has three rings to store the following types of data:

  • Account information
  • Containers, to facilitate organizing objects under an account
  • Object replicas

7.3. Ring partition power

The ring partition power determines the partition to which a resource, such as an account, container, or object, is mapped. The partition is included in the path under which the resource is stored in a back-end file system. Therefore, changing the partition power requires relocating resources to new paths in the back-end file systems.

In a heavily populated cluster, a relocation process is time consuming. To avoid downtime, relocate resources while the cluster is still operating. You must do this without temporarily losing access to data or compromising the performance of processes, such as replication and auditing. For assistance with increasing ring partition power, contact Red Hat Support.

When you use separate nodes for the Object Storage service (swift), use a higher partition power value.

The Object Storage service distributes data across disks and nodes using modified hash rings. There are three rings by default: one for accounts, one for containers, and one for objects. Each ring uses a fixed parameter called partition power. This parameter sets the maximum number of partitions that can be created.

7.4. Increasing ring partition power

You can only change the partition power parameter for new containers and their objects, so you must set this value before initial deployment.

The default partition power value is 10. Refer to the following table to select an appropriate partition power if you use three replicas:

Table 7.1. Appropriate partition power values per number of available disks

Partition power     Maximum number of disks
10                  ~ 35
11                  ~ 75
12                  ~ 150
13                  ~ 250
14                  ~ 500

Important

Setting an excessively high partition power value (for example, 14 for only 40 disks) negatively impacts replication times.
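
As a rough sizing check, the ring contains 2^partPower partitions, and each disk holds approximately (partitions × ringReplicas) ÷ disks partition replicas. The following sketch runs the arithmetic for the illustrative values of partition power 12, three replicas, and 150 disks from the table above, on any workstation that has python3 available:

$ python3 -c 'pp = 12; replicas = 3; disks = 150; parts = 2 ** pp; print(parts, round(parts * replicas / disks))'
4096 82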

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and change the value for partPower under the swiftRing parameter in the swift template:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack-control-plane
      namespace: openstack
    spec:
      ...
      swift:
        enabled: true
        template:
          swiftProxy:
            replicas: 2
          swiftRing:
            partPower: 12
            ringReplicas: 3
    ...
    • Replace the partPower: value (12 in this example) with the partition power value that you want to set.

      Tip

      You can also configure an additional object server ring for new containers. This is useful if you want to add more disks to an Object Storage service deployment that initially uses a low partition power.

Chapter 8. Configuring the Shared File Systems service (manila)

When you deploy the Shared File Systems service (manila), you can choose one or more supported back ends, such as native CephFS, CephFS-NFS, NetApp, and others.

For a complete list of supported back-end appliances and drivers, see the Manila section of the Red Hat Knowledge Article, Component, Plug-In, and Driver Support in Red Hat OpenStack Platform.

Prerequisites

  • You have the oc command line tool installed on your workstation.
  • You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.
  • You have planned networking for the Shared File Systems service. For more information, see Planning networking for the Shared File Systems service in Planning your deployment.
  • You have enabled the Shared File Systems service. For more information, see Enabling the Shared File Systems service.
  • For native CephFS or CephFS-NFS:

    • A CephFS file system must exist in the Red Hat Ceph Storage cluster. For more information, see Integrating Red Hat Ceph Storage.

  • For CephFS-NFS:

    • A ceph nfs service must exist in the Ceph Storage cluster. For more information, see Integrating Red Hat Ceph Storage.
    • You have created an isolated StorageNFS network for NFS exports and a corresponding StorageNFS shared provider network in the Networking service (neutron). The StorageNFS shared provider network maps to the isolated StorageNFS network of the data center.
    • The NFS service is isolated on a network that you can share with all Red Hat OpenStack Services on OpenShift (RHOSO) users. For more information about customizing the NFS service, see NFS cluster and export management in the Red Hat Ceph Storage File System Guide.

      Important

      When you deploy an NFS service for the Shared File Systems service, do not select a custom port to expose NFS. Only the default NFS port of 2049 is supported. You must enable the Red Hat Ceph Storage ingress service and set the ingress-mode to haproxy-protocol. Otherwise, you cannot use IP-based access rules with the Shared File Systems service. For security in production environments, do not provide access to 0.0.0.0/0 on shares to mount them on client machines.

8.1. Enabling the Shared File Systems service

You can enable the Shared File Systems service (manila) to provision remote, shareable file systems in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment. These file systems are known as shares, and they allow projects in the cloud to share POSIX-compliant storage. Shares can be mounted with read/write access on multiple compute instances, bare-metal nodes, or containers at the same time.

When you enable the Shared File Systems service, you can configure the service with the following back ends:

  • Red Hat Ceph Storage CephFS
  • Red Hat Ceph Storage CephFS-NFS
  • NFS or CIFS through third party vendor storage systems

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the spec section to enable the Shared File Systems service:

    spec:
      ...
      manila:
        enabled: true
        apiOverride:
          route: {}
        template:
          databaseInstance: openstack
          secret: osp-secret
          manilaAPI:
            replicas: 3
            override:
              service:
                internal:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi
                      metallb.universe.tf/allow-shared-ip: internalapi
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                  spec:
                    type: LoadBalancer
          manilaScheduler:
            replicas: 3
          manilaShares:
            share1:
              networkAttachments:
              - storage
              replicas: 0 # backend needs to be configured
    Note

    You must configure a back end for the Shared File Systems service. If you do not configure a back end for the Shared File Systems service, then the service is deployed but not activated (replicas: 0).

  2. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  3. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

8.2. Configuring a native CephFS back end

You can configure the Shared File Systems service (manila) with native CephFS as the storage back end.

Limitations

You can expose a native CephFS back end to trusted users, but take the following security measures:

  • Configure the storage network as a provider network.
  • Apply role-based access control (RBAC) policies to secure the storage provider network.
  • Create a private share type.

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the extraMounts parameter in the spec section to present the Ceph configuration files:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
      extraMounts:
        - name: v1
          region: r1
          extraVol:
            - propagation:
              - ManilaShare
              extraVolType: Ceph
              volumes:
              - name: ceph
                projected:
                  sources:
                  - secret:
                      name: <ceph-conf-files>
              mounts:
              - name: ceph
                mountPath: "/etc/ceph"
                readOnly: true
  2. Add the following parameters to the manila template to configure the native CephFS back end:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
        ...
      manila:
        enabled: true
        template:
          manilaAPI:
            replicas: 3
            customServiceConfig: |
              [DEFAULT]
              debug = true
              enabled_share_protocols=cephfs
          manilaScheduler:
            replicas: 3
          manilaShares:
            cephfsnative:
              replicas: 1
              networkAttachments:
              - storage
              customServiceConfig: |
                [DEFAULT]
                enabled_share_backends=cephfs
                [cephfs]
                driver_handles_share_servers=False
                share_backend_name=cephfs
                share_driver=manila.share.drivers.cephfs.driver.CephFSDriver
                cephfs_conf_path=/etc/ceph/ceph.conf
                cephfs_auth_id=openstack
                cephfs_cluster_name=ceph
                cephfs_volume_mode=0755
                cephfs_protocol_helper_type=CEPHFS
    ...
  3. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  4. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.
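
After the status reports "Setup complete", you can exercise the new back end from the OpenStackClient pod by creating a share type that matches the driver_handles_share_servers=False setting and then creating a test share. This is a sketch only; the type and share names are illustrative:

$ oc rsh -n openstack openstackclient
$ openstack share type create cephfstype false
$ openstack share create CephFS 1 --share-type cephfstype --name cephfs-test-share
$ openstack share list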

8.3. Configuring a CephFS-NFS back end

You can configure the Shared File Systems service (manila) with CephFS-NFS as the storage back end.

Limitations

  • Use NFSv4.1 or later for Linux clients. NFSv3 is available for Microsoft Windows clients, but recovery is not expected for NFSv3 clients when a CephFS-NFS service fails over. Simultaneous access from Windows and Linux clients is not supported.

Prerequisites

  • The isolated storage network is configured on the share manager pod on OpenShift so that the Shared File Systems service can communicate with the Red Hat Ceph Storage cluster.
  • Use an isolated NFS network for NFS traffic. This network does not need to be available to the share manager pod for the Shared File Systems service on OpenShift, but it must be available to Compute instances owned by end users.
  • You have planned networking for storage to ensure connectivity between the storage back end, the control plane, and the Compute nodes on the data plane. For more information, see Storage networks in Planning your deployment and Preparing networks for Red Hat OpenStack Services on OpenShift in Deploying Red Hat OpenStack Services on OpenShift.

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the manila template:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
        ...
      manila:
        enabled: true
        template:
          manilaAPI:
            replicas: 3
            customServiceConfig: |
              [DEFAULT]
              debug = true
              enabled_share_protocols=nfs
          manilaScheduler:
            replicas: 3
          manilaShares:
            share1:
              customServiceConfig: |
                [DEFAULT]
                enabled_share_backends=cephfsnfs
                [cephfsnfs]
                driver_handles_share_servers=False
                share_backend_name=cephfs
                share_driver=manila.share.drivers.cephfs.driver.CephFSDriver
                cephfs_auth_id=openstack
                cephfs_cluster_name=ceph
                cephfs_nfs_cluster_id=cephfs
                cephfs_protocol_helper_type=NFS
              networkAttachments:
              - storage
    ...
  2. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  3. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.
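
After the status reports "Setup complete", you can create a test NFS share and grant a client network access to it from the OpenStackClient pod. This is a sketch only; the type name, share name, and CIDR are illustrative:

$ oc rsh -n openstack openstackclient
$ openstack share type create nfstype false
$ openstack share create NFS 1 --share-type nfstype --name nfs-test-share
$ openstack share access create nfs-test-share ip 192.0.2.0/24
$ openstack share show nfs-test-share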

8.4. Configuring alternative back ends

To configure the Shared File Systems service (manila) with an alternative back end, for example, NetApp or Pure Storage, complete the following high-level tasks:

  1. Create the server connection secret.
  2. Configure the OpenStackControlPlane CR to use the alternative storage system as the back end for the Shared File Systems service.

Prerequisites

  • You have prepared the alternative storage system for consumption by Red Hat OpenStack Services on OpenShift (RHOSO).
  • You have network connectivity between the Red Hat OpenShift cluster, the Compute nodes, and the alternative storage system.

8.4.1. Creating the server connection secret

Create a server connection secret for an alternative back end to prevent placing server connection information directly in the OpenStackControlPlane CR.

Procedure

  1. Create a configuration file that contains the server connection information for your alternative back end. In this example, you are creating the secret for a NetApp back end.

    The following is an example of the contents of a configuration file:

    [netapp]
    netapp_server_hostname = <netapp_ip>
    netapp_login = <netapp_user>
    netapp_password = <netapp_password>
    netapp_vserver = <netappvserver>
    • Replace <netapp_ip> with the IP address of the server.
    • Replace <netapp_user> with the login user name.
    • Replace <netapp_password> with the login password.
    • Replace <netappvserver> with the vserver name. You do not need this option if configuring the driver_handles_share_servers=True mode.
  2. Save the configuration file.
  3. Create the secret based on the configuration file:

    $ oc create secret generic <secret_name> --from-file=<configuration_file_name>

    • Replace <secret_name> with the name you want to assign to the secret.
    • Replace <configuration_file_name> with the name of the configuration file you created.
  4. Delete the configuration file.
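
To confirm that the secret exists without printing its contents, you can describe it. The secret name in the following sketch is an example value; use the name that you assigned when you created the secret:

$ oc describe secret manila-netapp-secret -n openstack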

8.4.2. Configuring an alternative back end

You can configure the Shared File Systems service (manila) with an alternative storage back end, for example, a NetApp back end.

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the manila template:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
        ...
      manila:
        enabled: true
        template:
          manilaAPI:
            replicas: 3
            customServiceConfig: |
              [DEFAULT]
              debug = true
              enabled_share_protocols=cifs
          manilaScheduler:
            replicas: 3
          manilaShares:
            share1:
              networkAttachments:
              - storage
              customServiceConfigSecrets:
              - manila-netapp-secret
              customServiceConfig: |
                [DEFAULT]
                debug = true
                enabled_share_backends=netapp
                [netapp]
                driver_handles_share_servers=False
                share_backend_name=netapp
                share_driver=manila.share.drivers.netapp.common.NetAppDriver
                netapp_storage_family=ontap_cluster
    ...
  2. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  3. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

8.4.3. Custom configuration files

When you configure an alternative back end for the Shared File Systems service (manila), you might need to use additional configuration files. You can use the extraMounts parameter in your OpenStackControlPlane CR file to present these configuration files as OpenShift configMap or secret objects in the relevant share manager pod.

Example:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
...
  extraMounts:
    - name: v1
      region: r1
      extraVol:
        - propagation:
          - sharepod1
          extraVolType: Undefined
          volumes:
          - name: backendconfig
            projected:
              sources:
              - secret:
                  name: manila-sharepod1-secrets
          mounts:
          - name: backendconfig
            mountPath: /etc/manila/drivers
            readOnly: true
...
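
The secret referenced in this example can be created from one or more driver configuration files before you update the control plane. The following is a sketch; the file name is an assumption, so use the files that your back-end driver requires:

$ oc create secret generic manila-sharepod1-secrets \
    --from-file=backend-extra.conf \
    -n openstack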

8.4.4. Custom storage driver images

When you configure an alternative back end for the Shared File Systems service (manila), you might need to use a custom manilaShares container image from the vendor on the Red Hat Ecosystem Catalog.

You can add the path to the container image to your OpenStackVersion CR file with the customContainerImages parameter.

For more information, see Deploying partner container images in Integrating partner content.
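
The following is a minimal sketch of the customContainerImages parameter in an OpenStackVersion CR. The CR name, namespace, and key layout are assumptions; the image reference mirrors the Pure Storage example later in this chapter, and the key under manilaShareImages is expected to match the name of the corresponding manilaShares back end:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackVersion
metadata:
  name: openstack-control-plane
  namespace: openstack
spec:
  customContainerImages:
    manilaShareImages:
      pure: registry.connect.redhat.com/purestorage/openstack-manila-share-pure:rhoso18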

8.5. Configuring multiple back ends

You can deploy multiple back ends for the Shared File Systems service (manila), for example, a CephFS-NFS back end, a native CephFS back end, and a third-party back end. Add only one back end per pod.

Prerequisites

  • When you use a back-end driver from a storage vendor that requires external software components, you must override the standard container image for the Shared File Systems service during deployment. You can find custom container images, for example, the Dell EMC Unity container image for a Dell EMC Unity storage system, at Red Hat Ecosystem Catalog.
  • You have planned networking for storage to ensure connectivity between the storage back end, the control plane, and the Compute nodes on the data plane. For more information, see Storage networks in Planning your deployment and Preparing networks for Red Hat OpenStack Services on OpenShift in Deploying Red Hat OpenStack Services on OpenShift.

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the manila template to configure the back ends. In this example, there is a CephFS-NFS back end, a native CephFS back end, and a Pure Storage back end:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
        ...
      manila:
        enabled: true
        template:
          manilaAPI:
            replicas: 3
            customServiceConfig: |
              [DEFAULT]
              debug = true
              enabled_share_protocols=nfs,cephfs,cifs
          manilaScheduler:
            replicas: 3
        ...
  2. Add the configuration for each back end you want to use:

    • Add the configuration for the CephFS-NFS back end:

          ...
              customServiceConfig: |
              ...
              manilaShares:
                cephfsnfs:
                  networkAttachments:
                  - storage
                  customServiceConfig: |
                      [DEFAULT]
                      enabled_share_backends=cephfsnfs
                      [cephfsnfs]
                      driver_handles_share_servers=False
                      share_backend_name=cephfs
                      share_driver=manila.share.drivers.cephfs.driver.CephFSDriver
                      cephfs_auth_id=openstack
                      cephfs_cluster_name=ceph
                      cephfs_nfs_cluster_id=cephfs
                      cephfs_protocol_helper_type=NFS
                  replicas: 1
          ...
    • Add the configuration for the native CephFS back end:

          ...
              customServiceConfig: |
              ...
              manilaShares:
                cephfsnfs:
                ...
                cephfs:
                  networkAttachments:
                  - storage
                  customServiceConfig: |
                    [DEFAULT]
                    enabled_share_backends=cephfs
                    [cephfs]
                    driver_handles_share_servers=False
                    share_backend_name=cephfs
                    share_driver=manila.share.drivers.cephfs.driver.CephFSDriver
                    cephfs_conf_path=/etc/ceph/ceph.conf
                    cephfs_auth_id=openstack
                    cephfs_protocol_helper_type=CEPHFS
                  replicas: 1
          ...
    • Add the configuration for the Pure Storage back end:

          ...
              customContainerImages:
                manilaShareImages:
                  pure: registry.connect.redhat.com/purestorage/openstack-manila-share-pure:rhoso18
              manilaShares:
                cephfsnfs:
                ...
                cephfs:
                ...
                pure:
                  networkAttachments:
                  - storage
                  customServiceConfigSecrets:
                  - manila-pure-secret
                  customServiceConfig: |
                    [DEFAULT]
                    debug = true
                    enabled_share_backends=pure
                    [pure]
                    driver_handles_share_servers=False
                    share_backend_name=pure
                    share_driver=manila.share.drivers.purestorage.flashblade.FlashBladeShareDriver
          ...
  3. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  4. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

8.6. Verifying the deployment of the Shared File Systems service

After a deployment or when troubleshooting issues, verify that the services for the Shared File Systems service (manila) are running and that they are up.

Verify that the manila pods are running. The number of pods depends on the number of replicas you have configured for the different components of the Shared File Systems service.

When you have verified that the pods are running, you can use the Shared File Systems service API to check the status of the services.

Procedure

  1. List the manila pods to verify that they are running:

    $ oc -n openstack get pod -l service=manila

    Example output:

    NAME                             READY   STATUS      RESTARTS          AGE
    manila-api-0                     2/2     Running     0                 43h
    manila-api-1                     2/2     Running     0                 43h
    manila-api-2                     2/2     Running     0                 43h
    manila-db-purge-28696321-tkl9g   0/1     Completed   0                 41h
    manila-db-purge-28697761-zxxzc   0/1     Completed   0                 17h
    manila-scheduler-0               2/2     Running     0                 43h
    manila-scheduler-1               2/2     Running     0                 43h
    manila-scheduler-2               2/2     Running     0                 43h
    manila-share-share1-0            2/2     Running     0                 43h
  2. Access the remote shell for the OpenStackClient pod from your workstation:

    $ oc rsh -n openstack openstackclient
  3. Run the openstack share service list command:

    $ openstack share service list

    Example output:

    +----+------------------+---------------------+------+---------+-------+----------------------------+
    | ID | Binary           | Host                | Zone | Status  | State | Updated At                 |
    +----+------------------+---------------------+------+---------+-------+----------------------------+
    | 1  | manila-scheduler | hostgroup           | nova | enabled | up    | 2024-07-25T17:40:27.323342 |
    | 4  | manila-share     | hostgroup@cephfsnfs | nova | enabled | up    | 2024-07-25T17:40:49.115386 |
    +----+------------------+---------------------+------+---------+-------+----------------------------+

  4. Verify that the State entry of every service is up. If it is not, examine the relevant log files.
  5. Exit the openstackclient pod:

    $ exit

8.7. Verifying the deployment of multiple back ends

Use the openstack share service list command to verify that the storage back ends for the Shared File Systems service (manila) deployed successfully. If you use a health check on multiple back ends, a ping test returns a response even if one of the back ends is unresponsive, so this is not a reliable way to verify your deployment.

Procedure

  1. Access the remote shell for the OpenStackClient pod from your workstation:

    $ oc rsh -n openstack openstackclient
  2. Confirm the list of Shared File Systems service back ends:

    $ openstack share service list

    The status of each successfully deployed back end shows as enabled and the state shows as up.

  3. Exit the openstackclient pod:

    $ exit

8.8. Creating availability zones for back ends

You can create availability zones (AZs) for Shared File Systems service back ends to group cloud infrastructure and services logically for users. Map the AZs to failure domains and compute resources for high availability, fault tolerance, and resource scheduling. For example, you can create an AZ of Compute nodes with specific hardware, which users can specify when they create an instance that requires that hardware.

After deployment, use the availability_zones share type extra specification to limit share types to one or more AZs. Users can create a share directly in an AZ as long as the share type does not restrict them.
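
For example, after the control plane is updated, you can restrict a share type to an AZ and create a share directly in that AZ from the OpenStackClient pod. This is a sketch only; the type and share names are illustrative, and the AZ name matches the example in the procedure that follows:

$ oc rsh -n openstack openstackclient
$ openstack share type create zone1type false --extra-specs availability_zones=zone_1
$ openstack share create NFS 1 --share-type zone1type --availability-zone zone_1 --name az-test-share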

Procedure

The following example deploys two back ends where CephFS is zone 1 and NetApp is zone 2.

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the manila template:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
        ...
      manila:
        enabled: true
        template:
          manilaShares:
            cephfs:
              customServiceConfig: |
                [cephfs]
                backend_availability_zone = zone_1
              ...
            netapp:
              customServiceConfig: |
                [netapp]
                backend_availability_zone = zone_2
              ...
  2. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  3. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

8.9. Changing the allowed NAS protocols

You can use the Shared File Systems service (manila) to export shares in the NFS, CephFS, or CIFS network attached storage (NAS) protocols. By default, the Shared File Systems service enables NFS and CIFS, but these protocols might not all be supported by the back ends in your deployment.

You can change the enabled_share_protocols parameter and list only the protocols that you want to allow in your cloud. For example, if back ends in your deployment support both NFS and CIFS, you can change the default value and enable only one protocol. The NAS protocols that you assign must be supported by the back ends in your Shared File Systems service deployment.

Not all storage back-end drivers support the CIFS protocol. For information about which certified storage systems support CIFS, see the Red Hat Ecosystem Catalog.

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the manila template. In this example, you enable the NFS protocol:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
        ...
          manila:
            enabled: true
            template:
              manilaAPI:
                customServiceConfig: |
                  [DEFAULT]
                  enabled_share_protocols = NFS
              ...
  2. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  3. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

8.10. Viewing back-end storage capacity

The scheduler component of the Shared File Systems service (manila) makes intelligent placement decisions based on several factors such as capacity, provisioning configuration, placement hints, and the capabilities that the back-end storage system driver detects and exposes. You can use share types and extra specifications to modify placement decisions.

Procedure

  1. Access the remote shell for the OpenStackClient pod from your workstation:

    $ oc rsh -n openstack openstackclient
  2. Run the following command to view the available back-end storage capacity:

    $ openstack share pool list --detail
  3. Exit the openstackclient pod:

    $ exit

8.11. Configuring automatic database cleanup

The Shared File Systems service (manila) automatically purges database entries that have been marked for deletion for a set number of days. By default, records are purged 30 days after they are marked for deletion. You can configure a different record age and schedule for purge jobs.

Procedure

  1. Open your openstack_control_plane.yaml file to edit the OpenStackControlPlane CR.
  2. Add the dbPurge parameter to the manila template to configure database cleanup.

    The following is an example of using the dbPurge parameter to configure the Shared File Systems service:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      manila:
        template:
          dbPurge:
            age: 20
            schedule: 1 0 * * 0
    • age: The number of days a record has been marked for deletion before it is purged. The default value is 30 days. The minimum value is 1 day.
    • schedule: When to run the purge job, in crontab format. The default value is 1 0 * * *, which is equivalent to 00:01 daily.
  3. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
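
To confirm that the purge job is scheduled, you can list the cron jobs in the openstack namespace. The manila-db-purge name in the following sketch is inferred from the pod names shown in the verification output earlier in this chapter:

$ oc get cronjobs -n openstack | grep manila-db-purge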

Legal Notice

Copyright © 2025 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.