Configuring persistent storage


Red Hat OpenStack Services on OpenShift 18.0

Configuring storage services for Red Hat OpenStack Services on OpenShift

OpenStack Documentation Team

Abstract

Configure the services for block, image, object, and file storage in your Red Hat OpenStack Services on OpenShift deployment.

Providing feedback on Red Hat documentation

We appreciate your feedback. Tell us how we can improve the documentation.

To provide documentation feedback for Red Hat OpenStack Services on OpenShift (RHOSO), create a Jira issue in the OSPRH Jira project.

Procedure

  1. Log in to the Red Hat Atlassian Jira.
  2. Click the following link to open a Create Issue page: Create issue
  3. Select Red Hat OpenStack Services on OpenShift as the Project.
  4. Select Bug as the Issue Type.
  5. Click Next.
  6. Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue.
  7. Select documentation as the Component.
  8. Click Create.
  9. Review the details of the bug you created.

Chapter 1. Configuring persistent storage

When you deploy Red Hat OpenStack Services on OpenShift (RHOSO), you can configure storage services for block, image, object, and file storage. You can configure Red Hat Ceph Storage as a unified back end for all storage services or you can configure alternative back-end storage solutions for these services.

1.1. Ephemeral and persistent storage

RHOSO recognizes two types of storage, ephemeral and persistent:

  • Ephemeral storage is associated with a specific Compute instance. When that instance is terminated, so is the associated ephemeral storage. This type of storage is useful for runtime requirements, such as storing the operating system of an instance.
  • Persistent storage is designed to survive (persist) independent of any running instance. This storage is used for any data that needs to be reused, either by different instances or beyond the life of a specific instance.

RHOSO storage services correspond with the following persistent storage types:

  • Block Storage service (cinder): Volumes
  • Image service (glance): Images
  • Object Storage service (swift): Objects
  • Shared File Systems service (manila): Shares

All persistent storage services store data in a storage back end.
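To illustrate these storage types in practice, the following openstack client commands show how a cloud user might create each kind of persistent resource. The resource names, sizes, and file names are hypothetical examples, and the share command assumes the Shared File Systems service is deployed with an NFS-capable back end:

```shell
# Block Storage service (cinder): create a 1 GB volume (hypothetical name).
openstack volume create --size 1 demo-volume

# Image service (glance): create an image from a local file (hypothetical file).
openstack image create --file demo.qcow2 --disk-format qcow2 demo-image

# Object Storage service (swift): create a container and upload an object.
openstack container create demo-container
openstack object create demo-container demo.txt

# Shared File Systems service (manila): create a 1 GB NFS share.
openstack share create NFS 1 --name demo-share
```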

1.2. Supported persistent storage solutions

RHOSO supports the following storage solutions for service back ends:

  • Block Storage service (cinder): Ceph RBD, iSCSI, FC, NVMe-TCP, or NFS back end
  • Image service (glance): Ceph RBD, Block Storage, Object Storage, or NFS back end
  • Object Storage service (swift): PersistentVolumes (PVs) on OpenShift nodes or disks on external data plane nodes
  • Shared File Systems service (manila): CephFS, CephFS-NFS, or alternative back ends such as NetApp or Pure Storage

For information about planning the storage solution and related requirements for your RHOSO deployment, for example, networking and security, see Planning storage and shared file systems in Planning your deployment.

1.3. Red Hat Ceph Storage

Red Hat Ceph Storage can serve as a unified back end for all RHOSO storage services. The features and functionality of RHOSO services are optimized when you use Red Hat Ceph Storage as the storage back end.

Supported Ceph versions and deployment modes

RHOSO supports external deployments of Red Hat Ceph Storage 7, 8, and 9. You can integrate an external Red Hat Ceph Storage cluster with the Compute service (nova) and one or more RHOSO storage services, or you can create a hyperconverged infrastructure (HCI) environment.

For information about creating a hyperconverged infrastructure (HCI) environment, see Deploying a hyperconverged infrastructure environment.

Note

Configuration examples in procedures that reference Red Hat Ceph Storage use Release 7 information. If you are using a later version of Red Hat Ceph Storage, adjust the configuration examples accordingly.

OpenShift Data Foundation integration

You can use Red Hat OpenShift Data Foundation (ODF) in external mode to integrate with Red Hat Ceph Storage. The use of ODF in internal mode is not supported.

For more information about deploying ODF in external mode, see Deploying OpenShift Data Foundation in external mode.

1.4. Storage back end certification

To promote the use of best practices, Red Hat has a certification process for OpenStack back ends. For improved supportability and interoperability, ensure that your storage back end is certified for RHOSO. You can check certification status in the Red Hat Ecosystem Catalog. Ceph RBD is certified as a back end in all RHOSO releases.

You can use the extraMounts parameter to mount external files for configuration or authentication data in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment. Use this parameter to distribute Red Hat Ceph Storage configuration files to services, access external NFS shares for temporary storage, or run storage back-end drivers on persistent filesystems.

Example scenarios include:

  • Distributing Red Hat Ceph Storage cluster configuration and keyring files to Block Storage (cinder), Image (glance), and Compute (nova) services
  • Accessing external NFS shares for temporary image storage when node disk space is consumed
  • Configuring storage back-end drivers to run on persistent file systems to preserve data between reboots

The extraMounts parameter can be defined at the following levels:

  • Service - A Red Hat OpenStack Services on OpenShift (RHOSO) service such as Glance, Cinder, or Manila.
  • Component - A component of a service such as GlanceAPI, CinderAPI, CinderScheduler, ManilaShare, CinderBackup.
  • Instance - An individual instance of a particular component. For example, your deployment could have two instances of the ManilaShare component, called share1 and share2. An Instance level propagation applies to the Pod associated with an individual instance of a Component type.

The propagation field is used to describe how the definition is applied. If the propagation field is not used, definitions propagate to every level below the level at which it is defined:

  • Service level definitions propagate to Component and Instance levels.
  • Component level definitions propagate to the Instance level.

The following is the general structure of an extraMounts definition:

extraMounts:
  - name: <extramount-name>
    region: <openstack-region>
    extraVol:
      - propagation:
        - <location>
        extraVolType: <Ceph | Nfs | Undefined>
        volumes:
        - <pod-volume-structure>
        mounts:
        - <pod-mount-structure>
  • name is a string that names the extraMounts definition. This is for organizational purposes and cannot be referenced from other parts of the manifest. This is an optional attribute.
  • region is a string that defines the RHOSO region of the extraMounts definition. This is an optional attribute.
  • propagation describes how the definition is applied. If the propagation field is not used, definitions propagate to every level below the level at which it is defined. This is an optional attribute.
  • extraVolType is a string that assists the administrator in categorizing or labeling the group of mounts that belong to the extraVol entry of the list. There are no defined values for this parameter but the values Ceph, Nfs, and Undefined are common. This is an optional attribute.
  • volumes is a list that defines Red Hat OpenShift volume sources. This field has the same structure as the volumes section in a Pod. The structure is dependent on the type of volume being defined. The name defined in this section is used as a reference in the mounts section.
  • mounts is a list of mountpoints that represent the path where the volumeSource should be mounted in the Pod. The name of a volume from the volumes section is used as a reference as well as the path where it should be mounted. This attribute has the same structure as the volumeMounts attribute for a Pod.

Configure the OpenStackControlPlane custom resource (CR) to access external data for configuration or authentication purposes in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment.

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, on your workstation.
  2. Add the extraMounts attribute to the OpenStackControlPlane CR service definition.

    The following example demonstrates adding the extraMounts attribute:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      extraMounts:
        - name: v1
          region: r1
          extraVol:
            - extraVolType: Ceph
  3. Add the propagation field to specify where in the service definition the extraMount attribute applies.

    The following example adds the propagation field to the previous example:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      glance:
      ...
      extraMounts:
        - name: v1
          region: r1
          extraVol:
            - propagation:
              - Glance
              extraVolType: Ceph

    The propagation field can have one of the following values:

    • Service level propagations:

      • Glance
      • Cinder
      • Manila
      • Horizon
      • Neutron
    • Component level propagations:

      • CinderAPI
      • CinderScheduler
      • CinderVolume
      • CinderBackup
      • GlanceAPI
      • ManilaAPI
      • ManilaScheduler
      • ManilaShare
      • NeutronAPI
    • Back-end propagation:

      • Any back end in the CinderVolume, ManilaShare, or GlanceAPI maps.
  4. Define the volume sources:

    The following example demonstrates adding the volumes field to the previous example to provide a Red Hat Ceph Storage secret to the Image service (glance):

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      extraMounts:
        - name: v1
          region: r1
          extraVol:
            - extraVolType: Ceph
              volumes:
              - name: ceph
                secret:
                  secretName: ceph-conf-files

    where:

    ceph
    Is the Red Hat Ceph Storage secret name.
  5. Define where the different volumes are mounted within the pod.

    The following example demonstrates adding the mounts field to the previous example to provide the location and name of the file that contains the Red Hat Ceph Storage secret:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      extraMounts:
        - name: v1
          region: r1
          extraVol:
            - extraVolType: Ceph
              volumes:
              - name: ceph
                secret:
                  secretName: ceph-conf-files
              mounts:
              - name: ceph
                mountPath: "/etc/ceph"
                readOnly: true

    where:

    "/etc/ceph"
    Is the location of the secrets file.
  6. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  7. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

  8. Confirm that the control plane is deployed by reviewing the pods in the openstack namespace:

    $ oc get pods -n openstack

    The control plane is deployed when all the pods are either completed or running.
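As an additional check, you can confirm that the mounted files are present inside a pod that received the extraMounts definition. The following is a sketch; substitute a pod name from your own environment:

```shell
# Find the pods for a service that received the mount, for example glance.
oc get pods -n openstack | grep glance

# List the mounted path inside one of the pods. Replace <glance_pod_name>
# with a pod name from the output of the previous command.
oc exec -n openstack <glance_pod_name> -- ls /etc/ceph
```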

The following configuration examples demonstrate how the extraMounts attribute is used to mount external files. The extraMounts attribute is defined either at the top-level custom resource (spec) or in the service definition.

Dashboard service (horizon)
This configuration example demonstrates using an external file to provide configuration to the Dashboard service.
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  horizon:
    enabled: true
    template:
      customServiceConfig: '# add your customization here'
      extraMounts:
      - extraVol:
        - extraVolType: HorizonSettings
          mounts:
          - mountPath: /etc/openstack-dashboard/local_settings.d/_66_help_link.py
            name: horizon-config
            readOnly: true
            subPath: _66_help_link.py
          volumes:
            - name: horizon-config
              configMap:
                name: horizon-config
Red Hat Ceph Storage
This configuration example defines the services that require access to the Red Hat Ceph Storage secret.
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  extraMounts:
    - name: v1
      region: r1
      extraVol:
        - propagation:
          - CinderVolume
          - CinderBackup
          - GlanceAPI
          - ManilaShare
          extraVolType: Ceph
          volumes:
          - name: ceph
            secret:
              secretName: ceph-conf-files
          mounts:
          - name: ceph
            mountPath: "/etc/ceph"
            readOnly: true
Shared File Systems service (manila)
This configuration example provides external configuration files to the Shared File Systems service so that it can connect to a Red Hat Ceph Storage back end.
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  manila:
    template:
      manilaShares:
        share1:
        ...
  extraMounts:
    - name: v1
      region: r1
      extraVol:
        - propagation:
          - share1
          extraVolType: Ceph
          volumes:
          - name: ceph
            secret:
              secretName: ceph-conf-files
          mounts:
          - name: ceph
            mountPath: "/etc/ceph"
            readOnly: true
Image service (glance)
This configuration example connects three GlanceAPI instances to different Red Hat Ceph Storage back ends. The instances, api0, api1, and api2, connect to three different Red Hat Ceph Storage clusters named ceph0, ceph1, and ceph2.
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  extraMounts:
    - name: api0
      region: r1
      extraVol:
        - propagation:
          - api0
          volumes:
          - name: ceph0
            secret:
              secretName: <secret_name>
          mounts:
          - name: ceph0
            mountPath: "/etc/ceph"
            readOnly: true
    - name: api1
      region: r1
      extraVol:
        - propagation:
          - api1
          volumes:
          - name: ceph1
            secret:
              secretName: <secret_name>
          mounts:
          - name: ceph1
            mountPath: "/etc/ceph"
            readOnly: true
    - name: api2
      region: r1
      extraVol:
        - propagation:
          - api2
          volumes:
          - name: ceph2
            secret:
              secretName: <secret_name>
          mounts:
          - name: ceph2
            mountPath: "/etc/ceph"
            readOnly: true

Chapter 3. Integrating Red Hat Ceph Storage

You can configure Red Hat OpenStack Services on OpenShift (RHOSO) to integrate with an external Red Hat Ceph Storage cluster. This configuration connects the Block Storage (cinder), Image (glance), Object Storage (swift), Compute (nova), and Shared File Systems (manila) services to the cluster.

To configure Red Hat Ceph Storage as the back end for RHOSO storage, complete the following tasks:

  1. Verify that Red Hat Ceph Storage is deployed and all the required services are running.
  2. Create the Red Hat Ceph Storage pools on the Red Hat Ceph Storage cluster.
  3. Create a Red Hat Ceph Storage secret on the Red Hat Ceph Storage cluster to provide RHOSO services access to the Red Hat Ceph Storage cluster.
  4. Obtain the Ceph file system identifier.
  5. Configure the OpenStackControlPlane CR to use the Red Hat Ceph Storage cluster as the back end.
  6. Configure the OpenStackDataPlane CR to use the Red Hat Ceph Storage cluster as the back end.

3.1. Prerequisites

  • Access to a Red Hat Ceph Storage cluster.
  • The RHOSO control plane is installed on an operational RHOSO cluster.

3.2. Creating Red Hat Ceph Storage pools

Create pools on the Red Hat Ceph Storage cluster server for each RHOSO service that uses the cluster.

Considerations
  • If you are deploying the NFS service for the Shared File Systems service (manila):

    • Do not select a custom port. Only the default NFS port of 2049 is supported, and you must enable the Red Hat Ceph Storage ingress service with ingress-mode set to haproxy-protocol when creating the NFS cluster.
    • With Red Hat Ceph Storage 9, NFSv3 is not enabled by default. If you need NFSv3 support, you must include the --enable-nfsv3 parameter when creating the NFS cluster.
    • For security in production environments, do not provide access to 0.0.0.0/0 on shares to mount them on client machines.

Procedure

  1. Enter the cephadm container client:

    $ sudo cephadm shell
  2. Create pools for the Compute service (vms), the Block Storage service (volumes), and the Image service (images):

    $ for P in vms volumes images; do
       ceph osd pool create $P;
       ceph osd pool application enable $P rbd;
    done
  3. If you are using the Shared File Systems service, create the cephfs volume. This automatically enables the CephFS Metadata service (MDS) and creates the necessary data and metadata pools on the Ceph cluster:

    $ ceph fs volume create cephfs
  4. If you are using the Shared File Systems service with CephFS-NFS, deploy an NFS service on the Red Hat Ceph Storage cluster:

    1. If you are deploying Red Hat Ceph Storage 7 or 8, run the following command:

      $ ceph nfs cluster create cephfs \
      --ingress --virtual-ip=<vip> \
      --ingress-mode=haproxy-protocol
    2. If you are deploying Red Hat Ceph Storage 9, run the following command:

      $ ceph nfs cluster create cephfs \
      --ingress --virtual-ip=<vip> \
      --ingress-mode=haproxy-protocol \
      --enable-nfsv3
      • Replace <vip> with the IP address assigned to the NFS service. The NFS service should be on a dedicated network that isolates NFS traffic while allowing RHOSO users to attach their Compute instances to access shares.
  5. Create a CephX key for RHOSO to use to access pools:

    $ ceph auth add client.openstack \
         mgr 'allow *' \
            mon 'profile rbd' \
            osd 'profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images'
    • If you are using the Shared File Systems service, add osd caps for the CephFS data pool by using the following command instead:

      $ ceph auth add client.openstack \
           mgr 'allow *' \
              mon 'profile rbd' \
              osd 'profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.data'
  6. Export the CephX key:

    $ ceph auth get client.openstack > /etc/ceph/ceph.client.openstack.keyring
  7. Export the configuration file:

    $ ceph config generate-minimal-conf > /etc/ceph/ceph.conf

3.3. Creating a Red Hat Ceph Storage secret

Create a secret so that services can access the Red Hat Ceph Storage cluster.

The procedure examples use openstack as the name of the Red Hat Ceph Storage user. The file name in the Secret resource must match this user name.

For example, if the file name for the username openstack2 is /etc/ceph/ceph.client.openstack2.keyring, then the secret data line should be ceph.client.openstack2.keyring: $KEY.
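Extending that example, a Secret for the hypothetical user openstack2 would be structured as follows, where $KEY and $CONF stand for the base64-encoded contents of the keyring and configuration files:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-conf-files
  namespace: openstack
type: Opaque
data:
  ceph.client.openstack2.keyring: $KEY
  ceph.conf: $CONF
```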

Procedure

  1. Transfer the cephx key and configuration file created in the Creating Red Hat Ceph Storage pools procedure to a host that can create resources in the openstack namespace.
  2. Base64 encode these files and store them in KEY and CONF environment variables:

    $ KEY=$(cat /etc/ceph/ceph.client.openstack.keyring | base64 -w 0)
    $ CONF=$(cat /etc/ceph/ceph.conf | base64 -w 0)
  3. Create a YAML file to create the Secret resource.
  4. Using the environment variables, add the Secret configuration to the YAML file:

    apiVersion: v1
    data:
      ceph.client.openstack.keyring: $KEY
      ceph.conf: $CONF
    kind: Secret
    metadata:
      name: ceph-conf-files
      namespace: openstack
    type: Opaque
  5. Save the YAML file.
  6. Create the Secret resource:

    $ oc create -f <secret_configuration_file>
    • Replace <secret_configuration_file> with the name of the YAML file you created.

The Red Hat Ceph Storage file system identifier (FSID) is a unique identifier for the cluster. Use the FSID to configure and verify cluster interoperability with Red Hat OpenStack Services on OpenShift (RHOSO).

Procedure

  • Extract the FSID from the Red Hat Ceph Storage secret:

    $ FSID=$(oc get secret ceph-conf-files -o json | jq -r '.data."ceph.conf"' | base64 -d | grep fsid | sed -e 's/fsid = //')

Configure the OpenStackControlPlane CR to use the Red Hat Ceph Storage cluster. This process includes confirming network configuration, configuring the control plane to use the Red Hat Ceph Storage secret, and setting up Image (glance), Block Storage (cinder), and optionally Shared File Systems (manila) services.

Note

This example does not include configuring the Block Storage backup service (cinder-backup) with Red Hat Ceph Storage.

Procedure

  1. Check the storage interface defined in your NodeNetworkConfigurationPolicy (nncp) custom resource to confirm that it has the same network configuration as the public_network of the Red Hat Ceph Storage cluster. This match is required to enable access to the Red Hat Ceph Storage cluster through the Storage network.

    It is not necessary for RHOSO to access the cluster_network of the Red Hat Ceph Storage cluster.

    Note

    If it does not impact workload performance, the Storage network can be different from the external Red Hat Ceph Storage cluster public_network using routed (L3) connectivity as long as the appropriate routes are added to the Storage network to reach the external Red Hat Ceph Storage cluster public_network.

  2. Check the networkAttachments for the default Image service instance in the OpenStackControlPlane CR to confirm that the default Image service is configured to access the Storage network:

    glance:
        enabled: true
        template:
          databaseInstance: openstack
          storage:
            storageRequest: 10G
          glanceAPIs:
            default:
              replicas: 3
              override:
                service:
                  internal:
                    metadata:
                      annotations:
                        metallb.universe.tf/address-pool: internalapi
                        metallb.universe.tf/allow-shared-ip: internalapi
                        metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                    spec:
                      type: LoadBalancer
              networkAttachments:
              - storage
  3. Confirm the Block Storage service is configured to access the Storage network through MetalLB.
  4. Optional: Confirm the Shared File Systems service is configured to access the Storage network through ManilaShare.
  5. Confirm the Compute service (nova) is configured to access the Storage network.
  6. Confirm the Red Hat Ceph Storage configuration file, /etc/ceph/ceph.conf, contains the IP addresses of the Red Hat Ceph Storage cluster monitors. These IP addresses must be within the Storage network IP address range.
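    For reference, a minimal configuration file generated by ceph config generate-minimal-conf is similar to the following. The FSID and monitor address shown here are hypothetical:

```ini
[global]
fsid = 2d30b1fa-6a83-4b3c-b14c-7d2fa2f7b8a9
mon_host = [v2:172.18.0.10:3300/0,v1:172.18.0.10:6789/0]
```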
  7. Open your openstack_control_plane.yaml file to edit the OpenStackControlPlane CR.
  8. Add the extraMounts parameter to define the services that require access to the Red Hat Ceph Storage secret.

    The following is an example of using the extraMounts parameter for this purpose. Only include ManilaShare in the propagation list if you are using the Shared File Systems service (manila):

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
      extraMounts:
        - name: v1
          region: r1
          extraVol:
            - propagation:
              - CinderVolume
              - GlanceAPI
              - ManilaShare
              extraVolType: Ceph
              volumes:
              - name: ceph
                projected:
                  sources:
                  - secret:
                      name: <ceph-conf-files>
              mounts:
              - name: ceph
                mountPath: "/etc/ceph"
                readOnly: true
  9. Add the customServiceConfig parameter to the glance template to configure the Image service to use the Red Hat Ceph Storage cluster:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      glance:
        template:
          customServiceConfig: |
            [DEFAULT]
            enabled_backends = <backend_name>:rbd
            [glance_store]
            default_backend = <backend_name>
            [<backend_name>]
            rbd_store_ceph_conf = /etc/ceph/ceph.conf
            store_description = "RBD backend"
            rbd_store_pool = images
            rbd_store_user = openstack
          databaseInstance: openstack
          databaseAccount: glance
          secret: osp-secret
          storage:
            storageRequest: 10G
      extraMounts:
        - name: v1
          region: r1
          extraVol:
            - propagation:
              - GlanceAPI
              extraVolType: Ceph
              volumes:
              - name: ceph
                secret:
                  secretName: ceph-conf-files
              mounts:
              - name: ceph
                mountPath: "/etc/ceph"
                readOnly: true
    • Replace <backend_name> with the name of the default back end.

      When you use Red Hat Ceph Storage as a back end for the Image service, image-conversion is enabled by default. For more information, see Planning storage and shared file systems in Planning your deployment.

  10. Add the customServiceConfig parameter to the cinder template to configure the Block Storage service to use the Red Hat Ceph Storage cluster. For information about using Block Storage backups, see Configuring the Block Storage backup service.

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
      extraMounts:
        ...
      cinder:
        template:
          cinderVolumes:
            ceph:
              customServiceConfig: |
                [DEFAULT]
                enabled_backends=ceph
                [ceph]
                volume_backend_name=ceph
                volume_driver=cinder.volume.drivers.rbd.RBDDriver
                rbd_ceph_conf=/etc/ceph/ceph.conf
                rbd_user=openstack
                rbd_pool=volumes
                rbd_flatten_volume_from_snapshot=False
                rbd_secret_uuid=<$FSID>
  11. Optional: Add the customServiceConfig parameter to the manila template to configure the Shared File Systems service to use native CephFS or CephFS-NFS with the Red Hat Ceph Storage cluster. For more information, see Configuring the Shared File Systems service (manila).

    The following example exposes native CephFS:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
      extraMounts:
      ...
      manila:
        template:
          manilaAPI:
            customServiceConfig: |
              [DEFAULT]
              enabled_share_protocols=cephfs
          manilaShares:
            share1:
              customServiceConfig: |
                [DEFAULT]
                enabled_share_backends=cephfs
                [cephfs]
                driver_handles_share_servers=False
                share_backend_name=cephfs
                share_driver=manila.share.drivers.cephfs.driver.CephFSDriver
                cephfs_conf_path=/etc/ceph/ceph.conf
                cephfs_auth_id=openstack
                cephfs_cluster_name=ceph
                cephfs_volume_mode=0755
                cephfs_protocol_helper_type=CEPHFS

    The following example exposes CephFS with NFS:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
      extraMounts:
      ...
      manila:
        template:
          manilaAPI:
            customServiceConfig: |
              [DEFAULT]
              enabled_share_protocols=nfs
          manilaShares:
            share1:
              customServiceConfig: |
                [DEFAULT]
                enabled_share_backends=cephfsnfs
                [cephfsnfs]
                driver_handles_share_servers=False
                share_backend_name=cephfsnfs
                share_driver=manila.share.drivers.cephfs.driver.CephFSDriver
                cephfs_conf_path=/etc/ceph/ceph.conf
                cephfs_auth_id=openstack
                cephfs_cluster_name=ceph
                cephfs_volume_mode=0755
                cephfs_protocol_helper_type=NFS
                cephfs_nfs_cluster_id=cephfs
  12. Apply the updates to the OpenStackControlPlane CR:

    $ oc apply -f openstack_control_plane.yaml

Configure the data plane to use the Red Hat Ceph Storage cluster.

Procedure

  1. Create a ConfigMap with additional content for the Compute service (nova) configuration. The files are placed in the /etc/nova/nova.conf.d/ directory inside the nova_compute container. This additional content directs the Compute service to use Red Hat Ceph Storage RBD.

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: ceph-nova
    data:
     <03-ceph-nova.conf>: |
      [libvirt]
      images_type=rbd
      images_rbd_pool=vms
      images_rbd_ceph_conf=/etc/ceph/ceph.conf
      images_rbd_glance_store_name=<backend_name>
      images_rbd_glance_copy_poll_interval=15
      images_rbd_glance_copy_timeout=600
      rbd_user=openstack
      rbd_secret_uuid=<$FSID>
    • Replace <03-ceph-nova.conf> with your file name. This file name must follow the naming convention of ##-<name>-nova.conf. Files are evaluated by the Compute service alphabetically. A filename that starts with 01 will be evaluated by the Compute service before a filename that starts with 02. When the same configuration option occurs in multiple files, the last one read wins.
    • Replace <backend_name> with the name of the back end specified in the glance template of the OpenStackControlPlane CR.
    • Replace <$FSID> with the actual FSID, as described in the Obtaining the Ceph FSID section. The FSID itself does not need to be considered secret.
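    The alphabetical evaluation order can be demonstrated with a small local sketch. The file names and option values here are hypothetical and stand in for drop-in files in /etc/nova/nova.conf.d/:

```shell
# Two hypothetical drop-in files that both set the same option.
mkdir -p /tmp/nova-demo
printf '[libvirt]\nimages_type=qcow2\n' > /tmp/nova-demo/01-base-nova.conf
printf '[libvirt]\nimages_type=rbd\n' > /tmp/nova-demo/03-ceph-nova.conf

# Shell globs expand in alphabetical order; the last value read wins,
# mirroring how the Compute service resolves duplicate options.
winner=""
for f in /tmp/nova-demo/*-nova.conf; do
  v=$(grep '^images_type=' "$f" | cut -d= -f2)
  [ -n "$v" ] && winner=$v
done
echo "$winner"  # Prints: rbd
```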
  2. Create a custom version of the default nova service to use the new ConfigMap, which in this case is called ceph-nova.

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneService
    metadata:
      name: nova-custom-ceph
    spec:
      caCerts: combined-ca-bundle
      edpmServiceType: nova
      dataSources:
       - configMapRef:
           name: ceph-nova
       - secretRef:
           name: nova-cell1-compute-config
       - secretRef:
           name: nova-migration-ssh-key
      playbook: osp.edpm.nova
    • The custom service is named nova-custom-ceph. It cannot be named nova because nova is an unchangeable default service. Any custom service that has the same name as a default service name will be overwritten during reconciliation.
  3. Apply the ConfigMap and custom service changes:

    $ oc create -f ceph-nova.yaml
  4. In your OpenStackDataPlaneNodeSet CR, update the list of services by adding the ceph-client service and replacing the default nova service with the new custom service, for example nova-custom-ceph. Add the extraMounts parameter to define access to the Ceph Storage secret.

    Example:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneNodeSet
    spec:
      ...
      services:
      - redhat
      - bootstrap
      - download-cache
      - configure-network
      - validate-network
      - install-os
      - configure-os
      - ssh-known-hosts
      - run-os
      - reboot-os
      - install-certs
      - ceph-client
      - ovn
      - neutron-metadata
      - libvirt
      - nova-custom-ceph
      - telemetry
    
      nodeTemplate:
        extraMounts:
        - extraVolType: Ceph
          volumes:
          - name: ceph
            secret:
              secretName: ceph-conf-files
          mounts:
          - name: ceph
            mountPath: "/etc/ceph"
            readOnly: true
    • You must add the ceph-client service before the ovn, libvirt, and nova-custom-ceph services in the list of services. The ceph-client service configures data plane nodes as clients of a Red Hat Ceph Storage server by distributing the Red Hat Ceph Storage client files.
    • This example might not list all of the services in your environment. You can run the following command to verify the list of services in your environment:

      $ oc get -n openstack crd/openstackdataplanenodesets.dataplane.openstack.org -o yaml | yq -r '.spec.versions.[].schema.openAPIV3Schema.properties.spec.properties.services.default'

      For more information, see Data plane services.

  5. Save the changes to the services list.
  6. Create an OpenStackDataPlaneDeployment CR:

    $ oc create -f <dataplanedeployment_cr_file>
    • Replace <dataplanedeployment_cr_file> with the name of your file.

      The Ansible job for the nova-custom-ceph service copies overrides from the ConfigMap to the Compute service hosts. The Ansible job also uses virsh secret-* commands so the libvirt service retrieves the cephx secret by FSID.

Verification

  • Run the following command outside of a nova_compute container to confirm the results of the Ansible job:

    $ sudo virsh secret-get-value $FSID

3.7. Configuring an external Ceph Object Gateway back end

You can configure an external Ceph Object Gateway (RGW) to act as an Object Storage service (swift) back end. You use the openstack client tool to configure the Object Storage service.

Procedure

  1. Configure the RGW to verify users and their roles in the Identity service (keystone) to authenticate with the external RGW service.
  2. Deploy and configure a RGW service to handle object storage requests.

3.7.1. Configuring RGW authentication

You must configure RGW to verify users and their roles in the Identity service (keystone) to authenticate with the external RGW service.

Prerequisites

  • You have deployed an operational OpenStack control plane.

Procedure

  1. Create the Object Storage service on the control plane:

    $ openstack service create --name swift --description "OpenStack Object Storage" object-store
  2. Create a user called swift:

    $ openstack user create --project service --password <swift_password> swift
    • Replace <swift_password> with the password to assign to the swift user.
  3. Create roles for the swift user:

    $ openstack role create swiftoperator
    $ openstack role create ResellerAdmin
  4. Assign the member and admin roles to the swift user in the service project:

    $ openstack role add --user swift --project service member
    $ openstack role add --user swift --project service admin
  5. Export the RGW endpoint IP addresses to variables and create control plane endpoints:

    $ export RGW_ENDPOINT_STORAGE=<rgw_endpoint_ip_address_storage>
    $ export RGW_ENDPOINT_EXTERNAL=<rgw_endpoint_ip_address_external>
    $ openstack endpoint create --region regionOne object-store public http://$RGW_ENDPOINT_EXTERNAL:8080/swift/v1/AUTH_%\(tenant_id\)s;
    $ openstack endpoint create --region regionOne object-store internal http://$RGW_ENDPOINT_STORAGE:8080/swift/v1/AUTH_%\(tenant_id\)s;
    • Replace <rgw_endpoint_ip_address_storage> with the IP address of the RGW endpoint on the storage network. This is how internal services will access RGW.
    • Replace <rgw_endpoint_ip_address_external> with the IP address of the RGW endpoint on the external network. This is how cloud users will write objects to RGW.

      Note

      Both endpoint IP addresses are Virtual IP addresses, owned by haproxy and keepalived, that are used to reach the RGW back ends deployed in the Red Hat Ceph Storage cluster in the procedure Configuring and deploying the RGW service.

  6. Add the swiftoperator role to the control plane admin group:

    $ openstack role add --project admin --user admin swiftoperator
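In step 5, the backslashes before the parentheses only protect them from the shell; the endpoint stored in the Identity service contains the literal AUTH_%(tenant_id)s token, which the Object Storage service later substitutes per project. A quick sketch with a hypothetical address shows the resulting URL:

```shell
# The escaped parentheses survive as literal characters in the stored URL.
RGW_ENDPOINT_EXTERNAL=203.0.113.10   # hypothetical external VIP
echo http://$RGW_ENDPOINT_EXTERNAL:8080/swift/v1/AUTH_%\(tenant_id\)s
```

This prints http://203.0.113.10:8080/swift/v1/AUTH_%(tenant_id)s, which is the value recorded in the endpoint catalog.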

3.7.2. Configuring and deploying the RGW service

Configure and deploy a RGW service to handle object storage requests.

Procedure

  1. Log in to a Red Hat Ceph Storage Controller node.
  2. Create a file called /tmp/rgw_spec.yaml and add the RGW deployment parameters:

    service_type: rgw
    service_id: rgw
    service_name: rgw.rgw
    placement:
      hosts:
        - <host_1>
        - <host_2>
        ...
        - <host_n>
    networks:
    - <storage_network>
    spec:
      rgw_frontend_port: 8082
      rgw_realm: default
      rgw_zone: default
    ---
    service_type: ingress
    service_id: rgw.default
    service_name: ingress.rgw.default
    placement:
      count: 1
    spec:
      backend_service: rgw.rgw
      frontend_port: 8080
      monitor_port: 8999
      virtual_ips_list:
      - <storage_network_vip>
      - <external_network_vip>
      virtual_interface_networks:
      - <storage_network>
    • Replace <host_1>, <host_2>, …, <host_n> with the name of the Ceph nodes where the RGW instances are deployed.
    • Replace <storage_network> with the network range used to resolve the interfaces where radosgw processes are bound.
    • Replace <storage_network_vip> with the virtual IP (VIP) used as the haproxy front end. This is the same address configured as the internal Object Storage service endpoint ($RGW_ENDPOINT_STORAGE) in the Configuring RGW authentication procedure.
    • Optional: Replace <external_network_vip> with an additional VIP on an external network to use as the haproxy front end. This address is used to connect to RGW from an external network.
  3. Save the file.
  4. Enter the cephadm shell and mount the rgw_spec.yaml file.

    $ cephadm shell -m /tmp/rgw_spec.yaml
  5. Add RGW related configuration to the cluster:

    $ ceph config set global rgw_keystone_url "https://<keystone_endpoint>"
    $ ceph config set global rgw_keystone_verify_ssl false
    $ ceph config set global rgw_keystone_api_version 3
    $ ceph config set global rgw_keystone_accepted_roles "member, Member, admin"
    $ ceph config set global rgw_keystone_accepted_admin_roles "ResellerAdmin, swiftoperator"
    $ ceph config set global rgw_keystone_admin_domain default
    $ ceph config set global rgw_keystone_admin_project service
    $ ceph config set global rgw_keystone_admin_user swift
    $ ceph config set global rgw_keystone_admin_password "<swift_password>"
    $ ceph config set global rgw_keystone_implicit_tenants true
    $ ceph config set global rgw_s3_auth_use_keystone true
    $ ceph config set global rgw_swift_versioning_enabled true
    $ ceph config set global rgw_swift_enforce_content_length true
    $ ceph config set global rgw_swift_account_in_url true
    $ ceph config set global rgw_trust_forwarded_https true
    $ ceph config set global rgw_max_attr_name_len 128
    $ ceph config set global rgw_max_attrs_num_in_req 90
    $ ceph config set global rgw_max_attr_size 1024
    • Replace <keystone_endpoint> with the Identity service internal endpoint. The data plane nodes can resolve the internal endpoint but not the public one. Do not omit the URI scheme from the URL; it must be either http:// or https://.
    • Replace <swift_password> with the password assigned to the swift user in the previous step.
  6. Deploy the RGW configuration using the Orchestrator:

    $ ceph orch apply -i /mnt/rgw_spec.yaml
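The repeated ceph config set global calls in step 5 can also be driven from a list. This optional sketch echoes a few of the commands rather than running them; drop the echo inside the cephadm shell to apply them for real:

```shell
# Convenience sketch only: print (not run) a subset of the
# "ceph config set global" commands from a key/value list.
for opt in \
  "rgw_keystone_api_version 3" \
  "rgw_keystone_admin_domain default" \
  "rgw_keystone_admin_project service" \
  "rgw_keystone_admin_user swift"
do
  echo ceph config set global $opt
done
```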

3.8. Configuring RGW with TLS

Configure RGW with TLS so that control plane services can resolve external Red Hat Ceph Storage cluster host names. This procedure configures Ceph RGW to emulate the Object Storage service (swift).

In this procedure, you configure the following:

  • A DNS zone and certificate so that a URL such as https://rgw-external.ceph.local:8080 is registered as an Identity service (keystone) endpoint, and RHOSO can securely access the HTTPS endpoint.
  • A DNSData domain, for example, ceph.local, so that pods can map host names to IP addresses for services that are not hosted on RHOCP.
  • DNS forwarding for the domain with the CoreDNS service.
  • A certificate by using the RHOSO public root certificate authority.

You must copy the certificate and key file created in RHOCP to the nodes hosting RGW so they can become part of the Ceph Orchestrator RGW specification.

Considerations
  • DNSData custom resource: Creating a DNSData CR creates a new dnsmasq pod that is able to read and resolve the DNS information in the associated DNSData CR.
  • Certificate authority: The certificate issuerRef is set to the root certificate authority (CA) of RHOSO. This CA is automatically created when the control plane is deployed. The default name of the CA is rootca-public. The RHOSO pods trust this new certificate because the root CA is used.

Procedure

  1. Create a DNSData custom resource (CR) for the external Ceph cluster.

    Example DNSData CR:

    apiVersion: network.openstack.org/v1beta1
    kind: DNSData
    metadata:
      labels:
        component: ceph-storage
        service: ceph
      name: ceph-storage
      namespace: openstack
    spec:
      dnsDataLabelSelectorValue: dnsdata
      hosts:
        - hostnames:
          - ceph-rgw-internal-vip.ceph.local
          ip: <172.18.0.2>
        - hostnames:
          - ceph-rgw-external-vip.ceph.local
          ip: <10.10.10.2>
    • Replace <172.18.0.2> with the correct IP address for your environment. In this example, the host at 172.18.0.2 hosts the Ceph RGW endpoint for access on the private storage network. Passing this address in the CR creates DNS A and PTR records, so the host can be reached by using the host name ceph-rgw-internal-vip.ceph.local.
    • Replace <10.10.10.2> with the correct IP address for your environment. In this example, the host at 10.10.10.2 hosts the Ceph RGW endpoint for access on the external network. Passing this address in the CR creates DNS A and PTR records, so the host can be reached by using the host name ceph-rgw-external-vip.ceph.local.
  2. Apply the CR to your environment:

    $ oc apply -f <ceph_dns_yaml>
    • Replace <ceph_dns_yaml> with the name of the DNSData CR file.
  3. Update the CoreDNS CR to configure DNS forwarding to the dnsmasq service for requests to the ceph.local domain. For more information about DNS forwarding, see Using DNS forwarding in the RHOCP Networking guide.
  4. List the openstack domain DNS cluster IP address:

    $ oc get svc dnsmasq-dns

    Example output:

    $ oc get svc dnsmasq-dns
    dnsmasq-dns     LoadBalancer   10.217.5.130   192.168.122.80    53:30185/UDP     160m
  5. Record the DNS cluster IP address from the command output for DNS forwarding.
  6. List the CoreDNS CR:

    $ oc -n openshift-dns describe dns.operator/default
  7. Edit the CoreDNS CR and add the servers configuration to the spec section with the DNS cluster IP address.

    Example CoreDNS CR updated with the DNS cluster IP address:

    apiVersion: operator.openshift.io/v1
    kind: DNS
    metadata:
      creationTimestamp: "2024-03-25T02:49:24Z"
      finalizers:
      - dns.operator.openshift.io/dns-controller
      generation: 3
      name: default
      resourceVersion: "164142"
      uid: 860b0e61-a48a-470e-8684-3b23118e6083
    spec:
      cache:
        negativeTTL: 0s
        positiveTTL: 0s
      logLevel: Normal
      nodePlacement: {}
      operatorLogLevel: Normal
      servers:
      - forwardPlugin:
          policy: Random
          upstreams:
          - 10.217.5.130:53
        name: ceph
        zones:
        - ceph.local
      upstreamResolvers:
        policy: Sequential
        upstreams:
        - port: 53
          type: SystemResolvConf

    where:

    servers
    Defines DNS forwarding configurations for specific domains.
    upstreams
    Specifies the DNS cluster IP address to which DNS queries are forwarded.
    10.217.5.130:53
    Is the DNS cluster IP address recorded from the oc get svc dnsmasq-dns command.
    zones
    Defines the domain for which DNS queries are forwarded to the upstream server.
  8. Create a Certificate CR with the host names from the DNSData CR.

    Example Certificate CR:

    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: cert-ceph-rgw
      namespace: openstack
    spec:
      duration: 43800h0m0s
      issuerRef: {'group': 'cert-manager.io', 'kind': 'Issuer', 'name': 'rootca-public'}
      secretName: cert-ceph-rgw
      dnsNames:
        - ceph-rgw-internal-vip.ceph.local
        - ceph-rgw-external-vip.ceph.local
  9. Apply the CR to your environment:

    $ oc apply -f <ceph_cert_yaml>
    • Replace <ceph_cert_yaml> with the name of the Certificate CR file.
  10. Extract the certificate and key data from the secret created when the Certificate CR was applied:

    $ oc get secret <ceph_cert_secret_name> -o yaml
    • Replace <ceph_cert_secret_name> with the name used in the secretName field of your Certificate CR.

      Example output:

      [stack@osp-storage-04 ~]$ oc get secret cert-ceph-rgw -o yaml
      apiVersion: v1
      data:
        ca.crt: <CA>
        tls.crt: <b64cert>
        tls.key: <b64key>
      kind: Secret
      • The <b64cert> and <b64key> values are the base64-encoded certificate and key strings that you must use in the next step.
  11. Extract and base64 decode the certificate and key information obtained in the previous step.

    Extract and decode the certificate:

    $ oc get secret <ceph_cert_secret_name> -o yaml | awk '/tls.crt/ {print $2}' | base64 -d

    Extract and decode the key:

    $ oc get secret <ceph_cert_secret_name> -o yaml | awk '/tls.key/ {print $2}' | base64 -d
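    The extraction pipeline can be exercised against a mock Secret if you want to see what it does before running it against the real one. The Secret content below is hypothetical; only the awk/base64 mechanics match the commands above:

```shell
# Build a mock Secret whose tls.crt field holds a base64-encoded
# placeholder string, then recover the plaintext with awk + base64.
cat > /tmp/mock-secret.yaml <<EOF
apiVersion: v1
data:
  tls.crt: $(printf '%s' '-----BEGIN CERTIFICATE-----' | base64)
kind: Secret
EOF

awk '/tls.crt/ {print $2}' /tmp/mock-secret.yaml | base64 -d
```

The last command prints the decoded placeholder, -----BEGIN CERTIFICATE-----, just as the real pipeline prints the decoded PEM text.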
  12. If you are using Red Hat Ceph Storage 7 or 8, concatenate the decoded certificate and key values with no spaces in between, and save them in the Ceph Object Gateway service specification.

    The rgw section of the specification file looks like the following:

      service_type: rgw
      service_id: rgw
      service_name: rgw.rgw
      placement:
        hosts:
        - host1
        - host2
      networks:
        - 172.18.0.0/24
      spec:
        rgw_frontend_port: 8082
        rgw_realm: default
        rgw_zone: default
        ssl: true
        rgw_frontend_ssl_certificate: |
          -----BEGIN CERTIFICATE-----
          MIIDkzCCAfugAwIBAgIRAKNgGd++xV9cBOrwDAeEdQUwDQYJKoZIhvcNAQELBQAw
          <redacted>
          -----END CERTIFICATE-----
          -----BEGIN RSA PRIVATE KEY-----
          MIIEpQIBAAKCAQEAyTL1XRJDcSuaBLpqasAuLsGU2LQdMxuEdw3tE5voKUNnWgjB
          <redacted>
          -----END RSA PRIVATE KEY-----

    The ingress section of the specification file looks like the following:

      service_type: ingress
      service_id: rgw.default
      service_name: ingress.rgw.default
      placement:
        count: 1
      spec:
        backend_service: rgw.rgw
        frontend_port: 8080
        monitor_port: 8999
        virtual_interface_networks:
        - 172.18.0.0/24
        virtual_ip: 172.18.0.2/24
        ssl_cert: |
          -----BEGIN CERTIFICATE-----
          MIIDkzCCAfugAwIBAgIRAKNgGd++xV9cBOrwDAeEdQUwDQYJKoZIhvcNAQELBQAw
          <redacted>
          -----END CERTIFICATE-----
          -----BEGIN RSA PRIVATE KEY-----
          MIIEpQIBAAKCAQEAyTL1XRJDcSuaBLpqasAuLsGU2LQdMxuEdw3tE5voKUNnWgjB
          <redacted>
          -----END RSA PRIVATE KEY-----

    where:

    rgw_frontend_ssl_certificate
    Contains the base64 decoded values from both <b64cert> and <b64key> in the previous step with no spaces in between.
    ssl_cert
    Contains the base64 decoded values from both <b64cert> and <b64key> in the previous step with no spaces in between.
  13. If you are using Red Hat Ceph Storage 9, save the decoded certificate and key values separately in the Ceph Object Gateway service specification.

    The rgw section of the specification file looks like the following:

      service_type: rgw
      service_id: rgw
      service_name: rgw.rgw
      placement:
        hosts:
        - host1
        - host2
      networks:
        - 172.18.0.0/24
      spec:
        rgw_frontend_port: 8082
        rgw_realm: default
        rgw_zone: default
        ssl: true
        certificate_source: inline
        ssl_cert: |
          -----BEGIN CERTIFICATE-----
          MIIDkzCCAfugAwIBAgIRAKNgGd++xV9cBOrwDAeEdQUwDQYJKoZIhvcNAQELBQAw
          <redacted>
          -----END CERTIFICATE-----
        ssl_key: |
          -----BEGIN PRIVATE KEY-----
          MIIEpQIBAAKCAQEAyTL1XRJDcSuaBLpqasAuLsGU2LQdMxuEdw3tE5voKUNnWgjB
          <redacted>
          -----END PRIVATE KEY-----

    The ingress section of the specification file looks like the following:

      service_type: ingress
      service_id: rgw.default
      service_name: ingress.rgw.default
      placement:
        count: 1
      spec:
        backend_service: rgw.rgw
        frontend_port: 8080
        monitor_port: 8999
        virtual_interface_networks:
        - 172.18.0.0/24
        virtual_ip: 172.18.0.2/24
        ssl: true
        certificate_source: inline
        ssl_cert: |
          -----BEGIN CERTIFICATE-----
          MIIDkzCCAfugAwIBAgIRAKNgGd++xV9cBOrwDAeEdQUwDQYJKoZIhvcNAQELBQAw
          <redacted>
          -----END CERTIFICATE-----
        ssl_key: |
          -----BEGIN PRIVATE KEY-----
          MIIEpQIBAAKCAQEAyTL1XRJDcSuaBLpqasAuLsGU2LQdMxuEdw3tE5voKUNnWgjB
          <redacted>
          -----END PRIVATE KEY-----

    where:

    certificate_source: inline
    Specifies that the certificate and key are embedded directly in the specification.
    ssl_cert
    Contains the base64 decoded certificate value from <b64cert> in the previous step.
    ssl_key

    Contains the base64 decoded key value from <b64key> in the previous step.

    Note

    In Red Hat Ceph Storage 9, the rgw_frontend_ssl_certificate field, which required concatenated certificate and key values, is deprecated. New deployments must use the separate ssl_cert and ssl_key fields.

  14. To deploy Ceph RGW with SSL, use the procedure "Deploying the Ceph Object Gateway using the service specification" in the Red Hat Ceph Storage Operations Guide.

  15. Connect to the openstackclient pod.
  16. Verify that DNS forwarding has been successfully configured.

    $ curl --trace - <host_name>
    • Replace <host_name> with the name of the external host previously added to the DNSData CR.

      Example output:

      sh-5.1$ curl https://ceph-rgw-external-vip.ceph.local:8080
      <?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>
      sh-5.1$
    • In this example, the openstackclient pod successfully resolved the host name, and no SSL verification errors were encountered.
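Before copying the certificate and key to the RGW nodes, you can confirm that the certificate covers both host names from the Certificate CR. The following sketch generates a throwaway self-signed certificate (hypothetical file names, same dnsNames as the example CR) purely to demonstrate the openssl SAN check; run the final command against your real certificate file instead:

```shell
# Create a disposable self-signed cert with the two example SANs.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key -out /tmp/demo.crt \
  -days 1 -subj "/CN=ceph-rgw-internal-vip.ceph.local" \
  -addext "subjectAltName=DNS:ceph-rgw-internal-vip.ceph.local,DNS:ceph-rgw-external-vip.ceph.local" 2>/dev/null

# List the Subject Alternative Names embedded in the certificate.
openssl x509 -in /tmp/demo.crt -noout -ext subjectAltName
```

Both DNS names must appear in the output; otherwise TLS verification fails for the missing host name.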

3.9. Enabling deferred deletion with the Ceph RBD Clone v2 API

Enable deferred deletion in the Ceph RBD Clone v2 API to delete volumes or images with dependencies. The volume or image is removed from the service but stored in a Ceph RBD trash area until dependencies are resolved. The volume or image is only deleted from Ceph RBD when there are no dependencies.

Note

The trash area maintained by deferred deletion does not provide restoration functionality. When volumes or images are moved to the trash area, they cannot be recovered or restored. The trash area serves only as a holding mechanism for the volume or image until all dependencies have been removed. The volume or image will be permanently deleted once no dependencies exist.

Limitations
  • When you enable Clone v2 deferred deletion in existing environments, the feature only applies to new volumes or images.

Procedure

  1. Verify the minimum client version that your Ceph Storage cluster currently requires:

    $ cephadm shell -- ceph osd get-require-min-compat-client

    Example output:

    luminous
  2. To set the cluster to use the Clone v2 API and the deferred deletion feature by default, set min-compat-client to mimic. After this change, only clients running Ceph version 13.2.x (Mimic) or later can access images with dependencies:

    $ cephadm shell -- ceph osd set-require-min-compat-client mimic
  3. Schedule a trash purge interval, in minutes, by using the m suffix:

    $ rbd trash purge schedule add --pool <pool> <30m>
    • Replace <pool> with the name of the associated storage pool, for example, volumes in the Block Storage service.
    • Replace <30m> with the interval in minutes that you want to specify for trash purge.
  4. Verify a trash purge schedule has been set for the pool:

    $ rbd trash purge schedule list --pool <pool>

3.10. Troubleshooting Red Hat Ceph Storage RBD integration

If Compute (nova), Block Storage (cinder), or Image (glance) service integration with Red Hat Ceph Storage RBD fails, use this incremental troubleshooting procedure. This example focuses on Image service integration, but you can adapt it for other services.

If you discover the cause of your issue before completing this procedure, it is not necessary to do any subsequent steps. You can exit this procedure and resolve the issue.

Procedure

  1. Determine if any parts of the control plane are not properly deployed by assessing whether the Ready condition is not True:

    $ oc get -n openstack OpenStackControlPlane \
      -o jsonpath="{range .items[0].status.conditions[?(@.status!='True')]}{.type} is {.status} due to {.message}{'\n'}{end}"
    1. If you identify a service that is not properly deployed, check the status of the service.

      The following example checks the status of the Compute service:

      $ oc get -n openstack Nova/nova \
        -o jsonpath="{range .status.conditions[?(@.status!='True')]}{.type} is {.status} due to {.message}{'\n'}{end}"
      • You can check the status of all deployed services:

        $ oc get pods -n openstack
      • You can check the logs of a specific service:

        $ oc logs -n openstack <service_pod_name>
        • Replace <service_pod_name> with the name of the service pod you want to check.
    2. If you identify an operator that is not properly deployed, check the status of the operator:

      $ oc get pods -n openstack-operators -lopenstack.org/operator-name
      • You can check the operator logs:

        $ oc logs -n openstack-operators -lopenstack.org/operator-name=<operator_name>
  2. Check the Status of the data plane deployment:

    $ oc get -n openstack OpenStackDataPlaneDeployment
    • If the Status of the data plane deployment is False, check the logs of the associated Ansible job:

      $ oc logs -n openstack job/<ansible_job_name>
      • Replace <ansible_job_name> with the name of the associated job. The job name is listed in the Message field of the oc get -n openstack OpenStackDataPlaneDeployment command output.
  3. Check the Status of the data plane node set deployment:

    $ oc get -n openstack OpenStackDataPlaneNodeSet
    • If the Status of the data plane node set deployment is False, check the logs of the associated Ansible job:

      $ oc logs -n openstack job/<ansible_job_name>
      • Replace <ansible_job_name> with the name of the associated job. It is listed in the Message field of the oc get -n openstack OpenStackDataPlaneNodeSet command output.
  4. If any pods are in the CrashLoopBackOff state, you can duplicate them for troubleshooting purposes:

    $ oc debug <pod_name>
    • Replace <pod_name> with the name of the pod to duplicate.
  5. Optional: You can route traffic to the duplicate pod during the debug process:

    $ oc debug <pod_name> --keep-labels=true
  6. Optional: You can use the oc debug command in the following object debugging activities:

    • To run /bin/sh on a container other than the first one (the command's default behavior), use the command form oc debug <pod_name> --container <container_name>. This is useful for pods like the API pod, where the first container tails a file and the second container is the one you want to debug. If you use this command form, you must first use the command oc get pods | grep <search_string> to find the container name.
    • To debug any resource that creates pods, such as Deployments, StatefulSets, and Nodes, use the command form oc debug <resource_type>/<resource_name>. For example, to debug a StatefulSet, run oc debug StatefulSet/cinder-scheduler.
  7. Connect to the pod and confirm that the ceph.client.openstack.keyring and ceph.conf files are present in the /etc/ceph directory.

    $ oc rsh <pod_name>
    • Replace <pod_name> with the name of the applicable pod.
    • If the Ceph configuration files are missing, check the extraMounts parameter in your OpenStackControlPlane CR.
  8. Confirm the pod has a network connection to the Red Hat Ceph Storage cluster by connecting to the IP and port of a Ceph Monitor from the pod. The IP and port information is located in /etc/ceph/ceph.conf.

    The following is an example of this process:

    $ oc get pods | grep glance | grep external-api-0
    glance-06f7a-default-external-api-0                               3/3     Running     0              2d3h
    $ oc debug --container glance-api glance-06f7a-default-external-api-0
    Starting pod/glance-06f7a-default-external-api-0-debug-p24v9, command was: /usr/bin/dumb-init --single-child -- /bin/bash -c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start
    Pod IP: 192.168.25.50
    If you don't see a command prompt, try pressing enter.
    sh-5.1# cat /etc/ceph/ceph.conf
    # Ansible managed
    
    [global]
    
    fsid = 63bdd226-fbe6-5f31-956e-7028e99f1ee1
    mon host = [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0],[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0]
    
    
    [client.libvirt]
    admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
    log file = /var/log/ceph/qemu-guest-$pid.log
    
    sh-5.1# python3
    Python 3.9.19 (main, Jul 18 2024, 00:00:00)
    [GCC 11.4.1 20231218 (Red Hat 11.4.1-3)] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import socket
    >>> s = socket.socket()
    >>> ip="192.168.122.100"
    >>> port=3300
    >>> s.connect((ip,port))
    >>>
  9. Optional: If you cannot connect to a Ceph Monitor, troubleshoot the network connection between the cluster and pod. The previous example uses a Python socket to connect to the IP and port of the Red Hat Ceph Storage cluster from the ceph.conf file.

    • There are two potential outcomes from the execution of the s.connect((ip,port)) function:

      • If the command executes successfully, the network connection between the pod and the cluster is functioning correctly. A successful connection produces no output at all.
      • If the command takes a long time to execute and returns an error similar to the following example, the network connection between the pod and the cluster is not functioning correctly and must be investigated further.

        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        TimeoutError: [Errno 110] Connection timed out
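If Python is not available in the container, the same monitor connectivity check can be made with a bash-only TCP probe. The IP and port below are the example values from the ceph.conf shown in step 8; substitute your own monitor address:

```shell
# Bash-only TCP probe: succeeds only if the Ceph Monitor accepts the connection.
MON_IP=192.168.122.100   # example monitor IP from ceph.conf
MON_PORT=3300
if timeout 5 bash -c "exec 3<>/dev/tcp/$MON_IP/$MON_PORT" 2>/dev/null; then
  echo connected
else
  echo unreachable
fi
```

The /dev/tcp path is a bash feature, which is why the probe is wrapped in an explicit bash -c invocation.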
  10. Examine the cephx key as shown in the following example:

    bash-5.1$ cat /etc/ceph/ceph.client.openstack.keyring
    [client.openstack]
       key = "<redacted>"
       caps mgr = allow *
       caps mon = profile rbd
       caps osd = profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images
    bash-5.1$
  11. List the contents of one of the pools granted by the caps osd parameter, as shown in the following example:

    $ /usr/bin/rbd --conf /etc/ceph/ceph.conf \
    --keyring /etc/ceph/ceph.client.openstack.keyring \
    --cluster ceph --id openstack \
    ls -l -p <pool_name> | wc -l
    • Replace <pool_name> with the name of the required Red Hat Ceph Storage pool.
    • If this command returns the number 0 or greater, the cephx key provides adequate permissions to connect to, and read information from, the Red Hat Ceph Storage cluster.
    • If this command does not complete but network connectivity to the cluster was confirmed, work with the Ceph administrator to obtain the correct cephx keyring.
    • Check if there is an MTU mismatch on the Storage network. If the network is using jumbo frames (an MTU value of 9000), all switch ports between servers using the interface must be updated to support jumbo frames. If this change is not made on the switch, problems can occur at the Ceph application layer. Verify all hosts using the network can communicate at the desired MTU with a command such as ping -M do -s 8972 <ip_address>.
  12. Send test data to the images pool on the Ceph cluster.

    The following is an example of performing this task:

    # DATA=$(date | md5sum | cut -c-12)
    # POOL=images
    # RBD="/usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack"
    # $RBD create --size 1024 $POOL/$DATA
    Tip

    It is possible to have permission to read data from the cluster but not to write data to it, even if write permission was granted in the cephx keyring. If you have write permission but cannot write data, the cluster might be overloaded and unable to accept new data.

    In one such case, the rbd create command did not complete successfully and was canceled. It was subsequently confirmed that the cluster itself did not have the resources to write new data; there was nothing incorrect in the client configuration.

3.11. Troubleshooting Red Hat Ceph Storage clients

Put Red Hat OpenStack Services on OpenShift (RHOSO) Ceph clients in debug mode to troubleshoot their operation.

Procedure

  1. Locate the Red Hat Ceph Storage configuration file mapped in the Red Hat OpenShift secret created in Creating a Red Hat Ceph Storage secret.
  2. Modify the contents of the configuration file to include troubleshooting-related configuration.

    The following is an example of troubleshooting-related configuration:

    [client.openstack]
    admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
    log file = /var/log/guest-$pid.log
    debug ms = 1
    debug rbd = 20
    log to file = true
    Note

    For more information about troubleshooting, see the Red Hat Ceph Storage Troubleshooting Guide:

  3. Update the secret with the new content.

Red Hat OpenStack Services on OpenShift (RHOSO) 18.0 supports Red Hat Ceph Storage 7, 8, and 9. For information about customizing and managing Ceph Storage, see the documentation sets for your Ceph Storage version.

You can use the Block Storage service (cinder) to access remote block storage devices through volumes for persistent storage. The service has three mandatory components, api, scheduler, and volume, and one optional component, backup.

Note

As a security hardening measure, the Block Storage services run as the cinder user.

All Block Storage services use the cinder section of the OpenStackControlPlane custom resource (CR) for their configuration:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:

Global configuration options are applied directly under the cinder and template sections. Service specific configuration options appear under their associated sections. The following example demonstrates all of the sections where Block Storage service configuration is applied and what type of configuration is applied in each section:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    <global-options>
    template:
      <global-options>
      cinderAPI:
        <cinder-api-options>
      cinderScheduler:
        <cinder-scheduler-options>
      cinderVolumes:
        <name1>: <cinder-volume-options>
        <name2>: <cinder-volume-options>
      cinderBackup:
        <cinder-backup-options>

4.1. Block Storage service (cinder) terminology

The following terms are important to understanding the Block Storage service (cinder):

  • Storage back end: A physical storage system where volume data is stored.
  • Cinder driver: The part of the Block Storage service that enables communication with the storage back end. It is configured with the volume_driver and backup_driver options.
  • Cinder back end: A logical representation of the grouping of a cinder driver with its configuration. This grouping is used to manage and address the volumes present in a specific storage back end. The name of this logical construct is configured with the volume_backend_name option.
  • Storage pool: A logical grouping of volumes in a given storage back end.
  • Cinder pool: A representation in the Block Storage service of a storage pool.
  • Volume host: The way the Block Storage service addresses volumes. There are two different representations: short (<hostname>@<backend-name>) and full (<hostname>@<backend-name>#<pool-name>).
  • Quota: Limits defined per project to constrain the use of Block Storage specific resources.
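For example, the two volume host representations might look like the following, assuming an illustrative volume service pod named cinder-volume-nfs-0, a cinder back end named nfs, and a single pool named nfs_pool (all names hypothetical):

```
<hostname>@<backend-name>              cinder-volume-nfs-0@nfs
<hostname>@<backend-name>#<pool-name>  cinder-volume-nfs-0@nfs#nfs_pool
```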

4.2. Block Storage service enhancements

The following functionality enhancements have been integrated into the Block Storage service:

  • Ease of deployment for multiple volume back ends.
  • Back end deployment does not affect running volume back ends.
  • Back end addition and removal does not affect running back ends.
  • Back end configuration changes do not affect other running back ends.
  • Each back end can use its own vendor-specific container image. It is no longer necessary to build a custom image that holds dependencies from two drivers.
  • Pacemaker has been replaced by Red Hat OpenShift Container Platform (RHOCP) functionality.
  • Improved methods for troubleshooting the service code.

4.3. Configuring transport protocols

You can use iSCSI, Fibre Channel, NVMe-TCP, NFS, and Red Hat Ceph Storage RBD transport protocols with the Block Storage service (cinder). Control plane services that use volumes might require iscsid and multipathd modules on RHOCP cluster nodes, configured by using a MachineConfig CR.

Important

Using a MachineConfig CR to change the configuration of a node causes the node to reboot. Consult with your RHOCP administrator before applying a MachineConfig CR to ensure the integrity of RHOCP workloads.

For more information about MachineConfig, see Understanding the Machine Config operator. The procedures in this section provide a general configuration of these protocols and are not vendor-specific. If your deployment requires multipathing, see Configuring multipathing.

Note

The Block Storage volume and backup services are automatically started on data plane nodes.

4.3.1. Configuring the iSCSI protocol

Connecting to iSCSI volumes from the RHOCP nodes requires the iSCSI initiator service. A single instance of the iscsid service module must serve normal RHOCP usage, OpenShift CSI plugin usage, and the RHOSO services. Apply a MachineConfig to the applicable nodes to configure them to use the iSCSI protocol.

Note

If the iscsid service module is already running, this procedure is not required.

Procedure

  1. Create a MachineConfig CR to configure the nodes for the iscsid module.

    The following example starts the iscsid service with a default configuration in all RHOCP worker nodes:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
        service: cinder
      name: 99-worker-cinder-enable-iscsid
    spec:
      config:
        ignition:
          version: 3.2.0
        systemd:
          units:
          - enabled: true
            name: iscsid.service
  2. Save the file.
  3. Apply the MachineConfig CR file.

    $ oc apply -f <machine_config_file> -n openstack
    • Replace <machine_config_file> with the name of your MachineConfig CR file.
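After applying the CR, you can optionally confirm that the MachineConfig exists and, after the nodes have rebooted, that the service is active on a node. These commands are a sketch that assumes the CR name used in the example above:

```
$ oc get machineconfig 99-worker-cinder-enable-iscsid
$ oc debug node/<node_name> -- chroot /host systemctl is-active iscsid
```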

4.3.2. Configuring the Fibre Channel protocol

There is no additional node configuration required to use the Fibre Channel protocol to connect to volumes. However, all nodes that use Fibre Channel must have a Host Bus Adapter (HBA) card. Unless all worker nodes in your RHOCP deployment have an HBA card, you must use a nodeSelector in your control plane configuration to select the nodes that run the volume and backup services, as well as any Image service instances that use the Block Storage service as their storage back end.
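A minimal sketch of such a nodeSelector follows. It assumes a hypothetical node label, fc_card: "true", that you have applied to the worker nodes with HBA cards; the back end name fc is also illustrative:

```
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      cinderVolumes:
        fc:
          nodeSelector:        # schedule this volume service only to nodes with HBA cards
            fc_card: "true"    # hypothetical node label
```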

4.3.3. Configuring the NVMe-TCP protocol

Connecting to NVMe-TCP volumes from the RHOCP nodes requires the nvme kernel modules.

Procedure

  1. Create a MachineConfig CR to configure the nodes for the nvme kernel modules.

    The following example starts the nvme kernel modules with a default configuration in all RHOCP worker nodes:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
        service: cinder
      name: 99-worker-cinder-load-nvme-fabrics
    spec:
      config:
        ignition:
          version: 3.2.0
        storage:
          files:
            - path: /etc/modules-load.d/nvme_fabrics.conf
              overwrite: false
              mode: 420
              user:
                name: root
              group:
                name: root
              contents:
                source: data:,nvme-fabrics%0Anvme-tcp
  2. Save the file.
  3. Apply the MachineConfig CR file.

    $ oc apply -f <machine_config_file> -n openstack
    • Replace <machine_config_file> with the name of your MachineConfig CR file.
  4. After the nodes have rebooted, verify that the nvme-fabrics module is loaded and that the host supports ANA:

    cat /sys/module/nvme_core/parameters/multipath
    Note

    Even though ANA does not use the Linux Multipathing Device Mapper, multipathd must be running for the Compute nodes to be able to use multipathing when connecting volumes to instances.

4.4. LVM device management

When you use Logical Volume Management (LVM) with Block Storage service (cinder) back ends, Red Hat OpenStack Services on OpenShift (RHOSO) automatically enables device filtering through the RHEL system.devices file. LVM device filtering prevents Block Storage service volumes from being scanned by LVM on data plane nodes.

For more information about the RHEL system.devices file, see The LVM devices file in the RHEL documentation for Configuring and managing logical volumes.

4.5. Configuring multipathing

Configure multipathing in Red Hat OpenStack Services on OpenShift (RHOSO) to create redundancy or improve performance. Control plane nodes require a MachineConfig CR. Data plane nodes have a default multipath configuration, but you must add vendor-specific parameters for production environments.

4.5.1. Configuring multipathing on control plane nodes

You can configure multipathing on Red Hat OpenShift Container Platform (RHOCP) control plane nodes by creating a MachineConfig custom resource (CR) that creates a multipath configuration file and starts the service.

In Red Hat OpenStack Services on OpenShift (RHOSO) deployments, the use_multipath_for_image_xfer configuration option is enabled by default, which affects the control plane only and not the data plane. This setting enables the Block Storage service (cinder) to use multipath, when it is available, for attaching volumes when creating volumes from images and during Block Storage backup and restore procedures.

The example in this procedure implements a minimal multipath configuration file, which configures the default multipath parameters. However, your production deployment might also require vendor-specific multipath parameters. In this case, you must consult with the appropriate systems administrators to obtain the values required for your deployment.

If you have a complex multipath configuration, you can use the Butane command-line utility to create a multipath configuration file for you. For more information, see Creating machine configs with Butane in RHOCP Installation configuration.
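For example, a Butane sketch that produces a MachineConfig similar to the one in this procedure might look like the following. The variant version and file mode are assumptions that you must adapt to your RHOCP release:

```
# multipath.bu - convert with: butane multipath.bu -o 99-worker-cinder-enable-multipathd.yaml
variant: openshift
version: 4.14.0
metadata:
  name: 99-worker-cinder-enable-multipathd
  labels:
    machineconfiguration.openshift.io/role: worker
storage:
  files:
    - path: /etc/multipath.conf
      mode: 0600
      contents:
        inline: |
          defaults {
            user_friendly_names no
            recheck_wwid yes
            skip_kpartx yes
            find_multipaths yes
          }
          blacklist {
          }
systemd:
  units:
    - name: multipathd.service
      enabled: true
```

Butane encodes the inline file contents for you, which avoids hand-writing the URL-encoded data: source shown in the MachineConfig example.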

Procedure

  1. Create a MachineConfig CR to create a multipath configuration file and to start the multipathd module on all control plane nodes.

    The following example creates a MachineConfig CR named 99-worker-cinder-enable-multipathd that implements a multipath configuration file named multipath.conf:

    Important

    When adding vendor-specific multipath parameters to the contents: of this file, ensure that you do not change the specified values of the following default multipath parameters: user_friendly_names, recheck_wwid, skip_kpartx, and find_multipaths.

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
        service: cinder
      name: 99-worker-cinder-enable-multipathd
    spec:
      config:
        ignition:
          version: 3.2.0
        storage:
          files:
            - path: /etc/multipath.conf
              overwrite: false
              mode: 384
              user:
                name: root
              group:
                name: root
              contents:
                source: data:,defaults%20%7B%0A%20%20user_friendly_names%20no%0A%20%20recheck_wwid%20yes%0A%20%20skip_kpartx%20yes%0A%20%20find_multipaths%20yes%0A%7D%0A%0Ablacklist%20%7B%0A%7D
        systemd:
          units:
          - enabled: true
            name: multipathd.service
    • The contents: data represents the following literal multipath.conf file contents:

      defaults {
        user_friendly_names no
        recheck_wwid yes
        skip_kpartx yes
        find_multipaths yes
      }
      
      blacklist {
      }
  2. Save the MachineConfig CR file, for example, 99-worker-cinder-enable-multipathd.yaml.
  3. Apply the MachineConfig CR file.

    $ oc apply -f 99-worker-cinder-enable-multipathd.yaml -n openstack

4.5.2. Configuring multipathing on data plane nodes

Default multipath parameters are configured on all data plane nodes. You must add and configure any vendor-specific multipath parameters in a custom multipath configuration file. Consult with the appropriate systems administrators to obtain the values required for your deployment.

Important

Ensure that you do not add the following default multipath parameters and overwrite their values: user_friendly_names, recheck_wwid, skip_kpartx, and find_multipaths.

You must modify the relevant OpenStackDataPlaneNodeSet custom resource (CR) to update the data plane node configuration to include your vendor-specific multipath parameters. You then create an OpenStackDataPlaneDeployment CR that deploys the modified OpenStackDataPlaneNodeSet CR to the data plane.

Prerequisites

  • You have created your custom multipath configuration file that contains only the vendor-specific multipath parameters and your deployment-specific values.
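For example, a custom configuration file that contains only vendor-specific parameters might look like the following sketch. The vendor and product strings and the parameter values are placeholders; replace them with the values that your storage vendor documents:

```
# custom_multipath.conf - hypothetical vendor-specific settings only
devices {
  device {
    vendor  "ExampleVendor"          # placeholder vendor string
    product "ExampleArray"           # placeholder product string
    path_grouping_policy group_by_prio
    path_checker tur
    no_path_retry 30
  }
}
```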

Procedure

  1. Create a secret to store your custom multipath configuration file:

    $ oc create secret generic <secret_name> \
    --from-file=<configuration_file_name>
    • Replace <secret_name> with the name that you want to assign to the secret, for example, custom-multipath-file.
    • Replace <configuration_file_name> with the name of the custom multipath configuration file that you created, for example, custom_multipath.conf.
  2. Open the OpenStackDataPlaneNodeSet CR file for the node set that you want to update, for example, openstack_data_plane.yaml.
  3. Add an extraMounts attribute to the OpenStackDataPlaneNodeSet CR file to include your vendor-specific multipath parameters:

    spec:
        ...
        nodeTemplate:
            ...
            extraMounts:
            - extraVolType: <optional_volume_type_description>
              volumes:
              - name: <mounted_volume_name>
                secret:
                  secretName: <secret_name>
              mounts:
              - name: <mounted_volume_name>
                mountPath: "/runner/multipath"
                readOnly: true
    • Optional: Replace <optional_volume_type_description> with a description of the type of the mounted volume, for example, multipath-config-file.
    • Replace <mounted_volume_name> with the name of the mounted volume, for example, custom-multipath.

      Note

      Do not change the value of the mountPath: parameter from "/runner/multipath".

  4. Save the OpenStackDataPlaneNodeSet CR file.
  5. Apply the updated OpenStackDataPlaneNodeSet CR configuration:

    $ oc apply -f openstack_data_plane.yaml
  6. Verify that the data plane resource has been updated by confirming that the status is SetupReady:

    $ oc wait openstackdataplanenodeset openstack-data-plane --for condition=SetupReady --timeout=10m

    When the status is SetupReady, the command returns a condition met message, otherwise it returns a timeout error.

    For information about the data plane conditions and states, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.

  7. Create a file on your workstation to define the OpenStackDataPlaneDeployment CR:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: <node_set_deployment_name>
    • Replace <node_set_deployment_name> with the name of the OpenStackDataPlaneDeployment CR. The name must be unique, must consist of lower case alphanumeric characters, - (hyphen) or . (period), and must start and end with an alphanumeric character, for example, openstack-data-plane-deploy.
  8. Add the OpenStackDataPlaneNodeSet CR that you modified:

    spec:
      nodeSets:
        - <nodeSet_name>
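    Putting steps 7 and 8 together, a complete CR might look like the following sketch, which assumes the node set from the earlier steps is named openstack-data-plane:

```
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: openstack-data-plane-deploy   # must be unique for each deployment
spec:
  nodeSets:
    - openstack-data-plane            # the modified OpenStackDataPlaneNodeSet
```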
  9. Save the OpenStackDataPlaneDeployment CR deployment file, for example, openstack_data_plane_deploy.yaml.
  10. Deploy the modified OpenStackDataPlaneNodeSet CR:

    $ oc create -f openstack_data_plane_deploy.yaml -n openstack

    You can view the Ansible logs while the deployment executes:

    $ oc get pod -l app=openstackansibleee -w
    $ oc logs -l app=openstackansibleee -f --max-log-requests 10

    If the oc logs command returns an error similar to the following error, increase the --max-log-requests value:

    error: you are attempting to follow 19 log streams, but maximum allowed concurrency is 10, use --max-log-requests to increase the limit

Verification

  • Verify that the modified OpenStackDataPlaneNodeSet CR is deployed:

    $ oc get openstackdataplanedeployment -n openstack
    NAME             	STATUS   MESSAGE
    openstack-data-plane   True     Setup Complete
    
    
    $ oc get openstackdataplanenodeset -n openstack
    NAME             	STATUS   MESSAGE
    openstack-data-plane   True     NodeSet Ready

    For information about the meaning of the returned status, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.

    If the status indicates that the data plane has not been deployed, then troubleshoot the deployment. For information about troubleshooting the deployment, see Troubleshooting the data plane creation and deployment in Deploying Red Hat OpenStack Services on OpenShift.

4.6. Configuring initial defaults

The Block Storage service (cinder) has a set of initial defaults that you should configure when the service is first enabled. Define them in the main customServiceConfig section. After deployment, you modify these defaults by using the openstack client.
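For example, after deployment you can inspect and adjust per-project quotas with the openstack client. The project name is a placeholder and the values are illustrative:

```
$ openstack quota show <project>
$ openstack quota set --volumes 25 --snapshots 20 <project>
```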

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml.
  2. Edit the CR file and add the Block Storage service global configuration.

    The following example demonstrates a Block Storage service initial configuration:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      cinder:
        enabled: true
        template:
          customServiceConfig: |
            [DEFAULT]
            quota_volumes = 20
            quota_snapshots = 15

    For a complete list of all initial default parameters, see Initial default parameters.

  3. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  4. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

4.6.1. Initial default parameters

These initial default parameters should be configured when the service is first enabled.

Expand
ParameterDescription

default_volume_type

Provides the default volume type for all users. If you specify a non-default value, that volume type is not created automatically. The default value is __DEFAULT__.

no_snapshot_gb_quota

Determines whether the size of snapshots counts against the gigabyte quota in addition to the size of volumes. The default is false, which means that the size of snapshots is included in the gigabyte quota.

per_volume_size_limit

Provides the maximum size of each volume in gigabytes. The default is -1 (unlimited).

quota_volumes

Provides the number of volumes allowed for each project. The default value is 10.

quota_snapshots

Provides the number of snapshots allowed for each project. The default value is 10.

quota_groups

Provides the number of volume groups allowed for each project, which includes the consistency groups. The default value is 10.

quota_gigabytes

Provides the total amount of storage, in gigabytes, allowed for volumes in each project. Depending on the configuration of the no_snapshot_gb_quota initial parameter, this limit might also include the size of snapshots. The default value is 1000, and by default the size of snapshots also counts against this limit.

quota_backups

Provides the number of backups allowed for each project. The default value is 10.

quota_backup_gigabytes

Provides the total amount of storage for each project, in gigabytes, allowed for backups. The default is 1000.
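For example, a customServiceConfig that combines several of these parameters might look like the following sketch; the values are illustrative:

```
[DEFAULT]
default_volume_type = standard     # create this volume type after deployment
per_volume_size_limit = 500
quota_volumes = 20
quota_snapshots = 15
quota_gigabytes = 2000
```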

4.7. Configuring the API service

The Block Storage service (cinder) provides an API interface for all external interaction with the service for both users and other OpenStack services. Red Hat OpenStack Services on OpenShift (RHOSO) supports Block Storage REST API version 3.

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml.
  2. Edit the CR file and add the configuration for the internal Red Hat OpenShift Container Platform (RHOCP) load balancer.

    The following example demonstrates a load balancer configuration:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      cinder:
        template:
          cinderAPI:
            override:
              service:
                internal:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi
                      metallb.universe.tf/allow-shared-ip: internalapi
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                  spec:
                    type: LoadBalancer
  3. Edit the CR file and add the configuration for the number of API service replicas. Run the cinderAPI service in an Active-Active configuration with three replicas.

    The following example demonstrates configuring the cinderAPI service to use three replicas:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      cinder:
        template:
          cinderAPI:
            replicas: 3
  4. Edit the CR file and configure cinderAPI options. These options are configured in the customServiceConfig section under the cinderAPI section.

    The following example demonstrates configuring cinderAPI service options and enabling debugging on all services:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      cinder:
        template:
          customServiceConfig: |
            [DEFAULT]
            debug = true
          cinderAPI:
            customServiceConfig: |
              [DEFAULT]
              osapi_volume_workers = 3

    For a listing of commonly used cinderAPI service option parameters, see API service option parameters.

  5. Save the file.
  6. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  7. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

4.7.1. Block Storage API service option parameters

API service option parameters are provided for the configuration of the cinderAPI portions of the Block Storage service.

Expand
ParameterDescription

api_rate_limit

Provides a value to determine if the API rate limit is enabled. The default is false.

debug

Provides a value to determine whether the logging level is set to DEBUG instead of the default of INFO. The default is false. The logging level can be dynamically set without restarting.

osapi_max_limit

Provides a value for the maximum number of items a collection resource returns in a single response. The default is 1000.

osapi_volume_workers

Provides a value for the number of workers assigned to the API component. The default is the number of CPUs available.

4.8. Configuring the scheduler service

The Block Storage service (cinder) has a scheduler service (cinderScheduler) that is responsible for decisions such as selecting which back end receives new volumes, determining whether there is enough free space to perform an operation, and deciding where to move an existing volume for specific operations.

Use only a single instance of cinderScheduler for scheduling consistency and ease of troubleshooting. While cinderScheduler can be run with multiple instances, the service default replicas: 1 is the best practice.

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml.
  2. Edit the CR file and add the configuration for the service down detection timeouts.

    The following example demonstrates this configuration:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      cinder:
        template:
          customServiceConfig: |
            [DEFAULT]
            report_interval = 20
            service_down_time = 120
    • report_interval: The number of seconds between Block Storage service components reporting an operational state in the form of a heartbeat through the database. The default is 10.
    • service_down_time: The maximum number of seconds since the last heartbeat from the component for it to be considered non-operational. The default is 60.

      Note

      Configure these values at the cinder level of the CR instead of the cinderScheduler so that these values are applied to all components consistently.

  3. Edit the CR file and add the configuration for the statistics reporting interval.

    The following example demonstrates configuring these values at the cinder level to apply them globally to all services:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      cinder:
        template:
          customServiceConfig: |
            [DEFAULT]
            backend_stats_polling_interval = 120
            backup_driver_stats_polling_interval = 120
    • backend_stats_polling_interval: The number of seconds between requests from the volume service for usage statistics from the back end. The default is 60.
    • backup_driver_stats_polling_interval: The number of seconds between requests from the backup service for usage statistics from the backup driver. The default is 60.

      The following example demonstrates configuring these values at the cinderVolume and cinderBackup level to customize settings at the service level.

      apiVersion: core.openstack.org/v1beta1
      kind: OpenStackControlPlane
      metadata:
        name: openstack
      spec:
        cinder:
          template:
            cinderBackup:
              customServiceConfig: |
                [DEFAULT]
                backup_driver_stats_polling_interval = 120
                < rest of the config >
            cinderVolumes:
              nfs:
                customServiceConfig: |
                  [DEFAULT]
                  backend_stats_polling_interval = 120
      Note

      The generation of usage statistics can be resource intensive for some back ends. Setting these values too low can affect back end performance. You might need to tune these settings to better suit individual back ends.

  4. Perform any additional configuration necessary to customize the cinderScheduler service.

    For more configuration options for the customization of the cinderScheduler service, see Scheduler service parameters.

  5. Save the file.
  6. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  7. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

4.8.1. Scheduler service parameters

Scheduler service parameters are provided for the configuration of the cinderScheduler portions of the Block Storage service.

Expand
ParameterDescription

debug

Provides a setting for the logging level. When this parameter is true the logging level is set to DEBUG instead of INFO. The default is false.

scheduler_max_attempts

Provides a setting for the maximum number of attempts to schedule a volume. The default is 3.

scheduler_default_filters

Provides a setting for filter class names to use for filtering hosts when not specified in the request. This is a comma separated list. The default is AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter.

scheduler_default_weighers

Provides a setting for weigher class names to use for weighing hosts. This is a comma-separated list. The default is CapacityWeigher.

scheduler_weight_handler

Provides a setting for a handler to use for selecting the host or pool after weighing. The value cinder.scheduler.weights.OrderedHostWeightHandler selects the first host from the list of hosts that passed filtering, and the value cinder.scheduler.weights.stochastic.StochasticHostWeightHandler gives every pool a chance to be chosen, with a probability proportional to each pool weight. The default is cinder.scheduler.weights.OrderedHostWeightHandler.

The following is an explanation of the filter class names from the parameter table:

  • AvailabilityZoneFilter

    • Filters out all back ends that do not meet the availability zone requirements of the requested volume.
  • CapacityFilter

    • Selects only back ends with enough space to accommodate the volume.
  • CapabilitiesFilter

    • Selects only back ends that can support any specified settings in the volume.
  • InstanceLocality

    • Configures clusters to use volumes local to the same node.
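For example, to append the InstanceLocality filter to the default filter list, you can set scheduler_default_filters in the customServiceConfig of the cinderScheduler section. This sketch uses the filter names listed above; verify the exact filter names for your release:

```
[DEFAULT]
scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter,InstanceLocality
```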

4.9. Configuring the volume service

The Block Storage service (cinder) has a volume service (cinderVolumes section) that is responsible for managing operations related to volumes, snapshots, and groups. These operations include creating, deleting, and cloning volumes and creating snapshots.

This service requires access to the storage back end (storage) and storage management (storageMgmt) networks in the networkAttachments of the OpenStackControlPlane CR. Some operations, such as creating an empty volume or a snapshot, do not require any data movement between the volume service and the storage back end. Other operations, such as migrating data from one storage back end to another, require the data to pass through the volume service, and therefore require this network access.

Volume service configuration is performed in the cinderVolumes section with parameters set in the customServiceConfig, customServiceConfigSecrets, networkAttachments, replicas, and the nodeSelector sections.

The volume service cannot have multiple replicas.

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml.
  2. Edit the CR file and add the configuration for your back end.

    The following example demonstrates the service configuration for a Red Hat Ceph Storage back end:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      cinder:
        template:
          customServiceConfig: |
            [DEFAULT]
            debug = true
          cinderVolumes:
            ceph:
              networkAttachments:
              - storage
              customServiceConfig: |
                [ceph]
                volume_backend_name = ceph
                volume_driver = cinder.volume.drivers.rbd.RBDDriver
    • ceph: The configuration area for the individual back end. Each unique back end requires an individual configuration area. No back end is deployed by default. The Block Storage service volume service will not run unless at least one back end is configured during deployment. For more information about configuring back ends, see Block Storage service (cinder) back ends and Multiple Block Storage service (cinder) back ends.
    • networkAttachments: The configuration area for the back end network connections.
    • volume_backend_name: The name assigned to this back end.
    • volume_driver: The driver used to connect to this back end.

      For a list of commonly used volume service parameters, see Volume service parameters.

  3. Save the file.
  4. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  5. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

4.9.1. Volume service parameters

Volume service parameters are provided for the configuration of the cinderVolumes portions of the Block Storage service.

Expand
ParameterDescription

backend_availability_zone

Provides a setting for the availability zone of the back end. This is set in the [DEFAULT] section. The default value is storage_availability_zone.

volume_backend_name

Provides a setting for the back end name for a given driver implementation. There is no default value.

volume_driver

Provides a setting for the driver to use for volume creation. It is provided in the form of Python namespace for the specific class. There is no default value.

enabled_backends

Provides a setting for a list of back end names to use. Each back end name should be backed by a unique [CONFIG] group with its options. This is a comma-separated list of values. The default value is the name of the section that contains a volume_backend_name option.

image_conversion_dir

Provides a setting for a directory used for temporary storage during image conversion. The default value is /var/lib/cinder/conversion.

backend_stats_polling_interval

Provides a setting for the number of seconds between the volume requests for usage statistics from the storage back end. The default is 60.

4.9.2. Block Storage service (cinder) back ends

Each Block Storage service back end should have an individual configuration section in the cinderVolumes section. This ensures each back end runs in a dedicated pod. This approach has the following benefits:

  • Increased isolation.
  • Adding and removing back ends is fast and does not affect other running back ends.
  • Configuration changes do not affect other running back ends.
  • Automatically spreads the Volume pods into different nodes.

Each Block Storage service back end uses a storage transport protocol to access data in the volumes. Each storage transport protocol has individual requirements as described in Configuring transport protocols. Storage protocol information should also be provided in individual vendor installation guides.

Note

Configure each back end with an independent pod. In director-based releases of RHOSP, all back ends ran in a single cinder-volume container, but this is no longer the recommended practice.

No back end is deployed by default. The Block Storage service volume service will not run unless at least one back end is configured during deployment.

All storage vendors provide an installation guide with best practices, deployment configuration, and configuration options for vendor drivers. These installation guides provide the specific configuration information required to properly configure the volume service for deployment. Installation guides are available in the Red Hat Ecosystem Catalog.

For more information about integrating and certifying vendor drivers, see Integrating partner content.

For information about Red Hat Ceph Storage back end configuration, see Integrating Red Hat Ceph Storage and Deploying a hyperconverged infrastructure environment.

For information about configuring a generic (non-vendor specific) NFS back end, see Configuring a generic NFS back end.

Note

Use a certified storage back end and driver. If you use NFS storage that comes from the generic NFS back end, its capabilities are limited compared to a certified storage back end and driver.

Multiple Block Storage service back ends are deployed by adding multiple, independent entries in the cinderVolumes configuration section. Each back end runs in an independent pod.

The following configuration example deploys two independent back ends: one for iSCSI and another for NFS:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      cinderVolumes:
        nfs:
          networkAttachments:
          - storage
          customServiceConfigSecrets:
          - cinder-volume-nfs-secrets
          customServiceConfig: |
            [nfs]
            volume_backend_name=nfs
        iSCSI:
          networkAttachments:
          - storage
          - storageMgmt
          customServiceConfig: |
            [iscsi]
            volume_backend_name=iscsi

4.10. Configuring back end availability zones

Configure back end availability zones (AZs) for Volume service back ends and the Backup service to group cloud infrastructure services for users. AZs are mapped to failure domains and Compute resources for high availability, fault tolerance, and resource scheduling.

For example, you could create an AZ of Compute nodes with specific hardware that users can select when they create an instance that requires that hardware.

Note

Post-deployment, you can restrict volume types to specific AZs by using the RESKEY:availability_zones volume type extra specification.

Users can create a volume directly in an AZ as long as the volume type does not restrict the AZ.

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml.
  2. Edit the CR file and add the AZ configuration.

    The following example demonstrates an AZ configuration:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      cinder:
        template:
          cinderVolumes:
            nfs:
              networkAttachments:
              - storage
              - storageMgmt
              customServiceConfigSecrets:
              - cinder-volume-nfs-secrets
              customServiceConfig: |
                    [nfs]
                    volume_backend_name=nfs
                    backend_availability_zone=zone1
            iSCSI:
              networkAttachments:
              - storage
              - storageMgmt
              customServiceConfig: |
                    [iscsi]
                    volume_backend_name=iscsi
                    backend_availability_zone=zone2
    • backend_availability_zone: The availability zone associated with the back end.
  3. Save the file.
  4. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  5. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

4.11. Configuring a generic NFS back end

The Block Storage service (cinder) can be configured with a generic NFS back end to provide an alternative storage solution for volumes and backups.

Limitations
  • Use a certified storage back end and driver. If you use NFS storage that comes from the generic NFS back end, its capabilities are limited compared to a certified storage back end and driver. For example, the generic NFS back end does not support features such as volume encryption and volume multi-attach. For information about supported drivers, see the Red Hat Ecosystem Catalog.
  • For Block Storage (cinder) and Compute (nova) services, you must use NFS version 4.0 or later. RHOSO does not support earlier versions of NFS.
  • RHOSO does not support the NetApp NAS secure feature. It interferes with normal volume operations. This feature must be disabled in the customServiceConfig in the specific back-end configuration with the following parameters:

    nas_secure_file_operations=false
    nas_secure_file_permissions=false
  • Do not configure the nfs_mount_options option. The default value is the best NFS option for RHOSO environments. If you experience issues when you configure multiple services to share the same NFS server, contact Red Hat Support.

Procedure

  1. Create a Secret CR to store the volume connection information.

    The following is an example of a Secret CR:

    apiVersion: v1
    kind: Secret
    metadata:
      name: cinder-volume-nfs-secrets
    type: Opaque
    stringData:
      cinder-volume-nfs-secrets: |
        [nfs]
        nas_host=192.168.130.1
        nas_share_path=/var/nfs/cinder

    where:

    name
    Is the name used when including it in the cinderVolumes back end configuration.
  2. Save the file.
  3. Update the control plane:

    $ oc apply -f <secret_file_name> -n openstack
    • Replace <secret_file_name> with the name of the file that contains your Secret CR.
  4. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml.
  5. Edit the CR file and add the configuration for the generic NFS back end.

    The following example demonstrates this configuration:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      cinder:
        template:
          cinderVolumes:
            nfs:
              networkAttachments:
              - storage
              customServiceConfig: |
                [nfs]
                volume_backend_name=nfs
                volume_driver=cinder.volume.drivers.nfs.NfsDriver
                nfs_snapshot_support=true
                nas_secure_file_operations=false
                nas_secure_file_permissions=false
              customServiceConfigSecrets:
              - cinder-volume-nfs-secrets
    • The storageMgmt network is not listed because generic NFS does not have a management interface.
    • cinder-volume-nfs-secrets: The name from the Secret CR.
    • If you are configuring multiple generic NFS back ends, ensure that each back end is in an individual configuration section so that one pod is dedicated to each back end.
  6. Save the file.
  7. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  8. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

4.12. Configuring an image conversion directory

When the Block Storage service (cinder) performs image format conversion and space is limited, the conversion of large Image service (glance) images can completely fill the root disk space of the node. You can use an external NFS share for the conversion to prevent the space on the node from being completely filled.

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml.
  2. Edit the CR file and add the configuration for the directory for converting large Image service images.

    The following example demonstrates how to configure this conversion directory:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
      extraMounts:
      - extraVol:
        - propagation:
          - CinderVolume
          volumes:
          - name: cinder-conversion
            nfs:
              path: <nfs_share_path>
              server: <nfs_server>
          mounts:
          - name: cinder-conversion
            mountPath: /var/lib/cinder/conversion
    ...
    • Replace <nfs_share_path> with the path to the conversion directory.

      Note

      The Block Storage volume service (cinder-volume) runs as the cinder user. The cinder user requires write permission for <nfs_share_path>. You can configure this by running the following command on the NFS server: $ chown 42407:42407 <nfs_share_path>.

    • Replace <nfs_server> with the IP address of the NFS server that hosts the conversion directory.
    Note

    This example demonstrates how to create a common conversion directory that all the volume service pods use.

    You can also define a conversion directory for each volume service pod:

    • Define each conversion directory by using an extraMounts section, as demonstrated above, in the cinder section of the OpenStackControlPlane CR file.
    • Set the propagation value to the name of the specific Volume section instead of CinderVolume.
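For example, a per-pod conversion directory for a back end defined under the cinderVolumes key nfs might look like the following sketch. The exact propagation token matching the Volume section name is an assumption based on the guidance above:

```yaml
spec:
  extraMounts:
  - extraVol:
    - propagation:
      - nfs                        # name of the specific Volume section
      volumes:
      - name: cinder-conversion-nfs
        nfs:
          path: <nfs_share_path>
          server: <nfs_server>
      mounts:
      - name: cinder-conversion-nfs
        mountPath: /var/lib/cinder/conversion
```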
  3. Save the file.
  4. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  5. Wait until Red Hat OpenShift Container Platform (RHOCP) creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

4.13. Configuring automatic database cleanup

The Block Storage service (cinder) performs a soft-deletion of database entries. This means that database entries are marked for deletion but are not actually deleted from the database. This allows for the auditing of deleted resources.

These database rows marked for deletion will grow endlessly and consume resources if not purged. RHOSO automatically purges database entries marked for deletion after a set number of days. By default, records marked for deletion after 30 days are purged. You can configure a different record age and schedule for purge jobs.
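The soft-deletion and purge behavior can be sketched as follows. This is a minimal illustration, not cinder's actual code: a soft-deleted row keeps its data but sets a deleted flag and timestamp, and the purge job removes rows whose deletion timestamp is older than the configured record age.

```python
from datetime import datetime, timedelta

# Illustrative rows: two soft-deleted, one live.
rows = [
    {"id": 1, "deleted": True, "deleted_at": datetime(2024, 1, 1)},
    {"id": 2, "deleted": True, "deleted_at": datetime(2024, 3, 1)},
    {"id": 3, "deleted": False, "deleted_at": None},
]

def purge(rows, age_days, now):
    """Keep rows that are live or not yet old enough to purge."""
    cutoff = now - timedelta(days=age_days)
    return [
        r for r in rows
        if not (r["deleted"] and r["deleted_at"] < cutoff)
    ]

remaining = purge(rows, age_days=30, now=datetime(2024, 3, 10))
print([r["id"] for r in remaining])  # row 1 was marked over 30 days ago, so it is purged
```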

Procedure

  1. Open your openstack_control_plane.yaml file to edit the OpenStackControlPlane CR.
  2. Add the dbPurge parameter to the cinder template to configure database cleanup depending on the service you want to configure.

    The following is an example of using the dbPurge parameter to configure the Block Storage service:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      cinder:
        template:
          dbPurge:
            age: 20
            schedule: 1 0 * * 0
    • age: The number of days a record has been marked for deletion before it is purged. The default value is 30. The minimum value is 1.
    • schedule: When to run the job in a crontab format. The default value is 1 0 * * *. This default value is equivalent to 00:01 daily.
  3. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml
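The schedule field uses standard crontab syntax. As a quick reference for its five fields, the following sketch labels the example schedule used above (1 0 * * 0 runs at 00:01 on Sundays; the default 1 0 * * * runs at 00:01 every day):

```python
# Label the five crontab fields of the example dbPurge schedule.
fields = dict(zip(
    ["minute", "hour", "day_of_month", "month", "day_of_week"],
    "1 0 * * 0".split(),
))
print(fields)
```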

4.14. Preserving jobs

The Block Storage service (cinder) requires maintenance operations that run automatically. Some operations are one-off and some are periodic. These operations are run by using OpenShift Jobs.

If jobs and their pods are automatically removed on completion, you cannot check the logs of these operations. However, you can use the preserveJob field in your OpenStackControlPlane CR to stop the automatic removal of jobs and preserve them.

Example:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      preserveJobs: true

4.15. Resolving hostname conflicts

Most storage back ends in the Block Storage service (cinder) require the hosts that connect to them to have unique hostnames. These hostnames are used to identify permissions and addresses, such as the iSCSI initiator name, HBA WWN, and WWPN.

Because you deploy on RHOCP, the hostnames that the Block Storage service volumes and backups report are not the RHOCP node hostnames but the pod names instead.

These pod names are formed by using a predetermined template:

  • For volumes: cinder-volume-<backend_key>-0
  • For backups: cinder-backup-<replica-number>

If you use the same storage back end in multiple deployments, the unique hostname requirement may not be honored, resulting in operational problems. To address this issue, you can request the installer to have unique pod names, and hence unique hostnames, by using the uniquePodNames field.

When you set the uniquePodNames field to true, a short hash is added to the pod names, which addresses hostname conflicts.

Example:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    uniquePodNames: true
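The effect of uniquePodNames can be sketched as follows. This is a hypothetical illustration of the idea only; the operator's actual hash input and length may differ. A short, deployment-specific hash suffix keeps pod names, and therefore the reported hostnames, unique when two deployments share one storage back end:

```python
import hashlib

def unique_pod_name(base: str, deployment_id: str, length: int = 5) -> str:
    # Derive a short, stable suffix from a deployment-specific identifier.
    suffix = hashlib.sha256(deployment_id.encode()).hexdigest()[:length]
    return f"{base}-{suffix}"

a = unique_pod_name("cinder-volume-nfs-0", "cluster-a")
b = unique_pod_name("cinder-volume-nfs-0", "cluster-b")
print(a, b)  # same pod name template, different suffix per deployment
```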

4.16. Using other container images

Red Hat OpenStack Services on OpenShift (RHOSO) services are deployed by using a container image for a specific release and version. Sometimes, a deployment requires a container image other than the one produced for that release and version.

The most common reasons for using a container image that is not for a specific release and version are:

  • Deploying a hotfix.
  • Using a certified, vendor-provided container image.

The container images used by the installer are controlled through the OpenStackVersion CR. An OpenStackVersion CR is automatically created by the openstack operator during the deployment of services. Alternatively, it can be created manually before the application of the OpenStackControlPlane CR but after the openstack operator is installed. This allows for the container image for any service and component to be individually designated.

The granularity of this designation depends on the service. For example, in the Block Storage service (cinder) all the cinderAPI, cinderScheduler, and cinderBackup pods must have the same image. However, for the Volume service, the container image is defined for each of the cinderVolumes.

The following example demonstrates an OpenStackControlPlane configuration with two back ends: one called ceph and one called custom-fc. The custom-fc back end requires a certified, vendor-provided container image. Additionally, the other service images must use a non-standard image from a hotfix.

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      cinderVolumes:
        ceph:
          networkAttachments:
          - storage
< . . . >
        custom-fc:
          networkAttachments:
          - storage

The following example demonstrates an OpenStackVersion CR that sets up these container images:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackVersion
metadata:
  name: openstack
spec:
  customContainerImages:
    cinderAPIImage: <custom-api-image>
    cinderBackupImage: <custom-backup-image>
    cinderSchedulerImage: <custom-scheduler-image>
    cinderVolumeImages:
      custom-fc: <vendor-volume-volume-image>
  • Replace <custom-api-image> with the name of the API service image to use.
  • Replace <custom-backup-image> with the name of the Backup service image to use.
  • Replace <custom-scheduler-image> with the name of the Scheduler service image to use.
  • Replace <vendor-volume-volume-image> with the name of the certified, vendor-provided image to use.
Note

The name attribute in your OpenStackVersion CR must match the same attribute in your OpenStackControlPlane CR.

Chapter 5. Configuring the Block Storage backup service

You can use the optional backup service of the Block Storage service (cinder) to create and restore full or incremental backups of Block Storage volumes. Configure the backup service in the cinderBackup section of your OpenStackControlPlane CR.

5.1. Prerequisites

  • You have the oc command line tool installed on your workstation.
  • You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.
  • You have enabled the backup service for the Block Storage service in your OpenStack Control Plane.

5.2. Configuring back ends for backups

You can configure different storage back ends for Block Storage backups, including Red Hat Ceph Storage RBD, the Object Storage service (swift), NFS, and S3.

Red Hat Ceph Storage RBD is the default back end when you use Red Hat Ceph Storage. For more information, see Configuring the control plane to use the Red Hat Ceph Storage cluster.

For information about other back end options for backups, see OSP18 Cinder Alternative Storage.

You can use the backup service to back up volumes that are on any back end that the Block Storage service (cinder) supports, regardless of which back end you choose to use for backups. You can only configure one back end for backups, whereas you can configure multiple back ends for volumes.

Back ends for backups do not have transport protocol requirements for the RHOCP node. However, the backup pods need to connect to the volumes, and the back ends for volumes have transport protocol requirements.

5.3. Setting the number of replicas for backups

You can run multiple instances of the Block Storage backup component in active-active mode by setting replicas to a value greater than 1. The default value is 0.

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the cinder template to set the number of replicas for the cinderBackup parameter:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
       …
       cinder:
          template:
            cinderBackup:
              replicas: <number_of_replicas>
    ...
    • Replace <number_of_replicas> with a value greater than 1.
  2. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  3. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

5.4. Backup performance considerations

Some features of the Block Storage backup service, such as incremental backups, the creation of backups from snapshots, and data compression, can reduce the performance of backup operations.

By only capturing the periodic changes to volumes, incremental backup operations can minimize resource usage. However, incremental backup operations have a lower performance than full backup operations. When you create an incremental backup, all of the data in the volume must first be read and compared with the data in both the full backup and each subsequent incremental backup.
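The incremental read-and-compare step can be sketched as follows. This is a minimal illustration, not cinder's actual code: every chunk of the volume is read and compared with the most recent backed-up data, and only the changed chunks are stored in the increment.

```python
def incremental_backup(volume_chunks, last_backup):
    """Return only the chunks that changed since the last backup."""
    increment = {}
    for index, chunk in enumerate(volume_chunks):
        if last_backup.get(index) != chunk:  # full read and compare
            increment[index] = chunk         # store only the delta
    return increment

last_backup = {0: b"aaaa", 1: b"bbbb", 2: b"cccc"}
current = [b"aaaa", b"XXXX", b"cccc"]
delta = incremental_backup(current, last_backup)
print(delta)  # only the changed chunk is stored
```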

Some back ends for volumes support the creation of a backup from a snapshot by directly attaching the snapshot to the backup host, which is faster than cloning the snapshot into a volume. If the back end you use for volumes does not support this feature, you can create a volume from a snapshot and use the volume as backup. However, the extra step of creating the volume from a snapshot can affect the performance of the backup operation.

You can configure the Block Storage backup service to enable or disable data compression of the storage back end for your backups. If you enable data compression, backup operations require additional CPU power, but they use less network bandwidth and storage space overall.

Note

You cannot use data compression with a Red Hat Ceph Storage back end.

5.5. Configuring backup service options

The cinderBackup parameter inherits the configuration from the top-level customServiceConfig section of the cinder template in your OpenStackControlPlane CR. However, the cinderBackup parameter also has its own customServiceConfig section.

The following table describes configuration options that apply to all back-end drivers.

Table 5.1. Configuration options for backup drivers
Option | Description | Value type | Default value

debug

When set to true, the logging level is set to DEBUG instead of the default INFO level. You can also set debug log levels dynamically without a restart by using the dynamic log level API functionality.

Boolean

false

backup_service_inithost_offload

Offload pending backup delete during backup service startup. If set to false, the backup service remains down until all pending backups are deleted.

Boolean

true

storage_availability_zone

Availability zone of the backup service.

String

nova

backup_workers

Number of processes to launch in the backup pod. Improves performance with concurrent backups.

Integer

1

backup_max_operations

Maximum number of concurrent memory-heavy, and possibly CPU-heavy, operations (backup and restore) that can run on each pod. The limit applies to all workers within a pod but not across pods. A value of 0 means unlimited.

Integer

15

backup_native_threads_pool_size

Size of the native threads pool used for backup data-related operations. Most backup drivers rely heavily on this option, and you can decrease the value for specific drivers that do not rely on it.

Integer

60
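The relationship between backup_workers and backup_max_operations can be sketched as follows. This is illustrative only, not cinder's implementation: worker threads provide concurrency, and a shared semaphore caps how many heavy operations are in flight per pod at once.

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

backup_max_operations = 2
gate = threading.BoundedSemaphore(backup_max_operations)
lock = threading.Lock()
in_flight = 0
peak = 0

def heavy_operation(i):
    """Stand-in for a backup or restore operation."""
    global in_flight, peak
    with gate:                      # blocks once the cap is reached
        with lock:
            in_flight += 1
            peak = max(peak, in_flight)
        time.sleep(0.01)            # simulated work
        with lock:
            in_flight -= 1

with ThreadPoolExecutor(max_workers=4) as pool:  # like backup_workers=4
    list(pool.map(heavy_operation, range(8)))

print(peak)  # never exceeds backup_max_operations
```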

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the cinder template to set configuration options. In this example, you enable debug logs, double the number of processes, and increase the maximum number of operations per pod to 20.

    Example:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
       …
       cinder:
          template:
            customServiceConfig: |
              [DEFAULT]
              debug = true
            cinderBackup:
              customServiceConfig: |
               [DEFAULT]
               backup_workers = 2
               backup_max_operations = 20
    
    ...
  2. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  3. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

5.6. Enabling data compression for volume backups

Select a compression algorithm for your backups to reduce storage space and network bandwidth usage. Backups use zlib compression by default, but you can change algorithms or disable compression.

Data compression requires additional CPU power but uses less network bandwidth and storage space.
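The trade-off can be demonstrated with two compression algorithms from the Python standard library that correspond to backup compression options. Compression shrinks the payload that crosses the network and lands in storage, at the cost of extra CPU time to produce it:

```python
import bz2
import zlib

# A repetitive payload, standing in for a backup chunk.
data = b"block-storage-backup-chunk " * 4096

deflated = zlib.compress(data)  # the zlib/gzip option (Deflate)
bzipped = bz2.compress(data)    # the bz2/bzip2 option (Burrows-Wheeler)

print(len(data), len(deflated), len(bzipped))
```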

You can change the data compression algorithm of your backups or disable data compression by using the backup_compression_algorithm parameter in your OpenStackControlPlane CR.

The following options are available for data compression.

Table 5.2. Data compression options

Option | Description

none, off, or no

Do not use compression.

zlib or gzip

Use the Deflate compression algorithm.

bz2 or bzip2

Use Burrows-Wheeler transform compression.

zstd

Use the Zstandard compression algorithm.

Note

You cannot specify the data compression algorithm for the Red Hat Ceph Storage back end driver.

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameter to the cinder template to enable data compression. In this example, you enable data compression with an Object Storage service (swift) back end:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
      cinder:
        template:
          cinderBackup:
            customServiceConfig: |
              [DEFAULT]
              backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
              backup_compression_algorithm = zstd
            networkAttachments:
            - storage
  2. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  3. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

5.7. Configuring Red Hat Ceph Storage RBD as the backup back end

You can configure Red Hat Ceph Storage RADOS Block Device (RBD) as the storage back end for your Block Storage backups. RBD provides efficient incremental backups when combined with Ceph RBD volumes.

For more information about Ceph RBD, see Configuring the control plane to use the Red Hat Ceph Storage cluster.

Prerequisites

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the cinder template to configure Ceph RBD as the back end for backups:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
      cinder:
        template:
          cinderBackup:
            customServiceConfig: |
              [DEFAULT]
              backup_driver = cinder.backup.drivers.ceph.CephBackupDriver
              backup_ceph_pool = backups
              backup_ceph_user = openstack
            networkAttachments:
            - storage
            replicas: 1
  2. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  3. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

5.8. Configuring the Object Storage service as the backup back end

You can configure the Object Storage service (swift) as the storage back end for your Block Storage backups. The Object Storage service provides scalable object storage with customizable containers for backup data.

Prerequisites

  • You have planned networking for storage to ensure connectivity between the storage back end, the control plane, and the Compute nodes on the data plane. For more information, see Storage networks in Planning your deployment and Preparing networks for Red Hat OpenStack Services on OpenShift in Deploying Red Hat OpenStack Services on OpenShift.
  • Verify that the Object Storage service is active in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment.

The default container for Object Storage service back ends is volumebackups. You can change the default container by using the backup_swift_container configuration option.

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the cinder template to configure the Object Storage service as the back end for backups:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
      cinder:
        template:
          cinderBackup:
            customServiceConfig: |
              [DEFAULT]
              backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
            networkAttachments:
            - storage
            replicas: 1
  2. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  3. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

5.9. Configuring NFS as the backup back end

You can configure NFS as the storage back end for your Block Storage backups. NFS provides network-accessible file storage with flexible mount options for backup data.

Prerequisites

Procedure

  1. Create a secret CR file, for example, cinder-backup-nfs-secrets.yaml, and add the following configuration for your NFS share:

    apiVersion: v1
    kind: Secret
    metadata:
      labels:
        service: cinder
        component: cinder-backup
      name: cinder-backup-nfs-secrets
    type: Opaque
    stringData:
      nfs-secrets.conf: |
        [DEFAULT]
        backup_share = <192.168.1.2:/Backups>
        backup_mount_options = <optional>
    • Replace <192.168.1.2:/Backups> with the IP address and export path of your NFS share.
    • Replace <optional> with the mount options for your NFS share.
  2. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the cinder template to add the secret for the NFS share and configure NFS as the back end for backups:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
      cinder:
        template:
          cinderBackup:
            customServiceConfig: |
              [DEFAULT]
              backup_driver = cinder.backup.drivers.nfs.NFSBackupDriver
            customServiceConfigSecrets:
            - cinder-backup-nfs-secrets
            networkAttachments:
            - storage
            replicas: 1
  3. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  4. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

5.10. Configuring S3 as the backup back end

You can configure the Block Storage service (cinder) backup service with S3 as the storage back end.

Prerequisites

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the cinder template to configure S3 as the back end for backups:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
      cinder:
        template:
          cinderBackup:
            customServiceConfig: |
              [DEFAULT]
              backup_driver = cinder.backup.drivers.s3.S3BackupDriver
              backup_s3_endpoint_url = <user supplied>
              backup_s3_store_access_key = <user supplied>
              backup_s3_store_secret_key = <user supplied>
              backup_s3_store_bucket = volumebackups
              backup_s3_ca_cert_file = /etc/pki/tls/certs/ca-bundle.crt
            networkAttachments:
            - storage
  2. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  3. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

When you create a backup of a Block Storage volume, the metadata for this backup is stored in the Block Storage service database. The Block Storage backup service uses this metadata when it restores the volume from the backup.

Important

To ensure that a backup survives a catastrophic loss of the Block Storage service database, you can manually export and store the metadata of this backup. After a catastrophic database loss, you need to create a new Block Storage database and then manually re-import this backup metadata into it.
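
As a sketch of this workflow, you can export and re-import the metadata with the cinder client from the openstackclient pod; the values in angle brackets are placeholders for your deployment:

```
$ oc rsh -n openstack openstackclient
# Export the metadata of a backup; record the backup_service and backup_url values
$ cinder backup-export <backup_id>
# After you recreate the Block Storage database, re-import the metadata
$ cinder backup-import <backup_service> <backup_url>
```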

Chapter 6. Configuring the Image service (glance)

The Image service (glance) provides discovery, registration, and delivery services for disk and server images. Use stored images as templates to commission servers. Supported back ends include RADOS Block Device (RBD), the Block Storage service (cinder), Object Storage service (swift), S3, and NFS.

You can configure the following back ends as stores for the Image service:

  • RBD is the default back end when you use Red Hat Ceph Storage.
  • RBD multistore. You can use multiple stores only with distributed edge architecture or distributed zones so that you can have an image pool at every edge site or zone.
  • Block Storage service.
  • Block Storage service multistore. You can use multiple stores only with distributed zones so that you can have an image pool in every zone.
  • Object Storage service.
  • S3.
  • NFS.

For more information about Red Hat Ceph Storage, distributed edge architecture, and distributed zones, see the following documentation:

6.1. Prerequisites

  • You have the oc command line tool installed on your workstation.
  • You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.

You can configure the Image service (glance) with the Block Storage service (cinder) as the storage back end.

Prerequisites

  • You have planned networking for storage to ensure connectivity between the storage back end, the control plane, and the Compute nodes on the data plane. For more information, see Storage networks in Planning your deployment and Preparing networks for Red Hat OpenStack Services on OpenShift in Deploying Red Hat OpenStack Services on OpenShift.
  • Ensure that placement, network, and transport protocol requirements are met. For example, if your Block Storage service back end is Fibre Channel (FC), the nodes on which the Image service API (glanceAPI) is running must have a host bus adapter (HBA). For FC, iSCSI, and NVMe over Fabrics (NVMe-oF), configure the nodes to support the protocol and use multipath. For more information, see Configuring transport protocols.

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the glance template to configure the Block Storage service as the back end for the Image service:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
      ...
      glance:
        template:
          glanceAPIs:
            default:
              replicas: 3 # Configure back end; set to 3 when deploying service
          ...
          customServiceConfig: |
            [DEFAULT]
            enabled_backends = <backend_name>:cinder
            [glance_store]
            default_backend = <backend_name>
            [<backend_name>]
            description = Default cinder backend
            cinder_store_auth_address = {{ .KeystoneInternalURL }}
            cinder_store_user_name = {{ .ServiceUser }}
            cinder_store_password = {{ .ServicePassword }}
            cinder_store_project_name = service
            cinder_catalog_info = volumev3::internalURL
            cinder_use_multipath = true
            [oslo_concurrency]
            lock_path = /var/lib/glance/tmp
    ...
    • Set replicas to 3 for high availability across APIs.
    • Replace <backend_name> with the name of the default cinder back end, for example nfs_store.
    • The /var/lib/glance/tmp directory is where lock files used by oslo.concurrency are stored to coordinate concurrent access to shared resources.
  2. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  3. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

When you use the Block Storage service (cinder) as the back end for the Image service (glance), each image is stored as a volume (image volume), ideally in the Block Storage service project owned by the glance user.

When a user creates multiple instances or volumes from a volume-backed image, the Image service host must attach to the image volume to copy the data multiple times. By default, Block Storage volumes cannot be attached more than once to the same host, so this causes performance issues and some of these instances or volumes are not created. However, most Block Storage back ends support the multi-attach property, which enables a volume to be attached multiple times to the same host. To prevent these issues, create a Block Storage volume type for the Image service back end that enables the multi-attach property, and configure the Image service to use this multi-attach volume type.

Note

By default, only the Block Storage project administrator can create volume types.

Procedure

  1. Access the remote shell for the openstackclient pod from your workstation:

    $ oc rsh -n openstack openstackclient
  2. Create a Block Storage volume type for the Image service back end that enables the multi-attach property, as follows:

    $ openstack volume type create glance-multiattach
    $ openstack volume type set --property multiattach="<is> True" glance-multiattach

    If you do not specify a back end for this volume type, the Block Storage scheduler service determines which back end to use when it creates each image volume, so these volumes might be saved on different back ends. You can specify the name of the back end by adding the volume_backend_name property to this volume type. Ask your Block Storage administrator for the correct volume_backend_name for your multi-attach volume type. This example uses iscsi as the back-end name.

    $ openstack volume type set glance-multiattach --property volume_backend_name=iscsi
  3. Exit the openstackclient pod:

    $ exit
  4. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml. In the glance template, add the following parameter to the end of the customServiceConfig, [<backend_name>] section to configure the Image service to use the Block Storage multi-attach volume type:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
      ...
      glance:
        template:
          ...
          customServiceConfig: |
          ...
          [<backend_name>]
          ...
            cinder_volume_type = glance-multiattach
    ...
    • Replace <backend_name> with the name of the default back end.
  5. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  6. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

You can add the following parameters to the end of the customServiceConfig, [<backend_name>] section of the glance template in your OpenStackControlPlane CR file.

Table 6.1. Block Storage back-end parameters for the Image service
Parameter = Default value | Type | Description of use

cinder_use_multipath = False

boolean value

Set to True when multipath is supported for your deployment.

cinder_enforce_multipath = False

boolean value

Set to True to abort the attachment of volumes for image transfer when multipath is not running.

cinder_mount_point_base = /var/lib/glance/mnt

string value

Specify a string representing the absolute path of the mount point, the directory where the Image service mounts the NFS share.

Note

This parameter is only applicable when using an NFS Block Storage back end for the Image service.

cinder_do_extend_attached = False

boolean value

Set to True when images are larger than 1 GB to optimize how the Block Storage service creates the required volume size for each image.

The Block Storage service creates an initial 1 GB volume and extends it in 1 GB increments until the volume contains the data of the entire image. When this parameter is not added or is set to False, this incremental process is very time-consuming because, for each increment, the volume must be detached, extended by 1 GB if it is still smaller than the image size, and then reattached. When you set this parameter to True, the consecutive 1 GB extensions are performed while the volume remains attached, which optimizes the process.

Note

This parameter requires your Block Storage back end to support the extension of attached (in-use) volumes. See your back-end driver documentation for information on which features are supported.

cinder_volume_type = __DEFAULT__

string value

Specify the name of the Block Storage volume type that can be optimized for creating volumes for images. For example, you can create a volume type that enables the creation of multiple instances or volumes from a volume-backed image. For more information, see Creating a multi-attach volume type.

When this parameter is not used, volumes are created by using the default Block Storage volume type.
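
For example, a customServiceConfig back-end section that combines several of these parameters might look like the following sketch; the values shown are illustrative choices for a multipath-capable back end, not defaults:

```
[<backend_name>]
description = Default cinder backend
cinder_catalog_info = volumev3::internalURL
cinder_use_multipath = true
cinder_do_extend_attached = true
cinder_volume_type = glance-multiattach
```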

You can configure the Image service (glance) with the Object Storage service (swift) as the storage back end.

Prerequisites

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the glance template to configure the Object Storage service as the back end:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
      ...
      glance:
        template:
          glanceAPIs:
            default:
              replicas: 3 # Configure back end; set to 3 when deploying service
          ...
          customServiceConfig: |
            [DEFAULT]
            enabled_backends = <backend_name>:swift
            [glance_store]
            default_backend = <backend_name>
            [<backend_name>]
            swift_store_create_container_on_put = True
            swift_store_auth_version = 3
            swift_store_auth_address = {{ .KeystoneInternalURL }}
            swift_store_key = {{ .ServicePassword }}
            swift_store_user = service:glance
            swift_store_endpoint_type = internalURL
    ...
    • Set replicas to 3 for high availability across APIs.
    • Replace <backend_name> with the name of the default back end.
  2. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  3. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

6.4. Configuring an S3 back end

To configure the Image service (glance) with S3 as the storage back end, you require the following details:

  • S3 access key
  • S3 secret key
  • S3 endpoint

For security, these details are stored in a Kubernetes secret.

Prerequisites

Procedure

  1. Create a configuration file, for example, glance-s3.conf, where you can store the S3 configuration details.
  2. Generate the secret and access keys for your S3 storage.

    • If your S3 storage is provisioned by the Ceph Object Gateway (RGW), run the following command to generate the secret and access keys:

      $ radosgw-admin user create --uid="<user_1>" \
      --display-name="<Jane Doe>"
      • Replace <user_1> with the user ID.
      • Replace <Jane Doe> with a display name for the user.
    • If your S3 storage is provisioned by the Object Storage service (swift), run the following command to generate the secret and access keys:

      $ openstackclient openstack credential create --type ec2 \
      --project admin admin \
      '{"access": "<access_key>", "secret": "<secret_key>"}'
      • Replace <access_key> with the access key for the EC2 credential.
      • Replace <secret_key> with the secret key for the EC2 credential.
  3. Add the S3 configuration details to your glance-s3.conf configuration file:

    [default_backend]
    s3_store_host = <_s3_endpoint_>
    s3_store_access_key = <_s3_access_key_>
    s3_store_secret_key = <_s3_secret_key_>
    s3_store_bucket = <_s3_bucket_>
    • Replace <_s3_endpoint_> with the host where the S3 server is listening. This option can contain a DNS name, for example, _s3.amazonaws.com_, or an IP address.
    • Replace <_s3_access_key_> and <_s3_secret_key_> with the data generated by the S3 back end.
    • Replace <_s3_bucket_> with the bucket name where you want to store images in the S3 back end. If you set s3_store_create_bucket_on_put to True in your OpenStackControlPlane CR file, the bucket is created automatically if it does not already exist.
  4. Create a secret from the glance-s3.conf file:

    $ oc create secret generic glances3 -n openstack \
    --from-file glance-s3.conf
  5. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the glance template to configure S3 as the back end:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
      ...
      glance:
        template:
          ...
          customServiceConfig: |
            [DEFAULT]
            enabled_backends = <backend_name>:s3
            [glance_store]
            default_backend = <backend_name>
            [<backend_name>]
            s3_store_create_bucket_on_put = True
            s3_store_bucket_url_format = "path"
            s3_store_cacert = /etc/pki/tls/certs/ca-bundle.crt
            s3_store_large_object_size = 0
          glanceAPIs:
            default:
              customServiceConfigSecrets:
              - glances3
    ...
    • Replace <backend_name> with the name of the default back end.
    • Optional: If your S3 storage is accessed over HTTPS, you must set the s3_store_cacert field and point it to the ca-bundle.crt path. The OpenStack control plane is deployed by default with TLS enabled, and a CA certificate is mounted to the pod in /etc/pki/tls/certs/ca-bundle.crt.
    • Optional: Set s3_store_large_object_size to 0 to force multipart upload when you create an image in the S3 back end from a Block Storage service (cinder) volume.
  6. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  7. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

6.5. Configuring an NFS back end

You can configure the Image service (glance) with NFS as the storage back end. NFS is not native to the Image service. When you mount an NFS share to use for the Image service, the Image service writes data to the file system but does not validate the availability of the NFS share.

If you use NFS as a back end for the Image service, refer to the following best practices to mitigate risk:

  • Use a reliable production-grade NFS back end.
  • Make sure the network is available to the Red Hat OpenStack Services on OpenShift (RHOSO) control plane where the Image service is deployed, and that the Image service has a NetworkAttachmentDefinition custom resource (CR) that points to the network. This configuration ensures that the Image service pods can reach the NFS server.
  • Set export permissions. Write permissions must be present in the shared file system that you use as a store.
Limitations
  • In Red Hat OpenStack Services on OpenShift (RHOSO), you cannot set client-side NFS mount options in a pod spec. You can set NFS mount options in one of the following ways:

    • Set server-side mount options.
    • Use /etc/nfsmount.conf.
    • Mount NFS volumes by using PersistentVolumes, which have mount options.
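
For the third option, you set the mount options in the PersistentVolume spec. The following is a minimal sketch of an NFS PersistentVolume with client-side mount options; the server address, export path, capacity, and options are assumptions for your environment:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: glance-nfs-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteMany
  mountOptions:           # client-side NFS mount options
  - nfsvers=4.1
  - noatime
  nfs:
    server: 192.168.1.3   # assumed NFS server address
    path: /exports/glance # assumed export path
```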

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the extraMounts parameter in the spec section to add the export path and IP address of the NFS share. The path is mapped to /var/lib/glance/images, where the Image service API (glanceAPI) stores and retrieves images:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    ...
    spec:
      extraMounts:
      - extraVol:
        - extraVolType: Nfs
          mounts:
          - mountPath: /var/lib/glance/images
            name: nfs
          propagation:
          - Glance
          volumes:
          - name: nfs
            nfs:
              path: <nfs_export_path>
              server: <nfs_ip_address>
        name: r1
        region: r1
    ...
    • Replace <nfs_export_path> with the export path of your NFS share.
    • Replace <nfs_ip_address> with the IP address of your NFS share. This IP address must be part of the overlay network that is reachable by the Image service.
  2. Add the following parameters to the glance template to configure NFS as the back end:

    ...
    spec:
      extraMounts:
      ...
      glance:
        template:
          glanceAPIs:
            default:
              type: single
              replicas: 3 # Configure back end; set to 3 when deploying service
          ...
          customServiceConfig: |
            [DEFAULT]
            enabled_backends = <backend_name>:file
            [glance_store]
            default_backend = <backend_name>
            [<backend_name>]
            filesystem_store_datadir = /var/lib/glance/images
          databaseInstance: openstack
    ...
    • Set replicas to 3 for high availability across APIs.
    • Replace <backend_name> with the name of the default back end.

      Note

      When you configure an NFS back end, you must set the type to single. By default, the Image service has a split deployment type for an external API service, which is accessible through the public and administrator endpoints for the Identity service (keystone), and an internal API service, which is accessible only through the internal endpoint for the Identity service. The split deployment type is invalid for a file back end because different pods access the same file share.

  3. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  4. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

You can configure the Image service (glance) with multiple storage back ends.

To configure multiple back ends for a single Image service API (glanceAPI) instance, you set the enabled_backends parameter with key-value pairs. The key is the identifier for the store and the value is the type of store. The following values are valid:

  • file
  • http
  • rbd
  • swift
  • cinder
  • s3

Prerequisites

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the parameters to the glance template to configure the back ends. In the following example, there are two Ceph RBD stores and one Object Storage service (swift) store:

    ...
    spec:
      glance:
        template:
          customServiceConfig: |
            [DEFAULT]
            debug=True
            enabled_backends = ceph-0:rbd,ceph-1:rbd,swift-0:swift
    ...
  2. Specify the back end to use as the default back end. In the following example, the default back end is ceph-1:

    ...
          customServiceConfig: |
            [DEFAULT]
            debug=True
            enabled_backends = ceph-0:rbd,ceph-1:rbd,swift-0:swift
            [glance_store]
            default_backend = ceph-1
    ...
  3. Add the configuration for each back end type you want to use:

    • Add the configuration for the first Ceph RBD store, ceph-0:

      ...
            customServiceConfig: |
              [DEFAULT]
              ...
              [ceph-0]
              rbd_store_ceph_conf = /etc/ceph/ceph-0.conf
              store_description = "RBD backend"
              rbd_store_pool = images
              rbd_store_user = openstack
      ...
    • Add the configuration for the second Ceph RBD store, ceph-1:

      ...
            customServiceConfig: |
              [DEFAULT]
              ...
              [ceph-0]
              ...
              [ceph-1]
              rbd_store_ceph_conf = /etc/ceph/ceph-1.conf
              store_description = "RBD backend 1"
              rbd_store_pool = images
              rbd_store_user = openstack
      ...
    • Add the configuration for the Object Storage service store, swift-0:

      ...
            customServiceConfig: |
              [DEFAULT]
              ...
              [ceph-0]
              ...
              [ceph-1]
              ...
              [swift-0]
              swift_store_create_container_on_put = True
              swift_store_auth_version = 3
              swift_store_auth_address = {{ .KeystoneInternalURL }}
              swift_store_key = {{ .ServicePassword }}
              swift_store_user = service:glance
              swift_store_endpoint_type = internalURL
      ...
  4. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  5. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

You can deploy multiple Image service API (glanceAPI) instances to serve different workloads, for example in an edge deployment. When you deploy multiple glanceAPI instances, they are orchestrated by the same glance-operator, but you can connect them to a single back end or to different back ends.

Multiple glanceAPI instances inherit the same configuration from the main customServiceConfig parameter in your OpenStackControlPlane CR file. You use the extraMounts parameter to connect each instance to a back end. For example, you can connect each instance to a single Red Hat Ceph Storage cluster or to different Red Hat Ceph Storage clusters.

You can also deploy multiple glanceAPI instances in an availability zone (AZ) to serve different workloads in that AZ.

Note

You can only register one glanceAPI instance as an endpoint for OpenStack CLI operations in the Keystone catalog, but you can change the default endpoint by updating the keystoneEndpoint parameter in your OpenStackControlPlane CR file.

For information about adding and decommissioning glanceAPIs, see Adding an Image service API and Decommissioning an Image service API in Customizing persistent storage.

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the glanceAPIs parameter to the glance template to configure multiple glanceAPI instances. In the following example, you create three glanceAPI instances that are named api0, api1, and api2:

    ...
    spec:
      glance:
        template:
          customServiceConfig: |
            [DEFAULT]
            enabled_backends = <backend_name>:rbd
            [glance_store]
            default_backend = <backend_name>
            [<backend_name>]
            rbd_store_ceph_conf = /etc/ceph/ceph.conf
            store_description = "RBD backend"
            rbd_store_pool = images
            rbd_store_user = openstack
          databaseInstance: openstack
          databaseUser: glance
          keystoneEndpoint: api0
          glanceAPIs:
            api0:
              replicas: 1
            api1:
              replicas: 1
            api2:
              replicas: 1
        ...
    • Replace <backend_name> with the name of the default back end.
    • api0 is registered in the Keystone catalog and is the default endpoint for OpenStack CLI operations.
    • api1 and api2 are not default endpoints, but they are active APIs that users can use for image uploads by specifying the --os-image-url parameter when they upload an image.
    • You can update the keystoneEndpoint parameter to change the default endpoint in the Keystone catalog.
  2. Add the extraMounts parameter to connect the three glanceAPI instances to a different back end. In the following example, you connect api0, api1, and api2 to three different Ceph Storage clusters that are named ceph0, ceph1, and ceph2:

    spec:
      glance:
        template:
          customServiceConfig: |
            [DEFAULT]
            ...
          extraMounts:
            - name: api0
              region: r1
              extraVol:
                - propagation:
                  - api0
                  volumes:
                  - name: ceph0
                    secret:
                      secretName: <secret_name>
                  mounts:
                  - name: ceph0
                    mountPath: "/etc/ceph"
                    readOnly: true
            - name: api1
              region: r1
              extraVol:
                - propagation:
                  - api1
                  volumes:
                  - name: ceph1
                    secret:
                      secretName: <secret_name>
                  mounts:
                  - name: ceph1
                    mountPath: "/etc/ceph"
                    readOnly: true
            - name: api2
              region: r1
              extraVol:
                - propagation:
                  - api2
                  volumes:
                  - name: ceph2
                    secret:
                      secretName: <secret_name>
                  mounts:
                  - name: ceph2
                    mountPath: "/etc/ceph"
                    readOnly: true
    ...
    • Replace <secret_name> with the name of the secret associated to the Ceph Storage cluster that you are using as the back end for the specific glanceAPI, for example, ceph-conf-files-0 for the ceph0 cluster.
  3. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  4. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

6.8. Split and single Image service API layouts

By default, the Image service (glance) has a split deployment type:

  • An external API service, which is accessible through the public and administrator endpoints for the Identity service (keystone)
  • An internal API service, which is accessible only through the internal endpoint for the Identity service

The split deployment type is invalid for an NFS or file back end because different pods access the same file share. When you configure an NFS or file back end, you must set the type to single in your OpenStackControlPlane CR.

Split layout example: In the following example of a split layout type in an edge deployment, two glanceAPI instances are deployed in an availability zone (AZ) to serve different workloads in that AZ.

...
spec:
  glance:
    template:
      customServiceConfig: |
        [DEFAULT]
...
  keystoneEndpoint: api0
  glanceAPIs:
    api0:
      customServiceConfig: |
        [DEFAULT]
        enabled_backends = <backend_name>:rbd
      replicas: 1
      type: split
    api1:
      customServiceConfig: |
        [DEFAULT]
        enabled_backends = <backend_name>:swift
      replicas: 1
      type: split
    ...
  • Replace <backend_name> with the name of the default back end.

Single layout example: In the following example of a single layout type in an NFS back-end configuration, different pods access the same file share:

...
spec:
  extraMounts:
  ...
  glance:
    template:
      glanceAPIs:
        default:
          type: single
          replicas: 3 # Configure back end; set to 3 when deploying service
      ...
      customServiceConfig: |
        [DEFAULT]
        enabled_backends = <backend_name>:file
        [glance_store]
        default_backend = <backend_name>
        [<backend_name>]
        filesystem_store_datadir = /var/lib/glance/images
      databaseInstance: openstack
...
  • Set replicas to 3 for high availability across APIs.
  • Replace <backend_name> with the name of the default back end.

6.9. Configuring multistore with edge architecture

When you use multiple stores with distributed edge architecture, you can have a Ceph RADOS Block Device (RBD) image pool at every edge site. You can copy images between the central site, which is also known as the hub site, and the edge sites.

The image metadata contains the location of each copy. For example, an image present on two edge sites is exposed as a single UUID with three locations: the central site plus the two edge sites. This means you can have copies of image data that share a single UUID on many stores.

With an RBD image pool at every edge site, you can launch instances quickly by using Ceph RBD copy-on-write (COW) and snapshot layering technology, which also enables you to launch instances from volumes and use live migration. For more information about layering with Ceph RBD, see "Ceph block device layering" in the Red Hat Ceph Storage Block Device Guide.

When you launch an instance at an edge site, the required image is copied to the local Image service (glance) store automatically. However, you can copy images in advance from the central Image service store to edge sites to save time during instance launch.
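
As a sketch, assuming your enabled_backends configuration defines stores named central and edge1, you can copy an existing image to an edge store in advance with the copy-image import method:

```
$ oc rsh -n openstack openstackclient
# List the stores known to the default glanceAPI
$ glance stores-info
# Copy an image that exists in the central store to the edge1 store
$ glance image-import <image_id> --import-method copy-image --stores edge1
```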

Refer to the following requirements to use images with edge sites:

  • A copy of each image must exist in the Image service at the central location.
  • You must copy images from an edge site to the central location before you can copy them to other edge sites.
  • You must use raw images when deploying a Distributed Compute Node (DCN) architecture with Red Hat Ceph Storage.
  • RBD must be the storage driver for the Image, Compute, and Block Storage services.

For more information about using images with DCN, see Deploying a Distributed Compute Node (DCN) architecture.
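Copying an image in advance can be sketched with the Image service import workflow and the copy-image method; the store names are placeholders that depend on how your stores are configured:

```
# Hypothetical sketch: copy an existing image from the central store to
# edge stores before instance launch. <image_id> and the store names are
# placeholders, not values from this deployment.
glance image-import <image_id> \
    --stores <edge_store_1>,<edge_store_2> \
    --import-method copy-image
```

The image keeps its single UUID; the import only adds locations for the selected edge stores.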

Configure the Object Storage service (swift) to use PersistentVolumes (PVs) on OpenShift nodes or disks on external data plane nodes.

OpenShift deployments are limited to one PV per node. However, the Object Storage service requires multiple PVs. To maximize availability and data durability, create these PVs on different nodes and use only one PV per node. External data plane nodes offer more flexibility for larger deployments with multiple disks per node.

For information about configuring the Object Storage service as an endpoint for the Red Hat Ceph Storage Object Gateway (RGW), see Configuring an external Ceph Object Gateway back end.

7.1. Prerequisites

  • You have the oc command line tool installed on your workstation.
  • You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.

A default Object Storage service (swift) deployment uses at least two swiftProxy replicas and three swiftStorage replicas. You can increase these values to distribute storage across more nodes and disks.

The ringReplicas value defines the number of object copies in the cluster. For example, if you set ringReplicas: 3 and swiftStorage/replicas: 5, every object is stored on 3 different PersistentVolumes (PVs), and there are 5 PVs in total.

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the swift template:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack-control-plane
      namespace: openstack
    spec:
      ...
      swift:
        enabled: true
        template:
          swiftProxy:
            replicas: 2
          swiftRing:
            ringReplicas: 3
          swiftStorage:
            replicas: 3
            storageClass: <swift-storage>
            storageRequest: 100Gi
    ...
    • Increase the swiftProxy/replicas: value to distribute proxy instances across more nodes.
    • Set the ringReplicas: value to define the number of object copies you want in your cluster.
    • Increase the swiftStorage/replicas: value to define the number of PVs in your cluster.
    • Replace <swift-storage> with the name of the storage class you want the Object Storage service to use.
  2. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  3. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

If you operate large clusters with a lot of storage in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment, you can deploy the Object Storage service (swift) on external data plane nodes. With this configuration, the Object Storage proxy service continues to run on the control plane and the Object Storage services run on the data plane nodes.

Note

If you do not want to use persistent volumes for data storage, set swiftStorage replicas to 0 in the OpenStackControlPlane CR. When initially creating the OpenStackControlPlane CR, you must also set swiftProxy replicas to 0. This is necessary because the proxies for the Object Storage service require properly built rings with at least the configured number of replica devices to start. After the data plane is deployed, you can scale the swiftProxy replicas to the number you want.
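For example, the initial swift template in the OpenStackControlPlane CR might look like the following sketch when all object data is stored on external data plane nodes (the values shown are assumptions for illustration):

```
swift:
  enabled: true
  template:
    swiftProxy:
      replicas: 0 # scale up after the data plane is deployed
    swiftRing:
      ringReplicas: 3
    swiftStorage:
      replicas: 0 # no PV-backed storage pods
```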

To deploy and run the Object Storage services on data plane nodes, first you enable DNS forwarding to resolve data plane host names in the control plane pods, and then you create an OpenStackDataPlaneNodeSet CR with the following properties:

  • The swift service
  • A list of disks to be used for Object Storage service storage

Procedure

  1. Enable DNS forwarding to resolve data plane hostnames in the control plane pods.

    1. Obtain the clusterIP of the resolver:

      $ oc get svc dnsmasq-dns -o jsonpath='{.spec.clusterIP}'
    2. Update the default DNS entry to add the clusterIP of the resolver:

      apiVersion: operator.openshift.io/v1
      kind: DNS
      metadata:
        name: default
      spec:
        servers:
        - name: swift
          zones:
          - storage.example.com
          forwardPlugin:
            policy: Random
            upstreams:
            - <clusterIP>
      • Replace <clusterIP> with the clusterIP of the resolver.
  2. Enable the swift storage service on the data plane nodes by adding the swift service to the end of the list of services for the NodeSet in your OpenStackDataPlaneNodeSet CR. The service runs the playbooks that are required to configure the Object Storage services:

    Example:

        services:
        - repo-setup
        - bootstrap
        - download-cache
        - configure-network
        - validate-network
        - install-os
        - configure-os
        - ssh-known-hosts
        - run-os
        - reboot-os
        - install-certs
        - swift
  3. Define disks to be used by the Object Storage service on data plane nodes.

    • When you define disks, you can do the following:

      • Define the disks in the global nodeTemplate section in your OpenStackDataPlaneNodeSet CR to use the same type of disks for all nodes.
      • Define disks on a per-node basis in the nodes section of your OpenStackDataPlaneNodeSet CR.
      • Assign disks to a specific region or zone.
      • Enable ring management to distribute replicas.
    • You must specify a weight for each disk. If you do not have custom weights in your existing rings, you can set the weight to the GiB capacity of the disk.

      The following example shows the OpenStackDataPlaneNodeSet CR for a data plane with three storage nodes. Each node is configured to use two disks in the nodeTemplate section. The first node, edpm-swift-0, is configured to use a third disk in the nodes section:

      Example:

      - apiVersion: dataplane.openstack.org/v1beta1
        kind: OpenStackDataPlaneNodeSet
        metadata:
          name: openstack-edpm-ipam
          namespace: openstack
        spec:
          ...
          networkAttachments:
          - ctlplane
          - storage
          nodeTemplate:
            ansible:
              ansibleVars:
                edpm_swift_disks:
                - device: /dev/vdb
                  path: /srv/node/vdb
                  region: 0
                  weight: 4000
                  zone: 0
                - device: /dev/vdc
                  path: /srv/node/vdc
                  region: 0
                  weight: 4000
                  zone: 0
          nodes:
            edpm-swift-0:
              ansible:
                ansibleVars:
                  edpm_swift_disks:
                  - device: /dev/vdd
                    path: /srv/node/vdd
                    weight: 1000
              hostName: edpm-swift-0
              networks:
              - defaultRoute: true
                fixedIP: 192.168.122.100
                name: ctlplane
                subnetName: subnet1
              - name: internalapi
                subnetName: subnet1
              - name: storage
                subnetName: subnet1
              - name: tenant
                subnetName: subnet1
            edpm-swift-1:
              hostName: edpm-swift-1
              networks:
              - defaultRoute: true
                fixedIP: 192.168.122.101
                name: ctlplane
                subnetName: subnet1
              - name: internalapi
                subnetName: subnet1
              - name: storage
                subnetName: subnet1
              - name: tenant
                subnetName: subnet1
            edpm-swift-2:
              hostName: edpm-swift-2
              networks:
              - defaultRoute: true
                fixedIP: 192.168.122.102
                name: ctlplane
                subnetName: subnet1
              - name: internalapi
                subnetName: subnet1
              - name: storage
                subnetName: subnet1
              - name: tenant
                subnetName: subnet1
          ...
          services:
          - repo-setup
          - bootstrap
          - download-cache
          - configure-network
          - validate-network
          - install-os
          - configure-os
          - ssh-known-hosts
          - run-os
          - reboot-os
          - install-certs
          - swift
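To run the playbooks for the node set, a deployment resource must reference it. A minimal sketch, assuming the node set name from the example above and a hypothetical deployment name:

```
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: edpm-swift-deployment # name is an assumption
  namespace: openstack
spec:
  nodeSets:
  - openstack-edpm-ipam
```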

7.4. Object Storage rings

The Object Storage service (swift) uses a data structure called the ring to distribute partition space across the cluster. This partition space is core to the data durability engine in the Object Storage service. With rings, the Object Storage service can quickly and easily synchronize each partition across the cluster.

Rings contain information about Object Storage partitions and how partitions are distributed among the different nodes and disks in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment. When any Object Storage component interacts with data, a quick lookup is performed locally in the ring to determine the possible partitions for each object.

The Object Storage service has three rings to store the following types of data:

  • Account information
  • Containers, to facilitate organizing objects under an account
  • Object replicas

7.5. Ring partition power

The ring partition power determines the partition to which a resource, such as an account, container, or object, is mapped. The partition is included in the path under which the resource is stored in a back-end file system. Therefore, changing the partition power requires relocating resources to new paths in the back-end file systems.

In a heavily populated cluster, a relocation process is time consuming. To avoid downtime, relocate resources while the cluster is still operating. You must do this without temporarily losing access to data or compromising the performance of processes, such as replication and auditing. For assistance with increasing ring partition power, contact Red Hat Support.

When you use separate nodes for the Object Storage service (swift), use a higher partition power value.

The Object Storage service distributes data across disks and nodes using modified hash rings. There are three rings by default: one for accounts, one for containers, and one for objects. Each ring uses a fixed parameter called partition power. This parameter sets the maximum number of partitions that can be created.

You can only change the partition power parameter for new containers and their objects, so you must set this value before initial deployment.

The default partition power value is 10. Refer to the following table to select an appropriate partition power if you use three replicas:

Table 7.1. Appropriate partition power values per number of available disks

  Partition Power    Maximum number of disks
  10                 ~ 35
  11                 ~ 75
  12                 ~ 150
  13                 ~ 250
  14                 ~ 500

Important

Setting an excessively high partition power value (for example, 14 for only 40 disks) negatively impacts replication times.
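The values in the table follow from a common sizing rule of thumb (an assumption here, not a formula stated in this guide): keep at least roughly 100 partitions per disk. A quick shell check for partition power 12 with 3 replicas:

```shell
# Rough sizing check: total partition replicas divided by a minimum of
# ~100 partitions per disk approximates the maximum usable disk count.
part_power=12
ring_replicas=3
min_parts_per_disk=100
echo $(( ((1 << part_power) * ring_replicas) / min_parts_per_disk ))
# prints 122, roughly in line with the ~150 ceiling in the table
```

The same arithmetic reproduces the other rows of the table to within rounding.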

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and change the value for partPower under the swiftRing parameter in the swift template:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack-control-plane
      namespace: openstack
    spec:
      ...
      swift:
        enabled: true
        template:
          swiftProxy:
            replicas: 2
          swiftRing:
            partPower: 12
            ringReplicas: 3
    ...
    • Replace the partPower value, 12 in this example, with the partition power that you want to set.

      Tip

      You can also configure an additional object server ring for new containers. This is useful if you want to add more disks to an Object Storage service deployment that initially uses a low partition power.

When you deploy the Shared File Systems service (manila), you can choose one or more supported back ends, such as native CephFS, CephFS-NFS, NetApp, and others.

For a complete list of supported back-end appliances and drivers, see the Manila section of the Red Hat Knowledge Article, Component, Plug-In, and Driver Support in Red Hat OpenStack Platform.

8.1. Prerequisites

  • You have the oc command line tool installed on your workstation.
  • You are logged on to a workstation that has access to the Red Hat OpenStack Services on OpenShift (RHOSO) control plane as a user with cluster-admin privileges.
  • You have planned networking for the Shared File Systems service. For more information, see Planning networking for the Shared File Systems service in Planning your deployment.
  • You have enabled the Shared File Systems service. For more information, see Enabling the Shared File Systems service.
  • For native CephFS and CephFS-NFS:

    • A CephFS file system must exist on the Red Hat Ceph Storage cluster.
    • A Ceph user must exist that has CephX capabilities (caps) to perform operations on the CephFS file system.

      For more information, see Integrating Red Hat Ceph Storage.

  • For CephFS-NFS only:

    • A ceph nfs service must exist in the Ceph Storage cluster.
    • You have created an isolated StorageNFS network for NFS exports.
    • You have created a corresponding StorageNFS shared provider network in the Networking service (neutron). The StorageNFS shared provider network maps to the isolated StorageNFS network of the data center.
    • The StorageNFS network isolates NFS traffic while allowing RHOSO users to attach their Compute instances to access shares.
    • Do not select a custom port to expose NFS. Only the default NFS port of 2049 is supported.
    • You must enable the Red Hat Ceph Storage ingress service and set the ingress-mode to haproxy-protocol. Otherwise, you cannot use IP-based access rules with the Shared File Systems service.
    • For production environments, do not provide access to 0.0.0.0/0 on shares to mount them on client machines.

8.2. Enabling the Shared File Systems service

You can enable the Shared File Systems service (manila) to provision remote, shareable file systems in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment. These file systems are known as shares, and they allow projects in the cloud to share POSIX-compliant storage. Shares can be mounted simultaneously, with read/write access, by multiple Compute instances, bare-metal nodes, and containers or pods of containers.

When you enable the Shared File Systems service, you can configure the service with the following back ends:

  • Red Hat Ceph Storage CephFS
  • Red Hat Ceph Storage CephFS-NFS
  • NFS or CIFS through third party vendor storage systems

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the spec section to enable the Shared File Systems service:

    spec:
      ...
      manila:
        enabled: true
        apiOverride:
          route: {}
        template:
          databaseInstance: openstack
          secret: osp-secret
          manilaAPI:
            replicas: 3
            override:
              service:
                internal:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi
                      metallb.universe.tf/allow-shared-ip: internalapi
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                  spec:
                    type: LoadBalancer
          manilaScheduler:
            replicas: 3
          manilaShares:
            share1:
              networkAttachments:
              - storage
              replicas: 0 # backend needs to be configured
    Note

    You must configure a back end for the Shared File Systems service. If you do not configure a back end for the Shared File Systems service, then the service is deployed but not activated (replicas: 0).

  2. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  3. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

You can configure the Shared File Systems service (manila) with native CephFS as the storage back end.

Security considerations

You can expose a native CephFS back end to trusted users, but take the following security measures:

  • Configure the storage network as a provider network.
  • Apply role-based access control (RBAC) policies to secure the storage provider network.
  • Create a private share type.
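Creating a private share type can be sketched as follows; the type name is a placeholder, and the false argument matches the driver_handles_share_servers=False setting used by this back end:

```
# Hypothetical sketch: a private share type for the native CephFS back end.
# "cephfstype" is a placeholder name.
openstack share type create cephfstype false --public false
```

Because the share type is private, you must grant access to it for each project that is allowed to use the native CephFS back end.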

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the extraMounts parameter in the spec section to present the Ceph configuration files:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
      extraMounts:
        - name: v1
          region: r1
          extraVol:
            - propagation:
              - ManilaShare
              extraVolType: Ceph
              volumes:
              - name: ceph
                projected:
                  sources:
                  - secret:
                      name: <ceph-conf-files>
              mounts:
              - name: ceph
                mountPath: "/etc/ceph"
                readOnly: true
  2. Add the following parameters to the manila template to configure the native CephFS back end:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
        ...
      manila:
        enabled: true
        template:
          manilaAPI:
            replicas: 3
            customServiceConfig: |
              [DEFAULT]
              debug = true
              enabled_share_protocols=cephfs
          manilaScheduler:
            replicas: 3
          manilaShares:
            cephfsnative:
              replicas: 1
              networkAttachments:
              - storage
              customServiceConfig: |
                [DEFAULT]
                enabled_share_backends=cephfs
                [cephfs]
                driver_handles_share_servers=False
                share_backend_name=cephfs
                share_driver=manila.share.drivers.cephfs.driver.CephFSDriver
                cephfs_conf_path=/etc/ceph/ceph.conf
                cephfs_auth_id=openstack
                cephfs_cluster_name=ceph
                cephfs_volume_mode=0755
                cephfs_protocol_helper_type=CEPHFS
    ...
  3. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  4. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

You can configure the Shared File Systems service (manila) with CephFS-NFS as the storage back end.

Limitations
  • Use NFSv4.1 or later for Linux clients. NFSv3 is available for Microsoft Windows clients, but recovery is not expected for NFSv3 clients when a CephFS-NFS service fails over. Simultaneous access from Windows and Linux clients is not supported.
  • For active/active (A/A) CephFS-NFS deployments, reserve at least one standby node in your Ceph Storage cluster for successful failover. If a failover process cannot complete due to insufficient standby capacity, clients do not automatically transition to other active servers. For environments that require automated CephFS-NFS recovery without additional standby capacity, deploy in active/passive (A/P) mode instead.

Prerequisites

  • The isolated storage network is configured on the share manager pod on OpenShift so that the Shared File Systems service can communicate with the Red Hat Ceph Storage cluster.
  • Use an isolated NFS network for NFS traffic. This network does not need to be available to the share manager pod for the Shared File Systems service on OpenShift, but it must be available to Compute instances owned by end users.
  • You have planned networking for storage to ensure connectivity between the storage back end, the control plane, and the Compute nodes on the data plane. For more information, see Storage networks in Planning your deployment and Preparing networks for Red Hat OpenStack Services on OpenShift in Deploying Red Hat OpenStack Services on OpenShift.

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the manila template:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
        ...
      manila:
        enabled: true
        template:
          manilaAPI:
            replicas: 3
            customServiceConfig: |
              [DEFAULT]
              debug = true
              enabled_share_protocols=nfs
          manilaScheduler:
            replicas: 3
          manilaShares:
            share1:
              customServiceConfig: |
                [DEFAULT]
                enabled_share_backends=cephfsnfs
                [cephfsnfs]
                driver_handles_share_servers=False
                share_backend_name=cephfs
                share_driver=manila.share.drivers.cephfs.driver.CephFSDriver
                cephfs_auth_id=openstack
                cephfs_cluster_name=ceph
                cephfs_nfs_cluster_id=cephfs
                cephfs_protocol_helper_type=NFS
              networkAttachments:
              - storage
    ...
  2. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  3. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

8.5. Configuring alternative back ends

To configure the Shared File Systems service (manila) with an alternative back end, for example, NetApp or Pure Storage, complete the following high level tasks:

  1. Create the server connection secret.
  2. Configure the OpenStackControlPlane CR to use the alternative storage system as the back end for the Shared File Systems service.

8.5.1. Prerequisites

  • You have prepared the alternative storage system for consumption by Red Hat OpenStack Services on OpenShift (RHOSO).
  • You have network connectivity between the Red Hat OpenShift cluster, the Compute nodes, and the alternative storage system.

8.5.2. Creating the server connection secret

Create a server connection secret for an alternative back end to prevent placing server connection information directly in the OpenStackControlPlane CR.

Procedure

  1. Create a configuration file that contains the server connection information for your alternative back end. In this example, you are creating the secret for a NetApp back end.

    The following is an example of the contents of a configuration file:

    [netapp]
    netapp_server_hostname = <netapp_ip>
    netapp_login = <netapp_user>
    netapp_password = <netapp_password>
    netapp_vserver = <netappvserver>
    • Replace <netapp_ip> with the IP address of the server.
    • Replace <netapp_user> with the login user name.
    • Replace <netapp_password> with the login password.
    • Replace <netappvserver> with the vserver name. You do not need this option if you configure the driver in driver_handles_share_servers=True mode.
  2. Save the configuration file.
  3. Create the secret based on the configuration file:

    $ oc create secret generic <secret_name> --from-file=<configuration_file_name>

    • Replace <secret_name> with the name you want to assign to the secret.
    • Replace <configuration_file_name> with the name of the configuration file you created.
  4. Delete the configuration file.

You can configure the Shared File Systems service (manila) with an alternative storage back end, for example, a NetApp back end.

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the manila template:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
      ...
      manila:
        enabled: true
        template:
          manilaAPI:
            replicas: 3
            customServiceConfig: |
              [DEFAULT]
              debug = true
              enabled_share_protocols=cifs
          manilaScheduler:
            replicas: 3
          manilaShares:
            share1:
              networkAttachments:
              - storage
              customServiceConfigSecrets:
              - manila_netapp_secret
              customServiceConfig: |
                [DEFAULT]
                debug = true
                enabled_share_backends=netapp
                [netapp]
                driver_handles_share_servers=False
                share_backend_name=netapp
                share_driver=manila.share.drivers.netapp.common.NetAppDriver
                netapp_storage_family=ontap_cluster
    ...
  2. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  3. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

When you configure an alternative back end for the Shared File Systems service (manila), you might need to use additional configuration files. You can use the extraMounts parameter in your OpenStackControlPlane CR file to present these configuration files as OpenShift ConfigMap or Secret objects in the relevant share manager pod.

Example:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
...
  extraMounts:
    - name: v1
      region: r1
      extraVol:
        - propagation:
          - sharepod1
          extraVolType: Undefined
          volumes:
          - name: backendconfig
            projected:
              sources:
              - secret:
                  name: manila-sharepod1-secrets
          mounts:
          - name: backendconfig
            mountPath: /etc/manila/drivers
            readOnly: true
...

8.5.5. Custom storage driver container images

When you configure an alternative back end for the Shared File Systems service (manila), you might need to use a custom manilaShares container image from the vendor on the Red Hat Ecosystem Catalog.

You can add the path to the container image to your OpenStackVersion CR file with the customContainerImages parameter.

For more information, see Deploying partner container images in Integrating partner content.
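A sketch of the OpenStackVersion CR with the customContainerImages parameter, reusing the Pure Storage image path shown later in this chapter; the CR name is an assumption:

```
apiVersion: core.openstack.org/v1beta1
kind: OpenStackVersion
metadata:
  name: openstack # assumed to match the OpenStackControlPlane name
spec:
  customContainerImages:
    manilaShareImages:
      pure: registry.connect.redhat.com/purestorage/openstack-manila-share-pure:rhoso18
```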

You can deploy multiple back ends for the Shared File Systems service (manila), for example, a CephFS-NFS back end, a native CephFS back end, and a third-party back end. Add only one back end per pod.

Prerequisites

  • When you use a back-end driver from a storage vendor that requires external software components, you must override the standard container image for the Shared File Systems service during deployment. You can find custom container images, for example, the Dell EMC Unity container image for a Dell EMC Unity storage system, at Red Hat Ecosystem Catalog.
  • You have planned networking for storage to ensure connectivity between the storage back end, the control plane, and the Compute nodes on the data plane. For more information, see Storage networks in Planning your deployment and Preparing networks for Red Hat OpenStack Services on OpenShift in Deploying Red Hat OpenStack Services on OpenShift.

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the manila template to configure the back ends. In this example, there is a CephFS-NFS back end, a native CephFS back end, and a Pure Storage back end:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
        ...
      manila:
        enabled: true
        template:
          manilaAPI:
            replicas: 3
            customServiceConfig: |
              [DEFAULT]
              debug = true
              enabled_share_protocols=nfs,cephfs,cifs
          manilaScheduler:
            replicas: 3
        ...
  2. Add the configuration for each back end you want to use:

    • Add the configuration for the CephFS-NFS back end:

          ...
              customServiceConfig: |
              ...
              manilaShares:
                cephfsnfs:
                  networkAttachments:
                  - storage
                  customServiceConfig: |
                      [DEFAULT]
                      enabled_share_backends=cephfsnfs
                      [cephfsnfs]
                      driver_handles_share_servers=False
                      share_backend_name=cephfs
                      share_driver=manila.share.drivers.cephfs.driver.CephFSDriver
                      cephfs_auth_id=openstack
                      cephfs_cluster_name=ceph
                      cephfs_nfs_cluster_id=cephfs
                      cephfs_protocol_helper_type=NFS
                  replicas: 1
          ...
    • Add the configuration for the native CephFS back end:

          ...
              customServiceConfig: |
              ...
              manilaShares:
                cephfsnfs:
                ...
                cephfs:
                  networkAttachments:
                  - storage
                  customServiceConfig: |
                    [DEFAULT]
                    enabled_share_backends=cephfs
                    [cephfs]
                    driver_handles_share_servers=False
                    share_backend_name=cephfs
                    share_driver=manila.share.drivers.cephfs.driver.CephFSDriver
                    cephfs_conf_path=/etc/ceph/ceph.conf
                    cephfs_auth_id=openstack
                    cephfs_protocol_helper_type=CEPHFS
                  replicas: 1
          ...
    • Add the configuration for the Pure Storage back end:

          ...
              customContainerImages:
                manilaShareImages:
                  pure: registry.connect.redhat.com/purestorage/openstack-manila-share-pure:rhoso18
              manilaShares:
                cephfsnfs:
                ...
                cephfs:
                ...
                pure:
                  networkAttachments:
                  - storage
                  customServiceConfigSecrets:
                  - manila-pure-secret
                  customServiceConfig: |
                    [DEFAULT]
                    debug = true
                    enabled_share_backends=pure
                    [pure]
                    driver_handles_share_servers=False
                    share_backend_name=pure
                    share_driver=manila.share.drivers.purestorage.flashblade.FlashBladeShareDriver
          ...
  3. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  4. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

8.7. Verifying the Shared File Systems service deployment

After a deployment, or when you troubleshoot issues, verify that the services for the Shared File Systems service (manila) are running and report an up state.

Verify that the manila pods are running. The number of pods depends on the number of replicas that you configured for the components of the Shared File Systems service.

When you have verified that the pods are running, use the Shared File Systems service API to check the status of the services.

Procedure

  1. List the manila pods to verify that they are running:

    $ oc -n openstack get pod -l service=manila

    Example output:

    NAME                             READY   STATUS      RESTARTS          AGE
    manila-api-0                     2/2     Running     0                 43h
    manila-api-1                     2/2     Running     0                 43h
    manila-api-2                     2/2     Running     0                 43h
    manila-db-purge-28696321-tkl9g   0/1     Completed   0                 41h
    manila-db-purge-28697761-zxxzc   0/1     Completed   0                 17h
    manila-scheduler-0               2/2     Running     0                 43h
    manila-scheduler-1               2/2     Running     0                 43h
    manila-scheduler-2               2/2     Running     0                 43h
    manila-share-share1-0            2/2     Running     0                 43h
  2. Access the remote shell for the openstackclient pod from your workstation:

    $ oc rsh -n openstack openstackclient
  3. Run the openstack share service list command:

    $ openstack share service list

    Example output:

     +----+------------------+---------------------+------+---------+-------+----------------------------+
     | ID | Binary           | Host                | Zone | Status  | State | Updated At                 |
     +----+------------------+---------------------+------+---------+-------+----------------------------+
     | 1  | manila-scheduler | hostgroup           | nova | enabled | up    | 2024-07-25T17:40:27.323342 |
     | 4  | manila-share     | hostgroup@cephfsnfs | nova | enabled | up    | 2024-07-25T17:40:49.115386 |
     +----+------------------+---------------------+------+---------+-------+----------------------------+

  4. Verify that the State entry of every service is up and that the Status entry is enabled. If not, examine the relevant log files.
  5. Exit the openstackclient pod:

    $ exit
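
The pod check in step 1 can be scripted. The following is a minimal sketch that parses output in the format shown in the example and reports pods whose STATUS is neither Running nor Completed. The sample data is illustrative, including the hypothetical CrashLoopBackOff line; in a live deployment you would substitute the real command output.

```shell
# Report manila pods whose STATUS is neither Running nor Completed.
# In a live deployment, replace the sample with the output of:
#   oc -n openstack get pod -l service=manila --no-headers
# The sample below mirrors the example output in this section;
# the CrashLoopBackOff line is added for illustration only.
sample='manila-api-0 2/2 Running 0 43h
manila-db-purge-28696321-tkl9g 0/1 Completed 0 41h
manila-share-share1-0 2/2 CrashLoopBackOff 5 43h'

# Column 3 is STATUS; print the NAME of any pod in an unexpected state.
bad=$(printf '%s\n' "$sample" | awk '$3 != "Running" && $3 != "Completed" {print $1}')
echo "problem pods: ${bad:-none}"
```

Completed pods are expected because the database purge runs as periodic jobs, so the sketch treats Completed as healthy.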

8.8. Verifying the back ends for the Shared File Systems service

Use the openstack share service list command to verify that the storage back ends for the Shared File Systems service (manila) deployed successfully. A ping test is not a reliable way to verify your deployment: when you run a health check against multiple back ends, the ping returns a response even if one of the back ends is unresponsive.

Procedure

  1. Access the remote shell for the openstackclient pod from your workstation:

    $ oc rsh -n openstack openstackclient
  2. Confirm the list of Shared File Systems service back ends:

    $ openstack share service list

    The status of each successfully deployed back end shows as enabled and the state shows as up.

  3. Exit the openstackclient pod:

    $ exit
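
The state check can also be automated. The following is a minimal sketch that parses service list output reduced to the Binary and State columns and counts services that are not up. The sample lines are illustrative; in a live deployment you would substitute the output of the real command, and the assumption is that your client supports the standard -c and -f value formatting options.

```shell
# Count Shared File Systems services that do not report an "up" state.
# In a live deployment, replace the sample with the output of:
#   openstack share service list -c Binary -c State -f value
# The "down" line is added for illustration only.
sample='manila-scheduler up
manila-share down'

# Column 2 is the State; count every line where it is not "up".
down=$(printf '%s\n' "$sample" | awk '$2 != "up"' | wc -l | tr -d ' ')
echo "services not up: $down"
```

A nonzero count indicates a back end to investigate in the manila share pod logs.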

8.9. Creating availability zones for back ends

You can create availability zones (AZs) for Shared File Systems service back ends to group cloud infrastructure and services logically for users. Map the AZs to failure domains and compute resources for high availability, fault tolerance, and resource scheduling. For example, you can create an AZ of Compute nodes that have specific hardware that users can specify when they create an instance that requires that hardware.

After deployment, use the availability_zones share type extra specification to limit share types to one or more AZs. Users can create a share directly in an AZ, as long as the share type does not restrict them.

The example procedure deploys two back ends where CephFS is zone 1 and NetApp is zone 2.

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the manila template:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
        ...
      manila:
        enabled: true
        template:
          manilaShares:
            cephfs:
              customServiceConfig: |
                [cephfs]
                backend_availability_zone = zone_1
              ...
            netapp:
              customServiceConfig: |
                [netapp]
                backend_availability_zone = zone_2
              ...
  2. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  3. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.
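
After the update completes, you can restrict a share type to an AZ with the availability_zones extra specification, and users can target an AZ when they create a share. The following commands are a sketch; the share type name zone1type and the share name share1 are hypothetical:

    $ openstack share type create zone1type false
    $ openstack share type set zone1type --extra-specs availability_zones="zone_1"
    $ openstack share create NFS 1 --share-type zone1type --availability-zone zone_1 --name share1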

8.10. Configuring the NAS protocols for shares

You can use the Shared File Systems service (manila) to export shares over the NFS, CephFS, or CIFS network attached storage (NAS) protocols. By default, the Shared File Systems service enables both NFS and CIFS, even when the back ends in your deployment do not support both protocols.

You can change the enabled_share_protocols parameter to list only the protocols that you want to allow in your cloud. For example, if the back ends in your deployment support both NFS and CIFS, you can change the default value to enable only one protocol. Any NAS protocol that you assign must be supported by the back ends in your Shared File Systems service deployment.

Not all storage back-end drivers support the CIFS protocol. For information about which certified storage systems support CIFS, see the Red Hat Ecosystem Catalog.

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the manila template. In this example, you enable the NFS protocol:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    spec:
        ...
          manila:
            enabled: true
            template:
              manilaAPI:
                customServiceConfig: |
                  [DEFAULT]
                  enabled_share_protocols = NFS
              ...
  2. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  3. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.
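
After the update completes, users can export shares only over the protocols that you enabled. For example, with only NFS enabled, a user creates an NFS share as usual; the share name data-share is hypothetical:

    $ openstack share create NFS 10 --name data-share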

8.11. Viewing back-end capacity and capabilities

The scheduler component of the Shared File Systems service (manila) makes intelligent placement decisions based on several factors such as capacity, provisioning configuration, placement hints, and the capabilities that the back-end storage system driver detects and exposes. You can use share types and extra specifications to modify placement decisions.

Procedure

  1. Access the remote shell for the openstackclient pod from your workstation:

    $ oc rsh -n openstack openstackclient
  2. Run the following command to view the available back-end storage capacity:

    $ openstack share pool list --detail
  3. Exit the openstackclient pod:

    $ exit
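
To influence placement based on the capabilities that the pool listing reports, you can pin a share type to a specific back end by using the share_backend_name extra specification. The following commands are a sketch; the share type name cephfs-type is hypothetical, and the back-end name cephfs matches the share_backend_name value configured earlier in this chapter:

    $ openstack share type create cephfs-type false
    $ openstack share type set cephfs-type --extra-specs share_backend_name=cephfs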

8.12. Configuring automatic database cleanup

The Shared File Systems service (manila) automatically purges database entries that have been marked for deletion for a set number of days. By default, records are purged after they have been marked for deletion for 30 days. You can configure a different record age and a different schedule for the purge jobs.

Procedure

  1. Open your openstack_control_plane.yaml file to edit the OpenStackControlPlane CR.
  2. Add the dbPurge parameter to the manila template to configure database cleanup.

    The following is an example of using the dbPurge parameter to configure the Shared File Systems service:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      manila:
        template:
          dbPurge:
            age: 20
            schedule: 1 0 * * 0
    • age: The number of days that a record must be marked for deletion before it is purged. The default value is 30 days. The minimum value is 1 day.
    • schedule: When to run the purge job, in crontab format. The default value is 1 0 * * *, which runs daily at 00:01. The example value 1 0 * * 0 runs weekly at 00:01 on Sunday.
  3. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml
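
The purge runs as a Kubernetes CronJob, as suggested by the manila-db-purge pods shown earlier in this chapter. As a quick check, you can confirm the schedule that the operator applied; the CronJob name is inferred from those pod names and might differ in your deployment:

    $ oc get cronjob -n openstack | grep manila-db-purge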

Legal Notice

Copyright © Red Hat.
Except as otherwise noted below, the text of and illustrations in this documentation are licensed by Red Hat under the Creative Commons Attribution-Share Alike 3.0 Unported license. If you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, the Red Hat logo, JBoss, Hibernate, and RHCE are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS is a trademark or registered trademark of Hewlett Packard Enterprise Development LP or its subsidiaries in the United States and other countries.
The OpenStack® Word Mark and OpenStack logo are trademarks or registered trademarks of the Linux Foundation, used under license.
All other trademarks are the property of their respective owners.