Chapter 4. Configuring the Block Storage service (cinder)


You can use the Block Storage service (cinder) to access remote block storage devices through volumes for persistent storage. The service has three mandatory components: api, scheduler, and volume. It also has one optional component: backup.

Note

As a security hardening measure, the Block Storage services run as the cinder user.

All Block Storage services use the cinder section of the OpenStackControlPlane custom resource (CR) for their configuration:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:

Global configuration options are applied directly under the cinder and template sections. Service-specific configuration options appear under their associated sections. The following example shows all of the sections where Block Storage service configuration is applied and the type of configuration that is applied in each section:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    <global-options>
    template:
      <global-options>
      cinderAPI:
        <cinder-api-options>
      cinderScheduler:
        <cinder-scheduler-options>
      cinderVolumes:
        <name1>: <cinder-volume-options>
        <name2>: <cinder-volume-options>
      cinderBackup:
        <cinder-backup-options>

The following terms are important to understanding the Block Storage service (cinder):

  • Storage back end: A physical storage system where volume data is stored.
  • Cinder driver: The part of the Block Storage service that enables communication with the storage back end. It is configured with the volume_driver and backup_driver options.
  • Cinder back end: A logical representation of the grouping of a cinder driver with its configuration. This grouping is used to manage and address the volumes present in a specific storage back end. The name of this logical construct is configured with the volume_backend_name option.
  • Storage pool: A logical grouping of volumes in a given storage back end.
  • Cinder pool: A representation in the Block Storage service of a storage pool.
  • Volume host: The way the Block Storage service addresses volumes. There are two representations: short (<hostname>@<backend-name>) and full (<hostname>@<backend-name>#<pool-name>).
  • Quota: Limits defined per project to constrain the use of Block Storage specific resources.
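The short and full volume host formats described above can be illustrated with a small helper. This is a hypothetical sketch for illustration only, not part of the Block Storage service API; the example hostname is an assumption:

```python
def parse_volume_host(host):
    """Split a cinder volume host string into (hostname, backend, pool).

    Short form: <hostname>@<backend-name>
    Full form:  <hostname>@<backend-name>#<pool-name>
    """
    hostname, _, rest = host.partition("@")
    backend, _, pool = rest.partition("#")
    return hostname, backend, pool or None

print(parse_volume_host("cinder-volume-ceph-0@ceph#ceph"))
# ('cinder-volume-ceph-0', 'ceph', 'ceph')
```

The full form appears whenever a back end reports storage pools, because the scheduler places volumes in a specific pool rather than just a back end.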

The following functionality enhancements have been integrated into the Block Storage service:

  • Ease of deployment for multiple volume back ends.
  • Back end deployment does not affect running volume back ends.
  • Back end addition and removal does not affect running back ends.
  • Back end configuration changes do not affect other running back ends.
  • Each back end can use its own vendor-specific container image. It is no longer necessary to build a custom image that holds the dependencies of multiple drivers.
  • Pacemaker has been replaced by Red Hat OpenShift Container Platform (RHOCP) functionality.
  • Improved methods for troubleshooting the service code.

4.3. Configuring transport protocols

You can use iSCSI, Fibre Channel, NVMe-TCP, NFS, and Red Hat Ceph Storage RBD transport protocols with the Block Storage service (cinder). Control plane services that use volumes might require iscsid and multipathd modules on RHOCP cluster nodes, configured by using a MachineConfig CR.

Important

Using a MachineConfig CR to change the configuration of a node causes the node to reboot. Consult with your RHOCP administrator before applying a MachineConfig CR to ensure the integrity of RHOCP workloads.

The procedures in this section provide a general configuration of these protocols and are not vendor-specific.

Note

The Block Storage volume and backup services are automatically started on data plane nodes.

4.3.1. Configuring the iSCSI protocol

Connecting to iSCSI volumes from the RHOCP nodes requires the iSCSI initiator service. There must be a single instance of the iscsid service module, shared by normal RHOCP usage, OpenShift CSI plugin usage, and the RHOSO services. Apply a MachineConfig CR to the applicable nodes to configure them to use the iSCSI protocol.

Note

If the iscsid service module is already running, this procedure is not required.

Procedure

  1. Create a MachineConfig CR to configure the nodes for the iscsid module.

    The following example starts the iscsid service with a default configuration on all RHOCP worker nodes:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
        service: cinder
      name: 99-worker-cinder-enable-iscsid
    spec:
      config:
        ignition:
          version: 3.2.0
        systemd:
          units:
          - enabled: true
            name: iscsid.service
  2. Save the file.
  3. Apply the MachineConfig CR file.

    $ oc apply -f <machine_config_file> -n openstack
    • Replace <machine_config_file> with the name of your MachineConfig CR file.

4.3.2. Configuring the Fibre Channel protocol

There is no additional node configuration required to use the Fibre Channel protocol to connect to volumes. However, all nodes that use Fibre Channel must have a Host Bus Adapter (HBA) card. Unless all worker nodes in your RHOCP deployment have an HBA card, you must use a nodeSelector in your control plane configuration to select the nodes that run the volume and backup services, as well as the Image service instances that use the Block Storage service for their storage back end.
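The following is a minimal sketch of such a nodeSelector, assuming the HBA-equipped nodes carry a hypothetical fc_card: "true" label; the label name and the back end name are assumptions, not required values:

```yaml
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      cinderVolumes:
        fibre-channel:
          nodeSelector:
            fc_card: "true"  # hypothetical label applied to HBA-equipped nodes
      cinderBackup:
        nodeSelector:
          fc_card: "true"  # backup service needs HBA access too
```

Apply the same selector to any Image service instance that uses a Fibre Channel Block Storage back end.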

4.3.3. Configuring the NVMe-TCP protocol

Connecting to NVMe-TCP volumes from the RHOCP nodes requires the nvme kernel modules.

Procedure

  1. Create a MachineConfig CR to configure the nodes for the nvme kernel modules.

    The following example loads the nvme kernel modules with a default configuration on all RHOCP worker nodes:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
        service: cinder
      name: 99-worker-cinder-load-nvme-fabrics
    spec:
      config:
        ignition:
          version: 3.2.0
        storage:
          files:
            - path: /etc/modules-load.d/nvme_fabrics.conf
              overwrite: false
              mode: 420
              user:
                name: root
              group:
                name: root
              contents:
                source: data:,nvme-fabrics%0Anvme-tcp
  2. Save the file.
  3. Apply the MachineConfig CR file.

    $ oc apply -f <machine_config_file> -n openstack
    • Replace <machine_config_file> with the name of your MachineConfig CR file.
  4. After the nodes have rebooted, verify that the nvme-fabrics modules are loaded and that the host supports ANA:

    $ cat /sys/module/nvme_core/parameters/multipath
    Note

    Even though ANA does not use the Linux Multipathing Device Mapper, multipathd must be running for the Compute nodes to be able to use multipathing when connecting volumes to instances.
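The source: field in the MachineConfig above embeds the file contents as a percent-encoded data URL. As a quick sanity check, you can decode it to confirm which modules it lists:

```python
from urllib.parse import unquote

# The MachineConfig embeds /etc/modules-load.d/nvme_fabrics.conf
# as a percent-encoded data URL; %0A is a newline.
source = "data:,nvme-fabrics%0Anvme-tcp"
payload = unquote(source.split(",", 1)[1])
print(payload)
# nvme-fabrics
# nvme-tcp
```

The decoded payload is exactly the two module names that modules-load.d reads at boot, one per line.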

4.4. LVM device management

When you use Logical Volume Management (LVM) with Block Storage service (cinder) back ends, Red Hat OpenStack Services on OpenShift (RHOSO) automatically enables device filtering through the RHEL system.devices file. LVM device filtering prevents Block Storage service volumes from being scanned by LVM on data plane nodes.

For more information about the RHEL system.devices file, see The LVM devices file in the RHEL documentation for Configuring and managing logical volumes.

4.5. Configuring multipathing

Configure multipathing in Red Hat OpenStack Services on OpenShift (RHOSO) to create redundancy or improve performance. Control plane nodes require a MachineConfig CR. Data plane nodes have a default multipath configuration, but you must add vendor-specific parameters for production environments.

4.5.1. Configuring multipathing on control plane nodes

You can configure multipathing on Red Hat OpenShift Container Platform (RHOCP) control plane nodes by creating a MachineConfig custom resource (CR) that creates a multipath configuration file and starts the service.

In Red Hat OpenStack Services on OpenShift (RHOSO) deployments, the use_multipath_for_image_xfer configuration option is enabled by default, which affects the control plane only and not the data plane. This setting enables the Block Storage service (cinder) to use multipath, when it is available, for attaching volumes when creating volumes from images and during Block Storage backup and restore procedures.

The example in this procedure implements a minimal multipath configuration file, which configures the default multipath parameters. However, your production deployment might also require vendor-specific multipath parameters. In this case, you must consult with the appropriate systems administrators to obtain the values required for your deployment.

If you have a complex multipath configuration, you can use the Butane command-line utility to create a multipath configuration file for you. For more information, see Creating machine configs with Butane in RHOCP Installation configuration.
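As an illustration, a Butane config equivalent to the MachineConfig used in this procedure might look like the following sketch. The version field and the local file reference are assumptions that depend on your RHOCP release and working directory:

```yaml
# multipathd.bu -- Butane sketch; adjust the version to your RHOCP release.
variant: openshift
version: 4.14.0
metadata:
  name: 99-worker-cinder-enable-multipathd
  labels:
    machineconfiguration.openshift.io/role: worker
storage:
  files:
    - path: /etc/multipath.conf
      mode: 0600
      contents:
        local: multipath.conf   # read from --files-dir at render time
systemd:
  units:
    - name: multipathd.service
      enabled: true
```

Rendering it with, for example, butane --files-dir . multipathd.bu -o 99-worker-cinder-enable-multipathd.yaml produces a MachineConfig with the percent-encoded contents generated for you.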

Procedure

  1. Create a MachineConfig CR to create a multipath configuration file and to start the multipathd module on all control plane nodes.

    The following example creates a MachineConfig CR named 99-worker-cinder-enable-multipathd that implements a multipath configuration file named multipath.conf:

    Important

    When adding vendor-specific multipath parameters to the contents: of this file, ensure that you do not change the specified values of the following default multipath parameters: user_friendly_names, recheck_wwid, skip_kpartx, and find_multipaths.

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
        service: cinder
      name: 99-worker-cinder-enable-multipathd
    spec:
      config:
        ignition:
          version: 3.2.0
        storage:
          files:
            - path: /etc/multipath.conf
              overwrite: false
              mode: 384
              user:
                name: root
              group:
                name: root
              contents:
                source: data:,defaults%20%7B%0A%20%20user_friendly_names%20no%0A%20%20recheck_wwid%20yes%0A%20%20skip_kpartx%20yes%0A%20%20find_multipaths%20yes%0A%7D%0A%0Ablacklist%20%7B%0A%7D
        systemd:
          units:
          - enabled: true
            name: multipathd.service
    • The contents: data represents the following literal multipath.conf file contents:

      defaults {
        user_friendly_names no
        recheck_wwid yes
        skip_kpartx yes
        find_multipaths yes
      }
      
      blacklist {
      }
  2. Save the MachineConfig CR file, for example, 99-worker-cinder-enable-multipathd.yaml.
  3. Apply the MachineConfig CR file.

    $ oc apply -f 99-worker-cinder-enable-multipathd.yaml -n openstack
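When you add vendor-specific parameters, you must re-encode the multipath.conf contents into the data URL form used by the contents: field. A sketch of how the encoded string in this example is produced, which you can adapt for your own file:

```python
from urllib.parse import quote

# Literal multipath.conf contents, as shown in the procedure above.
multipath_conf = (
    "defaults {\n"
    "  user_friendly_names no\n"
    "  recheck_wwid yes\n"
    "  skip_kpartx yes\n"
    "  find_multipaths yes\n"
    "}\n"
    "\n"
    "blacklist {\n"
    "}"
)

# Percent-encode spaces, newlines, and braces for the data URL.
source = "data:," + quote(multipath_conf)
print(source)
```

The printed value matches the source: line of the MachineConfig in this procedure.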

4.5.2. Configuring multipathing on data plane nodes

Default multipath parameters are configured on all data plane nodes. To configure vendor-specific multipath parameters, consult with your systems administrators to obtain the required values for your custom multipath configuration file.

Important

Ensure that you do not add the following default multipath parameters and overwrite their values: user_friendly_names, recheck_wwid, skip_kpartx, and find_multipaths.

Modify the OpenStackDataPlaneNodeSet custom resource (CR) to include your vendor-specific multipath parameters, and then deploy the changes by using an OpenStackDataPlaneDeployment CR.

Prerequisites

  • You have created your custom multipath configuration file that contains only the vendor-specific multipath parameters and your deployment-specific values.
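For illustration, such a file might contain only a vendor-specific devices section. The vendor and product strings below are placeholders, not values for a real array; obtain the actual values from your storage vendor:

```text
# custom_multipath.conf -- vendor-specific parameters only (placeholder values)
devices {
  device {
    vendor "VENDOR"
    product "PRODUCT"
    path_grouping_policy group_by_prio
    no_path_retry 30
  }
}
```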

Procedure

  1. Create a secret to store your custom multipath configuration file:

    $ oc create secret generic <secret_name> \
    --from-file=<configuration_file_name>
    • Replace <secret_name> with the name that you want to assign to the secret, for example, custom-multipath-file.
    • Replace <configuration_file_name> with the name of the custom multipath configuration file that you created, for example, custom_multipath.conf.
  2. Open the OpenStackDataPlaneNodeSet CR file for the node set that you want to update, for example, openstack_data_plane.yaml.
  3. Add the edpm_multipathd_custom_config_file parameter to the OpenStackDataPlaneNodeSet CR file:

    spec:
        ...
        nodeTemplate:
            ...
            ansible:
              ansibleVars:
                edpm_multipathd_custom_config_file: <configuration_file_name>
            ...
    • Replace <configuration_file_name> with the name of your custom multipath configuration file, for example, custom_multipath.conf.

      Important

      The edpm_multipathd_custom_config_file parameter must be set to the name of your custom multipath configuration file. If this parameter is not defined or is empty, the custom multipath configuration file is not copied to the data plane nodes.

  4. Add the extraMounts parameter to the OpenStackDataPlaneNodeSet CR file to mount your custom multipath configuration file:

    spec:
        ...
        nodeTemplate:
            ...
            ansible:
              ...
            extraMounts:
            - extraVolType: <optional_volume_type_description>
              volumes:
              - name: <mounted_volume_name>
                secret:
                  secretName: <secret_name>
              mounts:
              - name: <mounted_volume_name>
                mountPath: "/runner/multipath"
                readOnly: true
    • Optional: Replace <optional_volume_type_description> with a description of the type of the mounted volume, for example, multipath-config-file.
    • Replace <mounted_volume_name> with the name of the mounted volume, for example, custom-multipath.

      Note

      Do not change the value of the mountPath: parameter from "/runner/multipath".

  5. Save the OpenStackDataPlaneNodeSet CR file.
  6. Apply the updated OpenStackDataPlaneNodeSet CR configuration:

    $ oc apply -f openstack_data_plane.yaml
  7. Verify that the data plane resource has been updated by confirming that the status is SetupReady:

    $ oc wait openstackdataplanenodeset openstack-data-plane --for condition=SetupReady --timeout=10m

    When the status is SetupReady, the command returns a condition met message; otherwise, it returns a timeout error.

    For information about the data plane conditions and states, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.

  8. Create a file on your workstation to define the OpenStackDataPlaneDeployment CR:

    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: <node_set_deployment_name>
    • Replace <node_set_deployment_name> with the name of the OpenStackDataPlaneDeployment CR. The name must be unique, must consist of lower case alphanumeric characters, - (hyphen) or . (period), and must start and end with an alphanumeric character, for example, openstack-data-plane-deploy.
  9. Add the name of the OpenStackDataPlaneNodeSet CR that you modified to the nodeSets list:

    spec:
      nodeSets:
        - <node_set_name>
    • Replace <node_set_name> with the name of the OpenStackDataPlaneNodeSet CR that you modified, for example, openstack-data-plane.
  10. Save the OpenStackDataPlaneDeployment CR deployment file, for example, openstack_data_plane_deploy.yaml.
  11. Deploy the modified OpenStackDataPlaneNodeSet CR:

    $ oc create -f openstack_data_plane_deploy.yaml -n openstack

    You can view the Ansible logs while the deployment executes:

    $ oc get pod -l app=openstackansibleee -w
    $ oc logs -l app=openstackansibleee -f --max-log-requests 10

    If the oc logs command returns an error similar to the following error, increase the --max-log-requests value:

    error: you are attempting to follow 19 log streams, but maximum allowed concurrency is 10, use --max-log-requests to increase the limit

Verification

  • Verify that the modified OpenStackDataPlaneNodeSet CR is deployed:

    $ oc get openstackdataplanedeployment -n openstack
    NAME                   STATUS   MESSAGE
    openstack-data-plane   True     Setup Complete


    $ oc get openstackdataplanenodeset -n openstack
    NAME                   STATUS   MESSAGE
    openstack-data-plane   True     NodeSet Ready

    For information about the meaning of the returned status, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.

    If the status indicates that the data plane has not been deployed, then troubleshoot the deployment. For information about troubleshooting the deployment, see Troubleshooting the data plane creation and deployment in Deploying Red Hat OpenStack Services on OpenShift.

4.6. Configuring initial defaults

The Block Storage service (cinder) has a set of initial defaults that you should configure when the service is first enabled. Define them in the main customServiceConfig section. After deployment, you modify these initial defaults by using the openstack client.

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml.
  2. Edit the CR file and add the Block Storage service global configuration.

    The following example demonstrates a Block Storage service initial configuration:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      cinder:
        enabled: true
        template:
          customServiceConfig: |
            [DEFAULT]
            quota_volumes = 20
            quota_snapshots = 15

    For a complete list of all initial default parameters, see Initial default parameters.

  3. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  4. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.
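After deployment, per-project limits are adjusted with the openstack client rather than by editing the CR. The following is a sketch with example values; replace <project> with your project name:

```shell
# Raise the volume and snapshot quotas for one project (example values):
openstack quota set --volumes 30 --snapshots 20 <project>

# Confirm the effective limits:
openstack quota show <project>
```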

4.6.1. Initial default parameters

These initial default parameters should be configured when the service is first enabled.


default_volume_type

Provides the default volume type for all users. If you set this option to a non-default value, that volume type is not created automatically; you must create it. The default value is __DEFAULT__.

no_snapshot_gb_quota

Determines whether the size of snapshots counts against the gigabyte quota in addition to the size of volumes. The default is false, which means that the size of snapshots is included in the gigabyte quota.

per_volume_size_limit

Provides the maximum size of each volume in gigabytes. The default is -1 (unlimited).

quota_volumes

Provides the number of volumes allowed for each project. The default value is 10.

quota_snapshots

Provides the number of snapshots allowed for each project. The default value is 10.

quota_groups

Provides the number of volume groups allowed for each project, which includes the consistency groups. The default value is 10.

quota_gigabytes

Provides the total amount of storage for each project, in gigabytes, allowed for volumes. Depending on the configuration of the no_snapshot_gb_quota parameter, this might also include the size of snapshots. The default value is 1000, and by default the size of snapshots also counts against this limit.

quota_backups

Provides the number of backups allowed for each project. The default value is 10.

quota_backup_gigabytes

Provides the total amount of storage for each project, in gigabytes, allowed for backups. The default is 1000.

4.7. Configuring the API service

The Block Storage service (cinder) provides an API interface for all external interaction with the service for both users and other OpenStack services. Red Hat OpenStack Services on OpenShift (RHOSO) supports Block Storage REST API version 3.

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml.
  2. Edit the CR file and add the configuration for the internal Red Hat OpenShift Container Platform (RHOCP) load balancer.

    The following example demonstrates a load balancer configuration:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      cinder:
        template:
          cinderAPI:
            override:
              service:
                internal:
                  metadata:
                    annotations:
                      metallb.universe.tf/address-pool: internalapi
                      metallb.universe.tf/allow-shared-ip: internalapi
                      metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                  spec:
                    type: LoadBalancer
  3. Edit the CR file and add the configuration for the number of API service replicas. Run the cinderAPI service in an Active-Active configuration with three replicas.

    The following example demonstrates configuring the cinderAPI service to use three replicas:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      cinder:
        template:
          cinderAPI:
            replicas: 3
  4. Edit the CR file and configure cinderAPI options. These options are configured in the customServiceConfig section under the cinderAPI section.

    The following example demonstrates configuring cinderAPI service options and enabling debugging on all services:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      cinder:
        template:
          customServiceConfig: |
            [DEFAULT]
            debug = true
          cinderAPI:
            customServiceConfig: |
              [DEFAULT]
              osapi_volume_workers = 3

    For a listing of commonly used cinderAPI service option parameters, see API service option parameters.

  5. Save the file.
  6. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  7. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

4.7.1. Block Storage API service option parameters

API service option parameters are provided for the configuration of the cinderAPI portions of the Block Storage service.


api_rate_limit

Provides a value to determine if the API rate limit is enabled. The default is false.

debug

Provides a value to determine whether the logging level is set to DEBUG instead of the default of INFO. The default is false. The logging level can be dynamically set without restarting.

osapi_max_limit

Provides a value for the maximum number of items a collection resource returns in a single response. The default is 1000.

osapi_volume_workers

Provides a value for the number of workers assigned to the API component. The default is the number of CPUs available.

4.8. Configuring the scheduler service

The Block Storage service (cinder) has a scheduler service (cinderScheduler) that is responsible for decisions such as selecting which back end receives new volumes, determining whether there is enough free space to perform an operation, and deciding where to move an existing volume for specific operations.

Use only a single instance of cinderScheduler for scheduling consistency and ease of troubleshooting. Although you can run cinderScheduler with multiple instances, the service default of replicas: 1 is the best practice.
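The corresponding setting is minimal; a sketch of keeping the default explicit:

```yaml
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      cinderScheduler:
        replicas: 1  # the service default; a single scheduler instance
```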

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml.
  2. Edit the CR file and add the configuration for the service down detection timeouts.

    The following example demonstrates this configuration:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      cinder:
        template:
          customServiceConfig: |
            [DEFAULT]
            report_interval = 20
            service_down_time = 120
    • report_interval: The number of seconds between Block Storage service components reporting an operational state in the form of a heartbeat through the database. The default is 10.
    • service_down_time: The maximum number of seconds since the last heartbeat from the component for it to be considered non-operational. The default is 60.

      Note

      Configure these values at the cinder level of the CR instead of the cinderScheduler so that these values are applied to all components consistently.

  3. Edit the CR file and add the configuration for the statistics reporting interval.

    The following example demonstrates configuring these values at the cinder level to apply them globally to all services:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      cinder:
        template:
          customServiceConfig: |
            [DEFAULT]
            backend_stats_polling_interval = 120
            backup_driver_stats_polling_interval = 120
    • backend_stats_polling_interval: The number of seconds between requests from the volume service for usage statistics from the back end. The default is 60.
    • backup_driver_stats_polling_interval: The number of seconds between requests for usage statistics from the backup service. The default is 60.

      The following example demonstrates configuring these values at the cinderVolume and cinderBackup level to customize settings at the service level.

      apiVersion: core.openstack.org/v1beta1
      kind: OpenStackControlPlane
      metadata:
        name: openstack
      spec:
        cinder:
          template:
            cinderBackup:
              customServiceConfig: |
                [DEFAULT]
                backup_driver_stats_polling_interval = 120
                < rest of the config >
            cinderVolumes:
              nfs:
                customServiceConfig: |
                  [DEFAULT]
                  backend_stats_polling_interval = 120
      Note

      The generation of usage statistics can be resource intensive for some back ends. Setting these values too low can affect back end performance. You may need to tune the configuration of these settings to better suit individual back ends.

  4. Perform any additional configuration necessary to customize the cinderScheduler service.

    For more configuration options for the customization of the cinderScheduler service, see Scheduler service parameters.

  5. Save the file.
  6. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  7. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

4.8.1. Scheduler service parameters

Scheduler service parameters are provided for the configuration of the cinderScheduler portions of the Block Storage service.


debug

Provides a setting for the logging level. When this parameter is true the logging level is set to DEBUG instead of INFO. The default is false.

scheduler_max_attempts

Provides a setting for the maximum number of attempts to schedule a volume. The default is 3.

scheduler_default_filters

Provides a setting for filter class names to use for filtering hosts when not specified in the request. This is a comma separated list. The default is AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter.

scheduler_default_weighers

Provides a setting for weigher class names to use for weighing hosts. This is a comma separated list. The default is CapacityWeigher.

scheduler_weight_handler

Provides a setting for a handler to use for selecting the host or pool after weighing. The value cinder.scheduler.weights.OrderedHostWeightHandler selects the first host from the list of hosts that passed filtering, and the value cinder.scheduler.weights.stochastic.StochasticHostWeightHandler gives every pool a chance to be chosen, with a probability proportional to each pool's weight. The default is cinder.scheduler.weights.OrderedHostWeightHandler.

The following is an explanation of the filter class names from the parameter table:

  • AvailabilityZoneFilter

    • Filters out all back ends that do not meet the availability zone requirements of the requested volume.
  • CapacityFilter

    • Selects only back ends with enough space to accommodate the volume.
  • CapabilitiesFilter

    • Selects only back ends that can support any specified settings in the volume.
  • InstanceLocality

    • Configures clusters to use volumes local to the same node.
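For example, to append InstanceLocality to the default filter list, you might set scheduler_default_filters in the customServiceConfig of the cinderScheduler section. This is a sketch; verify the filter set against your deployment's requirements:

```yaml
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      cinderScheduler:
        customServiceConfig: |
          [DEFAULT]
          scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter,InstanceLocality
```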

4.9. Configuring the volume service

The Block Storage service (cinder) has a volume service (cinderVolumes section) that is responsible for managing operations related to volumes, snapshots, and groups. These operations include creating, deleting, and cloning volumes, and making snapshots.

This service requires access to the storage back end (storage) and storage management (storageMgmt) networks in the networkAttachments of the OpenStackControlPlane CR. Some operations, such as creating an empty volume or a snapshot, do not require data movement between the volume service and the storage back end. Other operations, such as migrating data from one storage back end to another, require the data to pass through the volume service.

Volume service configuration is performed in the cinderVolumes section with parameters set in the customServiceConfig, customServiceConfigSecrets, networkAttachments, replicas, and the nodeSelector sections.

The volume service cannot have multiple replicas.

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml.
  2. Edit the CR file and add the configuration for your back end.

    The following example demonstrates the service configuration for a Red Hat Ceph Storage back end:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      cinder:
        template:
          customServiceConfig: |
            [DEFAULT]
            debug = true
          cinderVolumes:
            ceph:
              networkAttachments:
              - storage
              customServiceConfig: |
                [ceph]
                volume_backend_name = ceph
                volume_driver = cinder.volume.drivers.rbd.RBDDriver
    • ceph: The configuration area for the individual back end. Each unique back end requires an individual configuration area. No back end is deployed by default, and the volume service does not run unless at least one back end is configured during deployment. For more information about configuring back ends, see Block Storage service (cinder) back ends and Multiple Block Storage service (cinder) back ends.
    • networkAttachments: The configuration area for the back end network connections.
    • volume_backend_name: The name assigned to this back end.
    • volume_driver: The driver used to connect to this back end.

      For a list of commonly used volume service parameters, see Volume service parameters.

  3. Save the file.
  4. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  5. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

4.9.1. Volume service parameters

The following parameters are commonly used to configure the cinderVolumes portions of the Block Storage service.


backend_availability_zone

Provides a setting for the availability zone of the back end. Set this option in the back-end configuration section. If it is not set, the value of the storage_availability_zone option from the [DEFAULT] section is used.

volume_backend_name

Provides a setting for the back end name for a given driver implementation. There is no default value.

volume_driver

Provides a setting for the driver to use for volume creation. It is provided as the full Python path of the driver class, for example, cinder.volume.drivers.rbd.RBDDriver. There is no default value.

enabled_backends

Provides a setting for the list of back end names to use. Each back end name must correspond to a configuration group ([<backend_name>] section) that contains its options. This is a comma-separated list of values. The default value is the name of any section that contains a volume_backend_name option.

image_conversion_dir

Provides a setting for a directory used for temporary storage during image conversion. The default value is /var/lib/cinder/conversion.

backend_stats_polling_interval

Provides a setting for the number of seconds between volume service requests for usage statistics from the storage back end. The default value is 60.
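
Several of these parameters might be combined in a back end's customServiceConfig. The following sketch is illustrative; the section name, values, and driver choice depend on your storage back end:

```yaml
customServiceConfig: |
  [DEFAULT]
  image_conversion_dir = /var/lib/cinder/conversion
  backend_stats_polling_interval = 120
  [nfs]
  volume_backend_name = nfs
  volume_driver = cinder.volume.drivers.nfs.NfsDriver
  backend_availability_zone = zone1
```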

4.9.2. Block Storage service (cinder) back ends

Each Block Storage service back end should have an individual configuration section in the cinderVolumes section. This ensures each back end runs in a dedicated pod. This approach has the following benefits:

  • Increased isolation.
  • Adding and removing back ends is fast and does not affect other running back ends.
  • Configuration changes do not affect other running back ends.
  • Automatically spreads the Volume pods into different nodes.

Each Block Storage service back end uses a storage transport protocol to access data in the volumes. Each storage transport protocol has individual requirements, as described in Configuring transport protocols. Storage protocol information is also provided in the individual vendor installation guides.

Note

Configure each back end with an independent pod. In director-based releases of RHOSP, all back ends run in a single cinder-volume container. This is no longer the best practice.

No back end is deployed by default. The Block Storage service volume service will not run unless at least one back end is configured during deployment.

All storage vendors provide an installation guide with best practices, deployment configuration, and configuration options for vendor drivers. These installation guides provide the specific configuration information required to properly configure the volume service for deployment. Installation guides are available in the Red Hat Ecosystem Catalog.

For more information about integrating and certifying vendor drivers, see Integrating partner content.

For information about Red Hat Ceph Storage back end configuration, see Integrating Red Hat Ceph Storage and Deploying a hyperconverged infrastructure environment.

For information about configuring a generic (non-vendor specific) NFS back end, see Configuring a generic NFS back end.

Note

Use a certified storage back end and driver. If you use NFS storage that comes from the generic NFS back end, its capabilities are limited compared to a certified storage back end and driver.

4.9.3. Multiple Block Storage service (cinder) back ends

You deploy multiple Block Storage service back ends by adding multiple, independent entries in the cinderVolumes configuration section. Each back end runs in an independent pod.

The following configuration example deploys two independent back ends, one for NFS and another for iSCSI:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      cinderVolumes:
        nfs:
          networkAttachments:
          - storage
          customServiceConfigSecrets:
          - cinder-volume-nfs-secrets
          customServiceConfig: |
            [nfs]
            volume_backend_name=nfs
        iscsi:
          networkAttachments:
          - storage
          - storageMgmt
          customServiceConfig: |
            [iscsi]
            volume_backend_name=iscsi

4.10. Configuring back end availability zones

Configure back end availability zones (AZs) for Volume service back ends and the Backup service to group cloud infrastructure services for users. AZs map to failure domains and Compute resources to provide high availability, fault tolerance, and resource scheduling.

For example, you could create an AZ of Compute nodes with specific hardware that users can select when they create an instance that requires that hardware.

Note

Post-deployment, you restrict volume types to AZs by using the RESKEY:availability_zones volume type extra specification.

Users can create a volume directly in an AZ as long as the volume type does not restrict the AZ.
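
For example, assuming a volume type named fast-nfs and an AZ named zone1, the restriction and direct placement described above might look like the following openstack CLI commands. The volume type name, AZ name, and size are illustrative:

```
$ openstack volume type set --property RESKEY:availability_zones=zone1 fast-nfs
$ openstack volume create --availability-zone zone1 --size 10 my-volume
```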

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml.
  2. Edit the CR file and add the AZ configuration.

    The following example demonstrates an AZ configuration:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      cinder:
        template:
          cinderVolumes:
            nfs:
              networkAttachments:
              - storage
              - storageMgmt
              customServiceConfigSecrets:
              - cinder-volume-nfs-secrets
              customServiceConfig: |
                [nfs]
                volume_backend_name=nfs
                backend_availability_zone=zone1
            iscsi:
              networkAttachments:
              - storage
              - storageMgmt
              customServiceConfig: |
                [iscsi]
                volume_backend_name=iscsi
                backend_availability_zone=zone2
    • backend_availability_zone: The availability zone associated with the back end.
  3. Save the file.
  4. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  5. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

4.11. Configuring a generic NFS back end

The Block Storage service (cinder) can be configured with a generic NFS back end to provide an alternative storage solution for volumes and backups.

Limitations
  • Use a certified storage back end and driver. If you use NFS storage that comes from the generic NFS back end, its capabilities are limited compared to a certified storage back end and driver. For example, the generic NFS back end does not support features such as volume encryption and volume multi-attach. For information about supported drivers, see the Red Hat Ecosystem Catalog.
  • For Block Storage (cinder) and Compute (nova) services, you must use NFS version 4.0 or later. RHOSO does not support earlier versions of NFS.
  • RHOSO does not support the NetApp NAS secure feature. It interferes with normal volume operations. This feature must be disabled in the customServiceConfig in the specific back-end configuration with the following parameters:

    nas_secure_file_operations=false
    nas_secure_file_permissions=false
  • Do not configure the nfs_mount_options option. The default value is the best NFS option for RHOSO environments. If you experience issues when you configure multiple services to share the same NFS server, contact Red Hat Support.

Procedure

  1. Create a Secret CR to store the volume connection information.

    The following is an example of a Secret CR:

    apiVersion: v1
    kind: Secret
    metadata:
      name: cinder-volume-nfs-secrets
    type: Opaque
    stringData:
      cinder-volume-nfs-secrets: |
        [nfs]
        nas_host=192.168.130.1
        nas_share_path=/var/nfs/cinder

    where:

    name
    The name used when you include the secret in the cinderVolumes back end configuration.
  2. Save the file.
  3. Update the control plane:

    $ oc apply -f <secret_file_name> -n openstack
    • Replace <secret_file_name> with the name of the file that contains your Secret CR.
  4. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml.
  5. Edit the CR file and add the configuration for the generic NFS back end.

    The following example demonstrates this configuration:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      cinder:
        template:
          cinderVolumes:
            nfs:
              networkAttachments:
              - storage
              customServiceConfig: |
                [nfs]
                volume_backend_name=nfs
                volume_driver=cinder.volume.drivers.nfs.NfsDriver
                nfs_snapshot_support=true
                nas_secure_file_operations=false
                nas_secure_file_permissions=false
              customServiceConfigSecrets:
              - cinder-volume-nfs-secrets
    • The storageMgmt network is not listed because generic NFS does not have a management interface.
    • cinder-volume-nfs-secrets: The name from the Secret CR.
    • If you are configuring multiple generic NFS back ends, ensure that each back end is in an individual configuration section so that one pod is dedicated to each back end.
  6. Save the file.
  7. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  8. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.
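
    After the update completes, you can check that a dedicated pod was created for the back end. Pod names follow the cinder-volume-<backend_key>-0 pattern, so for the nfs key in this example you can expect a pod named cinder-volume-nfs-0:

```
$ oc get pods -n openstack | grep cinder-volume
```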

4.12. Configuring an image conversion directory

When the Block Storage service (cinder) performs image format conversion and space is limited, converting large Image service (glance) images can completely consume the root disk space of the node. You can use an external NFS share for the conversion to prevent the space on the node from being completely filled.

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml.
  2. Edit the CR file and add the configuration for the directory for converting large Image service images.

    The following example demonstrates how to configure this conversion directory:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      extraMounts:
      - extraVol:
        - propagation:
          - CinderVolume
          volumes:
          - name: cinder-conversion
            nfs:
              path: <nfs_share_path>
              server: <nfs_server>
          mounts:
          - name: cinder-conversion
            mountPath: /var/lib/cinder/conversion
    ...
    • Replace <nfs_share_path> with the path to the conversion directory.

      Note

      The Block Storage volume service (cinder-volume) runs as the cinder user. The cinder user requires write permission for <nfs_share_path>. You can configure this by running the following command on the NFS server: $ chown 42407:42407 <nfs_share_path>.

    • Replace <nfs_server> with the IP address of the NFS server that hosts the conversion directory.
    Note

    This example demonstrates how to create a common conversion directory that all the volume service pods use.

    You can also define a conversion directory for each volume service pod:

    • Define each conversion directory by using an extraMounts section, as demonstrated above, in the cinder section of the OpenStackControlPlane CR file.
    • Set the propagation value to the name of the specific Volume section instead of CinderVolume.
  3. Save the file.
  4. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  5. Wait until Red Hat OpenShift Container Platform (RHOCP) creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

4.13. Configuring automatic database cleanup

The Block Storage service (cinder) performs a soft-deletion of database entries. This means that database entries are marked for deletion but are not actually deleted from the database. This allows for the auditing of deleted resources.

If these rows are not purged, they grow endlessly and consume resources. RHOSO automatically purges database entries that have been marked for deletion for a set number of days. By default, records that have been marked for deletion for more than 30 days are purged. You can configure a different record age and schedule for purge jobs.
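
Assuming the standard cinder tooling, the purge job is equivalent to running cinder's own database purge command with the configured record age in days, for example:

```
$ cinder-manage db purge 30
```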

Procedure

  1. Open your openstack_control_plane.yaml file to edit the OpenStackControlPlane CR.
  2. Add the dbPurge parameter to the cinder template to configure database cleanup depending on the service you want to configure.

    The following is an example of using the dbPurge parameter to configure the Block Storage service:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack
    spec:
      cinder:
        template:
          dbPurge:
            age: 20
            schedule: 1 0 * * 0
    • age: The number of days a record has been marked for deletion before it is purged. The default value is 30. The minimum value is 1.
    • schedule: When to run the job, in crontab format. The default value is 1 0 * * *, which runs the job at 00:01 daily. The example value 1 0 * * 0 runs the job at 00:01 every Sunday.
  3. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack

4.14. Preserving jobs

The Block Storage service (cinder) requires maintenance operations that run automatically. Some operations are one-off and some are periodic. These operations run by using OpenShift Jobs.

If jobs and their pods are automatically removed on completion, you cannot check the logs of these operations. However, you can use the preserveJobs field in your OpenStackControlPlane CR to stop the automatic removal of jobs and preserve them.

Example:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      preserveJobs: true

4.15. Resolving hostname conflicts

Most storage back ends in the Block Storage service (cinder) require the hosts that connect to them to have unique hostnames. These hostnames are used to identify permissions and addresses, such as the iSCSI initiator name, HBA WWN, and WWPN.

Because the services are deployed in OpenShift, the hostnames that the Block Storage volume and backup services report are not the node hostnames but the pod names instead.

These pod names are formed by using a predetermined template:

  • For volumes: cinder-volume-<backend_key>-0
  • For backups: cinder-backup-<replica-number>

For example, a back end configured under the ceph key in cinderVolumes runs in a pod named cinder-volume-ceph-0.

If you use the same storage back end in multiple deployments, the unique hostname requirement may not be honored, resulting in operational problems. To address this issue, you can request the installer to have unique pod names, and hence unique hostnames, by using the uniquePodNames field.

When you set the uniquePodNames field to true, a short hash is added to the pod names, which addresses hostname conflicts.

Example:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    uniquePodNames: true

4.16. Using other container images

Red Hat OpenStack Services on OpenShift (RHOSO) services are deployed by using a container image for a specific release and version. Sometimes, a deployment requires a container image other than the one produced for that release and version.

The most common reasons for using a container image that is not for a specific release and version are:

  • Deploying a hotfix.
  • Using a certified, vendor-provided container image.

The container images used by the installer are controlled through the OpenStackVersion CR. An OpenStackVersion CR is automatically created by the openstack operator during the deployment of services. Alternatively, it can be created manually before the application of the OpenStackControlPlane CR but after the openstack operator is installed. This allows for the container image for any service and component to be individually designated.

The granularity of this designation depends on the service. For example, in the Block Storage service (cinder) all the cinderAPI, cinderScheduler, and cinderBackup pods must have the same image. However, for the Volume service, the container image is defined for each of the cinderVolumes.

The following example demonstrates an OpenStackControlPlane configuration with two back ends: one called ceph and one called custom-fc. The custom-fc back end requires a certified, vendor-provided container image. Additionally, the other service images must use a non-standard hotfix image.

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      cinderVolumes:
        ceph:
          networkAttachments:
          - storage
< . . . >
        custom-fc:
          networkAttachments:
          - storage

The following example demonstrates an OpenStackVersion CR that sets up these container images:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackVersion
metadata:
  name: openstack
spec:
  customContainerImages:
    cinderAPIImage: <custom-api-image>
    cinderBackupImage: <custom-backup-image>
    cinderSchedulerImage: <custom-scheduler-image>
    cinderVolumeImages:
      custom-fc: <vendor-volume-volume-image>
  • Replace <custom-api-image> with the name of the API service image to use.
  • Replace <custom-backup-image> with the name of the Backup service image to use.
  • Replace <custom-scheduler-image> with the name of the Scheduler service image to use.
  • Replace <vendor-volume-volume-image> with the name of the certified, vendor-provided image to use.
Note

The name attribute in your OpenStackVersion CR must match the same attribute in your OpenStackControlPlane CR.
