Chapter 4. Configuring the Block Storage service (cinder)
You can use the Block Storage service (cinder) to access remote block storage devices through volumes for persistent storage. The service has three mandatory components, api, scheduler, and volume, and one optional component, backup.
As a security hardening measure, the Block Storage services run as the cinder user.
All Block Storage services use the cinder section of the OpenStackControlPlane custom resource (CR) for their configuration:
```yaml
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
```
Global configuration options are applied directly under the cinder and template sections. Service specific configuration options appear under their associated sections. The following example demonstrates all of the sections where Block Storage service configuration is applied and what type of configuration is applied in each section:
```yaml
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    <global-options>
    template:
      <global-options>
      cinderAPI:
        <cinder-api-options>
      cinderScheduler:
        <cinder-scheduler-options>
      cinderVolumes:
        <name1>: <cinder-volume-options>
        <name2>: <cinder-volume-options>
      cinderBackup:
        <cinder-backup-options>
```
4.1. Block Storage service terminology and definitions
The following terms are important to understanding the Block Storage service (cinder):
- Storage back end: A physical storage system where volume data is stored.
- Cinder driver: The part of the Block Storage service that enables communication with the storage back end. It is configured with the volume_driver and backup_driver options.
- Cinder back end: A logical representation of the grouping of a cinder driver with its configuration. This grouping is used to manage and address the volumes present in a specific storage back end. The name of this logical construct is configured with the volume_backend_name option.
- Storage pool: A logical grouping of volumes in a given storage back end.
- Cinder pool: A representation in the Block Storage service of a storage pool.
- Volume host: The way the Block Storage service addresses volumes. There are two different representations: short (<hostname>@<backend-name>) and full (<hostname>@<backend-name>#<pool-name>).
- Quota: Limits defined per project to constrain the use of Block Storage specific resources.
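As an illustration of the two volume host formats, a volume on a back end named ceph might report host values like the following. The names hostgroup, ceph, and pool1 are hypothetical placeholders, not values from a real deployment:

```
# Short format: <hostname>@<backend-name>
hostgroup@ceph

# Full format: <hostname>@<backend-name>#<pool-name>
hostgroup@ceph#pool1
```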
4.2. Block Storage service (cinder) enhancements in RHOSO
The following functionality enhancements have been integrated into the Block Storage service:
- Ease of deployment for multiple volume back ends.
- Back end deployment does not affect running volume back ends.
- Back end addition and removal does not affect running back ends.
- Back end configuration changes do not affect other running back ends.
- Each back end can use its own vendor-specific container image. It is no longer necessary to build a custom image that holds dependencies from two drivers.
- Pacemaker has been replaced by Red Hat OpenShift Container Platform (RHOCP) functionality.
- Improved methods for troubleshooting the service code.
4.3. Configuring transport protocols
You can use iSCSI, Fibre Channel, NVMe-TCP, NFS, and Red Hat Ceph Storage RBD transport protocols with the Block Storage service (cinder). Control plane services that use volumes might require iscsid and multipathd modules on RHOCP cluster nodes, configured by using a MachineConfig CR.
Using a MachineConfig CR to change the configuration of a node causes the node to reboot. Consult with your RHOCP administrator before applying a MachineConfig CR to ensure the integrity of RHOCP workloads.
The procedures in this section provide a general configuration of these protocols and are not vendor-specific.
The Block Storage volume and backup services are automatically started on data plane nodes.
4.3.1. Configuring the iSCSI protocol for volume storage
Connecting to iSCSI volumes from the RHOCP nodes requires the iSCSI initiator service. There must be a single instance of the iscsid service module for the normal RHOCP usage, OpenShift CSI plugins usage, and the RHOSO services. Apply a MachineConfig to the applicable nodes to configure nodes to use the iSCSI protocol.
If the iscsid service module is already running, this procedure is not required.
Procedure
- Create a MachineConfig CR to configure the nodes for the iscsid module. The following example starts the iscsid service with a default configuration on all RHOCP worker nodes:

  ```yaml
  apiVersion: machineconfiguration.openshift.io/v1
  kind: MachineConfig
  metadata:
    labels:
      machineconfiguration.openshift.io/role: worker
      service: cinder
    name: 99-worker-cinder-enable-iscsid
  spec:
    config:
      ignition:
        version: 3.2.0
      systemd:
        units:
        - enabled: true
          name: iscsid.service
  ```

- Save the file.
- Apply the MachineConfig CR file:

  ```
  $ oc apply -f <machine_config_file> -n openstack
  ```

  Replace <machine_config_file> with the name of your MachineConfig CR file.
4.3.2. Configuring the Fibre Channel protocol
No additional node configuration is required to use the Fibre Channel protocol to connect to volumes. However, every node that uses Fibre Channel must have a Host Bus Adapter (HBA) card. Unless all worker nodes in your RHOCP deployment have an HBA card, you must use a nodeSelector in your control plane configuration to select which nodes are used for the volume and backup services, as well as for the Image service instances that use the Block Storage service as their storage back end.
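For example, a nodeSelector that pins a volume service back end to HBA-equipped nodes might look like the following sketch. The node label fc.example.com/hba and the back end name fibre-channel are hypothetical; you would first apply a label of your choosing to the appropriate worker nodes:

```yaml
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      cinderVolumes:
        fibre-channel:
          nodeSelector:
            fc.example.com/hba: "true"
```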
4.3.3. Configuring the NVMe-TCP protocol for volume storage
Connecting to NVMe-TCP volumes from the RHOCP nodes requires the nvme kernel modules.
Procedure
- Create a MachineConfig CR to configure the nodes for the nvme kernel modules. The following example loads the nvme-fabrics and nvme-tcp kernel modules with a default configuration on all RHOCP worker nodes:

  ```yaml
  apiVersion: machineconfiguration.openshift.io/v1
  kind: MachineConfig
  metadata:
    labels:
      machineconfiguration.openshift.io/role: worker
      service: cinder
    name: 99-worker-cinder-load-nvme-fabrics
  spec:
    config:
      ignition:
        version: 3.2.0
      storage:
        files:
        - path: /etc/modules-load.d/nvme_fabrics.conf
          overwrite: false
          mode: 420
          user:
            name: root
          group:
            name: root
          contents:
            source: data:,nvme-fabrics%0Anvme-tcp
  ```

- Save the file.
- Apply the MachineConfig CR file:

  ```
  $ oc apply -f <machine_config_file> -n openstack
  ```

  Replace <machine_config_file> with the name of your MachineConfig CR file.
- After the nodes have rebooted, verify that the nvme-fabrics module is loaded and that it supports ANA on a host:

  ```
  $ cat /sys/module/nvme_core/parameters/multipath
  ```

  Note: Even though ANA does not use the Linux Multipathing Device Mapper, multipathd must be running for the Compute nodes to be able to use multipathing when connecting volumes to instances.
4.4. LVM device management
When you use Logical Volume Management (LVM) with Block Storage service (cinder) back ends, Red Hat OpenStack Services on OpenShift (RHOSO) automatically enables device filtering through the RHEL system.devices file. LVM device filtering prevents Block Storage service volumes from being scanned by LVM on data plane nodes.
For more information about the RHEL system.devices file, see The LVM devices file in the RHEL documentation for Configuring and managing logical volumes.
4.5. Configuring multipathing for Block Storage volumes
Configure multipathing in Red Hat OpenStack Services on OpenShift (RHOSO) to create redundancy or improve performance. Control plane nodes require a MachineConfig CR. Data plane nodes have default multipath configuration, but you must add vendor-specific parameters for production environments.
4.5.1. Configuring multipathing on control plane nodes
You can configure multipathing on Red Hat OpenShift Container Platform (RHOCP) control plane nodes by creating a MachineConfig custom resource (CR) that creates a multipath configuration file and starts the service.
In Red Hat OpenStack Services on OpenShift (RHOSO) deployments, the use_multipath_for_image_xfer configuration option is enabled by default, which affects the control plane only and not the data plane. This setting enables the Block Storage service (cinder) to use multipath, when it is available, for attaching volumes when creating volumes from images and during Block Storage backup and restore procedures.
The example in this procedure implements a minimal multipath configuration file, which configures the default multipath parameters. However, your production deployment might also require vendor-specific multipath parameters. In this case, you must consult with the appropriate systems administrators to obtain the values required for your deployment.
If you have a complex multipath configuration, you can use the Butane command-line utility to create a multipath configuration file for you. For more information, see Creating machine configs with Butane in RHOCP Installation configuration.
Procedure
- Create a MachineConfig CR to create a multipath configuration file and to start the multipathd module on all control plane nodes. The following example creates a MachineConfig CR named 99-worker-cinder-enable-multipathd that implements a multipath configuration file named multipath.conf.

  Important: When adding vendor-specific multipath parameters to the contents: of this file, ensure that you do not change the specified values of the following default multipath parameters: user_friendly_names, recheck_wwid, skip_kpartx, and find_multipaths.

  ```yaml
  apiVersion: machineconfiguration.openshift.io/v1
  kind: MachineConfig
  metadata:
    labels:
      machineconfiguration.openshift.io/role: worker
      service: cinder
    name: 99-worker-cinder-enable-multipathd
  spec:
    config:
      ignition:
        version: 3.2.0
      storage:
        files:
        - path: /etc/multipath.conf
          overwrite: false
          mode: 384
          user:
            name: root
          group:
            name: root
          contents:
            source: data:,defaults%20%7B%0A%20%20user_friendly_names%20no%0A%20%20recheck_wwid%20yes%0A%20%20skip_kpartx%20yes%0A%20%20find_multipaths%20yes%0A%7D%0A%0Ablacklist%20%7B%0A%7D
      systemd:
        units:
        - enabled: true
          name: multipathd.service
  ```

  The contents: data represents the following literal multipath.conf file contents:

  ```
  defaults {
    user_friendly_names no
    recheck_wwid yes
    skip_kpartx yes
    find_multipaths yes
  }

  blacklist {
  }
  ```
- Save the MachineConfig CR file, for example, 99-worker-cinder-enable-multipathd.yaml.
- Apply the MachineConfig CR file:

  ```
  $ oc apply -f 99-worker-cinder-enable-multipathd.yaml -n openstack
  ```
4.5.2. Configuring custom multipath parameters on data plane nodes
Default multipath parameters are configured on all data plane nodes. To configure vendor-specific multipath parameters, consult with your systems administrators to obtain the required values for your custom multipath configuration file.
Ensure that you do not add the following default multipath parameters and overwrite their values: user_friendly_names, recheck_wwid, skip_kpartx, and find_multipaths.
Modify the OpenStackDataPlaneNodeSet custom resource (CR) to include your vendor-specific multipath parameters, and then deploy the changes by using an OpenStackDataPlaneDeployment CR.
Prerequisites
- You have created your custom multipath configuration file that contains only the vendor-specific multipath parameters and your deployment-specific values.
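For illustration, a custom multipath configuration file that contains only vendor-specific parameters might resemble the following sketch. The vendor, product, and parameter values are hypothetical placeholders; obtain the actual values from your storage vendor documentation:

```
devices {
  device {
    vendor "EXAMPLEVENDOR"
    product "EXAMPLEARRAY"
    path_grouping_policy group_by_prio
    no_path_retry 30
  }
}
```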
Procedure
- Create a secret to store your custom multipath configuration file:

  ```
  $ oc create secret generic <secret_name> \
    --from-file=<configuration_file_name>
  ```

  - Replace <secret_name> with the name that you want to assign to the secret, for example, custom-multipath-file.
  - Replace <configuration_file_name> with the name of the custom multipath configuration file that you created, for example, custom_multipath.conf.
- Open the OpenStackDataPlaneNodeSet CR file for the node set that you want to update, for example, openstack_data_plane.yaml.
- Add the edpm_multipathd_custom_config_file parameter to the OpenStackDataPlaneNodeSet CR file:

  ```yaml
  spec:
    ...
    nodeTemplate:
      ...
      ansible:
        ansibleVars:
          edpm_multipathd_custom_config_file: <configuration_file_name>
    ...
  ```

  Replace <configuration_file_name> with the name of your custom multipath configuration file, for example, custom_multipath.conf.

  Important: The edpm_multipathd_custom_config_file parameter must be set to the name of your custom multipath configuration file. If this parameter is not defined or is empty, the custom multipath configuration file is not copied to the data plane nodes.
- Add the extraMounts parameter to the OpenStackDataPlaneNodeSet CR file to mount your custom multipath configuration file:

  ```yaml
  spec:
    ...
    nodeTemplate:
      ...
      ansible:
        ...
      extraMounts:
      - extraVolType: <optional_volume_type_description>
        volumes:
        - name: <mounted_volume_name>
          secret:
            secretName: <secret_name>
        mounts:
        - name: <mounted_volume_name>
          mountPath: "/runner/multipath"
          readOnly: true
  ```

  - Optional: Replace <optional_volume_type_description> with a description of the type of the mounted volume, for example, multipath-config-file.
  - Replace <mounted_volume_name> with the name of the mounted volume, for example, custom-multipath.
  - Replace <secret_name> with the name of the secret that you created, for example, custom-multipath-file.

  Note: Do not change the value of the mountPath: parameter from "/runner/multipath".
- Save the OpenStackDataPlaneNodeSet CR file.
- Apply the updated OpenStackDataPlaneNodeSet CR configuration:

  ```
  $ oc apply -f openstack_data_plane.yaml
  ```

- Verify that the data plane resource has been updated by confirming that the status is SetupReady:

  ```
  $ oc wait openstackdataplanenodeset openstack-data-plane --for condition=SetupReady --timeout=10m
  ```

  When the status is SetupReady, the command returns a "condition met" message; otherwise, it returns a timeout error. For information about the data plane conditions and states, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.
- Create a file on your workstation to define the OpenStackDataPlaneDeployment CR:

  ```yaml
  apiVersion: dataplane.openstack.org/v1beta1
  kind: OpenStackDataPlaneDeployment
  metadata:
    name: <node_set_deployment_name>
  ```

  Replace <node_set_deployment_name> with the name of the OpenStackDataPlaneDeployment CR. The name must be unique, must consist of lower case alphanumeric characters, - (hyphen) or . (period), and must start and end with an alphanumeric character, for example, openstack-data-plane-deploy.
- Add the name of the OpenStackDataPlaneNodeSet CR that you modified to the nodeSets list:

  ```yaml
  spec:
    nodeSets:
    - <node_set_name>
  ```

  Replace <node_set_name> with the name of the OpenStackDataPlaneNodeSet CR that you modified, for example, openstack-data-plane.
- Save the OpenStackDataPlaneDeployment CR deployment file, for example, openstack_data_plane_deploy.yaml.
- Deploy the modified OpenStackDataPlaneNodeSet CR:

  ```
  $ oc create -f openstack_data_plane_deploy.yaml -n openstack
  ```

  You can view the Ansible logs while the deployment executes:

  ```
  $ oc get pod -l app=openstackansibleee -w
  $ oc logs -l app=openstackansibleee -f --max-log-requests 10
  ```

  If the oc logs command returns an error similar to the following error, increase the --max-log-requests value:

  ```
  error: you are attempting to follow 19 log streams, but maximum allowed concurrency is 10, use --max-log-requests to increase the limit
  ```
Verification
- Verify that the modified OpenStackDataPlaneNodeSet CR is deployed:

  ```
  $ oc get openstackdataplanedeployment -n openstack
  NAME                   STATUS   MESSAGE
  openstack-data-plane   True     Setup Complete

  $ oc get openstackdataplanenodeset -n openstack
  NAME                   STATUS   MESSAGE
  openstack-data-plane   True     NodeSet Ready
  ```

  For information about the meaning of the returned status, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.
If the status indicates that the data plane has not been deployed, then troubleshoot the deployment. For information about troubleshooting the deployment, see Troubleshooting the data plane creation and deployment in Deploying Red Hat OpenStack Services on OpenShift.
4.6. Configuring initial defaults
The Block Storage service (cinder) has a set of initial defaults that you should configure when the service is first enabled. Define them in the main customServiceConfig section. After deployment, you can modify these initial defaults by using the openstack client.
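As an illustration of modifying these defaults after deployment with the openstack client, the following sketch adjusts per-project quotas. It assumes an authenticated admin session, and <project> and the quota values shown are placeholders:

```
# Change quotas for a specific project
$ openstack quota set --volumes 20 --snapshots 15 <project>

# Change the default quota class that applies to new projects
$ openstack quota set --class default --volumes 20 --snapshots 15
```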
Procedure
- Open your OpenStackControlPlane CR file, openstack_control_plane.yaml.
- Edit the CR file and add the Block Storage service global configuration. The following example demonstrates a Block Storage service initial configuration:

  ```yaml
  apiVersion: core.openstack.org/v1beta1
  kind: OpenStackControlPlane
  metadata:
    name: openstack
  spec:
    cinder:
      enabled: true
      template:
        customServiceConfig: |
          [DEFAULT]
          quota_volumes = 20
          quota_snapshots = 15
  ```

  For a complete list of all initial default parameters, see Initial default parameters.
- Update the control plane:

  ```
  $ oc apply -f openstack_control_plane.yaml -n openstack
  ```

- Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  ```
  $ oc get openstackcontrolplane -n openstack
  ```

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.
4.6.1. Initial default parameters
These initial default parameters should be configured when the service is first enabled.
| Parameter | Description |
|---|---|
| default_volume_type | Provides the default volume type for all users. If you set a non-default value, the specified volume type is not automatically created. The default value is __DEFAULT__. |
| no_snapshot_gb_quota | Determines whether the size of snapshots counts against the gigabyte quota in addition to the size of volumes. The default is false. |
| per_volume_size_limit | Provides the maximum size of each volume in gigabytes. The default is -1, which means there is no limit. |
| quota_volumes | Provides the number of volumes allowed for each project. The default value is 10. |
| quota_snapshots | Provides the number of snapshots allowed for each project. The default value is 10. |
| quota_groups | Provides the number of volume groups allowed for each project, which includes the consistency groups. The default value is 10. |
| quota_gigabytes | Provides the total amount of storage for each project, in gigabytes, allowed for volumes and, depending on the configuration of the no_snapshot_gb_quota option, snapshots. The default is 1000. |
| quota_backups | Provides the number of backups allowed for each project. The default value is 10. |
| quota_backup_gigabytes | Provides the total amount of storage for each project, in gigabytes, allowed for backups. The default is 1000. |
4.7. Configuring the API service
The Block Storage service (cinder) provides an API interface for all external interaction with the service for both users and other OpenStack services. Red Hat OpenStack Services on OpenShift (RHOSO) supports Block Storage REST API version 3.
Procedure
- Open your OpenStackControlPlane CR file, openstack_control_plane.yaml.
- Edit the CR file and add the configuration for the internal Red Hat OpenShift Container Platform (RHOCP) load balancer. The following example demonstrates a load balancer configuration:

  ```yaml
  apiVersion: core.openstack.org/v1beta1
  kind: OpenStackControlPlane
  metadata:
    name: openstack
  spec:
    cinder:
      template:
        cinderAPI:
          override:
            service:
              internal:
                metadata:
                  annotations:
                    metallb.universe.tf/address-pool: internalapi
                    metallb.universe.tf/allow-shared-ip: internalapi
                    metallb.universe.tf/loadBalancerIPs: 172.17.0.80
                spec:
                  type: LoadBalancer
  ```

- Edit the CR file and add the configuration for the number of API service replicas. Run the cinderAPI service in an Active-Active configuration with three replicas. The following example demonstrates configuring the cinderAPI service to use three replicas:

  ```yaml
  apiVersion: core.openstack.org/v1beta1
  kind: OpenStackControlPlane
  metadata:
    name: openstack
  spec:
    cinder:
      template:
        cinderAPI:
          replicas: 3
  ```

- Edit the CR file and configure cinderAPI options. These options are configured in the customServiceConfig section under the cinderAPI section. The following example demonstrates configuring cinderAPI service options and enabling debugging on all services:

  ```yaml
  apiVersion: core.openstack.org/v1beta1
  kind: OpenStackControlPlane
  metadata:
    name: openstack
  spec:
    cinder:
      template:
        customServiceConfig: |
          [DEFAULT]
          debug = true
        cinderAPI:
          customServiceConfig: |
            [DEFAULT]
            osapi_volume_workers = 3
  ```

  For a listing of commonly used cinderAPI service option parameters, see API service option parameters.

- Save the file.
- Update the control plane:

  ```
  $ oc apply -f openstack_control_plane.yaml -n openstack
  ```

- Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  ```
  $ oc get openstackcontrolplane -n openstack
  ```

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.
4.7.1. Block Storage API service option parameters
API service option parameters are provided for the configuration of the cinderAPI portions of the Block Storage service.
| Parameter | Description |
|---|---|
| api_rate_limit | Provides a value to determine if the API rate limit is enabled. The default is false. |
| debug | Provides a value to determine whether the logging level is set to DEBUG instead of the default INFO level. The default is false. |
| osapi_max_limit | Provides a value for the maximum number of items a collection resource returns in a single response. The default is 1000. |
| osapi_volume_workers | Provides a value for the number of workers assigned to the API component. The default is the number of CPUs available. |
4.8. Configuring the Block Storage scheduler service component
The Block Storage service (cinder) has a scheduler service (cinderScheduler) that is responsible for decisions such as selecting which back end receives new volumes, determining whether there is enough free space to perform an operation, and deciding where an existing volume should be moved during specific operations.
Use only a single instance of cinderScheduler for scheduling consistency and ease of troubleshooting. While cinderScheduler can be run with multiple instances, the service default replicas: 1 is the best practice.
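Keeping the default single scheduler instance requires no change, but if the replica count was modified previously, you can set it back explicitly, as in this minimal sketch:

```yaml
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      cinderScheduler:
        replicas: 1
```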
Procedure
- Open your OpenStackControlPlane CR file, openstack_control_plane.yaml.
- Edit the CR file and add the configuration for the service down detection timeouts. The following example demonstrates this configuration:

  ```yaml
  apiVersion: core.openstack.org/v1beta1
  kind: OpenStackControlPlane
  metadata:
    name: openstack
  spec:
    cinder:
      template:
        customServiceConfig: |
          [DEFAULT]
          report_interval = 20
          service_down_time = 120
  ```

  - report_interval: The number of seconds between Block Storage service components reporting an operational state in the form of a heartbeat through the database. The default is 10.
  - service_down_time: The maximum number of seconds since the last heartbeat from a component before it is considered non-operational. The default is 60.

  Note: Configure these values at the cinder level of the CR instead of the cinderScheduler level so that they are applied to all components consistently.
- Edit the CR file and add the configuration for the statistics reporting interval. The following example demonstrates configuring these values at the cinder level to apply them globally to all services:

  ```yaml
  apiVersion: core.openstack.org/v1beta1
  kind: OpenStackControlPlane
  metadata:
    name: openstack
  spec:
    cinder:
      template:
        customServiceConfig: |
          [DEFAULT]
          backend_stats_polling_interval = 120
          backup_driver_stats_polling_interval = 120
  ```

  - backend_stats_polling_interval: The number of seconds between requests from the volume service for usage statistics from the back end. The default is 60.
  - backup_driver_stats_polling_interval: The number of seconds between requests from the backup service for usage statistics from its driver. The default is 60.

  The following example demonstrates configuring these values at the cinderVolumes and cinderBackup level to customize settings for each service:

  ```yaml
  apiVersion: core.openstack.org/v1beta1
  kind: OpenStackControlPlane
  metadata:
    name: openstack
  spec:
    cinder:
      template:
        cinderBackup:
          customServiceConfig: |
            [DEFAULT]
            backup_driver_stats_polling_interval = 120
          < rest of the config >
        cinderVolumes:
          nfs:
            customServiceConfig: |
              [DEFAULT]
              backend_stats_polling_interval = 120
  ```

  Note: The generation of usage statistics can be resource intensive for some back ends. Setting these values too low can affect back end performance. You might need to tune these settings to better suit individual back ends.
- Perform any additional configuration necessary to customize the cinderScheduler service. For more configuration options, see Scheduler service parameters.
- Save the file.
- Update the control plane:

  ```
  $ oc apply -f openstack_control_plane.yaml -n openstack
  ```

- Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  ```
  $ oc get openstackcontrolplane -n openstack
  ```

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.
4.8.1. Scheduler service parameters
Scheduler service parameters are provided for the configuration of the cinderScheduler portions of the Block Storage service.
| Parameter | Description |
|---|---|
| debug | Provides a setting for the logging level. When this parameter is true, the logging level is set to DEBUG instead of the default INFO level. The default is false. |
| scheduler_max_attempts | Provides a setting for the maximum number of attempts to schedule a volume. The default is 3. |
| scheduler_default_filters | Provides a setting for filter class names to use for filtering hosts when not specified in the request. This is a comma separated list. The default is AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter. |
| scheduler_default_weighers | Provides a setting for weigher class names to use for weighing hosts. This is a comma separated list. The default is CapacityWeigher. |
| scheduler_weight_handler | Provides a setting for a handler to use for selecting the host or pool after weighing. The default value, cinder.scheduler.weights.OrderedHostWeightHandler, selects the first host from the weighed list. |
The following is an explanation of the filter class names from the parameter table:
AvailabilityZoneFilter
- Filters out all back ends that do not meet the availability zone requirements of the requested volume.
CapacityFilter
- Selects only back ends with enough space to accommodate the volume.
CapabilitiesFilter
- Selects only back ends that can support any specified settings in the volume.
InstanceLocality
- Configures clusters to use volumes local to the same node.
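For example, to set the filter list explicitly, the scheduler_default_filters option can be placed in the cinderScheduler customServiceConfig. This sketch simply reproduces the default filter list:

```yaml
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      cinderScheduler:
        customServiceConfig: |
          [DEFAULT]
          scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter
```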
4.9. Configuring the Block Storage volume service component
The Block Storage service (cinder) has a volume service (cinderVolumes section) that is responsible for managing operations related to volumes, snapshots, and groups. These operations include creating, deleting, and cloning volumes and making snapshots.
This service requires access to the storage back end (storage) and storage management (storageMgmt) networks in the networkAttachments of the OpenStackControlPlane CR. Some operations, such as creating an empty volume or a snapshot, do not require any data movement between the volume service and the storage back end. Other operations, such as migrating data from one storage back end to another, require the data to pass through the volume service and therefore do require this access.
Volume service configuration is performed in the cinderVolumes section with parameters set in the customServiceConfig, customServiceConfigSecrets, networkAttachments, replicas, and the nodeSelector sections.
The volume service cannot have multiple replicas.
Procedure
- Open your OpenStackControlPlane CR file, openstack_control_plane.yaml.
- Edit the CR file and add the configuration for your back end. The following example demonstrates the service configuration for a Red Hat Ceph Storage back end:

  ```yaml
  apiVersion: core.openstack.org/v1beta1
  kind: OpenStackControlPlane
  metadata:
    name: openstack
  spec:
    cinder:
      template:
        customServiceConfig: |
          [DEFAULT]
          debug = true
        cinderVolumes:
          ceph:
            networkAttachments:
            - storage
            customServiceConfig: |
              [ceph]
              volume_backend_name = ceph
              volume_driver = cinder.volume.drivers.rbd.RBDDriver
  ```

  - ceph: The configuration area for the individual back end. Each unique back end requires an individual configuration area. No back end is deployed by default, and the volume service does not run unless at least one back end is configured during deployment. For more information about configuring back ends, see Block Storage service (cinder) back ends and Multiple Block Storage service (cinder) back ends.
  - networkAttachments: The configuration area for the back end network connections.
  - volume_backend_name: The name assigned to this back end.
  - volume_driver: The driver used to connect to this back end.

  For a list of commonly used volume service parameters, see Volume service parameters.

- Save the file.
- Update the control plane:

  ```
  $ oc apply -f openstack_control_plane.yaml -n openstack
  ```

- Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  ```
  $ oc get openstackcontrolplane -n openstack
  ```

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.
4.9.1. Volume service parameters
Volume service parameters are provided for the configuration of the cinderVolumes portions of the Block Storage service.
| Parameter | Description |
|---|---|
| backend_availability_zone | Provides a setting for the availability zone of the back end. This is set in the back end section of the configuration. If it is not set, the value of the storage_availability_zone option is used. |
| volume_backend_name | Provides a setting for the back end name for a given driver implementation. There is no default value. |
| volume_driver | Provides a setting for the driver to use for volume creation. It is provided in the form of the Python namespace for the specific class. There is no default value. |
| enabled_backends | Provides a setting for a list of back end names to use. These back end names should be backed by a unique [CONFIG] group with its options. This is a comma-separated list of values. The default value is derived from the name of the back end configuration section. |
| image_conversion_dir | Provides a setting for a directory used for temporary storage during image conversion. The default value is /var/lib/cinder/conversion. |
| backend_stats_polling_interval | Provides a setting for the number of seconds between the volume service requests for usage statistics from the storage back end. The default is 60. |
4.9.2. Block Storage service (cinder) back ends
Each Block Storage service back end should have an individual configuration section in the cinderVolumes section. This ensures each back end runs in a dedicated pod. This approach has the following benefits:
- Increased isolation.
- Adding and removing back ends is fast and does not affect other running back ends.
- Configuration changes do not affect other running back ends.
- Automatically spreads the Volume pods into different nodes.
Each Block Storage service back end uses a storage transport protocol to access data in the volumes. Each storage transport protocol has individual requirements as described in Configuring transport protocols. Storage protocol information should also be provided in individual vendor installation guides.
Configure each back end with an independent pod. In director-based releases of RHOSP, all back ends run in a single cinder-volume container. This is no longer the best practice.
No back end is deployed by default. The Block Storage service volume service will not run unless at least one back end is configured during deployment.
All storage vendors provide an installation guide with best practices, deployment configuration, and configuration options for vendor drivers. These installation guides provide the specific configuration information required to properly configure the volume service for deployment. Installation guides are available in the Red Hat Ecosystem Catalog.
For more information about integrating and certifying vendor drivers, see Integrating partner content.
For information about Red Hat Ceph Storage back end configuration, see Integrating Red Hat Ceph Storage and Deploying a hyperconverged infrastructure environment.
For information about configuring a generic (non-vendor specific) NFS back end, see Configuring a generic NFS back end.
Use a certified storage back end and driver. If you use NFS storage that comes from the generic NFS back end, its capabilities are limited compared to a certified storage back end and driver.
4.9.3. Multiple Block Storage service (cinder) back ends
Multiple Block Storage service back ends are deployed by adding multiple, independent entries in the cinderVolumes configuration section. Each back end runs in an independent pod.
The following configuration example deploys two independent back ends: one for iSCSI and one for NFS:
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      cinderVolumes:
        nfs:
          networkAttachments:
          - storage
          customServiceConfigSecrets:
          - cinder-volume-nfs-secrets
          customServiceConfig: |
            [nfs]
            volume_backend_name=nfs
        iscsi:
          networkAttachments:
          - storage
          - storageMgmt
          customServiceConfig: |
            [iscsi]
            volume_backend_name=iscsi
4.10. Configuring back end availability zones
Configure back end availability zones (AZs) for Volume service back ends and the Backup service to group cloud infrastructure services for users. AZs are mapped to failure domains and Compute resources for high availability, fault tolerance, and resource scheduling.
For example, you could create an AZ of Compute nodes with specific hardware that users can select when they create an instance that requires that hardware.
Post-deployment, AZs are created by using the RESKEY:availability_zones volume type extra specification.
Users can create a volume directly in an AZ as long as the volume type does not restrict the AZ.
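The AZ restriction rule above can be expressed as a simple predicate. This Python sketch is illustrative only and is not cinder scheduler code; the RESKEY:availability_zones extra specification name comes from the text, while the comma-separated value format is an assumption:

```python
def az_allowed(requested_az, extra_specs):
    """Return True if a volume may be created in requested_az.

    If the volume type carries no RESKEY:availability_zones extra spec,
    any AZ is allowed; otherwise the requested AZ must appear in the
    (assumed comma-separated) list.
    """
    allowed = extra_specs.get("RESKEY:availability_zones")
    if allowed is None:
        return True  # volume type does not restrict the AZ
    return requested_az in [az.strip() for az in allowed.split(",")]

# A volume type restricted to zone1 and zone2:
specs = {"RESKEY:availability_zones": "zone1, zone2"}
assert az_allowed("zone1", specs)
assert not az_allowed("zone3", specs)
assert az_allowed("zone3", {})  # unrestricted volume type
```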
Procedure
- Open your OpenStackControlPlane CR file, openstack_control_plane.yaml.
- Edit the CR file and add the AZ configuration.
The following example demonstrates an AZ configuration:
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      cinderVolumes:
        nfs:
          networkAttachments:
          - storage
          - storageMgmt
          customServiceConfigSecrets:
          - cinder-volume-nfs-secrets
          customServiceConfig: |
            [nfs]
            volume_backend_name=nfs
            backend_availability_zone=zone1
        iscsi:
          networkAttachments:
          - storage
          - storageMgmt
          customServiceConfig: |
            [iscsi]
            volume_backend_name=iscsi
            backend_availability_zone=zone2

- backend_availability_zone: The availability zone associated with the back end.
- Save the file.
- Update the control plane:

  $ oc apply -f openstack_control_plane.yaml -n openstack

  Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  $ oc get openstackcontrolplane -n openstack

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.
4.11. Configuring a generic NFS storage back end for volumes
The Block Storage service (cinder) can be configured with a generic NFS back end to provide an alternative storage solution for volumes and backups.
Limitations

- Use a certified storage back end and driver. If you use NFS storage that comes from the generic NFS back end, its capabilities are limited compared to a certified storage back end and driver. For example, the generic NFS back end does not support features such as volume encryption and volume multi-attach. For information about supported drivers, see the Red Hat Ecosystem Catalog.
- For Block Storage (cinder) and Compute (nova) services, you must use NFS version 4.0 or later. RHOSO does not support earlier versions of NFS.
- RHOSO does not support the NetApp NAS secure feature, because it interferes with normal volume operations. This feature must be disabled in the customServiceConfig of the specific back-end configuration with the following parameters:

  nas_secure_file_operations=false
  nas_secure_file_permissions=false

- Do not configure the nfs_mount_options option. The default value is the best NFS option for RHOSO environments. If you experience issues when you configure multiple services to share the same NFS server, contact Red Hat Support.
Procedure
- Create a Secret CR to store the volume connection information. The following is an example of a Secret CR:

  apiVersion: v1
  kind: Secret
  metadata:
    name: cinder-volume-nfs-secrets
  type: Opaque
  stringData:
    cinder-volume-nfs-secrets: |
      [nfs]
      nas_host=192.168.130.1
      nas_share_path=/var/nfs/cinder

  where:

  name: The name used when including the Secret in the cinderVolumes back end configuration.

- Save the file.
- Update the control plane:

  $ oc apply -f <secret_file_name> -n openstack

  Replace <secret_file_name> with the name of the file that contains your Secret CR.
- Open your OpenStackControlPlane CR file, openstack_control_plane.yaml.
- Edit the CR file and add the configuration for the generic NFS back end.

The following example demonstrates this configuration:
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      cinderVolumes:
        nfs:
          networkAttachments:
          - storage
          customServiceConfig: |
            [nfs]
            volume_backend_name=nfs
            volume_driver=cinder.volume.drivers.nfs.NfsDriver
            nfs_snapshot_support=true
            nas_secure_file_operations=false
            nas_secure_file_permissions=false
          customServiceConfigSecrets:
          - cinder-volume-nfs-secrets

- The storageMgmt network is not listed because generic NFS does not have a management interface.
- cinder-volume-nfs-secrets: The name from the Secret CR.
- If you are configuring multiple generic NFS back ends, ensure that each back end is in an individual configuration section so that one pod is dedicated to each back end.
- Save the file.
- Update the control plane:

  $ oc apply -f openstack_control_plane.yaml -n openstack

  Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  $ oc get openstackcontrolplane -n openstack

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.
4.12. Configuring NFS storage for volume format conversion
When the Block Storage service (cinder) performs image format conversion and disk space is limited, converting large Image service (glance) images can completely fill the root disk of the node. You can use an external NFS share for the conversion to prevent the node from running out of space.
Procedure
- Open your OpenStackControlPlane CR file, openstack_control_plane.yaml.
- Edit the CR file and add the configuration for the directory for converting large Image service images.
The following example demonstrates how to configure this conversion directory:
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  extraMounts:
  - extraVol:
    - propagation:
      - CinderVolume
      volumes:
      - name: cinder-conversion
        nfs:
          path: <nfs_share_path>
          server: <nfs_server>
      mounts:
      - name: cinder-conversion
        mountPath: /var/lib/cinder/conversion
...

- Replace <nfs_share_path> with the path to the conversion directory.

  Note: The Block Storage volume service (cinder-volume) runs as the cinder user. The cinder user requires write permission for <nfs_share_path>. You can configure this by running the following command on the NFS server:

  $ chown 42407:42407 <nfs_share_path>

- Replace <nfs_server> with the IP address of the NFS server that hosts the conversion directory.
Note: This example demonstrates how to create a common conversion directory that all the volume service pods use. You can also define a conversion directory for each volume service pod:

- Define each conversion directory by using an extraMounts section, as demonstrated above, in the cinder section of the OpenStackControlPlane CR file.
- Set the propagation value to the name of the specific Volume section instead of CinderVolume.
- Save the file.
- Update the control plane:

  $ oc apply -f openstack_control_plane.yaml -n openstack

  Wait until Red Hat OpenShift Container Platform (RHOCP) creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  $ oc get openstackcontrolplane -n openstack

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.
4.13. Configuring automatic database cleanup
The Block Storage service (cinder) performs a soft-deletion of database entries. This means that database entries are marked for deletion but are not actually deleted from the database. This allows for the auditing of deleted resources.
If these soft-deleted rows are not purged, they accumulate endlessly and consume database resources. RHOSO automatically purges database entries marked for deletion after a set number of days. By default, records that have been marked for deletion for more than 30 days are purged. You can configure a different record age and schedule for purge jobs.
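The soft-delete-then-purge behavior can be illustrated with a short sketch. This Python example is not the cinder implementation; the record structure is hypothetical and only demonstrates how an age threshold selects which soft-deleted rows to purge:

```python
from datetime import datetime, timedelta

def purge_soft_deleted(rows, age_days, now):
    """Keep rows that are live, or soft-deleted for fewer than age_days."""
    cutoff = now - timedelta(days=age_days)
    return [
        r for r in rows
        if not r["deleted"] or r["deleted_at"] > cutoff
    ]

now = datetime(2024, 1, 31)
rows = [
    {"id": 1, "deleted": False, "deleted_at": None},
    {"id": 2, "deleted": True, "deleted_at": datetime(2024, 1, 25)},  # 6 days ago
    {"id": 3, "deleted": True, "deleted_at": datetime(2023, 12, 1)},  # long ago
]
# With the default 30-day age, only the long-deleted row is purged.
remaining = purge_soft_deleted(rows, age_days=30, now=now)
assert [r["id"] for r in remaining] == [1, 2]
```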
Procedure
- Open your openstack_control_plane.yaml file to edit the OpenStackControlPlane CR.
- Add the dbPurge parameter to the cinder template to configure database cleanup.

The following is an example of using the dbPurge parameter to configure the Block Storage service:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      dbPurge:
        age: 20
        schedule: 1 0 * * 0

- age: The number of days a record has been marked for deletion before it is purged. The default value is 30. The minimum value is 1.
- schedule: When to run the job, in crontab format. The default value is 1 0 * * *, which is equivalent to 00:01 daily.
- Update the control plane:

  $ oc apply -f openstack_control_plane.yaml
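The schedule value uses the standard five-field crontab layout (minute, hour, day of month, month, day of week). As an illustrative sketch, assuming only the standard field order, this Python snippet labels the fields of the two values mentioned above:

```python
FIELDS = ("minute", "hour", "day_of_month", "month", "day_of_week")

def describe_cron(expr):
    """Split a five-field crontab expression into named fields."""
    parts = expr.split()
    if len(parts) != 5:
        raise ValueError("expected 5 crontab fields")
    return dict(zip(FIELDS, parts))

# The default schedule: 00:01 every day.
assert describe_cron("1 0 * * *") == {
    "minute": "1", "hour": "0", "day_of_month": "*",
    "month": "*", "day_of_week": "*",
}
# The example above: 00:01 every Sunday (day_of_week 0).
assert describe_cron("1 0 * * 0")["day_of_week"] == "0"
```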
4.14. Preserving backup jobs during Block Storage service updates
The Block Storage service (cinder) requires maintenance operations that are run automatically. Some operations are one-off and some are periodic. These operations are run using OpenShift Jobs.
If jobs and their pods are automatically removed on completion, you cannot check the logs of these operations. However, you can use the preserveJobs field in your OpenStackControlPlane CR to stop the automatic removal of jobs and preserve them.
Example:
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      preserveJobs: true
4.15. Resolving hostname conflicts in backup services
Most storage back ends in the Block Storage service (cinder) require the hosts that connect to them to have unique hostnames. These hostnames are used to identify permissions and addresses, such as the iSCSI initiator name, HBA WWN, and WWPN.
Because RHOSO is deployed in OpenShift, the hostnames that the Block Storage volume and backup services report are not the OpenShift node hostnames but the pod names instead.
These pod names are formed using a predetermined template:

- For volumes: cinder-volume-<backend_key>-0
- For backups: cinder-backup-<replica-number>
If you use the same storage back end in multiple deployments, the unique hostname requirement may not be honored, resulting in operational problems. To address this issue, you can request the installer to have unique pod names, and hence unique hostnames, by using the uniquePodNames field.
When you set the uniquePodNames field to true, a short hash is added to the pod names, which addresses hostname conflicts.
Example:
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    uniquePodNames: true
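The naming scheme described above can be sketched as follows. This Python example is illustrative only: the exact hash algorithm and length that the installer uses are not specified here, so a short SHA-256 prefix stands in for it, and the seed parameter is a hypothetical stand-in for whatever deployment-specific input makes the hash unique:

```python
import hashlib

def volume_pod_name(backend_key, unique=False, seed=""):
    """Build a cinder-volume pod name per the documented template.

    With unique=True, append a short hash so that two deployments that
    share a storage back end report different hostnames (stand-in
    algorithm, not the installer's actual one).
    """
    name = f"cinder-volume-{backend_key}"
    if unique:
        digest = hashlib.sha256((name + seed).encode()).hexdigest()[:8]
        name = f"{name}-{digest}"
    return f"{name}-0"

assert volume_pod_name("nfs") == "cinder-volume-nfs-0"
# Distinct seeds (deployments) yield distinct pod names.
a = volume_pod_name("nfs", unique=True, seed="deploymentA")
b = volume_pod_name("nfs", unique=True, seed="deploymentB")
assert a != b and a.startswith("cinder-volume-nfs-") and a.endswith("-0")
```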
4.16. Using other container images
Red Hat OpenStack Services on OpenShift (RHOSO) services are deployed by using a container image for a specific release and version. Sometimes, a deployment requires a container image other than the one produced for that release and version.
The most common reasons for using a container image that is not for a specific release and version are:
- Deploying a hotfix.
- Using a certified, vendor-provided container image.
The container images used by the installer are controlled through the OpenStackVersion CR. An OpenStackVersion CR is automatically created by the openstack operator during the deployment of services. Alternatively, it can be created manually before the application of the OpenStackControlPlane CR but after the openstack operator is installed. This allows for the container image for any service and component to be individually designated.
The granularity of this designation depends on the service. For example, in the Block Storage service (cinder) all the cinderAPI, cinderScheduler, and cinderBackup pods must have the same image. However, for the Volume service, the container image is defined for each of the cinderVolumes.
The following example demonstrates an OpenStackControlPlane configuration with two back ends: one called ceph and one called custom-fc. The custom-fc back end requires a certified, vendor-provided container image. Additionally, the other service images must be configured to use a non-standard image from a hotfix.
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      cinderVolumes:
        ceph:
          networkAttachments:
          - storage
          < . . . >
        custom-fc:
          networkAttachments:
          - storage
The following example demonstrates an OpenStackVersion CR that sets up these container images:
apiVersion: core.openstack.org/v1beta1
kind: OpenStackVersion
metadata:
  name: openstack
spec:
  customContainerImages:
    cinderAPIImages: <custom-api-image>
    cinderBackupImages: <custom-backup-image>
    cinderSchedulerImages: <custom-scheduler-image>
    cinderVolumeImages:
      custom-fc: <vendor-volume-volume-image>
- Replace <custom-api-image> with the name of the API service image to use.
- Replace <custom-backup-image> with the name of the Backup service image to use.
- Replace <custom-scheduler-image> with the name of the Scheduler service image to use.
- Replace <vendor-volume-volume-image> with the name of the certified, vendor-provided image to use.
The name attribute in your OpenStackVersion CR must match the same attribute in your OpenStackControlPlane CR.
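The per-back-end image resolution described in this section can be pictured with a small sketch. This Python example is illustrative only; the field names follow the customContainerImages structure shown in this section, the fallback-to-release-default behavior is an assumption, and the registry path is hypothetical:

```python
def volume_image(backend_name, custom_images, default_image):
    """Pick the container image for one cinder volume back end.

    Per-back-end overrides live under cinderVolumeImages; any back end
    not listed there falls back to the release default (assumed behavior).
    """
    overrides = custom_images.get("cinderVolumeImages", {})
    return overrides.get(backend_name, default_image)

# Hypothetical vendor image for the custom-fc back end.
custom = {"cinderVolumeImages": {"custom-fc": "registry.example.com/vendor/cinder-fc:1.0"}}
assert volume_image("custom-fc", custom, "release-default") == "registry.example.com/vendor/cinder-fc:1.0"
# The ceph back end is not overridden, so it keeps the release default.
assert volume_image("ceph", custom, "release-default") == "release-default"
```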