Chapter 3. Configuring the Block Storage service (cinder)
The Block Storage service (cinder) provides persistent storage by giving access to remote block storage devices through volumes. The Block Storage service has three mandatory services, api, scheduler, and volume, and one optional service, backup.

All Block Storage services use the cinder section of the OpenStackControlPlane custom resource (CR) for their configuration:
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
Global configuration options are applied directly under the cinder and template sections. Service-specific configuration options appear under their associated sections. The following example demonstrates all of the sections where Block Storage service configuration is applied and the type of configuration that is applied in each section:
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    <global-options>
    template:
      <global-options>
      cinderAPI:
        <cinder-api-options>
      cinderScheduler:
        <cinder-scheduler-options>
      cinderVolumes:
        <name1>:
          <cinder-volume-options>
        <name2>:
          <cinder-volume-options>
      cinderBackup:
        <cinder-backup-options>
3.1. Terminology
The following terms are important to understanding the Block Storage service (cinder):
- Storage back end: A physical storage system where volume data is stored.
- Cinder driver: The part of the Block Storage service that enables communication with the storage back end. It is configured with the volume_driver and backup_driver options.
- Cinder back end: A logical representation of the grouping of a cinder driver with its configuration. This grouping is used to manage and address the volumes present in a specific storage back end. The name of this logical construct is configured with the volume_backend_name option.
- Storage pool: A logical grouping of volumes in a given storage back end.
- Cinder pool: A representation in the Block Storage service of a storage pool.
- Volume host: The way the Block Storage service addresses volumes. There are two different representations: short (<hostname>@<backend-name>) and full (<hostname>@<backend-name>#<pool-name>); see the example after this list.
- Quota: Limits defined per project to constrain the use of Block Storage specific resources.
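For example, with a hypothetical back end named ceph that runs on a host named cinder-volume-ceph-0 and exposes a pool named pool_a, the short volume host form is cinder-volume-ceph-0@ceph and the full form is cinder-volume-ceph-0@ceph#pool_a. These names are illustrative only; your host, back end, and pool names will differ.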
3.2. Block Storage service (cinder) enhancements in Red Hat OpenStack Services on OpenShift (RHOSO)
The following functionality enhancements have been integrated into the Block Storage service:
- Ease of deployment for multiple volume back ends.
- Back end deployment does not affect running volume back ends.
- Back end addition and removal does not affect running back ends.
- Back end configuration changes do not affect other running back ends.
- Each back end can use its own vendor-specific container image. It is no longer necessary to build a custom image that holds dependencies from two drivers.
- Pacemaker has been replaced by Red Hat OpenShift Container Platform (RHOCP) functionality.
- Improved methods for troubleshooting the service code.
3.3. Configuring transport protocols
Deployments use different transport protocols to connect to volumes. The Block Storage service (cinder) supports the following transport protocols:
- iSCSI
- Fibre Channel (FC)
- NVMe over TCP (NVMe-TCP)
- NFS
- Red Hat Ceph Storage RBD
Control plane services that use volumes, such as the Block Storage service (cinder) volume and backup services, may require the support of the Red Hat OpenShift Container Platform (RHOCP) cluster to use the iscsid and multipathd modules, depending on the storage array in use. These modules must be available on all nodes where the volume-dependent services execute. To use these transport protocols, you create a MachineConfig CR that defines where these modules execute. For more information on a MachineConfig, see Understanding the Machine Config operator.

Using a MachineConfig to change the configuration of a node causes the node to reboot. Consult with your RHOCP administrator before applying a MachineConfig to ensure the integrity of RHOCP workloads.
The procedures in this section are meant as a guide to the general configuration of these protocols. Storage back end vendors will supply configuration information on how to connect to their specific solution.
In addition to protocol specific configuration, Red Hat recommends the configuration of multipathing regardless of the transport protocol used. After you have completed the transport protocol configuration, see Configuring multipathing for the procedure.
The iscsid and multipathd services are started automatically on EDPM nodes.
3.3.1. Configuring the iSCSI protocol
Connecting to iSCSI volumes from the RHOCP nodes requires the iSCSI initiator service. There must be a single instance of the iscsid service module for normal RHOCP usage, OpenShift CSI plugin usage, and the RHOSO services. Apply a MachineConfig to the applicable nodes to configure them to use the iSCSI protocol.

If the iscsid service module is already running, this procedure is not required.
Procedure
- Create a MachineConfig CR to configure the nodes for the iscsid module. The following example starts the iscsid service with a default configuration on all RHOCP worker nodes:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
    service: cinder
  name: 99-worker-cinder-enable-iscsid
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
      - enabled: true
        name: iscsid.service
- Save the file.
- Apply the MachineConfig CR file:

$ oc apply -f <machine_config_file> -n openstack

Replace <machine_config_file> with the name of your MachineConfig CR file.
3.3.2. Configuring the Fibre Channel protocol
There is no additional node configuration required to use the Fibre Channel protocol to connect to volumes. However, every node that uses Fibre Channel must have a Host Bus Adapter (HBA) card. Unless all worker nodes in your RHOCP deployment have an HBA card, you must use a nodeSelector in your control plane configuration to select which nodes are used for the volume and backup services, as well as for the Image service instances that use the Block Storage service as their storage back end.
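The following is a minimal sketch of using a nodeSelector to pin a Volume service back end to HBA-equipped worker nodes, based on the nodeSelector support described in Configuring the volume service. The back end name fibre-channel and the node label fc.example.com/hba are hypothetical; label your HBA-equipped nodes with whatever label suits your environment.

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      cinderVolumes:
        fibre-channel:
          nodeSelector:
            fc.example.com/hba: "true"
          networkAttachments:
          - storage
          < rest of the back end configuration >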
3.3.3. Configuring the NVMe over TCP (NVMe-TCP) protocol
Connecting to NVMe-TCP volumes from the RHOCP nodes requires the nvme kernel modules.
Procedure
- Create a MachineConfig CR to configure the nodes for the nvme kernel modules. The following example loads the nvme kernel modules with a default configuration on all RHOCP worker nodes:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
    service: cinder
  name: 99-worker-cinder-load-nvme-fabrics
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - path: /etc/modules-load.d/nvme_fabrics.conf
        overwrite: false
        mode: 420
        user:
          name: root
        group:
          name: root
        contents:
          source: data:,nvme-fabrics%0Anvme-tcp
- Save the file.
- Apply the MachineConfig CR file:

$ oc apply -f <machine_config_file> -n openstack

Replace <machine_config_file> with the name of your MachineConfig CR file.
- After the nodes have rebooted, verify that the nvme-fabrics module is loaded and that the host supports ANA:

cat /sys/module/nvme_core/parameters/multipath
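The command prints Y when native NVMe multipathing, which handles ANA, is enabled on the host, and N when it is disabled.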
Note: Even though ANA does not use the Linux Multipathing Device Mapper, multipathd must be running for the Compute nodes to be able to use multipathing when connecting volumes to instances.
3.3.4. Configuring multipathing
Configuring multipathing on RHOCP nodes requires a MachineConfig CR that creates a multipath.conf file on the node and starts the multipathd service.

The example provided in this procedure creates only a minimal multipath.conf file. Production deployments may require hardware vendor-specific additions appropriate to your environment. Consult with the appropriate systems administrators for any values required for your deployment.
Procedure
- Create a MachineConfig CR to configure multipathing on the nodes. The following example creates a multipath.conf file and starts the multipathd module on all RHOCP worker nodes:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
    service: cinder
  name: 99-worker-cinder-enable-multipathd
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - path: /etc/multipath.conf
        overwrite: false
        mode: 384
        user:
          name: root
        group:
          name: root
        contents:
          source: data:,defaults%20%7B%0A%20%20user_friendly_names%20no%0A%20%20recheck_wwid%20yes%0A%20%20skip_kpartx%20yes%0A%20%20find_multipaths%20yes%0A%7D%0A%0Ablacklist%20%7B%0A%7D
    systemd:
      units:
      - enabled: true
        name: multipathd.service
Note: The following is the content of the multipath.conf file that this example creates:

defaults {
  user_friendly_names no
  recheck_wwid yes
  skip_kpartx yes
  find_multipaths yes
}

blacklist {
}
- Save the file.
- Apply the MachineConfig CR file:

$ oc apply -f <machine_config_file> -n openstack

Replace <machine_config_file> with the name of your MachineConfig CR file.

Note: In RHOSO deployments, the use_multipath_for_image_xfer configuration option is enabled by default.
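As a hedged verification sketch, after the MachineConfig is applied and the node has rebooted, you can check that the multipathd service is running from an oc debug session; <node_name> is a placeholder for a worker node in your cluster:

$ oc debug node/<node_name> -- chroot /host systemctl is-active multipathd

The command returns active when the service is running.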
3.4. Configuring initial defaults
The Block Storage service (cinder) has a set of initial defaults that you should configure when the service is first enabled. Define them in the main customServiceConfig section. After deployment, you modify these defaults by using the openstack client.
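For example, after deployment you can review and adjust per-project limits with the openstack client. The following commands are a sketch; <project> is a placeholder for your project name or ID and the values are examples only:

$ openstack quota show <project>
$ openstack quota set --volumes 30 --snapshots 20 --gigabytes 2000 <project>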
Procedure
- Open your OpenStackControlPlane CR file, openstack_control_plane.yaml.
- Edit the CR file and add the Block Storage service global configuration. The following example demonstrates a Block Storage service initial configuration:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    enabled: true
    template:
      customServiceConfig: |
        [DEFAULT]
        quota_volumes = 20
        quota_snapshots = 15
For a complete list of all initial default parameters, see Initial default parameters.
Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstack
Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

$ oc get openstackcontrolplane -n openstack

The OpenStackControlPlane resources are created when the status is "Setup complete".

Tip: Append the -w option to the end of the get command to track deployment progress.
3.4.1. Initial default parameters
These initial default parameters should be configured when the service is first enabled.
Parameter | Description |
---|---|
default_volume_type | Provides the default volume type for all users. A volume type with a non-default name is not created automatically. The default value is __DEFAULT__. |
no_snapshot_gb_quota | Determines whether the size of snapshots counts against the gigabyte quota in addition to the size of volumes. The default is false. |
per_volume_gigabytes | Provides the maximum size of each volume in gigabytes. The default is -1, which means there is no limit. |
quota_volumes | Provides the number of volumes allowed for each project. The default value is 10. |
quota_snapshots | Provides the number of snapshots allowed for each project. The default value is 10. |
quota_groups | Provides the number of volume groups allowed for each project, which includes the consistency groups. The default value is 10. |
quota_gigabytes | Provides the total amount of storage for each project, in gigabytes, allowed for volumes and, depending upon the configuration of the no_snapshot_gb_quota option, snapshots. The default is 1000. |
quota_backups | Provides the number of backups allowed for each project. The default value is 10. |
quota_backup_gigabytes | Provides the total amount of storage for each project, in gigabytes, allowed for backups. The default is 1000. |
3.5. Configuring the API service
The Block Storage service (cinder) provides an API interface for all external interaction with the service for both users and other RHOSO services.
Procedure
- Open your OpenStackControlPlane CR file, openstack_control_plane.yaml.
- Edit the CR file and add the configuration for the internal Red Hat OpenShift Container Platform (RHOCP) load balancer. The following example demonstrates a load balancer configuration:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      cinderAPI:
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
- Edit the CR file and add the configuration for the number of API service replicas. Red Hat recommends running the cinderAPI service in an Active-Active configuration with three replicas. The following example demonstrates configuring the cinderAPI service to use three replicas:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      cinderAPI:
        replicas: 3
- Edit the CR file and configure cinderAPI options. These options are configured in the customServiceConfig section under the cinderAPI section. The following example demonstrates configuring cinderAPI service options and enabling debugging on all services:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      customServiceConfig: |
        [DEFAULT]
        debug = true
      cinderAPI:
        customServiceConfig: |
          [DEFAULT]
          osapi_volume_workers = 3
For a listing of commonly used cinderAPI service option parameters, see API service option parameters.
- Save the file.
Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstack
Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

$ oc get openstackcontrolplane -n openstack

The OpenStackControlPlane resources are created when the status is "Setup complete".

Tip: Append the -w option to the end of the get command to track deployment progress.
3.5.1. API service option parameters
API service option parameters are provided for the configuration of the cinderAPI portions of the Block Storage service.
Parameter | Description |
---|---|
api_rate_limit | Provides a value to determine whether the API rate limit is enabled. The default is false. |
debug | Provides a value to determine whether the logging level is set to DEBUG instead of the default INFO level. The default is false. |
osapi_max_limit | Provides a value for the maximum number of items a collection resource returns in a single response. The default is 1000. |
osapi_volume_workers | Provides a value for the number of workers assigned to the API component. The default is the number of CPUs available. |
3.6. Configuring the scheduler service
The Block Storage service (cinder) has a scheduler service (cinderScheduler) that is responsible for making decisions, such as selecting which back end receives new volumes, determining whether there is enough free space to perform an operation, and deciding where an existing volume should be moved during specific operations.
Red Hat recommends using only a single instance of cinderScheduler for scheduling consistency and ease of troubleshooting. While cinderScheduler can be run with multiple instances, the service default of replicas: 1 is recommended.
Procedure
- Open your OpenStackControlPlane CR file, openstack_control_plane.yaml.
- Edit the CR file and add the configuration for the service down detection timeouts. The following example demonstrates this configuration:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      customServiceConfig: |
        [DEFAULT]
        report_interval = 20 1
        service_down_time = 120 2

1. The number of seconds between the heartbeat reports that each Block Storage service sends.
2. The maximum number of seconds since the last heartbeat report before a Block Storage service is considered down.
Note: Red Hat recommends configuring these values at the cinder level of the CR instead of the cinderScheduler level so that these values are applied to all components consistently.

- Edit the CR file and add the configuration for the statistics reporting interval.
The following example demonstrates configuring these values at the cinder level to apply them globally to all services:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      customServiceConfig: |
        [DEFAULT]
        backend_stats_polling_interval = 120 1
        backup_driver_stats_polling_interval = 120 2

1. The number of seconds between requests for usage statistics from the volume back ends.
2. The number of seconds between requests for usage statistics from the backup driver.
The following example demonstrates configuring these values at the cinderVolumes and cinderBackup level to customize settings for each service:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      cinderBackup:
        customServiceConfig: |
          [DEFAULT]
          backup_driver_stats_polling_interval = 120
        < rest of the config >
      cinderVolumes:
        nfs:
          customServiceConfig: |
            [DEFAULT]
            backend_stats_polling_interval = 120
NoteThe generation of usage statistics can be resource intensive for some back ends. Setting these values too low can affect back end performance. You may need to tune the configuration of these settings to better suit individual back ends.
- Perform any additional configuration necessary to customize the cinderScheduler service. For more configuration options for the customization of the cinderScheduler service, see Scheduler service parameters.
- Save the file.
Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstack
Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

$ oc get openstackcontrolplane -n openstack

The OpenStackControlPlane resources are created when the status is "Setup complete".

Tip: Append the -w option to the end of the get command to track deployment progress.
3.6.1. Scheduler service parameters
Scheduler service parameters are provided for the configuration of the cinderScheduler portions of the Block Storage service.
Parameter | Description |
---|---|
debug | Provides a setting for the logging level. When this parameter is true, the logging level is set to DEBUG instead of the default INFO level. The default is false. |
scheduler_max_attempts | Provides a setting for the maximum number of attempts to schedule a volume. The default is 3. |
scheduler_default_filters | Provides a setting for filter class names to use for filtering hosts when not specified in the request. This is a comma-separated list. The default is AvailabilityZoneFilter, CapacityFilter, CapabilitiesFilter. |
scheduler_default_weighers | Provides a setting for weigher class names to use for weighing hosts. This is a comma-separated list. The default is CapacityWeigher. |
scheduler_weight_handler | Provides a setting for the handler to use for selecting the host or pool after weighing. The value cinder.scheduler.weights.OrderedScheduler, which is the default, selects the first host or pool from the weighed list. |
The following is an explanation of the filter class names from the parameter table:
AvailabilityZoneFilter
- Filters out all back ends that do not meet the availability zone requirements of the requested volume.
CapacityFilter
- Selects only back ends with enough space to accommodate the volume.
CapabilitiesFilter
- Selects only back ends that can support any specified settings in the volume.
InstanceLocality
- Configures clusters to use volumes local to the same node.
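As an illustration only, the following sketch shows how these parameters could be set in the customServiceConfig of the cinderScheduler section; the option values are examples, not recommendations:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      cinderScheduler:
        customServiceConfig: |
          [DEFAULT]
          scheduler_max_attempts = 5
          scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter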
3.7. Configuring the volume service
The Block Storage service (cinder) has a volume service (the cinderVolumes section) that is responsible for managing operations related to volumes, snapshots, and groups. These operations include creating, deleting, and cloning volumes and making snapshots.

This service requires access to the storage back end (storage) and storage management (storageMgmt) networks in the networkAttachments of the OpenStackControlPlane CR. Some operations, such as creating an empty volume or a snapshot, do not require any data movement between the volume service and the storage back end. Other operations, such as migrating data from one storage back end to another, require the data to pass through the volume service and therefore require this network access.
Volume service configuration is performed in the cinderVolumes section, with parameters set in the customServiceConfig, customServiceConfigSecrets, networkAttachments, replicas, and nodeSelector sections.
The volume service cannot have multiple replicas.
Procedure
- Open your OpenStackControlPlane CR file, openstack_control_plane.yaml.
- Edit the CR file and add the configuration for your back end. The following example demonstrates the service configuration for a Red Hat Ceph Storage back end:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      customServiceConfig: |
        [DEFAULT]
        debug = true
      cinderVolumes:
        ceph: 1
          networkAttachments: 2
          - storage
          customServiceConfig: |
            [ceph]
            volume_backend_name = ceph 3
            volume_driver = cinder.volume.drivers.rbd.RBDDriver 4
1. The configuration area for the individual back end. Each unique back end requires an individual configuration area. No back end is deployed by default. The Block Storage service volume service will not run unless at least one back end is configured during deployment. For more information about configuring back ends, see Block Storage service (cinder) back ends and Multiple Block Storage service (cinder) back ends.
2. The configuration area for the back end network connections.
3. The name assigned to this back end.
4. The driver used to connect to this back end.
For a list of commonly used volume service parameters, see Volume service parameters.
- Save the file.
Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstack
Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

$ oc get openstackcontrolplane -n openstack

The OpenStackControlPlane resources are created when the status is "Setup complete".

Tip: Append the -w option to the end of the get command to track deployment progress.
3.7.1. Volume service parameters
Volume service parameters are provided for the configuration of the cinderVolumes portions of the Block Storage service.
Parameter | Description |
---|---|
backend_availability_zone | Provides a setting for the availability zone of the back end. This is set in the back-end configuration section. If it is not set, the value of the storage_availability_zone option is used. |
volume_backend_name | Provides a setting for the back end name for a given driver implementation. There is no default value. |
volume_driver | Provides a setting for the driver to use for volume creation. It is provided in the form of the Python namespace for the specific class. There is no default value. |
enabled_backends | Provides a setting for a list of back end names to use. These back end names should be backed by a unique [CONFIG] group with its options. This is a comma-separated list of values. The default value is the name of each configuration section that defines a volume_backend_name option. |
image_conversion_dir | Provides a setting for a directory used for temporary storage during image conversion. The default value is /var/lib/cinder/conversion. |
backend_stats_polling_interval | Provides a setting for the number of seconds between the volume requests for usage statistics from the storage back end. The default is 60. |
3.7.2. Block Storage service (cinder) back ends
Each Block Storage service back end should have an individual configuration section in the cinderVolumes section. This ensures that each back end runs in a dedicated pod. This approach has the following benefits:
- Increased isolation.
- Adding and removing back ends is fast and does not affect other running back ends.
- Configuration changes do not affect other running back ends.
- Automatically spreads the Volume pods into different nodes.
Each Block Storage service back end uses a storage transport protocol to access data in the volumes. Each storage transport protocol has individual requirements as described in Configuring transport protocols. Storage protocol information should also be provided in individual vendor installation guides.
Red Hat recommends each back end be configured with an independent pod. In director-based releases of RHOSP, all back ends run in a single cinder-volume container. This is no longer the recommended practice.
No back end is deployed by default. The Block Storage service volume service will not run unless at least one back end is configured during deployment.
All storage vendors provide an installation guide with best practices, deployment configuration, and configuration options for vendor drivers. These installation guides provide the specific configuration information required to properly configure the volume service for deployment. Installation guides are available in the Red Hat Ecosystem Catalog.
For more information on integrating and certifying vendor drivers, see Integrating partner content.
For information on Red Hat Ceph Storage back end configuration, see Integrating Red Hat Ceph Storage and Deploying a Hyperconverged Infrastructure environment.
For information on configuring a generic (non-vendor specific) NFS back end, see Configuring a generic NFS back end.
Red Hat does not recommend the use of generic NFS driver for production environments.
3.7.3. Multiple Block Storage service (cinder) back ends
Multiple Block Storage service back ends are deployed by adding multiple, independent entries in the cinderVolumes configuration section. Each back end runs in an independent pod.

The following configuration example deploys two independent back ends: one for iSCSI and another for NFS:
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      cinderVolumes:
        nfs:
          networkAttachments:
          - storage
          customServiceConfigSecrets:
          - cinder-volume-nfs-secrets
          customServiceConfig: |
            [nfs]
            volume_backend_name=nfs
        iSCSI:
          networkAttachments:
          - storage
          - storageMgmt
          customServiceConfig: |
            [iscsi]
            volume_backend_name=iscsi
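After you deploy multiple back ends, you typically direct new volumes to a specific back end by creating volume types that reference the volume_backend_name values. The following commands are a sketch based on the back end names in the example above; the volume type names are arbitrary:

$ openstack volume type create nfs
$ openstack volume type set --property volume_backend_name=nfs nfs
$ openstack volume type create iscsi
$ openstack volume type set --property volume_backend_name=iscsi iscsi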
3.8. Configuring back end availability zones
Configure back end availability zones (AZs) for Volume service back ends and the Backup service to group cloud infrastructure services for users. AZs are mapped to failure domains and Compute resources for high availability, fault tolerance, and resource scheduling.
For example, you could create an AZ of Compute nodes with specific hardware that users can select when they create an instance that requires that hardware.
Post-deployment, you can associate AZs with volume types by using the RESKEY:availability_zones volume type extra specification.
Users can create a volume directly in an AZ as long as the volume type does not restrict the AZ.
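For illustration, the following commands sketch how a volume type could be restricted to an AZ and how a volume could be created directly in an AZ; the type, zone, and volume names are placeholders:

$ openstack volume type set --property RESKEY:availability_zones=zone1 <volume_type>
$ openstack volume create --availability-zone zone1 --size 10 <volume_name>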
Procedure
- Open your OpenStackControlPlane CR file, openstack_control_plane.yaml.
- Edit the CR file and add the AZ configuration. The following example demonstrates an AZ configuration:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      cinderVolumes:
        nfs:
          networkAttachments:
          - storage
          - storageMgmt
          customServiceConfigSecrets:
          - cinder-volume-nfs-secrets
          customServiceConfig: |
            [nfs]
            volume_backend_name=nfs
            backend_availability_zone=zone1 1
        iSCSI:
          networkAttachments:
          - storage
          - storageMgmt
          customServiceConfig: |
            [iscsi]
            volume_backend_name=iscsi
            backend_availability_zone=zone2
1. The availability zone associated with the back end.
- Save the file.
Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstack
Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

$ oc get openstackcontrolplane -n openstack

The OpenStackControlPlane resources are created when the status is "Setup complete".

Tip: Append the -w option to the end of the get command to track deployment progress.
3.9. Configuring a generic NFS back end
The Block Storage service (cinder) can be configured with a generic NFS back end to provide an alternative storage solution for volumes and backups.
The Block Storage service supports a generic NFS solution with the following caveats:
- Red Hat recommends using a certified storage back end and driver. Red Hat does not recommend using NFS storage that comes from the generic NFS back end in a production environment. The capabilities of a generic NFS back end are limited compared to a certified storage back end and driver. For example, the generic NFS back end does not support features such as volume encryption and volume multi-attach. For information about supported drivers, see the Red Hat Ecosystem Catalog.
- For Block Storage (cinder) and Compute (nova) services, you must use NFS version 4.0 or later. RHOSO does not support earlier versions of NFS.
RHOSO does not support the NetApp NAS secure feature. It interferes with normal volume operations. You must disable this feature in the customServiceConfig in the specific back-end configuration with the following parameters:

nas_secure_file_operations=false
nas_secure_file_permissions=false
- Do not configure the nfs_mount_options option. The default value provides the best NFS options for RHOSO environments. If you experience issues when you configure multiple services to share the same NFS server, contact Red Hat Support.
Procedure
- Create a Secret CR to store the volume connection information. The following is an example of a Secret CR:

apiVersion: v1
kind: Secret
metadata:
  name: cinder-volume-nfs-secrets 1
type: Opaque
stringData:
  cinder-volume-nfs-secrets: |
    [nfs]
    nas_host=192.168.130.1
    nas_share_path=/var/nfs/cinder

1. The name to use when including the Secret in the cinderVolumes back end configuration.
- Save the file.
- Apply the Secret CR file:

$ oc apply -f <secret_file_name> -n openstack

Replace <secret_file_name> with the name of the file that contains your Secret CR.
- Open your OpenStackControlPlane CR file, openstack_control_plane.yaml.
- Edit the CR file and add the configuration for the generic NFS back end. The following example demonstrates this configuration:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      cinderVolumes:
        nfs:
          networkAttachments: 1
          - storage
          customServiceConfig: |
            [nfs]
            volume_backend_name=nfs
            volume_driver=cinder.volume.drivers.nfs.NfsDriver
            nfs_snapshot_support=true
            nas_secure_file_operations=false
            nas_secure_file_permissions=false
          customServiceConfigSecrets:
          - cinder-volume-nfs-secrets 2

1. The configuration area for the back end network connections.
2. The name of the Secret CR that you created to store the volume connection information.
NoteIf you are configuring multiple generic NFS back ends, ensure each is in an individual configuration section so that one pod is devoted to each back end.
- Save the file.
Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstack
Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

$ oc get openstackcontrolplane -n openstack

The OpenStackControlPlane resources are created when the status is "Setup complete".

Tip: Append the -w option to the end of the get command to track deployment progress.
3.10. Configuring an NFS conversion directory
When the Block Storage service (cinder) performs image format conversion and space is limited, the conversion of large Image service (glance) images can completely fill the root disk of the node. You can use an external NFS share for the conversion to prevent the space on the node from being completely filled.
Procedure
- Open your OpenStackControlPlane CR file, openstack_control_plane.yaml.
- Edit the CR file and add the configuration for the conversion directory. The following example demonstrates a conversion directory configuration:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  extraMounts:
  - extraVol:
    - propagation:
      - CinderVolume
      volumes:
      - name: cinder-conversion
        nfs:
          path: <nfs_share_path> 1
          server: <nfs_server> 2
      mounts:
      - name: cinder-conversion
        mountPath: /var/lib/cinder/conversion
        readOnly: true

1. Replace <nfs_share_path> with the path of the NFS share to use for image conversion.
2. Replace <nfs_server> with the IP address or hostname of the NFS server that exports the share.
Note: The example provided demonstrates how to create a common conversion directory that is used by all volume service pods. It is also possible to define a conversion directory for each volume service pod. To do this, define each conversion directory using extraMounts as demonstrated above, but in the cinder section of the OpenStackControlPlane CR file. You would then set the propagation value to the name of the specific Volume section instead of CinderVolume.

- Save the file.
Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstack
Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

$ oc get openstackcontrolplane -n openstack

The OpenStackControlPlane resources are created when the status is "Setup complete".

Tip: Append the -w option to the end of the get command to track deployment progress.
3.11. Configuring automatic database cleanup
The Block Storage service (cinder) performs a soft-deletion of database entries. This means that database entries are marked for deletion but are not actually deleted from the database. This allows for the auditing of deleted resources.
The number of database rows marked for deletion grows endlessly and consumes resources if the rows are not purged. RHOSO automatically purges database entries marked for deletion after a set number of days. By default, records that have been marked for deletion for 30 days are purged. You can configure a different record age and schedule for purge jobs.
Procedure
- Open your openstack_control_plane.yaml file to edit the OpenStackControlPlane CR.
- Add the dbPurge parameter to the cinder template to configure database cleanup, depending on the service you want to configure. The following is an example of using the dbPurge parameter to configure the Block Storage service:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      dbPurge:
        age: 20 1
        schedule: 1 0 * * 0 2

1. The number of days that a record has been marked for deletion before it is purged.
2. The schedule for the purge job, in crontab format. In this example, the job runs at 00:01 every Sunday.
Update the control plane:
$ oc apply -f openstack_control_plane.yaml
3.12. Preserving jobs
The Block Storage service (cinder) requires maintenance operations that are run automatically. Some operations are one-off and some are periodic. These operations are run using OpenShift Jobs.
If jobs and their pods are automatically removed on completion, you cannot check the logs of these operations. However, you can use the preserveJobs field in your OpenStackControlPlane CR to stop the automatic removal of jobs and preserve them.
Example:
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      preserveJobs: true
3.13. Resolving hostname conflicts
Most storage back ends in the Block Storage service (cinder) require the hosts that connect to them to have unique hostnames. These hostnames are used to identify permissions and addresses, such as iSCSI initiator name, HBA WWN and WWPN.
Because you deploy in OpenShift, the hostnames that the Block Storage service volumes and backups report are not the OpenShift hostnames but the pod names instead.
These pod names are formed using a predetermined template:
- For volumes: cinder-volume-<backend_key>-0
- For backups: cinder-backup-<replica-number>
If you use the same storage back end in multiple deployments, the unique hostname requirement may not be honored, resulting in operational problems. To address this issue, you can request that the installer use unique pod names, and therefore unique hostnames, by using the uniquePodNames field.

When you set the uniquePodNames field to true, a short hash is added to the pod names, which addresses hostname conflicts.
Example:
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    uniquePodNames: true
3.14. Using other container images
Red Hat OpenStack Services on OpenShift (RHOSO) services are deployed using a container image for a specific release and version. There are times when a deployment requires a container image other than the one produced for that release and version. The most common reasons for this are:
- Deploying a hotfix.
- Using a certified, vendor-provided container image.
The container images used by the installer are controlled through the OpenStackVersion CR. An OpenStackVersion CR is automatically created by the openstack operator during the deployment of services. Alternatively, it can be created manually before the application of the OpenStackControlPlane CR but after the openstack operator is installed. This allows the container image for any service and component to be individually designated.
The granularity of this designation depends on the service. For example, in the Block Storage service (cinder), all the cinderAPI, cinderScheduler, and cinderBackup pods must have the same image. However, for the Volume service, the container image is defined for each of the cinderVolumes.
The following example demonstrates an OpenStackControlPlane configuration with two back ends: one called ceph and one called custom-fc. The custom-fc back end requires a certified, vendor-provided container image. Additionally, the other service images must use a non-standard image from a hotfix.
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      cinderVolumes:
        ceph:
          networkAttachments:
          - storage
          < . . . >
        custom-fc:
          networkAttachments:
          - storage
The following example demonstrates what the OpenStackVersion CR might look like in order to set up the container images properly:
apiVersion: core.openstack.org/v1beta1
kind: OpenStackVersion
metadata:
  name: openstack
spec:
  customContainerImages:
    cinderAPIImages: <custom-api-image>
    cinderBackupImages: <custom-backup-image>
    cinderSchedulerImages: <custom-scheduler-image>
    cinderVolumeImages:
      custom-fc: <vendor-volume-volume-image>
- Replace <custom-api-image> with the name of the API service image to use.
- Replace <custom-backup-image> with the name of the Backup service image to use.
- Replace <custom-scheduler-image> with the name of the Scheduler service image to use.
- Replace <vendor-volume-volume-image> with the name of the certified, vendor-provided image to use.
The name attribute in your OpenStackVersion CR must match the same attribute in your OpenStackControlPlane CR.