Configuring persistent storage
Configuring storage services for Red Hat OpenStack Services on OpenShift
Abstract
Providing feedback on Red Hat documentation
We appreciate your feedback. Tell us how we can improve the documentation.
To provide documentation feedback for Red Hat OpenStack Services on OpenShift (RHOSO), create a Jira issue in the OSPRH Jira project.
Procedure
- Log in to the Red Hat Atlassian Jira.
- Click the following link to open a Create Issue page: Create issue
- Select Red Hat OpenStack Services on OpenShift as the Project.
- Select Bug as the Issue Type.
- Click Next.
- Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue.
- Select documentation as the Component.
- Click Create.
- Review the details of the bug you created.
Chapter 1. Configuring persistent storage
When you deploy Red Hat OpenStack Services on OpenShift (RHOSO), you can configure storage services for block, image, object, and file storage. You can configure Red Hat Ceph Storage as a unified back end for all storage services or you can configure alternative back-end storage solutions for these services.
1.1. Ephemeral and persistent storage
RHOSO recognizes two types of storage, ephemeral and persistent:
- Ephemeral storage is associated with a specific Compute instance. When that instance is terminated, so is the associated ephemeral storage. This type of storage is useful for runtime requirements, such as storing the operating system of an instance.
- Persistent storage is designed to survive (persist) independent of any running instance. This storage is used for any data that needs to be reused, either by different instances or beyond the life of a specific instance.
RHOSO storage services correspond with the following persistent storage types:
- Block Storage service (cinder): Volumes
- Image service (glance): Images
- Object Storage service (swift): Objects
- Shared File Systems service (manila): Shares
All persistent storage services store data in a storage back end.
1.2. Supported persistent storage solutions
RHOSO supports the following storage solutions for service back ends:
- Block Storage service (cinder): Ceph RBD, iSCSI, FC, NVMe-TCP, or NFS back end
- Image service (glance): Ceph RBD, Block Storage, Object Storage, or NFS back end
- Object Storage service (swift): PersistentVolumes (PVs) on OpenShift nodes or disks on external data plane nodes
- Shared File Systems service (manila): CephFS, CephFS-NFS, or alternative back ends such as NetApp or Pure Storage
For information about planning the storage solution and related requirements for your RHOSO deployment, for example, networking and security, see Planning storage and shared file systems in Planning your deployment.
1.3. Red Hat Ceph Storage
Red Hat Ceph Storage can serve as a unified back end for all RHOSO storage services. The features and functionality of RHOSO services are optimized when you use Red Hat Ceph Storage as the storage back end.
- Supported Ceph versions and deployment modes
RHOSO supports external deployments of Red Hat Ceph Storage 7, 8, and 9. You can integrate an external Red Hat Ceph Storage cluster with the Compute service (nova) and one or more RHOSO storage services, or you can create a hyperconverged infrastructure (HCI) environment.
For information about creating a hyperconverged infrastructure (HCI) environment, see Deploying a hyperconverged infrastructure environment.
Note: Configuration examples in procedures that reference Red Hat Ceph Storage use Release 7 information. If you are using a later version of Red Hat Ceph Storage, adjust the configuration examples accordingly.
- OpenShift Data Foundation integration
You can use Red Hat OpenShift Data Foundation (ODF) in external mode to integrate with Red Hat Ceph Storage. The use of ODF in internal mode is not supported.
For more information about deploying ODF in external mode, see Deploying OpenShift Data Foundation in external mode.
1.4. Storage back end certification
To promote the use of best practices, Red Hat has a certification process for OpenStack back ends. For improved supportability and interoperability, ensure that your storage back end is certified for RHOSO. You can check certification status in the Red Hat Ecosystem Catalog. Ceph RBD is certified as a back end in all RHOSO releases.
Chapter 2. Mounting external files to provide configuration data
You can use the extraMounts parameter to mount external files for configuration or authentication data in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment. Use this parameter to distribute Red Hat Ceph Storage configuration files to services, access external NFS shares for temporary storage, or run storage back-end drivers on persistent filesystems.
Example scenarios include:
- Distributing Red Hat Ceph Storage cluster configuration and keyring files to Block Storage (cinder), Image (glance), and Compute (nova) services
- Accessing external NFS shares for temporary image storage when node disk space is consumed
- Configuring storage back-end drivers to run on persistent file systems to preserve data between reboots
The extraMounts parameter can be defined at the following levels:
- Service: A Red Hat OpenStack Services on OpenShift (RHOSO) service, such as Glance, Cinder, or Manila.
- Component: A component of a service, such as GlanceAPI, CinderAPI, CinderScheduler, ManilaShare, or CinderBackup.
- Instance: An individual instance of a particular component. For example, your deployment could have two instances of the ManilaShare component called share1 and share2. An Instance-level propagation applies to the pod associated with an individual instance of a Component type.
The propagation field describes how the definition is applied. If the propagation field is not set, a definition propagates to every level below the level at which it is defined:
- Service level definitions propagate to Component and Instance levels.
- Component level definitions propagate to the Instance level.
The following is the general structure of an extraMounts definition:
extraMounts:
- name: <extramount-name>
region: <openstack-region>
extraVol:
- propagation:
- <location>
extraVolType: <Ceph | Nfs | Undefined>
volumes:
- <pod-volume-structure>
mounts:
- <pod-mount-structure>
- name is an optional string that names the extraMounts definition. It is for organizational purposes only and cannot be referenced from other parts of the manifest.
- region is an optional string that defines the RHOSO region of the extraMounts definition.
- propagation is an optional field that describes how the definition is applied. If the propagation field is not set, the definition propagates to every level below the level at which it is defined.
- extraVolType is an optional string that helps the administrator categorize or label the group of mounts that belong to the extraVol entry of the list. There are no defined values for this parameter, but the values Ceph, Nfs, and Undefined are common.
- volumes is a list that defines Red Hat OpenShift volume sources. This field has the same structure as the volumes section of a Pod. The structure depends on the type of volume being defined. The name defined in this section is used as a reference in the mounts section.
- mounts is a list of mount points that represent the path where each volumeSource is mounted in the Pod. Each entry references a volume name from the volumes section and the path where it is mounted. This attribute has the same structure as the volumeMounts attribute of a Pod.
2.1. Mounting external files using the extraMounts attribute
Configure the OpenStackControlPlane custom resource (CR) to access external data for configuration or authentication purposes in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment.
Procedure
- Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, on your workstation.
- Add the extraMounts attribute to the OpenStackControlPlane CR service definition. The following example demonstrates adding the extraMounts attribute:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  extraMounts:
    - name: v1
      region: r1
      extraVol:
        - extraVolType: Ceph

- Add the propagation field to specify where in the service definition the extraMounts attribute applies. The following example adds the propagation field to the previous example:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  glance:
    ...
  extraMounts:
    - name: v1
      region: r1
      extraVol:
        - propagation:
          - Glance
          extraVolType: Ceph

The propagation field can have one of the following values:

- Service level propagations:
  - Glance
  - Cinder
  - Manila
  - Horizon
  - Neutron
- Component level propagations:
  - CinderAPI
  - CinderScheduler
  - CinderVolume
  - CinderBackup
  - GlanceAPI
  - ManilaAPI
  - ManilaScheduler
  - ManilaShare
  - NeutronAPI
- Back-end propagations:
  - Any back end defined in the CinderVolume, ManilaShare, or GlanceAPI maps.
- Define the volume sources. The following example demonstrates adding the volumes field to the previous example to provide a Red Hat Ceph Storage secret to the Image service (glance):

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  extraMounts:
    - name: v1
      region: r1
      extraVol:
        - extraVolType: Ceph
          volumes:
          - name: ceph
            secret:
              secretName: ceph-conf-files

  where ceph is the volume name and ceph-conf-files is the name of the Red Hat Ceph Storage secret.

- Define where the different volumes are mounted within the pod. The following example demonstrates adding the mounts field to the previous example to provide the location and name of the file that contains the Red Hat Ceph Storage secret:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  extraMounts:
    - name: v1
      region: r1
      extraVol:
        - extraVolType: Ceph
          volumes:
          - name: ceph
            secret:
              secretName: ceph-conf-files
          mounts:
          - name: ceph
            mountPath: "/etc/ceph"
            readOnly: true

  where "/etc/ceph" is the location where the secret files are mounted.
- Update the control plane:

$ oc apply -f openstack_control_plane.yaml -n openstack

- Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

$ oc get openstackcontrolplane -n openstack

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.

- Confirm that the control plane is deployed by reviewing the pods in the openstack namespace:

$ oc get pods -n openstack

  The control plane is deployed when all the pods are either completed or running.
2.2. Mounting external files configuration examples
The following configuration examples demonstrate how the extraMounts attribute is used to mount external files. The extraMounts attribute is defined either at the top-level custom resource (spec) or in the service definition.
- Dashboard service (horizon)
- This configuration example demonstrates using an external file to provide configuration to the Dashboard service.
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
horizon:
enabled: true
template:
customServiceConfig: '# add your customization here'
extraMounts:
- extraVol:
- extraVolType: HorizonSettings
mounts:
- mountPath: /etc/openstack-dashboard/local_settings.d/_66_help_link.py
name: horizon-config
readOnly: true
subPath: _66_help_link.py
volumes:
- name: horizon-config
configMap:
name: horizon-config
- Red Hat Ceph Storage
- This configuration example defines the services that require access to the Red Hat Ceph Storage secret.
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
extraMounts:
- name: v1
region: r1
extraVol:
- propagation:
- CinderVolume
- CinderBackup
- GlanceAPI
- ManilaShare
extraVolType: Ceph
volumes:
- name: ceph
secret:
secretName: ceph-conf-files
mounts:
- name: ceph
mountPath: "/etc/ceph"
readOnly: true
- Shared File Systems service (manila)
- This configuration example provides external configuration files to the Shared File Systems service so that it can connect to a Red Hat Ceph Storage back end.
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
manila:
template:
    manilaShares:
share1:
...
extraMounts:
- name: v1
region: r1
extraVol:
- propagation:
- share1
extraVolType: Ceph
volumes:
- name: ceph
secret:
secretName: ceph-conf-files
mounts:
- name: ceph
mountPath: "/etc/ceph"
readOnly: true
- Image service (glance)
- This configuration example connects three glanceAPI instances to different Red Hat Ceph Storage back ends. The instances api0, api1, and api2 are connected to three different Red Hat Ceph Storage clusters named ceph0, ceph1, and ceph2.
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
extraMounts:
- name: api0
region: r1
extraVol:
- propagation:
- api0
volumes:
- name: ceph0
secret:
secretName: <secret_name>
mounts:
- name: ceph0
mountPath: "/etc/ceph"
readOnly: true
- name: api1
region: r1
extraVol:
- propagation:
- api1
volumes:
- name: ceph1
secret:
secretName: <secret_name>
mounts:
- name: ceph1
mountPath: "/etc/ceph"
readOnly: true
- name: api2
region: r1
extraVol:
- propagation:
- api2
volumes:
- name: ceph2
secret:
secretName: <secret_name>
mounts:
- name: ceph2
mountPath: "/etc/ceph"
readOnly: true
Chapter 3. Integrating Red Hat Ceph Storage
You can configure Red Hat OpenStack Services on OpenShift (RHOSO) to integrate with an external Red Hat Ceph Storage cluster. This configuration connects the Block Storage (cinder), Image (glance), Object Storage (swift), Compute (nova), and Shared File Systems (manila) services to the cluster.
To configure Red Hat Ceph Storage as the back end for RHOSO storage, complete the following tasks:
- Verify that Red Hat Ceph Storage is deployed and all the required services are running.
- Create the Red Hat Ceph Storage pools on the Red Hat Ceph Storage cluster.
- Create a Red Hat Ceph Storage secret on the Red Hat Ceph Storage cluster to provide RHOSO services access to the Red Hat Ceph Storage cluster.
- Obtain the Ceph file system identifier.
- Configure the OpenStackControlPlane CR to use the Red Hat Ceph Storage cluster as the back end.
- Configure the OpenStackDataPlane CR to use the Red Hat Ceph Storage cluster as the back end.
3.1. Prerequisites
- Access to a Red Hat Ceph Storage cluster.
- The RHOSO control plane is installed on an operational Red Hat OpenShift Container Platform (RHOCP) cluster.
3.2. Creating Red Hat Ceph Storage pools
Create pools on the Red Hat Ceph Storage cluster server for each RHOSO service that uses the cluster.
- Considerations
If you are deploying the NFS service for the Shared File Systems service (manila):
- Do not select a custom port. Only the default NFS port of 2049 is supported, and you must enable the Red Hat Ceph Storage ingress service with ingress-mode set to haproxy-protocol when creating the NFS cluster.
- With Red Hat Ceph Storage 9, NFSv3 is not enabled by default. If you need NFSv3 support, you must include the --enable-nfsv3 parameter when creating the NFS cluster.
- For security in production environments, do not provide access to 0.0.0.0/0 on shares to mount them on client machines.
Prerequisites
- Run all commands in this procedure from a Red Hat Ceph Storage node that has access to the Ceph cluster.
- When creating pools, set the appropriate placement group (PG) number based on expected usage and cluster size. For more information, see "Placement Groups" in the Red Hat Ceph Storage Storage Strategies Guide.
Procedure
- Enter the cephadm container client:

$ sudo cephadm shell

- Create pools for the Compute service (vms), the Block Storage service (volumes), and the Image service (images):

$ for P in vms volumes images; do ceph osd pool create $P; ceph osd pool application enable $P rbd; done

- If you are using the Shared File Systems service, create the cephfs volume. This automatically enables the CephFS Metadata service (MDS) and creates the necessary data and metadata pools on the Ceph cluster:

$ ceph fs volume create cephfs

- If you are using the Shared File Systems service with CephFS-NFS, deploy an NFS service on the Red Hat Ceph Storage cluster:
  - If you are deploying Red Hat Ceph Storage 7 or 8, run the following command:

$ ceph nfs cluster create cephfs \
    --ingress --virtual-ip=<vip> \
    --ingress-mode=haproxy-protocol

  - If you are deploying Red Hat Ceph Storage 9, run the following command:

$ ceph nfs cluster create cephfs \
    --ingress --virtual-ip=<vip> \
    --ingress-mode=haproxy-protocol \
    --enable-nfsv3

  - Replace <vip> with the IP address assigned to the NFS service. The NFS service should be on a dedicated network that isolates NFS traffic while allowing RHOSO users to attach their Compute instances to access shares.

- Create a CephX key for RHOSO to use to access pools:

$ ceph auth add client.openstack \
    mgr 'allow *' \
    mon 'profile rbd' \
    osd 'profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images'

  If you are using the Shared File Systems service, add osd caps for the CephFS data pool by using the following command instead:

$ ceph auth add client.openstack \
    mgr 'allow *' \
    mon 'profile rbd' \
    osd 'profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.data'

- Export the CephX key:

$ ceph auth get client.openstack > /etc/ceph/ceph.client.openstack.keyring

- Export the configuration file:

$ ceph config generate-minimal-conf > /etc/ceph/ceph.conf
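If you want a quick sanity check before moving on, the following commands, run from the same cephadm shell, list the pools, file systems, and CephX key that you just created. This optional verification sketch is not part of the original procedure:

$ ceph osd lspools
$ ceph fs ls
$ ceph auth get client.openstack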
3.3. Creating a Red Hat Ceph Storage secret
Create a secret so that services can access the Red Hat Ceph Storage cluster.
The procedure examples use openstack as the name of the Red Hat Ceph Storage user. The file name in the Secret resource must match this user name.
For example, if the file name for the username openstack2 is /etc/ceph/ceph.client.openstack2.keyring, then the secret data line should be ceph.client.openstack2.keyring: $KEY.
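For illustration only, a Secret for a hypothetical user named openstack2 would carry the matching file name as its data key. The following manifest is a sketch; the user name and placeholder values are not part of your deployment:

apiVersion: v1
kind: Secret
metadata:
  name: ceph-conf-files
  namespace: openstack
type: Opaque
data:
  ceph.client.openstack2.keyring: <base64_encoded_keyring>
  ceph.conf: <base64_encoded_ceph_conf>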
Procedure
- Transfer the CephX key and configuration file created in the Creating Red Hat Ceph Storage pools procedure to a host that can create resources in the openstack namespace.
- Base64 encode these files and store them in the KEY and CONF environment variables:

$ KEY=$(cat /etc/ceph/ceph.client.openstack.keyring | base64 -w 0)
$ CONF=$(cat /etc/ceph/ceph.conf | base64 -w 0)

- Create a YAML file to create the Secret resource.
- Using the environment variables, add the Secret configuration to the YAML file:

apiVersion: v1
data:
  ceph.client.openstack.keyring: $KEY
  ceph.conf: $CONF
kind: Secret
metadata:
  name: ceph-conf-files
  namespace: openstack
type: Opaque

- Save the YAML file.
- Create the Secret resource:

$ oc create -f <secret_configuration_file>

  Replace <secret_configuration_file> with the name of the YAML file you created.
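As an optional check, you can confirm that the Secret exists and carries both data keys. This is a suggested verification, assuming the Secret name ceph-conf-files used in the example:

$ oc get secret ceph-conf-files -n openstack
$ oc describe secret ceph-conf-files -n openstack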
3.4. Obtaining the Red Hat Ceph Storage file system identifier
The Red Hat Ceph Storage file system identifier (FSID) is a unique identifier for the cluster. Use the FSID to configure and verify cluster interoperability with Red Hat OpenStack Services on OpenShift (RHOSO).
Procedure
Extract the FSID from the Red Hat Ceph Storage secret:
$ FSID=$(oc get secret ceph-conf-files -o json | jq -r '.data."ceph.conf"' | base64 -d | grep fsid | sed -e 's/fsid = //')
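You can print the variable to confirm that a value was extracted. The FSID shown here is only an illustrative placeholder; your cluster reports its own value:

$ echo $FSID
63bdd226-fbe6-5f31-956e-7028e99f1ee1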
3.5. Configuring the control plane to use the Red Hat Ceph Storage cluster
Configure the OpenStackControlPlane CR to use the Red Hat Ceph Storage cluster. This process includes confirming network configuration, configuring the control plane to use the Red Hat Ceph Storage secret, and setting up Image (glance), Block Storage (cinder), and optionally Shared File Systems (manila) services.
This example does not include configuring Block Storage backup service (cinder-backup) with Red Hat Ceph Storage.
Procedure
- Check the storage interface defined in your NodeNetworkConfigurationPolicy (nncp) custom resource to confirm that it has the same network configuration as the public_network of the Red Hat Ceph Storage cluster. This is required to enable access to the Red Hat Ceph Storage cluster through the Storage network. It is not necessary for RHOSO to access the cluster_network of the Red Hat Ceph Storage cluster.

  Note: If it does not impact workload performance, the Storage network can be different from the external Red Hat Ceph Storage cluster public_network by using routed (L3) connectivity, as long as the appropriate routes are added to the Storage network to reach the external Red Hat Ceph Storage cluster public_network.

- Check the networkAttachments for the default Image service instance in the OpenStackControlPlane CR to confirm that the default Image service is configured to access the Storage network:

glance:
  enabled: true
  template:
    databaseInstance: openstack
    storage:
      storageRequest: 10G
    glanceAPIs:
      default:
        replicas: 3
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
        networkAttachments:
        - storage
- Confirm that the Block Storage service is configured to access the Storage network through MetalLB.
- Optional: Confirm that the Shared File Systems service is configured to access the Storage network through ManilaShare.
- Confirm that the Compute service (nova) is configured to access the Storage network.
- Confirm that the Red Hat Ceph Storage configuration file, /etc/ceph/ceph.conf, contains the IP addresses of the Red Hat Ceph Storage cluster monitors. These IP addresses must be within the Storage network IP address range.
- Open your openstack_control_plane.yaml file to edit the OpenStackControlPlane CR.
- Add the extraMounts parameter to define the services that require access to the Red Hat Ceph Storage secret. The following is an example of using the extraMounts parameter for this purpose. Only include ManilaShare in the propagation list if you are using the Shared File Systems service (manila):

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  extraMounts:
    - name: v1
      region: r1
      extraVol:
        - propagation:
          - CinderVolume
          - GlanceAPI
          - ManilaShare
          extraVolType: Ceph
          volumes:
          - name: ceph
            projected:
              sources:
              - secret:
                  name: <ceph-conf-files>
          mounts:
          - name: ceph
            mountPath: "/etc/ceph"
            readOnly: true

  Replace <ceph-conf-files> with the name of the Secret CR that you created in Creating a Red Hat Ceph Storage secret.
- Add the customServiceConfig parameter to the glance template to configure the Image service to use the Red Hat Ceph Storage cluster:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  glance:
    template:
      customServiceConfig: |
        [DEFAULT]
        enabled_backends = <backend_name>:rbd
        [glance_store]
        default_backend = <backend_name>
        [<backend_name>]
        rbd_store_ceph_conf = /etc/ceph/ceph.conf
        store_description = "RBD backend"
        rbd_store_pool = images
        rbd_store_user = openstack
      databaseInstance: openstack
      databaseAccount: glance
      secret: osp-secret
      storage:
        storageRequest: 10G
  extraMounts:
    - name: v1
      region: r1
      extraVol:
        - propagation:
          - GlanceAPI
          extraVolType: Ceph
          volumes:
          - name: ceph
            secret:
              secretName: ceph-conf-files
          mounts:
          - name: ceph
            mountPath: "/etc/ceph"
            readOnly: true

  Replace <backend_name> with the name of the default back end.

  When you use Red Hat Ceph Storage as a back end for the Image service, image-conversion is enabled by default. For more information, see Planning storage and shared file systems in Planning your deployment.
- Add the customServiceConfig parameter to the cinder template to configure the Block Storage service to use the Red Hat Ceph Storage cluster. For information about using Block Storage backups, see Configuring the Block Storage backup service.

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  extraMounts:
    ...
  cinder:
    template:
      cinderVolumes:
        ceph:
          customServiceConfig: |
            [DEFAULT]
            enabled_backends=ceph
            [ceph]
            volume_backend_name=ceph
            volume_driver=cinder.volume.drivers.rbd.RBDDriver
            rbd_ceph_conf=/etc/ceph/ceph.conf
            rbd_user=openstack
            rbd_pool=volumes
            rbd_flatten_volume_from_snapshot=False
            rbd_secret_uuid=<$FSID>

  Replace <$FSID> with the actual FSID. The FSID itself does not need to be considered secret. For more information, see Obtaining the Red Hat Ceph Storage file system identifier.
- Optional: Add the customServiceConfig parameter to the manila template to configure the Shared File Systems service to use native CephFS or CephFS-NFS with the Red Hat Ceph Storage cluster. For more information, see Configuring the Shared File Systems service (manila).

  The following example exposes native CephFS:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  extraMounts:
    ...
  manila:
    template:
      manilaAPI:
        customServiceConfig: |
          [DEFAULT]
          enabled_share_protocols=cephfs
      manilaShares:
        share1:
          customServiceConfig: |
            [DEFAULT]
            enabled_share_backends=cephfs
            [cephfs]
            driver_handles_share_servers=False
            share_backend_name=cephfs
            share_driver=manila.share.drivers.cephfs.driver.CephFSDriver
            cephfs_conf_path=/etc/ceph/ceph.conf
            cephfs_auth_id=openstack
            cephfs_cluster_name=ceph
            cephfs_volume_mode=0755
            cephfs_protocol_helper_type=CEPHFS

  The following example exposes CephFS with NFS:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  extraMounts:
    ...
  manila:
    template:
      manilaAPI:
        customServiceConfig: |
          [DEFAULT]
          enabled_share_protocols=nfs
      manilaShares:
        share1:
          customServiceConfig: |
            [DEFAULT]
            enabled_share_backends=cephfsnfs
            [cephfsnfs]
            driver_handles_share_servers=False
            share_backend_name=cephfsnfs
            share_driver=manila.share.drivers.cephfs.driver.CephFSDriver
            cephfs_conf_path=/etc/ceph/ceph.conf
            cephfs_auth_id=openstack
            cephfs_cluster_name=ceph
            cephfs_volume_mode=0755
            cephfs_protocol_helper_type=NFS
            cephfs_nfs_cluster_id=cephfs

- Apply the updates to the OpenStackControlPlane CR:

$ oc apply -f openstack_control_plane.yaml
3.6. Configuring the data plane to use the Red Hat Ceph Storage cluster
Configure the data plane to use the Red Hat Ceph Storage cluster.
Procedure
- Create a ConfigMap with additional content for the Compute service (nova) configuration file /etc/nova/nova.conf.d/ inside the nova_compute container. This additional content directs the Compute service to use Red Hat Ceph Storage RBD.

apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-nova
data:
  <03-ceph-nova.conf>: |
    [libvirt]
    images_type=rbd
    images_rbd_pool=vms
    images_rbd_ceph_conf=/etc/ceph/ceph.conf
    images_rbd_glance_store_name=<backend_name>
    images_rbd_glance_copy_poll_interval=15
    images_rbd_glance_copy_timeout=600
    rbd_user=openstack
    rbd_secret_uuid=<$FSID>

  - Replace <03-ceph-nova.conf> with your file name. This file name must follow the naming convention of ##-<name>-nova.conf. Files are evaluated by the Compute service alphabetically; a file name that starts with 01 is evaluated before a file name that starts with 02. When the same configuration option occurs in multiple files, the last one read wins.
  - Replace <backend_name> with the name of the back end specified in the glance template of the OpenStackControlPlane CR.
  - Replace <$FSID> with the actual FSID, as described in Obtaining the Red Hat Ceph Storage file system identifier. The FSID itself does not need to be considered secret.
- Create a custom version of the default nova service to use the new ConfigMap, which in this case is called ceph-nova:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneService
metadata:
  name: nova-custom-ceph
spec:
  caCerts: combined-ca-bundle
  edpmServiceType: nova
  dataSources:
    - configMapRef:
        name: ceph-nova
    - secretRef:
        name: nova-cell1-compute-config
    - secretRef:
        name: nova-migration-ssh-key
  playbook: osp.edpm.nova

  The custom service is named nova-custom-ceph. It cannot be named nova because nova is an unchangeable default service. Any custom service that has the same name as a default service name is overwritten during reconciliation.
- Apply the ConfigMap and custom service changes:

$ oc create -f ceph-nova.yaml

- In your OpenStackDataPlaneNodeSet CR, update the list of services by adding the ceph-client service and replacing the default nova service with the new custom service, for example nova-custom-ceph. Add the extraMounts parameter to define access to the Ceph Storage secret.

  Example:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
spec:
  ...
  services:
    - redhat
    - bootstrap
    - download-cache
    - configure-network
    - validate-network
    - install-os
    - configure-os
    - ssh-known-hosts
    - run-os
    - reboot-os
    - install-certs
    - ceph-client
    - ovn
    - neutron-metadata
    - libvirt
    - nova-custom-ceph
    - telemetry
  nodeTemplate:
    extraMounts:
      - extraVolType: Ceph
        volumes:
          - name: ceph
            secret:
              secretName: ceph-conf-files
        mounts:
          - name: ceph
            mountPath: "/etc/ceph"
            readOnly: true

  You must add the ceph-client service before the ovn, libvirt, and nova-custom-ceph services in the list of services. The ceph-client service configures data plane nodes as clients of a Red Hat Ceph Storage server by distributing the Red Hat Ceph Storage client files.

  This example might not list all of the services in your environment. You can run the following command to verify the list of services in your environment:

$ oc get -n openstack crd/openstackdataplanenodesets.dataplane.openstack.org -o yaml | yq -r '.spec.versions.[].schema.openAPIV3Schema.properties.spec.properties.services.default'

  For more information, see Data plane services.

- Save the changes to the services list.
- Create an OpenStackDataPlaneDeployment CR:

$ oc create -f <dataplanedeployment_cr_file>

  Replace <dataplanedeployment_cr_file> with the name of your file.

  The Ansible job for the nova-custom-ceph service copies overrides from the ConfigMap to the Compute service hosts. The Ansible job also uses virsh secret-* commands so that the libvirt service retrieves the cephx secret by FSID.

Verification

- Run the following command outside of a nova_compute container to confirm the results of the Ansible job:

$ sudo virsh secret-get-value $FSID
3.7. Configuring an external Ceph Object Gateway for storage
You can configure an external Ceph Object Gateway (RGW) to act as an Object Storage service (swift) back end. You use the openstack client tool to configure the Object Storage service.
Procedure
- Configure the RGW to verify users and their roles in the Identity service (keystone) to authenticate with the external RGW service.
- Deploy and configure a RGW service to handle object storage requests.
3.7.1. Configuring RGW authentication
You must configure RGW to verify users and their roles in the Identity service (keystone) to authenticate with the external RGW service.
Prerequisites
- You have deployed an operational OpenStack control plane.
Procedure
- Create the Object Storage service on the control plane:

$ openstack service create --name swift --description "OpenStack Object Storage" object-store

- Create a user called swift:

$ openstack user create --project service --password <swift_password> swift

  Replace <swift_password> with the password to assign to the swift user.

- Create roles for the swift user:

$ openstack role create swiftoperator
$ openstack role create ResellerAdmin

- Add the swift user to system roles:

$ openstack role add --user swift --project service member
$ openstack role add --user swift --project service admin

- Export the RGW endpoint IP addresses to variables and create control plane endpoints:

$ export RGW_ENDPOINT_STORAGE=<rgw_endpoint_ip_address_storage>
$ export RGW_ENDPOINT_EXTERNAL=<rgw_endpoint_ip_address_external>
$ openstack endpoint create --region regionOne object-store public http://$RGW_ENDPOINT_EXTERNAL:8080/swift/v1/AUTH_%\(tenant_id\)s;
$ openstack endpoint create --region regionOne object-store internal http://$RGW_ENDPOINT_STORAGE:8080/swift/v1/AUTH_%\(tenant_id\)s;

  - Replace <rgw_endpoint_ip_address_storage> with the IP address of the RGW endpoint on the storage network. This is how internal services access RGW.
  - Replace <rgw_endpoint_ip_address_external> with the IP address of the RGW endpoint on the external network. This is how cloud users write objects to RGW.

  Note: Both endpoint IP addresses represent the virtual IP addresses, owned by haproxy and keepalived, that are used to reach the RGW back ends deployed in the Red Hat Ceph Storage cluster in the procedure Configuring and deploying the RGW service.

- Add the swiftoperator role to the control plane admin user:

$ openstack role add --project admin --user admin swiftoperator
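As an optional check, you can list the Object Storage endpoints that were just created. This verification step is a suggestion and is not part of the original procedure:

$ openstack endpoint list --service object-store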
3.7.2. Configuring and deploying the RGW service
Configure and deploy a RGW service to handle object storage requests.
Procedure
- Log in to a Red Hat Ceph Storage Controller node.
- Create a file called /tmp/rgw_spec.yaml and add the RGW deployment parameters:

service_type: rgw
service_id: rgw
service_name: rgw.rgw
placement:
  hosts:
    - <host_1>
    - <host_2>
    ...
    - <host_n>
networks:
  - <storage_network>
spec:
  rgw_frontend_port: 8082
  rgw_realm: default
  rgw_zone: default
---
service_type: ingress
service_id: rgw.default
service_name: ingress.rgw.default
placement:
  count: 1
spec:
  backend_service: rgw.rgw
  frontend_port: 8080
  monitor_port: 8999
  virtual_ips_list:
    - <storage_network_vip>
    - <external_network_vip>
  virtual_interface_networks:
    - <storage_network>

  - Replace <host_1>, <host_2>, ..., <host_n> with the names of the Ceph nodes where the RGW instances are deployed.
  - Replace <storage_network> with the network range used to resolve the interfaces where radosgw processes are bound.
  - Replace <storage_network_vip> with the virtual IP (VIP) used as the haproxy front end. This is the same address configured as the Object Storage service endpoint ($RGW_ENDPOINT_STORAGE) in the Configuring RGW authentication procedure.
  - Optional: Replace <external_network_vip> with an additional VIP on an external network to use as the haproxy front end. This address is used to connect to RGW from an external network.

- Save the file.
- Enter the cephadm shell and mount the rgw_spec.yaml file:

$ cephadm shell -m /tmp/rgw_spec.yaml

- Add RGW related configuration to the cluster:

$ ceph config set global rgw_keystone_url "https://<keystone_endpoint>"
$ ceph config set global rgw_keystone_verify_ssl false
$ ceph config set global rgw_keystone_api_version 3
$ ceph config set global rgw_keystone_accepted_roles "member, Member, admin"
$ ceph config set global rgw_keystone_accepted_admin_roles "ResellerAdmin, swiftoperator"
$ ceph config set global rgw_keystone_admin_domain default
$ ceph config set global rgw_keystone_admin_project service
$ ceph config set global rgw_keystone_admin_user swift
$ ceph config set global rgw_keystone_admin_password "$SWIFT_PASSWORD"
$ ceph config set global rgw_keystone_implicit_tenants true
$ ceph config set global rgw_s3_auth_use_keystone true
$ ceph config set global rgw_swift_versioning_enabled true
$ ceph config set global rgw_swift_enforce_content_length true
$ ceph config set global rgw_swift_account_in_url true
$ ceph config set global rgw_trust_forwarded_https true
$ ceph config set global rgw_max_attr_name_len 128
$ ceph config set global rgw_max_attrs_num_in_req 90
$ ceph config set global rgw_max_attr_size 1024

  - Replace <keystone_endpoint> with the Identity service internal endpoint. The data plane nodes can resolve the internal endpoint but not the public one. Do not omit the URI scheme from the URL; it must be either http:// or https://.
  - Replace <swift_password> with the password assigned to the swift user in the previous step.

- Deploy the RGW configuration using the Orchestrator:

$ ceph orch apply -i /mnt/rgw_spec.yaml
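As an optional check, you can confirm that the Orchestrator scheduled the RGW and ingress services. This suggested verification assumes you are still in the cephadm shell:

$ ceph orch ls | grep -E 'rgw|ingress'
$ ceph orch ps | grep rgw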
3.8. Configuring RGW with TLS for an external Red Hat Ceph Storage cluster
Configure RGW with TLS so that control plane services can resolve external Red Hat Ceph Storage cluster host names. This procedure configures Ceph RGW to emulate the Object Storage service (swift).
In this procedure, you configure the following:
- A DNS zone and certificate so that a URL such as https://rgw-external.ceph.local:8080 is registered as an Identity service (keystone) endpoint, and RHOSO can securely access the HTTPS endpoint.
- A DNSData domain, for example ceph.local, so that pods can map host names to IP addresses for services that are not hosted on RHOCP.
- DNS forwarding for the domain with the CoreDNS service.
- A certificate by using the RHOSO public root certificate authority.
You must copy the certificate and key file created in RHOCP to the nodes hosting RGW so they can become part of the Ceph Orchestrator RGW specification.
- Considerations
- DNSData custom resource: Creating a DNSData CR creates a new dnsmasq pod that is able to read and resolve the DNS information in the associated DNSData CR.
- Certificate authority: The certificate issuerRef is set to the root certificate authority (CA) of RHOSO. This CA is automatically created when the control plane is deployed. The default name of the CA is rootca-public. The RHOSO pods trust this new certificate because the root CA is used.
Procedure
- Create a DNSData custom resource (CR) for the external Ceph cluster. Example DNSData CR:

apiVersion: network.openstack.org/v1beta1
kind: DNSData
metadata:
  labels:
    component: ceph-storage
    service: ceph
  name: ceph-storage
  namespace: openstack
spec:
  dnsDataLabelSelectorValue: dnsdata
  hosts:
    - hostnames:
        - ceph-rgw-internal-vip.ceph.local
      ip: <172.18.0.2>
    - hostnames:
        - ceph-rgw-external-vip.ceph.local
      ip: <10.10.10.2>
Replace
<172.18.0.2>with the correct host for your environment. In this example, the host at the IP address172.18.0.2hosts the Ceph RGW endpoint for access on the private storage network. This host passes the CR so that a DNS A and PTR record is created. This enables the host to be accessed by using the host nameceph-rgw-internal-vip.ceph.local. -
Replace
<10.10.10.2>with the correct host for your environment. In this example, the host at the IP address10.10.10.2hosts the Ceph RGW endpoint for access on the external network. This host passes the CR so that a DNS A and PTR record is created. This enables the host to be accessed by using the host nameceph-rgw-external-vip.ceph.local.
-
Replace
Apply the CR to your environment:
$ oc apply -f <ceph_dns_yaml>-
Replace
<ceph_dns_yaml>with the name of theDNSDataCR file.
-
Replace
-
Update the
CoreDNSCR to configure DNS forwarding to thednsmasqservice for requests to theceph.localdomain. For more information about DNS forwarding, see Using DNS forwarding in the RHOCP Networking guide. List the
openstackdomain DNS cluster IP address:$ oc get svc dnsmasq-dnsExample output:
$ oc get svc dnsmasq-dns dnsmasq-dns LoadBalancer 10.217.5.130 192.168.122.80 53:30185/UDP 160m- Record the DNS cluster IP address from the command output for DNS forwarding.
List the
CoreDNSCR:$ oc -n openshift-dns describe dns.operator/defaultEdit the
CoreDNSCR and add theserversconfiguration to thespecsection with the DNS cluster IP address.Example
CoreDNSCR updated with the DNS cluster IP address:apiVersion: operator.openshift.io/v1 kind: DNS metadata: creationTimestamp: "2024-03-25T02:49:24Z" finalizers: - dns.operator.openshift.io/dns-controller generation: 3 name: default resourceVersion: "164142" uid: 860b0e61-a48a-470e-8684-3b23118e6083 spec: cache: negativeTTL: 0s positiveTTL: 0s logLevel: Normal nodePlacement: {} operatorLogLevel: Normal servers: - forwardPlugin: policy: Random upstreams: - 10.217.5.130:53 name: ceph zones: - ceph.local upstreamResolvers: policy: Sequential upstreams: - port: 53 type: SystemResolvConfwhere:
servers- Defines DNS forwarding configurations for specific domains.
upstreams- Specifies the DNS cluster IP address to which DNS queries are forwarded.
10.217.5.130:53-
Is the DNS cluster IP address recorded from the
oc get svc dnsmasq-dnscommand. zones- Defines the domain for which DNS queries are forwarded to the upstream server.
Create a
CertificateCR with the host names from theDNSDataCR.Example
CertificateCR:apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: cert-ceph-rgw namespace: openstack spec: duration: 43800h0m0s issuerRef: {'group': 'cert-manager.io', 'kind': 'Issuer', 'name': 'rootca-public'} secretName: cert-ceph-rgw dnsNames: - ceph-rgw-internal-vip.ceph.local - ceph-rgw-external-vip.ceph.localApply the CR to your environment:
$ oc apply -f <ceph_cert_yaml>-
Replace
<ceph_cert_yaml>with the name of theCertificateCR file.
-
Replace
Extract the certificate and key data from the secret created when the
CertificateCR was applied:$ oc get secret <ceph_cert_secret_name> -o yamlReplace
<ceph_cert_secret_name>with the name used in thesecretNamefield of yourCertificateCR.Example output:
[stack@osp-storage-04 ~]$ oc get secret cert-ceph-rgw -o yaml apiVersion: v1 data: ca.crt: <CA> tls.crt: <b64cert> tls.key: <b64key> kind: Secret-
The
<b64cert>and<b64key>values are the base64-encoded certificate and key strings that you must use in the next step.
-
The
Extract and base64 decode the certificate and key information obtained in the previous step.
Extract and decode the certificate:
$ oc get secret <ceph_cert_secret_name> -o yaml | awk '/tls.crt/ {print $2}' | base64 -dExtract and decode the key:
$ oc get secret <ceph_cert_secret_name> -o yaml | awk '/tls.key/ {print $2}' | base64 -dIf you are using Red Hat Ceph Storage 7 or 8, concatenate the decoded certificate and key values with no spaces in between, and save them in the Ceph Object Gateway service specification.
The
rgwsection of the specification file looks like the following:service_type: rgw service_id: rgw service_name: rgw.rgw placement: hosts: - host1 - host2 networks: - 172.18.0.0/24 spec: rgw_frontend_port: 8082 rgw_realm: default rgw_zone: default ssl: true rgw_frontend_ssl_certificate: | -----BEGIN CERTIFICATE----- MIIDkzCCAfugAwIBAgIRAKNgGd++xV9cBOrwDAeEdQUwDQYJKoZIhvcNAQELBQAw <redacted> -----END CERTIFICATE----- -----BEGIN RSA PRIVATE KEY----- MIIEpQIBAAKCAQEAyTL1XRJDcSuaBLpqasAuLsGU2LQdMxuEdw3tE5voKUNnWgjB <redacted> -----END RSA PRIVATE KEY-----The
ingresssection of the specification file looks like the following:service_type: ingress service_id: rgw.default service_name: ingress.rgw.default placement: count: 1 spec: backend_service: rgw.rgw frontend_port: 8080 monitor_port: 8999 virtual_interface_networks: - 172.18.0.0/24 virtual_ip: 172.18.0.2/24 ssl_cert: | -----BEGIN CERTIFICATE----- MIIDkzCCAfugAwIBAgIRAKNgGd++xV9cBOrwDAeEdQUwDQYJKoZIhvcNAQELBQAw <redacted> -----END CERTIFICATE----- -----BEGIN RSA PRIVATE KEY----- MIIEpQIBAAKCAQEAyTL1XRJDcSuaBLpqasAuLsGU2LQdMxuEdw3tE5voKUNnWgjB <redacted> -----END RSA PRIVATE KEY-----where:
rgw_frontend_ssl_certificate-
Contains the base64 decoded values from both
<b64cert>and<b64key>in the previous step with no spaces in between. ssl_cert-
Contains the base64 decoded values from both
<b64cert>and<b64key>in the previous step with no spaces in between.
If you are using Red Hat Ceph Storage 9, save the decoded certificate and key values separately in the Ceph Object Gateway service specification.
The
rgwsection of the specification file looks like the following:service_type: rgw service_id: rgw service_name: rgw.rgw placement: hosts: - host1 - host2 networks: - 172.18.0.0/24 spec: rgw_frontend_port: 8082 rgw_realm: default rgw_zone: default ssl: true certificate_source: inline ssl_cert: | -----BEGIN CERTIFICATE----- MIIDkzCCAfugAwIBAgIRAKNgGd++xV9cBOrwDAeEdQUwDQYJKoZIhvcNAQELBQAw <redacted> -----END CERTIFICATE----- ssl_key: | -----BEGIN PRIVATE KEY----- MIIEpQIBAAKCAQEAyTL1XRJDcSuaBLpqasAuLsGU2LQdMxuEdw3tE5voKUNnWgjB <redacted> -----END PRIVATE KEY-----The
ingresssection of the specification file looks like the following:service_type: ingress service_id: rgw.default service_name: ingress.rgw.default placement: count: 1 spec: backend_service: rgw.rgw frontend_port: 8080 monitor_port: 8999 virtual_interface_networks: - 172.18.0.0/24 virtual_ip: 172.18.0.2/24 ssl: true certificate_source: inline ssl_cert: | -----BEGIN CERTIFICATE----- MIIDkzCCAfugAwIBAgIRAKNgGd++xV9cBOrwDAeEdQUwDQYJKoZIhvcNAQELBQAw <redacted> -----END CERTIFICATE----- ssl_key: | -----BEGIN PRIVATE KEY----- MIIEpQIBAAKCAQEAyTL1XRJDcSuaBLpqasAuLsGU2LQdMxuEdw3tE5voKUNnWgjB <redacted> -----END PRIVATE KEY-----where:
certificate_source: inline- Specifies that the certificate and key are embedded directly in the specification.
ssl_cert-
Contains the base64 decoded certificate value from
<b64cert>in the previous step. ssl_keyContains the base64 decoded key value from
<b64key>in the previous step.NoteIn Red Hat Ceph Storage 9, the
rgw_frontend_ssl_certificatefield, which required concatenated certificate and key values, is deprecated. New deployments must use the separatessl_certandssl_keyfields.
Use the procedure "Deploying the Ceph Object Gateway using the service specification" to deploy Ceph RGW with SSL. For more information, see the Red Hat Ceph Storage Operations Guide:
-
Connect to the
openstackclientpod. Verify that DNS forwarding has been successfully configured.
$ curl --trace - <host_name>Replace
<host_name>with the name of the external host previously added to theDNSDataCR.Example output:
sh-5.1$ curl https://rgw-external-vip.ceph.local:8080 <?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult> .1$ sh-5.1$-
In this example, the
openstackclient pod successfully resolved the host name, and no SSL verification errors were encountered.
3.9. Enabling deferred deletion for volumes or images with dependencies
Enable deferred deletion in the Ceph RBD Clone v2 API to delete volumes or images with dependencies. The volume or image is removed from the service but stored in a Ceph RBD trash area until dependencies are resolved. The volume or image is only deleted from Ceph RBD when there are no dependencies.
The trash area maintained by deferred deletion does not provide restoration functionality. When volumes or images are moved to the trash area, they cannot be recovered or restored. The trash area serves only as a holding mechanism for the volume or image until all dependencies have been removed. The volume or image will be permanently deleted once no dependencies exist.
- Limitations
- When you enable Clone v2 deferred deletion in existing environments, the feature only applies to new volumes or images.
Procedure
- Verify which Ceph version the clients in your Ceph Storage cluster are running:

$ cephadm shell -- ceph osd get-require-min-compat-client

  Example output:

luminous

- To set the cluster to use the Clone v2 API and the deferred deletion feature by default, set min-compat-client to mimic. Only clients in the cluster that are running Ceph version 13.2.x (Mimic) can access images with dependencies:

$ cephadm shell -- ceph osd set-require-min-compat-client mimic

- Schedule an interval for trash purge in minutes by using the m suffix:

$ rbd trash purge schedule add --pool <pool> <30m>

  - Replace <pool> with the name of the associated storage pool, for example, volumes in the Block Storage service.
  - Replace <30m> with the interval in minutes that you want to specify for trash purge.

- Verify that a trash purge schedule has been set for the pool:

$ rbd trash purge schedule list --pool <pool>
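If you want to see what deferred deletion is currently holding, you can list the trash area for the pool. This is an optional check; remember that items in the trash cannot be restored:

$ rbd trash ls --pool <pool>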
3.10. Troubleshooting Red Hat Ceph Storage RBD integration
If Compute (nova), Block Storage (cinder), or Image (glance) service integration with Red Hat Ceph Storage RBD fails, use this incremental troubleshooting procedure. This example focuses on Image service integration, but you can adapt it for other services.
If you discover the cause of your issue before completing this procedure, it is not necessary to do any subsequent steps. You can exit this procedure and resolve the issue.
Procedure
- Determine if any parts of the control plane are not properly deployed by assessing whether the Ready condition is not True:

$ oc get -n openstack OpenStackControlPlane \
  -o jsonpath="{range .items[0].status.conditions[?(@.status!='True')]}{.type} is {.status} due to {.message}{'\n'}{end}"

- If you identify a service that is not properly deployed, check the status of the service. The following example checks the status of the Compute service:

$ oc get -n openstack Nova/nova \
  -o jsonpath="{range .status.conditions[?(@.status!='True')]}{.type} is {.status} due to {.message}{'\n'}{end}"

  You can check the status of all deployed services:

$ oc get pods -n openstack

  You can check the logs of a specific service:

$ oc logs -n openstack <service_pod_name>

  Replace <service_pod_name> with the name of the service pod you want to check.

- If you identify an operator that is not properly deployed, check the status of the operator:

$ oc get pods -n openstack-operators -lopenstack.org/operator-name

  You can check the operator logs:

$ oc logs -n openstack-operators -lopenstack.org/operator-name=<operator_name>

- Check the Status of the data plane deployment:

$ oc get -n openstack OpenStackDataPlaneDeployment

  If the Status of the data plane deployment is False, check the logs of the associated Ansible job:

$ oc logs -n openstack job/<ansible_job_name>

  Replace <ansible_job_name> with the name of the associated job. The job name is listed in the Message field of the oc get -n openstack OpenStackDataPlaneDeployment command output.

- Check the Status of the data plane node set deployment:

$ oc get -n openstack OpenStackDataPlaneNodeSet

  If the Status of the data plane node set deployment is False, check the logs of the associated Ansible job:

$ oc logs -n openstack job/<ansible_job_name>

  Replace <ansible_job_name> with the name of the associated job. It is listed in the Message field of the oc get -n openstack OpenStackDataPlaneNodeSet command output.

- If any pods are in the CrashLoopBackOff state, you can duplicate them for troubleshooting purposes:

$ oc debug <pod_name>

  Replace <pod_name> with the name of the pod to duplicate.

- Optional: You can route traffic to the duplicate pod during the debug process:

$ oc debug <pod_name> --keep-labels=true

- Optional: You can use the oc debug command in the following object debugging activities:
  - To run /bin/sh on a container other than the first one, which is the command's default behavior, use the command form oc debug --container <pod_name> <container_name>. This is useful for pods like the API where the first container is tailing a file and the second container is the one you want to debug. If you use this command form, you must first use the command oc get pods | grep <search_string> to find the container name.
  - To debug any resource that creates pods, such as Deployments, StatefulSets, and Nodes, use the command form oc debug <resource_type>/<resource_name>. An example for a StatefulSet would be oc debug StatefulSet/cinder-scheduler.

- Connect to the pod and confirm that the ceph.client.openstack.keyring and ceph.conf files are present in the /etc/ceph directory:

$ oc rsh <pod_name>

  Replace <pod_name> with the name of the applicable pod. If the Ceph configuration files are missing, check the extraMounts parameter in your OpenStackControlPlane CR.

- Confirm the pod has a network connection to the Red Hat Ceph Storage cluster by connecting to the IP and port of a Ceph Monitor from the pod. The IP and port information is located in /etc/ceph/ceph.conf. The following is an example of this process:

$ oc get pods | grep glance | grep external-api-0
glance-06f7a-default-external-api-0   3/3   Running   0   2d3h
$ oc debug --container glance-api glance-06f7a-default-external-api-0
Starting pod/glance-06f7a-default-external-api-0-debug-p24v9, command was: /usr/bin/dumb-init --single-child -- /bin/bash -c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start
Pod IP: 192.168.25.50
If you don't see a command prompt, try pressing enter.
sh-5.1# cat /etc/ceph/ceph.conf
# Ansible managed
[global]
fsid = 63bdd226-fbe6-5f31-956e-7028e99f1ee1
mon host = [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0],[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0]
[client.libvirt]
admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/ceph/qemu-guest-$pid.log
sh-5.1# python3
Python 3.9.19 (main, Jul 18 2024, 00:00:00) [GCC 11.4.1 20231218 (Red Hat 11.4.1-3)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import socket
>>> s = socket.socket()
>>> ip="192.168.122.100"
>>> port=3300
>>> s.connect((ip,port))
>>>

- Optional: If you cannot connect to a Ceph Monitor, troubleshoot the network connection between the cluster and pod. The previous example uses a Python socket to connect to the IP and port of the Red Hat Ceph Storage cluster from the ceph.conf file. There are two potential outcomes from the execution of the s.connect((ip,port)) function:
  - If the command executes successfully and there is no error similar to the following example, the network connection between the pod and cluster is functioning correctly. If the connection is functioning correctly, the command execution provides no return value at all.
  - If the command takes a long time to execute and returns an error similar to the following example, the network connection between the pod and cluster is not functioning correctly and should be investigated further to troubleshoot the connection.

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TimeoutError: [Errno 110] Connection timed out

- Examine the cephx key as shown in the following example:

bash-5.1$ cat /etc/ceph/ceph.client.openstack.keyring
[client.openstack]
key = "<redacted>"
caps mgr = allow *
caps mon = profile rbd
caps osd = profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images
bash-5.1$

- List the contents of a pool from the caps osd parameter as shown in the following example:

$ /usr/bin/rbd --conf /etc/ceph/ceph.conf \
  --keyring /etc/ceph/ceph.client.openstack.keyring \
  --cluster ceph --id openstack \
  ls -l -p <pool_name> | wc -l

  - Replace <pool_name> with the name of the required Red Hat Ceph Storage pool.
  - If this command returns the number 0 or greater, the cephx key provides adequate permissions to connect to, and read information from, the Red Hat Ceph Storage cluster.
  - If this command does not complete but network connectivity to the cluster was confirmed, work with the Ceph administrator to obtain the correct cephx keyring.
  - Check if there is an MTU mismatch on the Storage network. If the network is using jumbo frames (an MTU value of 9000), all switch ports between servers using the interface must be updated to support jumbo frames. If this change is not made on the switch, problems can occur at the Ceph application layer. Verify that all hosts using the network can communicate at the desired MTU with a command such as ping -M do -s 8972 <ip_address>.

- Send test data to the images pool on the Ceph cluster. The following is an example of performing this task:

# DATA=$(date | md5sum | cut -c-12)
# POOL=images
# RBD="/usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack"
# $RBD create --size 1024 $POOL/$DATA

  Tip: It is possible to be able to read data from the cluster but not have permissions to write data to it, even if write permission was granted in the cephx keyring. If you have write permissions but you cannot write data to the cluster, the cluster might be overloaded and not able to write new data.

  In the example, the rbd command did not complete successfully and was canceled. It was subsequently confirmed that the cluster itself did not have the resources to write new data. The issue was resolved on the cluster itself. There was nothing incorrect with the client configuration.
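If the test write succeeds in your environment, you can remove the test image afterwards so that it does not linger in the images pool. This cleanup sketch reuses the variables from the example above:

# $RBD ls -p $POOL | grep $DATA
# $RBD rm $POOL/$DATA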
3.11. Troubleshooting Red Hat Ceph Storage clients
Put Red Hat OpenStack Services on OpenShift (RHOSO) Ceph clients in debug mode to troubleshoot their operation.
Procedure
- Locate the Red Hat Ceph Storage configuration file mapped in the Red Hat OpenShift secret created in Creating a Red Hat Ceph Storage secret.
Modify the contents of the configuration file to include troubleshooting-related configuration.
The following is an example of troubleshooting-related configuration:
[client.openstack]
admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/guest-$pid.log
debug ms = 1
debug rbd = 20
log to file = true
Note: For more information about troubleshooting, see the Red Hat Ceph Storage Troubleshooting Guide.
- Update the secret with the new content.
3.12. Customizing Red Hat Ceph Storage configurations Copy linkLink copied to clipboard!
Red Hat OpenStack Services on OpenShift (RHOSO) 18.0 supports Red Hat Ceph Storage 7, 8, and 9. For information about customizing and managing Ceph Storage, see the documentation sets for your Ceph Storage version.
For complete documentation, see the Red Hat Ceph Storage documentation for your version.
Chapter 4. Configuring the Block Storage service (cinder) Copy linkLink copied to clipboard!
You can use the Block Storage service (cinder) to access remote block storage devices through volumes for persistent storage. The service has three mandatory components, api, scheduler, and volume, and one optional component, backup.
As a security hardening measure, the Block Storage services run as the cinder user.
All Block Storage services use the cinder section of the OpenStackControlPlane custom resource (CR) for their configuration:
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
name: openstack
spec:
cinder:
Global configuration options are applied directly under the cinder and template sections. Service specific configuration options appear under their associated sections. The following example demonstrates all of the sections where Block Storage service configuration is applied and what type of configuration is applied in each section:
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
name: openstack
spec:
cinder:
<global-options>
template:
<global-options>
cinderAPI:
<cinder-api-options>
cinderScheduler:
<cinder-scheduler-options>
cinderVolumes:
<name1>: <cinder-volume-options>
<name2>: <cinder-volume-options>
cinderBackup:
<cinder-backup-options>
4.1. Block Storage service terminology and definitions Copy linkLink copied to clipboard!
The following terms are important to understanding the Block Storage service (cinder):
- Storage back end: A physical storage system where volume data is stored.
- Cinder driver: The part of the Block Storage service that enables communication with the storage back end. It is configured with the volume_driver and backup_driver options.
- Cinder back end: A logical representation of the grouping of a cinder driver with its configuration. This grouping is used to manage and address the volumes present in a specific storage back end. The name of this logical construct is configured with the volume_backend_name option.
- Storage pool: A logical grouping of volumes in a given storage back end.
- Cinder pool: A representation in the Block Storage service of a storage pool.
- Volume host: The way the Block Storage service addresses volumes. There are two different representations, short (<hostname>@<backend-name>) and full (<hostname>@<backend-name>#<pool-name>); see the example after this list.
- Quota: Limits defined per project to constrain the use of Block Storage-specific resources.
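For illustration, the following is a minimal sketch of the two volume host forms, assuming a back end section named ceph that exposes a single pool also named ceph; the host portion shown is hypothetical and follows the pod naming described later in this guide:
cinder-volume-ceph-0@ceph        (short form: <hostname>@<backend-name>)
cinder-volume-ceph-0@ceph#ceph   (full form: <hostname>@<backend-name>#<pool-name>)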
4.2. Block Storage service (cinder) enhancements in RHOSO Copy linkLink copied to clipboard!
The following functionality enhancements have been integrated into the Block Storage service:
- Ease of deployment for multiple volume back ends.
- Back end deployment does not affect running volume back ends.
- Back end addition and removal does not affect running back ends.
- Back end configuration changes do not affect other running back ends.
- Each back end can use its own vendor-specific container image. It is no longer necessary to build a custom image that holds dependencies from two drivers.
- Pacemaker has been replaced by Red Hat OpenShift Container Platform (RHOCP) functionality.
- Improved methods for troubleshooting the service code.
4.3. Configuring transport protocols Copy linkLink copied to clipboard!
You can use iSCSI, Fibre Channel, NVMe-TCP, NFS, and Red Hat Ceph Storage RBD transport protocols with the Block Storage service (cinder). Control plane services that use volumes might require iscsid and multipathd modules on RHOCP cluster nodes, configured by using a MachineConfig CR.
Using a MachineConfig CR to change the configuration of a node causes the node to reboot. Consult with your RHOCP administrator before applying a MachineConfig CR to ensure the integrity of RHOCP workloads.
For more information about MachineConfig, see Understanding the Machine Config operator. The procedures in this section provide a general configuration of these protocols and are not vendor-specific. If your deployment requires multipathing, see Configuring multipathing.
The Block Storage volume and backup services are automatically started on data plane nodes.
4.3.1. Configuring the iSCSI protocol for volume storage Copy linkLink copied to clipboard!
Connecting to iSCSI volumes from the RHOCP nodes requires the iSCSI initiator service. There must be a single instance of the iscsid service module for the normal RHOCP usage, OpenShift CSI plugins usage, and the RHOSO services. Apply a MachineConfig to the applicable nodes to configure nodes to use the iSCSI protocol.
If the iscsid service module is already running, this procedure is not required.
Procedure
Create a
MachineConfig CR to configure the nodes for the iscsid module. The following example starts the iscsid service with a default configuration in all RHOCP worker nodes:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
    service: cinder
  name: 99-worker-cinder-enable-iscsid
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
      - enabled: true
        name: iscsid.service
- Save the file.
Apply the
MachineConfig CR file.
$ oc apply -f <machine_config_file> -n openstack
- Replace <machine_config_file> with the name of your MachineConfig CR file.
4.3.2. Configuring the Fibre Channel protocol Copy linkLink copied to clipboard!
There is no additional node configuration required to use the Fibre Channel protocol to connect to volumes. However, all nodes that use Fibre Channel must have a Host Bus Adapter (HBA) card. Unless all worker nodes in your RHOCP deployment have an HBA card, you must use a nodeSelector in your control plane configuration to select which nodes run the volume and backup services, as well as any Image service instances that use the Block Storage service as their storage back end.
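The following is a minimal sketch of this approach, assuming the HBA-equipped worker nodes carry a hypothetical fc-hba: "true" label and the back end section is named custom-fc; confirm the exact nodeSelector placement for your operator version:
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      cinderBackup:
        nodeSelector:
          fc-hba: "true"    # hypothetical label applied to workers with an HBA card
      cinderVolumes:
        custom-fc:
          nodeSelector:
            fc-hba: "true"
          networkAttachments:
          - storage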
4.3.3. Configuring the NVMe-TCP protocol for volume storage Copy linkLink copied to clipboard!
Connecting to NVMe-TCP volumes from the RHOCP nodes requires the nvme kernel modules.
Procedure
Create a
MachineConfig CR to configure the nodes for the nvme kernel modules. The following example starts the nvme kernel modules with a default configuration in all RHOCP worker nodes:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
    service: cinder
  name: 99-worker-cinder-load-nvme-fabrics
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - path: /etc/modules-load.d/nvme_fabrics.conf
        overwrite: false
        mode: 420
        user:
          name: root
        group:
          name: root
        contents:
          source: data:,nvme-fabrics%0Anvme-tcp
- Save the file.
Apply the
MachineConfig CR file.
$ oc apply -f <machine_config_file> -n openstack
- Replace <machine_config_file> with the name of your MachineConfig CR file.
After the nodes have rebooted, verify that the nvme-fabrics module is loaded and that the host supports ANA:
cat /sys/module/nvme_core/parameters/multipath
Note: Even though ANA does not use the Linux Multipathing Device Mapper, multipathd must be running for the Compute nodes to be able to use multipathing when connecting volumes to instances.
4.4. LVM device management Copy linkLink copied to clipboard!
When you use Logical Volume Management (LVM) with Block Storage service (cinder) back ends, Red Hat OpenStack Services on OpenShift (RHOSO) automatically enables device filtering through the RHEL system.devices file. LVM device filtering prevents Block Storage service volumes from being scanned by LVM on data plane nodes.
For more information about the RHEL system.devices file, see The LVM devices file in the RHEL documentation for Configuring and managing logical volumes.
4.5. Configuring multipathing for Block Storage volumes Copy linkLink copied to clipboard!
Configure multipathing in Red Hat OpenStack Services on OpenShift (RHOSO) to create redundancy or improve performance. Control plane nodes require a MachineConfig CR. Data plane nodes have default multipath configuration, but you must add vendor-specific parameters for production environments.
4.5.1. Configuring multipathing on control plane nodes Copy linkLink copied to clipboard!
You can configure multipathing on Red Hat OpenShift Container Platform (RHOCP) control plane nodes by creating a MachineConfig custom resource (CR) that creates a multipath configuration file and starts the service.
In Red Hat OpenStack Services on OpenShift (RHOSO) deployments, the use_multipath_for_image_xfer configuration option is enabled by default, which affects the control plane only and not the data plane. This setting enables the Block Storage service (cinder) to use multipath, when it is available, for attaching volumes when creating volumes from images and during Block Storage backup and restore procedures.
The example in this procedure implements a minimal multipath configuration file, which configures the default multipath parameters. However, your production deployment might also require vendor-specific multipath parameters. In this case, you must consult with the appropriate systems administrators to obtain the values required for your deployment.
If you have a complex multipath configuration, you can use the Butane command-line utility to create a multipath configuration file for you. For more information, see Creating machine configs with Butane in RHOCP Installation configuration.
Procedure
Create a
MachineConfigCR to create a multipath configuration file and to start themultipathdmodule on all control plane nodes.The following example creates a
MachineConfigCR named99-worker-cinder-enable-multipathdthat implements a multipath configuration file namedmultipath.conf:ImportantWhen adding vendor-specific multipath parameters to the
contents:of this file, ensure that you do not change the specified values of the following default multipath parameters:user_friendly_names,recheck_wwid,skip_kpartx, andfind_multipaths.apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker service: cinder name: 99-worker-cinder-enable-multipathd spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/multipath.conf overwrite: false mode: 384 user: name: root group: name: root contents: source: data:,defaults%20%7B%0A%20%20user_friendly_names%20no%0A%20%20recheck_wwid%20yes%0A%20%20skip_kpartx%20yes%0A%20%20find_multipaths%20yes%0A%7D%0A%0Ablacklist%20%7B%0A%7D systemd: units: - enabled: true name: multipathd.serviceThe
contents:data represents the following literalmultipath.conffile contents:defaults { user_friendly_names no recheck_wwid yes skip_kpartx yes find_multipaths yes } blacklist { }
-
Save the
MachineConfig CR file, for example, 99-worker-cinder-enable-multipathd.yaml.
MachineConfigCR file.$ oc apply -f 99-worker-cinder-enable-multipathd.yaml -n openstack
4.5.2. Configuring custom multipath parameters on data plane nodes Copy linkLink copied to clipboard!
Default multipath parameters are configured on all data plane nodes. You must add and configure any vendor-specific multipath parameters. In this case, you must consult with the appropriate systems administrators to obtain the values required for your deployment, to create your custom multipath configuration file.
Ensure that you do not add the following default multipath parameters and overwrite their values: user_friendly_names, recheck_wwid, skip_kpartx, and find_multipaths.
You must modify the relevant OpenStackDataPlaneNodeSet custom resource (CR), to update the data plane node configuration to include your vendor-specific multipath parameters. You create an OpenStackDataPlaneDeployment CR that deploys and applies the modified OpenStackDataPlaneNodeSet CR to the data plane.
Prerequisites
- You have created your custom multipath configuration file that contains only the vendor-specific multipath parameters and your deployment-specific values.
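For reference, a custom multipath configuration file of this kind usually consists only of a devices section with vendor-specific entries. The following is a hypothetical sketch; the vendor, product, and parameter values are placeholders that you must replace with the values from your storage vendor documentation:
devices {
  device {
    vendor                 "<vendor_string>"
    product                "<product_string>"
    path_grouping_policy   group_by_prio
    no_path_retry          30
  }
}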
Procedure
Create a secret to store your custom multipath configuration file:
$ oc create secret generic <secret_name> \ --from-file=<configuration_file_name>-
Replace <secret_name> with the name that you want to assign to the secret, for example, custom-multipath-file.
- Replace <configuration_file_name> with the name of the custom multipath configuration file that you created, for example, custom_multipath.conf.
-
Open the
OpenStackDataPlaneNodeSet CR file for the node set that you want to update, for example, openstack_data_plane.yaml.
extraMountsattribute to theOpenStackDataPlaneNodeSetCR file to include your vendor-specific multipath parameters:spec: ... nodeTemplate: ... extraMounts: - extraVolType: <optional_volume_type_description> volumes: - name: <mounted_volume_name> secret: secretName: <secret_name> mounts: - name: <mounted_volume_name> mountPath: "/runner/multipath" readOnly: true-
Optional: Replace
<optional_volume_type_description> with a description of the type of the mounted volume, for example, multipath-config-file.
Replace <mounted_volume_name> with the name of the mounted volume, for example, custom-multipath.
Note: Do not change the value of the mountPath: parameter from "/runner/multipath".
-
Save the
OpenStackDataPlaneNodeSetCR file. Apply the updated
OpenStackDataPlaneNodeSetCR configuration:$ oc apply -f openstack_data_plane.yamlVerify that the data plane resource has been updated by confirming that the status is
SetupReady:$ oc wait openstackdataplanenodeset openstack-data-plane --for condition=SetupReady --timeout=10mWhen the status is
SetupReady, the command returns acondition metmessage, otherwise it returns a timeout error.For information about the data plane conditions and states, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.
Create a file on your workstation to define the
OpenStackDataPlaneDeploymentCR:apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: <node_set_deployment_name>-
Replace <node_set_deployment_name> with the name of the OpenStackDataPlaneDeployment CR. The name must be unique, must consist of lower case alphanumeric characters, - (hyphen) or . (period), and must start and end with an alphanumeric character, for example, openstack-data-plane-deploy.
Add the
OpenStackDataPlaneNodeSetCR that you modified:spec: nodeSets: - <nodeSet_name>-
Save the
OpenStackDataPlaneDeploymentCR deployment file, for example,openstack_data_plane_deploy.yaml. Deploy the modified
OpenStackDataPlaneNodeSetCR:$ oc create -f openstack_data_plane_deploy.yaml -n openstackYou can view the Ansible logs while the deployment executes:
$ oc get pod -l app=openstackansibleee -w $ oc logs -l app=openstackansibleee -f --max-log-requests 10If the
oc logscommand returns an error similar to the following error, increase the--max-log-requestsvalue:error: you are attempting to follow 19 log streams, but maximum allowed concurrency is 10, use --max-log-requests to increase the limit
Verification
Verify that the modified
OpenStackDataPlaneNodeSetCR is deployed:$ oc get openstackdataplanedeployment -n openstack NAME STATUS MESSAGE openstack-data-plane True Setup Complete $ oc get openstackdataplanenodeset -n openstack NAME STATUS MESSAGE openstack-data-plane True NodeSet ReadyFor information about the meaning of the returned status, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.
If the status indicates that the data plane has not been deployed, then troubleshoot the deployment. For information about troubleshooting the deployment, see Troubleshooting the data plane creation and deployment in Deploying Red Hat OpenStack Services on OpenShift.
4.6. Configuring initial defaults Copy linkLink copied to clipboard!
The Block Storage service (cinder) has a set of initial defaults that you should configure when the service is first enabled. Define them in the main customServiceConfig section. After deployment, you modify these initial defaults by using the openstack client rather than by editing the CR again.
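For example, the following openstack client commands are a minimal sketch of adjusting quota defaults after deployment; they assume an admin-scoped session, and the project name is a placeholder:
# Update the default quota class that new projects inherit
$ openstack quota set --class default --volumes 20 --snapshots 15
# Override the volume quota for a single project
$ openstack quota set --volumes 40 <project_name>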
Procedure
-
Open your
OpenStackControlPlaneCR file,openstack_control_plane.yaml. Edit the CR file and add the Block Storage service global configuration.
The following example demonstrates a Block Storage service initial configuration:
apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: enabled: true template: customServiceConfig: | [DEFAULT] quota_volumes = 20 quota_snapshots = 15For a complete list of all initial default parameters, see Initial default parameters.
Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstackWait until RHOCP creates the resources related to the
OpenStackControlPlaneCR. Run the following command to check the status:$ oc get openstackcontrolplane -n openstackThe
OpenStackControlPlaneresources are created when the status is "Setup complete".TipAppend the
-woption to the end of thegetcommand to track deployment progress.
4.6.1. Initial default parameters Copy linkLink copied to clipboard!
These initial default parameters should be configured when the service is first enabled.
| Parameter | Description |
|---|---|
|
|
Provides the default volume type for all users. If you set this parameter to a non-default value, that volume type is not created automatically. The default value is |
|
|
Determines whether the size of snapshots counts against the gigabyte quota, in addition to the size of volumes. The default is |
|
|
Provides the maximum size of each volume in gigabytes. The default is |
|
|
Provides the number of volumes allowed for each project. The default value is |
|
|
Provides the number of snapshots allowed for each project. The default value is |
|
|
Provides the number of volume groups allowed for each project, which includes the consistency groups. The default value is |
|
|
Provides the total amount of storage for each project, in gigabytes, allowed for volumes, and depending upon the configuration of the |
|
|
Provides the number of backups allowed for each project. The default value is |
|
|
Provides the total amount of storage for each project, in gigabytes, allowed for backups. The default is |
4.7. Configuring the API service Copy linkLink copied to clipboard!
The Block Storage service (cinder) provides an API interface for all external interaction with the service for both users and other OpenStack services. Red Hat OpenStack Services on OpenShift (RHOSO) supports Block Storage REST API version 3.
Procedure
-
Open your
OpenStackControlPlaneCR file,openstack_control_plane.yaml. Edit the CR file and add the configuration for the internal Red Hat OpenShift Container Platform (RHOCP) load balancer.
The following example demonstrates a load balancer configuration:
apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: template: cinderAPI: override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancerEdit the CR file and add the configuration for the number of API service replicas. Run the
cinderAPIservice in an Active-Active configuration with three replicas.The following example demonstrates configuring the
cinderAPIservice to use three replicas:apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: template: cinderAPI: replicas: 3Edit the CR file and configure
cinderAPIoptions. These options are configured in thecustomServiceConfigsection under thecinderAPIsection.The following example demonstrates configuring
cinderAPIservice options and enabling debugging on all services:apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: template: customServiceConfig: | [DEFAULT] debug = true cinderAPI: customServiceConfig: | [DEFAULT] osapi_volume_workers = 3For a listing of commonly used
cinderAPIservice option parameters, see API service option parameters.- Save the file.
Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstackWait until RHOCP creates the resources related to the
OpenStackControlPlaneCR. Run the following command to check the status:$ oc get openstackcontrolplane -n openstackThe
OpenStackControlPlaneresources are created when the status is "Setup complete".TipAppend the
-woption to the end of thegetcommand to track deployment progress.
4.7.1. Block Storage API service option parameters Copy linkLink copied to clipboard!
API service option parameters are provided for the configuration of the cinderAPI portions of the Block Storage service.
| Parameter | Description |
|---|---|
|
|
Provides a value to determine if the API rate limit is enabled. The default is |
|
|
Provides a value to determine whether the logging level is set to |
|
|
Provides a value for the maximum number of items a collection resource returns in a single response. The default is |
|
| Provides a value for the number of workers assigned to the API component. The default is the number of CPUs available. |
4.8. Configuring the Block Storage scheduler service component Copy linkLink copied to clipboard!
The Block Storage service (cinder) has a scheduler service (cinderScheduler) that is responsible for making decisions, such as selecting which back end receives a new volume, determining whether there is enough free space to perform an operation, and deciding where an existing volume should be moved during specific operations.
Use only a single instance of cinderScheduler for scheduling consistency and ease of troubleshooting. Although cinderScheduler can run with multiple instances, the service default of replicas: 1 is the best practice.
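If you want to state this explicitly in your CR, the following minimal sketch keeps the scheduler at a single replica; it simply restates the service default:
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      cinderScheduler:
        replicas: 1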
Procedure
-
Open your
OpenStackControlPlaneCR file,openstack_control_plane.yaml. Edit the CR file and add the configuration for the service down detection timeouts.
The following example demonstrates this configuration:
apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: template: customServiceConfig: | [DEFAULT] report_interval = 20 service_down_time = 120-
report_interval: The number of seconds between Block Storage service components reporting an operational state in the form of a heartbeat through the database. The default is 10.
- service_down_time: The maximum number of seconds since the last heartbeat from the component for it to be considered non-operational. The default is 60.
Note: Configure these values at the cinder level of the CR instead of the cinderScheduler level so that these values are applied to all components consistently.
-
Edit the CR file and add the configuration for the statistics reporting interval.
The following example demonstrates configuring these values at the
cinderlevel to apply them globally to all services:apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: template: customServiceConfig: | [DEFAULT] backend_stats_polling_interval = 120 backup_driver_stats_polling_interval = 120-
backend_stats_polling_interval: The number of seconds between requests from the volume service for usage statistics from the storage back end. The default is 60.
- backup_driver_stats_polling_interval: The number of seconds between requests for usage statistics from the backup driver. The default is 60.
The following example demonstrates configuring these values at the cinderVolumes and cinderBackup level to customize settings at the service level.
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      cinderBackup:
        customServiceConfig: |
          [DEFAULT]
          backup_driver_stats_polling_interval = 120
        < rest of the config >
      cinderVolumes:
        nfs:
          customServiceConfig: |
            [DEFAULT]
            backend_stats_polling_interval = 120
Note: The generation of usage statistics can be resource intensive for some back ends. Setting these values too low can affect back end performance. You may need to tune the configuration of these settings to better suit individual back ends.
-
Perform any additional configuration necessary to customize the
cinderSchedulerservice.For more configuration options for the customization of the
cinderSchedulerservice, see Scheduler service parameters.- Save the file.
Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstackWait until RHOCP creates the resources related to the
OpenStackControlPlaneCR. Run the following command to check the status:$ oc get openstackcontrolplane -n openstackThe
OpenStackControlPlaneresources are created when the status is "Setup complete".TipAppend the
-woption to the end of thegetcommand to track deployment progress.
4.8.1. Scheduler service parameters Copy linkLink copied to clipboard!
Scheduler service parameters are provided for the configuration of the cinderScheduler portions of the Block Storage service.
| Parameter | Description |
|---|---|
|
|
Provides a setting for the logging level. When this parameter is |
|
|
Provides a setting for the maximum number of attempts to schedule a volume. The default is |
|
|
Provides a setting for filter class names to use for filtering hosts when not specified in the request. This is a comma separated list. The default is |
|
|
Provides a setting for weigher class names to use for weighing hosts. This is a comma-separated list. The default is |
|
|
Provides a setting for a handler to use for selecting the host or pool after weighing. The value |
The following is an explanation of the filter class names from the parameter table:
AvailabilityZoneFilter
- Filters out all back ends that do not meet the availability zone requirements of the requested volume.
CapacityFilter
- Selects only back ends with enough space to accommodate the volume.
CapabilitiesFilter
- Selects only back ends that can support any specified settings in the volume.
InstanceLocality
- Configures clusters to use volumes local to the same node.
4.9. Configuring the Block Storage volume service component Copy linkLink copied to clipboard!
The Block Storage service (cinder) has a volume service (cinderVolumes section) that is responsible for managing operations related to volumes, snapshots, and groups. These operations include creating, deleting, and cloning volumes and making snapshots.
This service requires access to the storage back end (storage) and storage management (storageMgmt) networks in the networkAttachments of the OpenStackControlPlane CR. Some operations, such as creating an empty volume or a snapshot, do not require any data movement between the volume service and the storage back end. Other operations, such as migrating data from one storage back end to another, require the data to pass through the volume service and therefore require this network access.
Volume service configuration is performed in the cinderVolumes section with parameters set in the customServiceConfig, customServiceConfigSecrets, networkAttachments, replicas, and the nodeSelector sections.
The volume service cannot have multiple replicas.
Procedure
-
Open your
OpenStackControlPlaneCR file,openstack_control_plane.yaml. Edit the CR file and add the configuration for your back end.
The following example demonstrates the service configuration for a Red Hat Ceph Storage back end:
apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: template: customServiceConfig: | [DEFAULT] debug = true cinderVolumes: ceph: networkAttachments: - storage customServiceConfig: | [ceph] volume_backend_name = ceph volume_driver = cinder.volume.drivers.rbd.RBDDriver-
ceph: The configuration area for the individual back end. Each unique back end requires an individual configuration area. No back end is deployed by default. The Block Storage service volume service will not run unless at least one back end is configured during deployment. For more information about configuring back ends, see Block Storage service (cinder) back ends and Multiple Block Storage service (cinder) back ends.
- networkAttachments: The configuration area for the back end network connections.
- volume_backend_name: The name assigned to this back end.
- volume_driver: The driver used to connect to this back end.
For a list of commonly used volume service parameters, see Volume service parameters.
- Save the file.
Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstackWait until RHOCP creates the resources related to the
OpenStackControlPlaneCR. Run the following command to check the status:$ oc get openstackcontrolplane -n openstackThe
OpenStackControlPlaneresources are created when the status is "Setup complete".TipAppend the
-woption to the end of thegetcommand to track deployment progress.
4.9.1. Volume service parameters Copy linkLink copied to clipboard!
Volume service parameters are provided for the configuration of the cinderVolumes portions of the Block Storage service.
| Parameter | Description |
|---|---|
|
|
Provides a setting for the availability zone of the back end. This is set in the |
|
| Provides a setting for the back end name for a given driver implementation. There is no default value. |
|
| Provides a setting for the driver to use for volume creation. It is provided in the form of Python namespace for the specific class. There is no default value. |
|
|
Provides a setting for a list of back end names to use. These back end names should be backed by a unique [CONFIG] group with its options. This is a comma-separated list of values. The default value is the name of the section with a |
|
|
Provides a setting for a directory used for temporary storage during image conversion. The default value is |
|
|
Provides a setting for the number of seconds between the volume requests for usage statistics from the storage back end. The default is |
4.9.2. Block Storage service (cinder) back ends Copy linkLink copied to clipboard!
Each Block Storage service back end should have an individual configuration section in the cinderVolumes section. This ensures each back end runs in a dedicated pod. This approach has the following benefits:
- Increased isolation.
- Adding and removing back ends is fast and does not affect other running back ends.
- Configuration changes do not affect other running back ends.
- Automatically spreads the Volume pods into different nodes.
Each Block Storage service back end uses a storage transport protocol to access data in the volumes. Each storage transport protocol has individual requirements as described in Configuring transport protocols. Storage protocol information should also be provided in individual vendor installation guides.
Configure each back end with an independent pod. In director-based releases of RHOSP, all back ends run in a single cinder-volume container. This is no longer the best practice.
No back end is deployed by default. The Block Storage service volume service will not run unless at least one back end is configured during deployment.
All storage vendors provide an installation guide with best practices, deployment configuration, and configuration options for vendor drivers. These installation guides provide the specific configuration information required to properly configure the volume service for deployment. Installation guides are available in the Red Hat Ecosystem Catalog.
For more information about integrating and certifying vendor drivers, see Integrating partner content.
For information about Red Hat Ceph Storage back end configuration, see Integrating Red Hat Ceph Storage and Deploying a hyperconverged infrastructure environment.
For information about configuring a generic (non-vendor specific) NFS back end, see Configuring a generic NFS back end.
Use a certified storage back end and driver. If you use NFS storage that comes from the generic NFS back end, its capabilities are limited compared to a certified storage back end and driver.
4.9.3. Multiple Block Storage service (cinder) back ends Copy linkLink copied to clipboard!
Multiple Block Storage service back ends are deployed by adding multiple, independent entries in the cinderVolumes configuration section. Each back end runs in an independent pod.
The following configuration example deploys two independent back ends: one for iSCSI and another for NFS:
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
name: openstack
spec:
cinder:
template:
cinderVolumes:
nfs:
networkAttachments:
- storage
customServiceConfigSecrets:
- cinder-volume-nfs-secrets
customServiceConfig: |
[nfs]
volume_backend_name=nfs
iSCSI:
networkAttachments:
- storage
- storageMgmt
customServiceConfig: |
[iscsi]
volume_backend_name=iscsi
4.10. Configuring back end availability zones Copy linkLink copied to clipboard!
Configure back end availability zones (AZs) for Volume service back ends and the Backup service to group cloud infrastructure services for users. AZs are mapped to failure domains and Compute resources for high availability, fault tolerance, and resource scheduling.
For example, you could create an AZ of Compute nodes with specific hardware that users can select when they create an instance that requires that hardware.
Post-deployment, AZs are created by using the RESKEY:availability_zones volume type extra specification.
Users can create a volume directly in an AZ as long as the volume type does not restrict the AZ.
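For example, the following openstack client commands sketch both cases; the volume type name, zone names, volume name, and size are placeholders:
# Restrict a volume type so that its volumes can only be scheduled to zone1
$ openstack volume type set --property RESKEY:availability_zones=zone1 <volume_type>
# Create a volume directly in an AZ, which is allowed when the volume type does not restrict the AZ
$ openstack volume create --availability-zone zone2 --size 10 <volume_name>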
Procedure
-
Open your
OpenStackControlPlaneCR file,openstack_control_plane.yaml. Edit the CR file and add the AZ configuration.
The following example demonstrates an AZ configuration:
apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: template: cinderVolumes: nfs: networkAttachments: - storage - storageMgmt customServiceConfigSecrets: - cinder-volume-nfs-secrets customServiceConfig: | [nfs] volume_backend_name=nfs backend_availability_zone=zone1 iSCSI: networkAttachments: - storage - storageMgmt customServiceConfig: | [iscsi] volume_backend_name=iscsi backend_availability_zone=zone2-
backend_availability_zone: The availability zone associated with the back end.
- Save the file.
Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstackWait until RHOCP creates the resources related to the
OpenStackControlPlaneCR. Run the following command to check the status:$ oc get openstackcontrolplane -n openstackThe
OpenStackControlPlaneresources are created when the status is "Setup complete".TipAppend the
-woption to the end of thegetcommand to track deployment progress.
4.11. Configuring a generic NFS storage back end for volumes Copy linkLink copied to clipboard!
The Block Storage service (cinder) can be configured with a generic NFS back end to provide an alternative storage solution for volumes and backups.
- Limitations
- Use a certified storage back end and driver. If you use NFS storage that comes from the generic NFS back end, its capabilities are limited compared to a certified storage back end and driver. For example, the generic NFS back end does not support features such as volume encryption and volume multi-attach. For information about supported drivers, see the Red Hat Ecosystem Catalog.
- For Block Storage (cinder) and Compute (nova) services, you must use NFS version 4.0 or later. RHOSO does not support earlier versions of NFS.
RHOSO does not support the NetApp NAS secure feature. It interferes with normal volume operations. This feature must be disabled in the customServiceConfig in the specific back-end configuration with the following parameters:
nas_secure_file_operations=false
nas_secure_file_permissions=false
- Do not configure the nfs_mount_options option. The default value is the best NFS option for RHOSO environments. If you experience issues when you configure multiple services to share the same NFS server, contact Red Hat Support.
Procedure
Create a
SecretCR to store the volume connection information.The following is an example of a
SecretCR:apiVersion: v1 kind: Secret metadata: name: cinder-volume-nfs-secrets type: Opaque stringData: cinder-volume-nfs-secrets: | [nfs] nas_host=192.168.130.1 nas_share_path=/var/nfs/cinderwhere:
name-
Is the name used when including it in the
cinderVolumes back end configuration.
- Save the file.
Apply the Secret CR:
$ oc apply -f <secret_file_name> -n openstack-
Replace
<secret_file_name> with the name of the file that contains your Secret CR.
-
Open your
OpenStackControlPlaneCR file,openstack_control_plane.yaml. Edit the CR file and add the configuration for the generic NFS back end.
The following example demonstrates this configuration:
apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: template: cinderVolumes: nfs: networkAttachments: - storage customServiceConfig: | [nfs] volume_backend_name=nfs volume_driver=cinder.volume.drivers.nfs.NfsDriver nfs_snapshot_support=true nas_secure_file_operations=false nas_secure_file_permissions=false customServiceConfigSecrets: - cinder-volume-nfs-secrets-
The storageMgmt network is not listed because generic NFS does not have a management interface.
- cinder-volume-nfs-secrets: The name from the Secret CR.
- If you are configuring multiple generic NFS back ends, ensure that each back end is in an individual configuration section so that one pod is dedicated to each back end.
- Save the file.
Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstackWait until RHOCP creates the resources related to the
OpenStackControlPlaneCR. Run the following command to check the status:$ oc get openstackcontrolplane -n openstackThe
OpenStackControlPlaneresources are created when the status is "Setup complete".TipAppend the
-woption to the end of thegetcommand to track deployment progress.
4.12. Configuring NFS storage for volume format conversion Copy linkLink copied to clipboard!
When the Block Storage service (cinder) performs image format conversion and space is limited, the conversion of large Image service (glance) images can completely fill the root disk space of the node. You can use an external NFS share for the conversion to prevent the space on the node from being completely filled.
Procedure
-
Open your
OpenStackControlPlaneCR file,openstack_control_plane.yaml. Edit the CR file and add the configuration for the directory for converting large Image service images.
The following example demonstrates how to configure this conversion directory:
apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane spec: extraMounts: extraVol: - propagation: - CinderVolume volumes: - name: cinder-conversion nfs: path: <nfs_share_path> server: <nfs_server> mounts: - name: cinder-conversion mountPath: /var/lib/cinder/conversion ...Replace
<nfs_share_path> with the path to the conversion directory.
Note: The Block Storage volume service (cinder-volume) runs as the cinder user. The cinder user requires write permission for <nfs_share_path>. You can configure this by running the following command on the NFS server:
$ chown 42407:42407 <nfs_share_path>
Replace
<nfs_server>with the IP address of the NFS server that hosts the conversion directory.
Note: This example demonstrates how to create a common conversion directory that all the volume service pods use.
You can also define a conversion directory for each volume service pod:
-
Define each conversion directory by using an extraMounts section, as demonstrated above, in the cinder section of the OpenStackControlPlane CR file.
- Set the propagation value to the name of the specific Volume section instead of CinderVolume.
- Save the file.
Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstackWait until Red Hat OpenShift Container Platform (RHOCP) creates the resources related to the
OpenStackControlPlaneCR. Run the following command to check the status:$ oc get openstackcontrolplane -n openstackThe
OpenStackControlPlaneresources are created when the status is "Setup complete".TipAppend the
-woption to the end of thegetcommand to track deployment progress.
4.13. Configuring automatic database cleanup Copy linkLink copied to clipboard!
The Block Storage service (cinder) performs a soft-deletion of database entries. This means that database entries are marked for deletion but are not actually deleted from the database. This allows for the auditing of deleted resources.
These database rows marked for deletion will grow endlessly and consume resources if not purged. RHOSO automatically purges database entries marked for deletion after a set number of days. By default, records marked for deletion after 30 days are purged. You can configure a different record age and schedule for purge jobs.
Procedure
-
Open your
openstack_control_plane.yamlfile to edit theOpenStackControlPlaneCR. Add the
dbPurgeparameter to thecindertemplate to configure database cleanup depending on the service you want to configure.The following is an example of using the
dbPurgeparameter to configure the Block Storage service:apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: template: dbPurge: age: 20 schedule: 1 0 * * 0-
age: The number of days a record has been marked for deletion before it is purged. The default value is 30. The minimum value is 1.
- schedule: When to run the job, in crontab format. The default value is 1 0 * * *, which is equivalent to 00:01 daily.
Update the control plane:
$ oc apply -f openstack_control_plane.yaml
4.14. Preserving backup jobs during Block Storage service updates Copy linkLink copied to clipboard!
The Block Storage service (cinder) requires maintenance operations that are run automatically. Some operations are one-off and some are periodic. These operations are run using OpenShift Jobs.
If jobs and their pods are automatically removed on completion, you cannot check the logs of these operations. However, you can use the preserveJob field in your OpenStackControlPlane CR to stop the automatic removal of jobs and preserve them.
Example:
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
name: openstack
spec:
cinder:
template:
preserveJobs: true
4.15. Resolving hostname conflicts in backup services Copy linkLink copied to clipboard!
Most storage back ends in the Block Storage service (cinder) require the hosts that connect to them to have unique hostnames. These hostnames are used to identify permissions and addresses, such as iSCSI initiator name, HBA WWN and WWPN.
Because you deploy on OpenShift, the hostnames that the Block Storage volume and backup services report are not the OpenShift hostnames but the pod names instead.
These pod names are formed using a predetermined template:
- For volumes: cinder-volume-<backend_key>-0
- For backups: cinder-backup-<replica-number>
If you use the same storage back end in multiple deployments, the unique hostname requirement may not be honored, resulting in operational problems. To address this issue, you can request the installer to have unique pod names, and hence unique hostnames, by using the uniquePodNames field.
When you set the uniquePodNames field to true, a short hash is added to the pod names, which addresses hostname conflicts.
Example:
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
name: openstack
spec:
cinder:
uniquePodNames: true
4.16. Using other container images Copy linkLink copied to clipboard!
Red Hat OpenStack Services on OpenShift (RHOSO) services are deployed by using a container image for a specific release and version. Sometimes, a deployment requires a container image other than the one produced for that release and version.
The most common reasons for using a container image that is not for a specific release and version are:
- Deploying a hotfix.
- Using a certified, vendor-provided container image.
The container images used by the installer are controlled through the OpenStackVersion CR. An OpenStackVersion CR is automatically created by the openstack operator during the deployment of services. Alternatively, it can be created manually before the application of the OpenStackControlPlane CR but after the openstack operator is installed. This allows for the container image for any service and component to be individually designated.
The granularity of this designation depends on the service. For example, in the Block Storage service (cinder) all the cinderAPI, cinderScheduler, and cinderBackup pods must have the same image. However, for the Volume service, the container image is defined for each of the cinderVolumes.
The following example demonstrates an OpenStackControlPlane configuration with two back ends: one called ceph and one called custom-fc. The custom-fc back end requires a certified, vendor-provided container image. Additionally, the other service images must be configured to use a non-standard image from a hotfix.
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
name: openstack
spec:
cinder:
template:
cinderVolumes:
ceph:
networkAttachments:
- storage
< . . . >
custom-fc:
networkAttachments:
- storage
The following example demonstrates an OpenStackVersion CR that sets up the container images for this scenario.
apiVersion: core.openstack.org/v1beta1
kind: OpenStackVersion
metadata:
name: openstack
spec:
customContainerImages:
cinderAPIImages: <custom-api-image>
cinderBackupImages: <custom-backup-image>
cinderSchedulerImages: <custom-scheduler-image>
cinderVolumeImages:
custom-fc: <vendor-volume-volume-image>
-
Replace
<custom-api-image>with the name of the API service image to use. -
Replace
<custom-backup-image>with the name of the Backup service image to use. -
Replace
<custom-scheduler-image>with the name of the Scheduler service image to use. -
Replace
<vendor-volume-volume-image>with the name of the certified, vendor-provided image to use.
The name attribute in your OpenStackVersion CR must match the same attribute in your OpenStackControlPlane CR.
Chapter 5. Configuring the Block Storage backup service Copy linkLink copied to clipboard!
You can use the optional backup service of the Block Storage service (cinder) to create and restore full or incremental backups of Block Storage volumes. Configure the backup service in the cinderBackup section of your OpenStackControlPlane CR.
5.1. Prerequisites Copy linkLink copied to clipboard!
-
You have the
oc command line tool installed on your workstation.
- You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.
5.2. Storage back ends for Block Storage volume backups Copy linkLink copied to clipboard!
You can configure different storage back ends for Block Storage backups, including Red Hat Ceph Storage RBD, the Object Storage service (swift), NFS, and S3.
Red Hat Ceph Storage RBD is the default back end when you use Red Hat Ceph Storage. For more information, see Configuring the control plane to use the Red Hat Ceph Storage cluster.
For information about other back end options for backups, see OSP18 Cinder Alternative Storage.
You can use the backup service to back up volumes that are on any back end that the Block Storage service (cinder) supports, regardless of which back end you choose to use for backups. You can only configure one back end for backups, whereas you can configure multiple back ends for volumes.
Back ends for backups do not have transport protocol requirements for the RHOCP node. However, the backup pods need to connect to the volumes, and the back ends for volumes have transport protocol requirements.
5.3. Setting the number of replicas for backups Copy linkLink copied to clipboard!
You can run multiple instances of the Block Storage backup component in active-active mode by setting replicas to a value greater than 1. The default value is 0.
Procedure
Open your
OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the cinder template to set the number of replicas for the cinderBackup service:
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  …
  cinder:
    template:
      cinderBackup:
        replicas: <number_of_replicas>
  ...
Replace
<number_of_replicas> with a value greater than 1.
Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstackWait until RHOCP creates the resources related to the
OpenStackControlPlaneCR. Run the following command to check the status:$ oc get openstackcontrolplane -n openstackThe
OpenStackControlPlaneresources are created when the status is "Setup complete".TipAppend the
-woption to the end of thegetcommand to track deployment progress.
5.4. Block Storage volume backup performance considerations Copy linkLink copied to clipboard!
Some features of the Block Storage backup service like incremental backups, the creation of backups from snapshots, and data compression can reduce the performance of backup operations.
By only capturing the periodic changes to volumes, incremental backup operations can minimize resource usage. However, incremental backup operations have a lower performance than full backup operations. When you create an incremental backup, all of the data in the volume must first be read and compared with the data in both the full backup and each subsequent incremental backup.
Some back ends for volumes support the creation of a backup from a snapshot by directly attaching the snapshot to the backup host, which is faster than cloning the snapshot into a volume. If the back end you use for volumes does not support this feature, you can create a volume from a snapshot and use the volume as backup. However, the extra step of creating the volume from a snapshot can affect the performance of the backup operation.
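For example, the following openstack client commands sketch both approaches; the volume and snapshot names are placeholders:
# Create a full backup, then an incremental backup of the same volume
$ openstack volume backup create --name full-backup <volume_name>
$ openstack volume backup create --name incr-backup-1 --incremental <volume_name>
# Create a backup from an existing snapshot of the volume
$ openstack volume backup create --snapshot <snapshot_name> <volume_name>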
You can configure the Block Storage backup service to enable or disable data compression of the storage back end for your backups. If you enable data compression, backup operations require additional CPU power, but they use less network bandwidth and storage space overall.
You cannot use data compression with a Red Hat Ceph Storage back end.
5.5. Setting configuration options for volume backups Copy linkLink copied to clipboard!
The cinderBackup parameter inherits the configuration from the top level customServiceConfig section of the cinder template in your OpenStackControlPlane CR. However, the cinderBackup parameter also has its own customServiceConfig section.
The following table describes configuration options that apply to all back-end drivers.
| Option | Description | Value type | Default value |
|---|---|---|---|
|
|
When set to | Boolean |
|
|
|
Offload pending backup delete during backup service startup. If set to | Boolean |
|
|
| Availability zone of the backup service. | String |
|
|
| Number of processes to launch in the backup pod. Improves performance with concurrent backups. | Integer |
|
|
| Maximum number of concurrent memory, and possibly CPU, heavy operations (backup and restore) that can be executed on each pod. The number limits all workers within a pod but not across pods. Value of 0 means unlimited. | Integer |
|
|
| Size of the native threads pool used for backup data-related operations. Most backup drivers rely heavily on this option; you can decrease the value for specific drivers that do not rely on it. | Integer |
|
Procedure
Open your
OpenStackControlPlaneCR file,openstack_control_plane.yaml, and add the following parameters to thecindertemplate to set configuration options. In this example, you enable debug logs, double the number of processes, and increase the maximum number of operations per pod to 20.Example:
apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: … cinder: template: customServiceConfig: | [DEFAULT] debug = true cinderBackup: customServiceConfig: | [DEFAULT] backup_workers = 2 backup_max_operations = 20 ...Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstackWait until RHOCP creates the resources related to the
OpenStackControlPlaneCR. Run the following command to check the status:$ oc get openstackcontrolplane -n openstackThe
OpenStackControlPlaneresources are created when the status is "Setup complete".TipAppend the
-woption to the end of thegetcommand to track deployment progress.
5.6. Enabling data compression for volume backups Copy linkLink copied to clipboard!
Select a compression algorithm for your backups to reduce storage space and network bandwidth usage. Backups use zlib compression by default, but you can change algorithms or disable compression.
Data compression requires additional CPU power but uses less network bandwidth and storage space.
You can change the data compression algorithm of your backups or disable data compression by using the backup_compression_algorithm parameter in your OpenStackControlPlane CR.
The following options are available for data compression.
| Option | Description |
|
| Do not use compression. |
|
| Use the Deflate compression algorithm. |
|
| Use Burrows-Wheeler transform compression. |
|
| Use the Zstandard compression algorithm. |
You cannot specify the data compression algorithm for the Red Hat Ceph Storage back end driver.
Procedure
Open your
OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameter to the cinder template to enable data compression. In this example, you enable data compression with an Object Storage service (swift) back end:
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  cinder:
    template:
      cinderBackup:
        customServiceConfig: |
          [DEFAULT]
          backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
          backup_compression_algorithm = zstd
        networkAttachments:
        - storage
$ oc apply -f openstack_control_plane.yaml -n openstackWait until RHOCP creates the resources related to the
OpenStackControlPlaneCR. Run the following command to check the status:$ oc get openstackcontrolplane -n openstackThe
OpenStackControlPlaneresources are created when the status is "Setup complete".TipAppend the
-woption to the end of thegetcommand to track deployment progress.
5.7. Configuring Ceph RBD storage for Block Storage backups Copy linkLink copied to clipboard!
You can configure Red Hat Ceph Storage RADOS Block Device (RBD) as the storage back end for your Block Storage backups. RBD provides efficient incremental backups when combined with Ceph RBD volumes.
For more information about Ceph RBD, see Configuring the control plane to use the Red Hat Ceph Storage cluster.
Prerequisites
- You have planned networking for storage to ensure connectivity between the storage back end, the control plane, and the Compute nodes on the data plane. For more information, see Storage networks in Planning your deployment and Preparing networks for Red Hat OpenStack Services on OpenShift in Deploying Red Hat OpenStack Services on OpenShift.
Procedure
Open your
OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the cinder template to configure Ceph RBD as the back end for backups:
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  cinder:
    template:
      cinderBackup:
        customServiceConfig: |
          [DEFAULT]
          backup_driver = cinder.backup.drivers.ceph.CephBackupDriver
          backup_ceph_pool = backups
          backup_ceph_user = openstack
        networkAttachments:
        - storage
        replicas: 1
$ oc apply -f openstack_control_plane.yaml -n openstackWait until RHOCP creates the resources related to the
OpenStackControlPlaneCR. Run the following command to check the status:$ oc get openstackcontrolplane -n openstackThe
OpenStackControlPlaneresources are created when the status is "Setup complete".TipAppend the
-woption to the end of thegetcommand to track deployment progress.
5.8. Configuring Object Storage for Block Storage volume backups
You can configure the Object Storage service (swift) as the storage back end for your Block Storage backups. The Object Storage service provides scalable object storage with customizable containers for backup data.
Prerequisites
- You have planned networking for storage to ensure connectivity between the storage back end, the control plane, and the Compute nodes on the data plane. For more information, see Storage networks in Planning your deployment and Preparing networks for Red Hat OpenStack Services on OpenShift in Deploying Red Hat OpenStack Services on OpenShift.
- Verify that the Object Storage service is active in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment.
The default container for Object Storage service back ends is volumebackups. You can change the default container by using the backup_swift_container configuration option.
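For example, the following sketch shows a cinderBackup configuration that overrides the default container name. The my-volume-backups container name is illustrative:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  cinder:
    template:
      cinderBackup:
        customServiceConfig: |
          [DEFAULT]
          backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
          # Store backups in a custom container instead of the default volumebackups container.
          backup_swift_container = my-volume-backups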
Procedure
Open your
OpenStackControlPlaneCR file,openstack_control_plane.yaml, and add the following parameters to thecindertemplate to configure the Object Storage service as the back end for backups:apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane spec: cinder: template: cinderBackup customServiceConfig: | [DEFAULT] backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver networkAttachments: - storage replicas: 1Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstackWait until RHOCP creates the resources related to the
OpenStackControlPlaneCR. Run the following command to check the status:$ oc get openstackcontrolplane -n openstackThe
OpenStackControlPlaneresources are created when the status is "Setup complete".TipAppend the
-woption to the end of thegetcommand to track deployment progress.
5.9. Configuring NFS storage for Block Storage volume backups
You can configure NFS as the storage back end for your Block Storage backups. NFS provides network-accessible file storage with flexible mount options for backup data.
Prerequisites
- You have planned networking for storage to ensure connectivity between the storage back end, the control plane, and the Compute nodes on the data plane. For more information, see Storage networks in Planning your deployment and Preparing networks for Red Hat OpenStack Services on OpenShift in Deploying Red Hat OpenStack Services on OpenShift.
Procedure
Create a
secretCR file, for example,cinder-backup-nfs-secrets.yaml, and add the following configuration for your NFS share:apiVersion: v1 kind: Secret metadata: labels: service: cinder component: cinder-backup name: cinder-backup-nfs-secrets type: Opaque stringData: nfs-secrets.conf: | [DEFAULT] backup_share = <192.168.1.2:/Backups> backup_mount_options = <optional>-
Replace
<192.168.1.2:/Backups>with the IP address of your NFS share. -
Replace
<optional>with the mount options for your NFS share.
-
Replace
Open your
OpenStackControlPlaneCR file,openstack_control_plane.yaml, and add the following parameters to thecindertemplate to add thesecretfor the NFS share and configure NFS as the back end for backups:apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane spec: cinder: template: cinderBackup customServiceConfig: | [DEFAULT] backup_driver = cinder.backup.drivers.nfs.NFSBackupDriver customServiceConfigSecrets: - cinder-backup-nfs-secrets networkAttachments: - storage replicas: 1Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstackWait until RHOCP creates the resources related to the
OpenStackControlPlaneCR. Run the following command to check the status:$ oc get openstackcontrolplane -n openstackThe
OpenStackControlPlaneresources are created when the status is "Setup complete".TipAppend the
-woption to the end of thegetcommand to track deployment progress.
5.10. Configuring S3 storage for Block Storage volume backups
You can configure the Block Storage service (cinder) backup service with S3 as the storage back end.
Prerequisites
- You have planned networking for storage to ensure connectivity between the storage back end, the control plane, and the Compute nodes on the data plane. For more information, see Storage networks in Planning your deployment and Preparing networks for Red Hat OpenStack Services on OpenShift in Deploying Red Hat OpenStack Services on OpenShift.
Procedure
Open your
OpenStackControlPlaneCR file,openstack_control_plane.yaml, and add the following parameters to thecindertemplate to configure S3 as the back end for backups:apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane spec: cinder: template: cinderBackup customServiceConfig: | [DEFAULT] backup_driver = cinder.backup.drivers.s3.S3BackupDriver backup_s3_endpoint_url = <user supplied> backup_s3_store_access_key = <user supplied> backup_s3_store_secret_key = <user supplied> backup_s3_store_bucket = volumebackups backup_s3_ca_cert_file = /etc/pki/tls/certs/ca-bundle.crt networkAttachments: - storageUpdate the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstackWait until RHOCP creates the resources related to the
OpenStackControlPlaneCR. Run the following command to check the status:$ oc get openstackcontrolplane -n openstackThe
OpenStackControlPlaneresources are created when the status is "Setup complete".TipAppend the
-woption to the end of thegetcommand to track deployment progress.
5.11. Block Storage volume backup metadata and export options
When you create a backup of a Block Storage volume, the metadata for this backup is stored in the Block Storage service database. The Block Storage backup service uses this metadata when it restores the volume from the backup.
To ensure that a backup survives a catastrophic loss of the Block Storage service database, you can manually export and store the metadata of this backup. After a catastrophic database loss, you need to create a new Block Storage database and then manually re-import this backup metadata into it.
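For example, assuming your OpenStack client includes the volume backup record commands, the following sketch shows how you might export the metadata of a single backup from the openstackclient pod and later re-import it after you recreate the Block Storage database. The placeholder values are illustrative:

$ oc rsh -n openstack openstackclient
$ openstack volume backup record export <backup_id>
# Save the backup_service and backup_metadata values from the command output in a safe location.
$ openstack volume backup record import <backup_service> <backup_metadata>
$ exit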
Chapter 6. Configuring the Image service (glance)
The Image service (glance) provides discovery, registration, and delivery services for disk and server images. Use stored images as templates to commission servers. Supported back ends include RADOS Block Device (RBD), the Block Storage service (cinder), Object Storage service (swift), S3, and NFS.
You can configure the following back ends as stores for the Image service:
- RBD is the default back end when you use Red Hat Ceph Storage.
- RBD multistore. You can use multiple stores only with distributed edge architecture or distributed zones so that you can have an image pool at every edge site or zone.
- Block Storage service.
- Block Storage service multistore. You can use multiple stores only with distributed zones so that you can have an image pool in every zone.
- Object Storage service.
- S3.
- NFS.
For more information about Red Hat Ceph Storage, distributed edge architecture, and distributed zones, see the Red Hat Ceph Storage documentation and Deploying a Distributed Compute Node (DCN) architecture.
6.1. Prerequisites
-
You have the
occommand line tool installed on your workstation. -
You are logged on to a workstation that has access to the RHOSO control plane as a user with
cluster-adminprivileges.
6.2. Configuring Block Storage as an Image service back end
You can configure the Image service (glance) with the Block Storage service (cinder) as the storage back end.
Prerequisites
- You have planned networking for storage to ensure connectivity between the storage back end, the control plane, and the Compute nodes on the data plane. For more information, see Storage networks in Planning your deployment and Preparing networks for Red Hat OpenStack Services on OpenShift in Deploying Red Hat OpenStack Services on OpenShift.
-
Ensure that placement, network, and transport protocol requirements are met. For example, if your Block Storage service back end is Fibre Channel (FC), the nodes on which the Image service API (
glanceAPI) is running must have a host bus adapter (HBA). For FC, iSCSI, and NVMe over Fabrics (NVMe-oF), configure the nodes to support the protocol and use multipath. For more information, see Configuring transport protocols.
Procedure
Open your
OpenStackControlPlaneCR file,openstack_control_plane.yaml, and add the following parameters to theglancetemplate to configure the Block Storage service as the back end for the Image service:apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane spec: ... glance: template: glanceAPIs: default: replicas: 3 # Configure back end; set to 3 when deploying service ... customServiceConfig: | [DEFAULT] enabled_backends = <backend_name>:cinder [glance_store] default_backend = <backend_name> [<backend_name>] description = Default cinder backend cinder_store_auth_address = {{ .KeystoneInternalURL }} cinder_store_user_name = {{ .ServiceUser }} cinder_store_password = {{ .ServicePassword }} cinder_store_project_name = service cinder_catalog_info = volumev3::internalURL cinder_use_multipath = true [oslo_concurrency] lock_path = /var/lib/glance/tmp ...-
Set
replicasto3for high availability across APIs. -
Replace
<backend_name>with the name of the defaultcinderback end, for examplenfs_store. -
The
/var/lib/glance/tmpdirectory is where lock files used byoslo.concurrencyare stored to coordinate concurrent access to shared resources.
-
Set
Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstackWait until RHOCP creates the resources related to the
OpenStackControlPlaneCR. Run the following command to check the status:$ oc get openstackcontrolplane -n openstackThe
OpenStackControlPlaneresources are created when the status is "Setup complete".TipAppend the
-woption to the end of thegetcommand to track deployment progress.
6.2.1. Enabling multiple instances from volume-backed images
When using the Block Storage service (cinder) as the back end for the Image service (glance), each image is stored as a volume (image volume) ideally in the Block Storage service project owned by the glance user.
When a user wants to create multiple instances or volumes from a volume-backed image, the Image service host must attach to the image volume multiple times to copy the data. By default, Block Storage volumes cannot be attached more than once to the same host, so these repeated attachments cause performance issues and some of the instances or volumes fail to be created. However, most Block Storage back ends support the volume multi-attach property, which enables a volume to be attached multiple times to the same host. You can prevent these issues by creating a Block Storage volume type for the Image service back end that enables the multi-attach property, and by configuring the Image service to use this multi-attach volume type.
By default, only the Block Storage project administrator can create volume types.
Procedure
Access the remote shell for the
openstackclientpod from your workstation:$ oc rsh -n openstack openstackclientCreate a Block Storage volume type for the Image service back end that enables the multi-attach property, as follows:
$ openstack volume type create glance-multiattach $ openstack volume type set --property multiattach="<is> True" glance-multiattachIf you do not specify a back end for this volume type, then the Block Storage scheduler service determines which back end to use when creating each image volume, therefore these volumes might be saved on different back ends. You can specify the name of the back end by adding the
volume_backend_nameproperty to this volume type. You might need to ask your Block Storage administrator for the correctvolume_backend_namefor your multi-attach volume type. For this example, we are usingiscsias the back-end name.$ openstack volume type set glance-multiattach --property volume_backend_name=iscsiExit the
openstackclientpod:$ exitOpen your
OpenStackControlPlaneCR file,openstack_control_plane.yaml. In theglancetemplate, add the following parameter to the end of thecustomServiceConfig,[<backend_name>]section to configure the Image service to use the Block Storage multi-attach volume type:apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane spec: ... glance: template: ... customServiceConfig: | ... [<backend_name>] ... cinder_volume_type = glance-multiattach ...-
Replace
<backend_name>with the name of the default back end.
-
Replace
Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstackWait until RHOCP creates the resources related to the
OpenStackControlPlaneCR. Run the following command to check the status:$ oc get openstackcontrolplane -n openstackThe
OpenStackControlPlaneresources are created when the status is "Setup complete".TipAppend the
-woption to the end of thegetcommand to track deployment progress.
6.2.2. Parameters for configuring the Block Storage back end
You can add the following parameters to the end of the customServiceConfig, [<backend_name>] section of the glance template in your OpenStackControlPlane CR file.
| Parameter = Default value | Type | Description of use |
|---|---|---|
| | boolean value | Set to |
| | boolean value | Set to |
| cinder_mount_point_base = /var/lib/glance/mnt | string value | Specify a string representing the absolute path of the mount point, the directory where the Image service mounts the NFS share. Note: This parameter is only applicable when using an NFS Block Storage back end for the Image service. |
| cinder_do_extend_attached_volumes = False | boolean value | Set to True so that the Block Storage service creates an initial 1 GB volume and extends the volume size in 1 GB increments until it contains the data of the entire image. When this parameter is not added or is set to False, each image volume is created at the full size of the image. Note: This parameter requires your Block Storage back end to support the extension of attached (in-use) volumes. See your back-end driver documentation for information on which features are supported. |
| cinder_volume_type = None | string value | Specify the name of the Block Storage volume type that can be optimized for creating volumes for images. For example, you can create a volume type that enables the creation of multiple instances or volumes from a volume-backed image. For more information, see Creating a multi-attach volume type. When this parameter is not used, volumes are created by using the default Block Storage volume type. |
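The following sketch shows how some of these parameters might be combined in the [<backend_name>] section of the glance template. It assumes an NFS-backed Block Storage back end and the glance-multiattach volume type from the previous section; the mount point path shown is the glance_store default and is illustrative:

customServiceConfig: |
  [DEFAULT]
  enabled_backends = <backend_name>:cinder
  [glance_store]
  default_backend = <backend_name>
  [<backend_name>]
  cinder_store_auth_address = {{ .KeystoneInternalURL }}
  cinder_store_user_name = {{ .ServiceUser }}
  cinder_store_password = {{ .ServicePassword }}
  cinder_store_project_name = service
  cinder_catalog_info = volumev3::internalURL
  # Directory where the Image service mounts the NFS share (NFS back ends only).
  cinder_mount_point_base = /var/lib/glance/mnt
  # Use the multi-attach volume type created for image volumes.
  cinder_volume_type = glance-multiattach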
6.3. Configuring Object Storage as an Image service back end
You can configure the Image service (glance) with the Object Storage service (swift) as the storage back end.
Prerequisites
- You have planned networking for storage to ensure connectivity between the storage back end, the control plane, and the Compute nodes on the data plane. For more information, see Storage networks in Planning your deployment and Preparing networks for Red Hat OpenStack Services on OpenShift in Deploying Red Hat OpenStack Services on OpenShift.
Procedure
Open your
OpenStackControlPlaneCR file,openstack_control_plane.yaml, and add the following parameters to theglancetemplate to configure the Object Storage service as the back end:apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane spec: ... glance: template: glanceAPIs: default: replicas: 3 # Configure back end; set to 3 when deploying service ... customServiceConfig: | [DEFAULT] enabled_backends = <backend_name>:swift [glance_store] default_backend = <backend_name> [<backend_name>] swift_store_create_container_on_put = True swift_store_auth_version = 3 swift_store_auth_address = {{ .KeystoneInternalURL }} swift_store_key = {{ .ServicePassword }} swift_store_user = service:glance swift_store_endpoint_type = internalURL ...-
Set
replicasto3for high availability across APIs. -
Replace
<backend_name>with the name of the default back end.
-
Set
Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstackWait until RHOCP creates the resources related to the
OpenStackControlPlaneCR. Run the following command to check the status:$ oc get openstackcontrolplane -n openstackThe
OpenStackControlPlaneresources are created when the status is "Setup complete".TipAppend the
-woption to the end of thegetcommand to track deployment progress.
6.4. Configuring an S3 back end
To configure the Image service (glance) with S3 as the storage back end, you require the following details:
- S3 access key
- S3 secret key
- S3 endpoint
For security, these details are stored in a Kubernetes secret.
Prerequisites
- You have planned networking for storage to ensure connectivity between the storage back end, the control plane, and the Compute nodes on the data plane. For more information, see Storage networks in Planning your deployment and Preparing networks for Red Hat OpenStack Services on OpenShift in Deploying Red Hat OpenStack Services on OpenShift.
Procedure
-
Create a configuration file, for example,
glance-s3.conf, where you can store the S3 configuration details. Generate the secret and access keys for your S3 storage.
If your S3 storage is provisioned by the Ceph Object Gateway (RGW), run the following command to generate the secret and access keys:
$ radosgw-admin user create --uid="<user_1>" \ --display-name="<Jane Doe>"-
Replace
<user_1>with the user ID. -
Replace
<Jane Doe>with a display name for the user.
-
Replace
If your S3 storage is provisioned by the Object Storage service (swift), run the following command to generate the secret and access keys:
$ openstackclient openstack credential create --type ec2 \ --project admin admin \ '{"access": "<access_key>", "secret": "<secret_key>"}'-
Replace
<access_key>with the access key that you want to use for the credential. -
Replace
<secret_key> with the secret key that you want to use for the credential.
-
Replace
Add the S3 configuration details to your
glance-s3.confconfiguration file:[default_backend] s3_store_host = <_s3_endpoint_> s3_store_access_key = <_s3_access_key_> s3_store_secret_key = <_s3_secret_key_> s3_store_bucket = <_s3_bucket_>-
Replace
<_s3_endpoint_>with the host where the S3 server is listening. This option can contain a DNS name, for example,_s3.amazonaws.com_, or an IP address. -
Replace
<_s3_access_key_>and<_s3_secret_key_>with the data generated by the S3 back end. -
Replace
<_s3_bucket_>with the bucket name where you want to store images in the S3 back end. If you sets3_store_create_bucket_on_puttoTruein yourOpenStackControlPlaneCR file, the bucket name is created automatically, even if the bucket does not already exist.
-
Replace
Create a secret from the
glance-s3.conffile:$ oc create secret generic glances3 \ --from-file glance-s3.confOpen your
OpenStackControlPlaneCR file,openstack_control_plane.yaml, and add the following parameters to theglancetemplate to configure S3 as the back end:apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane spec: ... glance: template: ... customServiceConfig: | [DEFAULT] enabled_backends = <backend_name>:s3 [glance_store] default_backend = <backend_name> [<backend_name>] s3_store_create_bucket_on_put = True s3_store_bucket_url_format = "path" s3_store_cacert = "/etc/pki/tls/certs/ca-bundle.crt" s3_store_large_object_size = 0 glanceAPIs: default: customServiceConfigSecrets: - glances3 ...
Replace
<backend_name>with the name of the default back end. -
Optional: If your S3 storage is accessed by HTTPS, you must set the
s3_store_cacertfield and point it to theca-bundle.crtpath. The OpenStack control plane is deployed by default with TLS enabled, and a CA certificate is mounted to the pod in/etc/pki/tls/certs/ca-bundle.crt. -
Optional: Set
s3_store_large_object_sizeto0to force multipart upload when you create an image in the S3 back end from a Block Storage service (cinder) volume.
-
Replace
Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstackWait until RHOCP creates the resources related to the
OpenStackControlPlaneCR. Run the following command to check the status:$ oc get openstackcontrolplane -n openstackThe
OpenStackControlPlaneresources are created when the status is "Setup complete".TipAppend the
-woption to the end of thegetcommand to track deployment progress.
6.5. Configuring an NFS back end
You can configure the Image service (glance) with NFS as the storage back end. NFS is not native to the Image service. When you mount an NFS share to use for the Image service, the Image service writes data to the file system but does not validate the availability of the NFS share.
If you use NFS as a back end for the Image service, refer to the following best practices to mitigate risk:
- Use a reliable production-grade NFS back end.
-
Make sure the network is available to the Red Hat OpenStack Services on OpenShift (RHOSO) control plane where the Image service is deployed, and that the Image service has a
NetworkAttachmentDefinitioncustom resource (CR) that points to the network. This configuration ensures that the Image service pods can reach the NFS server. - Set export permissions. Write permissions must be present in the shared file system that you use as a store.
- Limitations
In Red Hat OpenStack Services on OpenShift (RHOSO), you cannot set client-side NFS mount options in a pod spec. You can set NFS mount options in one of the following ways:
- Set server-side mount options.
-
Use
/etc/nfsmount.conf. - Mount NFS volumes by using PersistentVolumes, which have mount options.
Procedure
Open your
OpenStackControlPlaneCR file,openstack_control_plane.yaml, and add theextraMountsparameter in thespecsection to add the export path and IP address of the NFS share. The path is mapped to/var/lib/glance/images, where the Image service API (glanceAPI) stores and retrieves images:apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack ... spec: extraMounts: - extraVol: - extraVolType: Nfs mounts: - mountPath: /var/lib/glance/images name: nfs propagation: - Glance volumes: - name: nfs nfs: path: <nfs_export_path> server: <nfs_ip_address> name: r1 region: r1 ...-
Replace
<nfs_export_path>with the export path of your NFS share. -
Replace
<nfs_ip_address>with the IP address of your NFS share. This IP address must be part of the overlay network that is reachable by the Image service.
-
Replace
Add the following parameters to the
glancetemplate to configure NFS as the back end:... spec: extraMounts: ... glance: template: glanceAPIs: default: type: single replicas: 3 # Configure back end; set to 3 when deploying service ... customServiceConfig: | [DEFAULT] enabled_backends = <backend_name>:file [glance_store] default_backend = <backend_name> [<backend_name>] filesystem_store_datadir = /var/lib/glance/images databaseInstance: openstack ...-
Set
replicasto3for high availability across APIs. Replace
<backend_name>with the name of the default back end.NoteWhen you configure an NFS back end, you must set the
typetosingle. By default, the Image service has asplitdeployment type for an external API service, which is accessible through the public and administrator endpoints for the Identity service (keystone), and an internal API service, which is accessible only through the internal endpoint for the Identity service. Thesplitdeployment type is invalid for afileback end because different pods access the same file share.
-
Set
Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstackWait until RHOCP creates the resources related to the
OpenStackControlPlaneCR. Run the following command to check the status:$ oc get openstackcontrolplane -n openstackThe
OpenStackControlPlaneresources are created when the status is "Setup complete".TipAppend the
-woption to the end of thegetcommand to track deployment progress.
6.6. Configuring multistore for a single Image service API instance
You can configure the Image service (glance) with multiple storage back ends.
To configure multiple back ends for a single Image service API (glanceAPI) instance, you set the enabled_backends parameter with key-value pairs. The key is the identifier for the store and the value is the type of store. The following values are valid:
-
file -
http -
rbd -
swift -
cinder -
s3
Prerequisites
- You have planned networking for storage to ensure connectivity between the storage back ends, the control plane, and the Compute nodes on the data plane. For more information, see Storage networks in Planning your deployment and Preparing networks for Red Hat OpenStack Services on OpenShift in Deploying Red Hat OpenStack Services on OpenShift.
Procedure
Open your
OpenStackControlPlaneCR file,openstack_control_plane.yaml, and add the parameters to theglancetemplate to configure the back ends. In the following example, there are two Ceph RBD stores and one Object Storage service (swift) store:... spec: glance: template: customServiceConfig: | [DEFAULT] debug=True enabled_backends = ceph-0:rbd,ceph-1:rbd,swift-0:swift ...Specify the back end to use as the default back end. In the following example, the default back end is
ceph-1:... customServiceConfig: | [DEFAULT] debug=True enabled_backends = ceph-0:rbd,ceph-1:rbd,swift-0:swift [glance_store] default_backend = ceph-1 ...Add the configuration for each back end type you want to use:
Add the configuration for the first Ceph RBD store,
ceph-0:... customServiceConfig: | [DEFAULT] ... [ceph-0] rbd_store_ceph_conf = /etc/ceph/ceph-0.conf store_description = "RBD backend" rbd_store_pool = images rbd_store_user = openstack ...Add the configuration for the second Ceph RBD store,
ceph-1:... customServiceConfig: | [DEFAULT] ... [ceph-0] ... [ceph-1] rbd_store_ceph_conf = /etc/ceph/ceph-1.conf store_description = "RBD backend 1" rbd_store_pool = images rbd_store_user = openstack ...Add the configuration for the Object Storage service store,
swift-0:... customServiceConfig: | [DEFAULT] ... [ceph-0] ... [ceph-1] ... [swift-0] swift_store_create_container_on_put = True swift_store_auth_version = 3 swift_store_auth_address = {{ .KeystoneInternalURL }} swift_store_key = {{ .ServicePassword }} swift_store_user = service:glance swift_store_endpoint_type = internalURL ...
Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstackWait until RHOCP creates the resources related to the
OpenStackControlPlaneCR. Run the following command to check the status:$ oc get openstackcontrolplane -n openstackThe
OpenStackControlPlaneresources are created when the status is "Setup complete".TipAppend the
-woption to the end of thegetcommand to track deployment progress.
6.7. Configuring multiple Image service API instances
You can deploy multiple Image service API (glanceAPI) instances to serve different workloads, for example in an edge deployment. When you deploy multiple glanceAPI instances, they are orchestrated by the same glance-operator, but you can connect them to a single back end or to different back ends.
Multiple glanceAPI instances inherit the same configuration from the main customServiceConfig parameter in your OpenStackControlPlane CR file. You use the extraMounts parameter to connect each instance to a back end. For example, you can connect each instance to a single Red Hat Ceph Storage cluster or to different Red Hat Ceph Storage clusters.
You can also deploy multiple glanceAPI instances in an availability zone (AZ) to serve different workloads in that AZ.
You can only register one glanceAPI instance as an endpoint for OpenStack CLI operations in the Keystone catalog, but you can change the default endpoint by updating the keystoneEndpoint parameter in your OpenStackControlPlane CR file.
For information about adding and decommissioning glanceAPIs, see Adding an Image service API and Decommissioning an Image service API in Customizing persistent storage.
Procedure
Open your
OpenStackControlPlaneCR file,openstack_control_plane.yaml, and add theglanceAPIsparameter to theglancetemplate to configure multipleglanceAPIinstances. In the following example, you create threeglanceAPIinstances that are namedapi0,api1, andapi2:... spec: glance: template: customServiceConfig: | [DEFAULT] enabled_backends = <backend_name>:rbd [glance_store] default_backend = <backend_name> [<backend_name>] rbd_store_ceph_conf = /etc/ceph/ceph.conf store_description = "RBD backend" rbd_store_pool = images rbd_store_user = openstack databaseInstance: openstack databaseUser: glance keystoneEndpoint: api0 glanceAPIs: api0: replicas: 1 api1: replicas: 1 api2: replicas: 1 ...-
Replace
<backend_name>with the name of the default back end. -
api0is registered in the Keystone catalog and is the default endpoint for OpenStack CLI operations. -
api1andapi2are not default endpoints, but they are active APIs that users can use for image uploads by specifying the--os-image-urlparameter when they upload an image. -
You can update the
keystoneEndpointparameter to change the default endpoint in the Keystone catalog.
-
Replace
Add the
extraMountsparameter to connect the threeglanceAPIinstances to a different back end. In the following example, you connectapi0,api1, andapi2to three different Ceph Storage clusters that are namedceph0,ceph1, andceph2:spec: glance: template: customServiceConfig: | [DEFAULT] ... extraMounts: - name: api0 region: r1 extraVol: - propagation: - api0 volumes: - name: ceph0 secret: secretName: <secret_name> mounts: - name: ceph0 mountPath: "/etc/ceph" readOnly: true - name: api1 region: r1 extraVol: - propagation: - api1 volumes: - name: ceph1 secret: secretName: <secret_name> mounts: - name: ceph1 mountPath: "/etc/ceph" readOnly: true - name: api2 region: r1 extraVol: - propagation: - api2 volumes: - name: ceph2 secret: secretName: <secret_name> mounts: - name: ceph2 mountPath: "/etc/ceph" readOnly: true ...-
Replace
<secret_name>with the name of the secret associated to the Ceph Storage cluster that you are using as the back end for the specificglanceAPI, for example,ceph-conf-files-0for theceph0cluster.
-
Replace
Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstackWait until RHOCP creates the resources related to the
OpenStackControlPlaneCR. Run the following command to check the status:$ oc get openstackcontrolplane -n openstackThe
OpenStackControlPlaneresources are created when the status is "Setup complete".TipAppend the
-woption to the end of thegetcommand to track deployment progress.
6.8. Split and single Image service API layouts
By default, the Image service (glance) has a split deployment type:
- An external API service, which is accessible through the public and administrator endpoints for the Identity service (keystone)
- An internal API service, which is accessible only through the internal endpoint for the Identity service
The split deployment type is invalid for an NFS or file back end because different pods access the same file share. When you configure an NFS or file back end, you must set the type to single in your OpenStackControlPlane CR.
Split layout example: In the following example of a split layout type in an edge deployment, two glanceAPI instances are deployed in an availability zone (AZ) to serve different workloads in that AZ.
...
spec:
glance:
template:
customServiceConfig: |
[DEFAULT]
...
keystoneEndpoint: api0
glanceAPIs:
api0:
customServiceConfig: |
[DEFAULT]
enabled_backends = <backend_name>:rbd
replicas: 1
type: split
api1:
customServiceConfig: |
[DEFAULT]
enabled_backends = <backend_name>:swift
replicas: 1
type: split
...
-
Replace
<backend_name>with the name of the default back end.
Single layout example: In the following example of a single layout type in an NFS back-end configuration, different pods access the same file share:
...
spec:
extraMounts:
...
glance:
template:
glanceAPIs:
default:
type: single
replicas: 3 # Configure back end; set to 3 when deploying service
...
customServiceConfig: |
[DEFAULT]
enabled_backends = <backend_name>:file
[glance_store]
default_backend = <backend_name>
[<backend_name>]
filesystem_store_datadir = /var/lib/glance/images
databaseInstance: openstack
glanceAPIs:
...
-
Set
replicasto3for high availability across APIs. -
Replace
<backend_name>with the name of the default back end.
6.9. Configuring multistore with edge architecture
When you use multiple stores with distributed edge architecture, you can have a Ceph RADOS Block Device (RBD) image pool at every edge site. You can copy images between the central site, which is also known as the hub site, and the edge sites.
The image metadata contains the location of each copy. For example, an image present on two edge sites is exposed as a single UUID with three locations: the central site plus the two edge sites. This means you can have copies of image data that share a single UUID on many stores.
With an RBD image pool at every edge site, you can launch instances quickly by using Ceph RBD copy-on-write (COW) and snapshot layering technology. This means that you can launch instances from volumes and have live migration. For more information about layering with Ceph RBD, see "Ceph block device layering" in the Red Hat Ceph Storage Block Device Guide:
When you launch an instance at an edge site, the required image is copied to the local Image service (glance) store automatically. However, you can copy images in advance from the central Image service store to edge sites to save time during instance launch.
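For example, assuming the glance client and the interoperable image import workflow are available in your environment, the following sketch shows how you might copy an existing image from the central store to an edge store in advance. The store name and image ID are illustrative, and the exact client syntax can vary by client version:

$ oc rsh -n openstack openstackclient
# Copy an image that already exists in the central store to the edge store named dcn1.
$ glance image-import <image_id> --stores dcn1 --import-method copy-image
# Review the image details to confirm which stores now hold a copy.
$ glance image-show <image_id>
$ exit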
Refer to the following requirements to use images with edge sites:
- A copy of each image must exist in the Image service at the central location.
- You must copy images from an edge site to the central location before you can copy them to other edge sites.
- You must use raw images when deploying a Distributed Compute Node (DCN) architecture with Red Hat Ceph Storage.
- RBD must be the storage driver for the Image, Compute, and Block Storage services.
For more information about using images with DCN, see Deploying a Distributed Compute Node (DCN) architecture.
Chapter 7. Configuring the Object Storage service (swift)
Configure the Object Storage service (swift) to use PersistentVolumes (PVs) on OpenShift nodes or disks on external data plane nodes.
OpenShift deployments are limited to one PV per node. However, the Object Storage service requires multiple PVs. To maximize availability and data durability, you create these PVs on different nodes, and only use one PV per node. External data plane nodes offer more flexibility for larger deployments with multiple disks per node.
For information about configuring the Object Storage service as an endpoint for the Red Hat Ceph Storage Object Gateway (RGW), see Configuring an external Ceph Object Gateway back end.
7.1. Prerequisites
-
You have the
occommand line tool installed on your workstation. -
You are logged on to a workstation that has access to the RHOSO control plane as a user with
cluster-adminprivileges.
7.2. Deploying the Object Storage service on OpenShift nodes by using PersistentVolumes
A default Object Storage service (swift) deployment uses at least two swiftProxy replicas and three swiftStorage replicas. You can increase these values to distribute storage across more nodes and disks.
The ringReplicas value defines the number of object copies in the cluster. For example, if you set ringReplicas: 3 and swiftStorage/replicas: 5, every object is stored on 3 different PersistentVolumes (PVs), and there are 5 PVs in total.
Procedure
Open your
OpenStackControlPlaneCR file,openstack_control_plane.yaml, and add the following parameters to theswifttemplate:apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack-control-plane namespace: openstack spec: ... swift: enabled: true template: swiftProxy: replicas: 2 swiftRing: ringReplicas: 3 swiftStorage: replicas: 3 storageClass: <swift-storage> storageRequest: 100Gi ...-
Increase the
swiftProxy/replicas:value to distribute proxy instances across more nodes. -
Replace the
ringReplicas:value to define the number of object copies you want in your cluster. -
Increase the
swiftStorage/replicas:value to define the number of PVs in your cluster. -
Replace
<swift-storage>with the name of the storage class you want the Object Storage service to use.
-
Increase the
Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstackWait until RHOCP creates the resources related to the
OpenStackControlPlaneCR. Run the following command to check the status:$ oc get openstackcontrolplane -n openstackThe
OpenStackControlPlaneresources are created when the status is "Setup complete".TipAppend the
-woption to the end of thegetcommand to track deployment progress.
7.3. Deploying the Object Storage service on external data plane nodes
If you operate large clusters with a lot of storage in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment, you can deploy the Object Storage service (swift) on external data plane nodes. With this configuration, the Object Storage proxy service continues to run on the control plane and the Object Storage services run on the data plane nodes.
If you do not want to use persistent volumes for data storage, set swiftStorage replicas to 0 in the OpenStackControlPlane CR. When initially creating the OpenStackControlPlane CR, you must also set swiftProxy replicas to 0. This is necessary because the proxies for the Object Storage service require properly built rings with at least the configured number of replica devices to start. Once the data plane is deployed, you can then scale the swiftProxy replicas to the number you want.
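For example, the following sketch shows the swift template for the initial control plane deployment in this scenario. After the data plane is deployed and the rings include the data plane disks, you scale swiftProxy back up to the number of replicas that you want:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  swift:
    enabled: true
    template:
      swiftProxy:
        replicas: 0   # Scale up after the data plane is deployed and the rings are built.
      swiftRing:
        ringReplicas: 3
      swiftStorage:
        replicas: 0   # No PersistentVolumes are used for data storage.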
To deploy and run the Object Storage services on data plane nodes, first you enable DNS forwarding to resolve data plane host names in the control plane pods, and then you create an OpenStackDataPlaneNodeSet CR with the following properties:
-
The
swiftservice - A list of disks to be used for Object Storage service storage
Procedure
Enable DNS forwarding to resolve data plane hostnames in the control plane pods.
Obtain the
clusterIPof the resolver:$ oc get svc dnsmasq-dns -o jsonpath=`{.spec.clusterIP}`Update the default DNS entry to add the
clusterIPof the resolver:apiVersion: operator.openshift.io/v1 kind: DNS metadata: name: default spec: servers: - name: swift zones: - storage.example.com forwardPlugin: policy: Random upstreams: - <clusterIP>-
Replace
<clusterIP>with theclusterIPof the resolver.
-
Replace
Enable the
swiftstorage service on the data plane nodes by adding theswiftservice to the end of the list of services for theNodeSetin yourOpenStackDataPlaneNodeSetCR. The service runs the playbooks that are required to configure the Object Storage services:Example:
services: - repo-setup - bootstrap - download-cache - configure-network - validate-network - install-os - configure-os - ssh-known-hosts - run-os - reboot-os - install-certs - swiftDefine disks to be used by the Object Storage service on data plane nodes.
When you define disks, you can do the following:
-
Define the disks in the global
nodeTemplatesection in yourOpenStackDataPlaneNodeSetCR to use the same type of disks for all nodes. -
Define disks on a per-node basis in the
nodessection of yourOpenStackDataPlaneNodeSetCR. - Assign disks to a specific region or zone.
- Enable ring management to distribute replicas.
-
Define the disks in the global
You must specify a weight for each disk. If you do not have custom weights in your existing rings, you can set the weight to the GiB capacity of the disk.
The following example shows the
OpenStackDataPlaneNodeSetCR for a data plane with three storage nodes. Each node is configured to use two disks in thenodeTemplatesection. The first nodeedpm-swift-0is configured to use a third disk in thenodessection:Example:
- apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneNodeSet metadata: name: openstack-edpm-ipam namespace: openstack spec: ... networkAttachments: - ctlplane - storage nodeTemplate: ansible: ansibleVars: edpm_swift_disks: - device: /dev/vdb path: /srv/node/vdb region: 0 weight: 4000 zone: 0 - device: /dev/vdc path: /srv/node/vdc region: 0 weight: 4000 zone: 0 nodes: edpm-swift-0: ansible: ansibleVars: edpm_swift_disks: - device: /dev/vdd path: /srv/node/vdd weight: 1000 hostName: edpm-swift-0 networks: - defaultRoute: true fixedIP: 192.168.122.100 name: ctlplane subnetName: subnet1 - name: internalapi subnetName: subnet1 - name: storage subnetName: subnet1 - name: tenant subnetName: subnet1 edpm-swift-1: hostName: edpm-swift-1 networks: - defaultRoute: true fixedIP: 192.168.122.101 name: ctlplane subnetName: subnet1 - name: internalapi subnetName: subnet1 - name: storage subnetName: subnet1 - name: tenant subnetName: subnet1 edpm-swift-2: hostName: edpm-swift-2 networks: - defaultRoute: true fixedIP: 192.168.122.102 name: ctlplane subnetName: subnet1 - name: internalapi subnetName: subnet1 - name: storage subnetName: subnet1 - name: tenant subnetName: subnet1 ... services: - repo-setup - bootstrap - download-cache - configure-network - validate-network - install-os - configure-os - ssh-known-hosts - run-os - reboot-os - install-certs - swift
7.4. Object Storage rings
The Object Storage service (swift) uses a data structure called the ring to distribute partition space across the cluster. This partition space is core to the data durability engine in the Object Storage service. With rings, the Object Storage service can quickly and easily synchronize each partition across the cluster.
Rings contain information about Object Storage partitions and how partitions are distributed among the different nodes and disks in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment. When any Object Storage component interacts with data, a quick lookup is performed locally in the ring to determine the possible partitions for each object.
The Object Storage service has three rings to store the following types of data:
- Account information
- Containers, to facilitate organizing objects under an account
- Object replicas
7.5. Ring partition power
The ring power determines the partition to which a resource, such as an account, container, or object, is mapped. The partition is included in the path under which the resource is stored in a back-end file system. Therefore, changing the partition power requires relocating resources to new paths in the back-end file systems.
In a heavily populated cluster, a relocation process is time consuming. To avoid downtime, relocate resources while the cluster is still operating. You must do this without temporarily losing access to data or compromising the performance of processes, such as replication and auditing. For assistance with increasing ring partition power, contact Red Hat Support.
When you use separate nodes for the Object Storage service (swift), use a higher partition power value.
The Object Storage service distributes data across disks and nodes using modified hash rings. There are three rings by default: one for accounts, one for containers, and one for objects. Each ring uses a fixed parameter called partition power. This parameter sets the maximum number of partitions that can be created.
7.6. Increasing Object Storage ring partition power values
You can only change the partition power parameter for new containers and their objects, so you must set this value before initial deployment.
The default partition power value is 10. Refer to the following table to select an appropriate partition power if you use three replicas:
| Partition Power | Maximum number of disks |
| 10 | ~ 35 |
| 11 | ~ 75 |
| 12 | ~ 150 |
| 13 | ~ 250 |
| 14 | ~ 500 |
Setting an excessively high partition power value (for example, 14 for only 40 disks) negatively impacts replication times.
Procedure
Open your
OpenStackControlPlaneCR file,openstack_control_plane.yaml, and change the value forpartPowerunder theswiftRingparameter in theswifttemplate:apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack-control-plane namespace: openstack spec: ... swift: enabled: true template: swiftProxy: replicas: 2 swiftRing: partPower: 12 ringReplicas: 3 ...Replace
<12>with the value you want to set for partition power.TipYou can also configure an additional object server ring for new containers. This is useful if you want to add more disks to an Object Storage service deployment that initially uses a low partition power.