Configuring persistent storage
Configuring storage services for Red Hat OpenStack Services on OpenShift
Abstract
Providing feedback on Red Hat documentation
We appreciate your input on our documentation. Tell us how we can make it better.
Providing documentation feedback in Jira
Use the Create Issue form to provide feedback on the documentation for Red Hat OpenStack Services on OpenShift (RHOSO) or earlier releases of Red Hat OpenStack Platform (RHOSP). When you create an issue for RHOSO or RHOSP documents, the issue is recorded in the RHOSO Jira project, where you can track the progress of your feedback.
To complete the Create Issue form, ensure that you are logged in to Jira. If you do not have a Red Hat Jira account, you can create an account at https://issues.redhat.com.
- Click the following link to open a Create Issue page: Create Issue
- Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form.
- Click Create.
Chapter 1. Configuring persistent storage
When you deploy Red Hat OpenStack Services on OpenShift (RHOSO), you can configure your deployment to use Red Hat Ceph Storage as the back end for storage and you can configure RHOSO storage services for block, image, object, and file storage.
Red Hat OpenStack Services on OpenShift (RHOSO) supports integration with Red Hat Ceph Storage 8 with the following known issue:
- RHCEPH-10845 - [BZ#2351825] RGW tempest failures with RHCS 8 and RHOSO 18
Due to this known issue, the Red Hat Ceph Storage Object Gateway (RGW) is not supported for use with Red Hat Ceph Storage 8. For more information about this known issue, consult the provided link before attempting to integrate with Red Hat Ceph Storage 8.
You can integrate an external Red Hat Ceph Storage cluster with the Compute service (nova) and a combination of one or more RHOSO storage services, or you can create a hyperconverged infrastructure (HCI) environment. RHOSO supports Red Hat Ceph Storage 7 and 8. For information about creating a hyperconverged infrastructure (HCI) environment, see Deploying a hyperconverged infrastructure environment.
Red Hat OpenShift Data Foundation (ODF) can be used in external mode to integrate with Red Hat Ceph Storage. The use of ODF in internal mode is not supported. For more information on deploying ODF in external mode, see Deploying OpenShift Data Foundation in external mode.
RHOSO recognizes two types of storage - ephemeral and persistent:
- Ephemeral storage is associated with a specific Compute instance. When that instance is terminated, so is the associated ephemeral storage. This type of storage is useful for runtime requirements, such as storing the operating system of an instance.
- Persistent storage is designed to survive (persist) independent of any running instance. This storage is used for any data that needs to be reused, either by different instances or beyond the life of a specific instance.
RHOSO storage services correspond with the following persistent storage types:
- Block Storage service (cinder): Volumes
- Image service (glance): Images
- Object Storage service (swift): Objects
- Shared File Systems service (manila): Shares
All persistent storage services store data in a storage back end. Red Hat Ceph Storage can serve as a back end for all four services, and the features and functionality of OpenStack services are optimized when you use Red Hat Ceph Storage.
Storage solutions
RHOSO supports the following storage solutions:
- Configure the Block Storage service with a Ceph RBD back end, iSCSI, FC, or NVMe-TCP storage protocols, or a generic NFS back end.
- Configure the Image service with a Ceph RBD, Block Storage, Object Storage, or NFS back end.
- Configure the Object Storage service to use PersistentVolumes (PVs) on OpenShift nodes or disks on external data plane nodes.
- Configure the Shared File Systems service with a native CephFS, Ceph-NFS, or alternative back end, such as NetApp or Pure Storage.
For information about planning the storage solution and related requirements for your RHOSO deployment, for example, networking and security, see Planning storage and shared file systems in Planning your deployment.
To promote the use of best practices, Red Hat has a certification process for OpenStack back ends. For improved supportability and interoperability, ensure that your storage back end is certified for RHOSO. You can check certification status in the Red Hat Ecosystem Catalog. Ceph RBD is certified as a back end in all RHOSO releases.
Red Hat OpenStack Services on OpenShift (RHOSO) supports external deployments of Red Hat Ceph Storage 7 and 8. Configuration examples that reference Red Hat Ceph Storage use Release 7 information. If you are using Red Hat Ceph Storage 8, adjust the configuration examples accordingly.
Chapter 2. Mounting external files to provide configuration data
Some deployment scenarios require access to external data for configuration or authentication purposes. RHOSO provides the extraMounts parameter to allow access to this external information. This parameter mounts the designated external file for use by the RHOSO deployment. Deployment scenarios that require external information of this type include the following:
- A component needs deployment-specific configuration and credential files for the storage back end to exist in a specific location in the file system. For example, the Red Hat Ceph Storage cluster configuration and keyring files are required by the Block Storage service (cinder), the Image service (glance), and the Compute service (nova). These configuration and keyring files must be distributed to these services by using the extraMounts parameter.
- A node requires access to an external NFS share to use as a temporary image storage location when the allocated node disk space is fully consumed. You use the extraMounts parameter to configure this access. For example, the Block Storage service can use an external share to perform image conversion.
- A storage back-end driver must run on a persistent file system to preserve stored data between reboots. You must use the extraMounts parameter to configure this runtime location.
The extraMounts parameter can be defined at the following levels:
- Service - A Red Hat OpenStack Services on OpenShift (RHOSO) service, such as Glance, Cinder, or Manila.
- Component - A component of a service, such as GlanceAPI, CinderAPI, CinderScheduler, ManilaShare, or CinderBackup.
- Instance - An individual instance of a particular component. For example, your deployment could have two instances of the ManilaShare component called share1 and share2. An Instance level propagation represents the Pod associated with an instance that is part of the same Component type.
The propagation field describes how the definition is applied. If the propagation field is not used, definitions propagate to every level below the level at which they are defined:
- Service level definitions propagate to Component and Instance levels.
- Component level definitions propagate to the Instance level.
The following is the general structure of an extraMounts definition:
extraMounts:
  - name: <extramount-name>
    region: <openstack-region>
    extraVol:
      - propagation:
          - <location>
        extraVolType: <Ceph | Nfs | Undefined>
        volumes:
          - <pod-volume-structure>
        mounts:
          - <pod-mount-structure>
1. The name field is a string that names the extraMounts definition. This is for organizational purposes and cannot be referenced from other parts of the manifest. This is an optional attribute.
2. The region field is a string that defines the RHOSO region of the extraMounts definition. This is an optional attribute.
3. The propagation field describes how the definition is applied. If the propagation field is not used, definitions propagate to every level below the level at which they are defined. This is an optional attribute.
4. The extraVolType field is a string that assists the administrator in categorizing or labeling the group of mounts that belong to the extraVol entry of the list. There are no defined values for this parameter, but the values Ceph, Nfs, and Undefined are common. This is an optional attribute.
5. The volumes field is a list that defines Red Hat OpenShift volume sources. This field has the same structure as the volumes section in a Pod. The structure depends on the type of volume being defined. The name defined in this section is used as a reference in the mounts section.
6. The mounts field is a list of mount points that represent the path where the volumeSource should be mounted in the Pod. The name of a volume from the volumes section is used as a reference, as is the path where it should be mounted. This attribute has the same structure as the volumeMounts attribute for a Pod.
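For instance, the NFS image-conversion scenario described earlier could be expressed with an extraMounts definition similar to the following sketch. The server address, export path, and mount path are hypothetical values, not defaults from this documentation; substitute values from your own environment:

extraMounts:
  - name: nfs-conversion
    region: r1
    extraVol:
      - propagation:
          - CinderVolume
        extraVolType: Nfs
        volumes:
          - name: cinder-conversion
            nfs:
              # Hypothetical NFS server and export used as scratch space
              server: 192.168.122.50
              path: /exports/cinder-conversion
        mounts:
          - name: cinder-conversion
            # Hypothetical mount path inside the cinder-volume Pod
            mountPath: /var/lib/cinder/conversion
            readOnly: false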
2.1. Mounting external files using the extraMounts attribute
Procedure
- Open your OpenStackControlPlane custom resource (CR) file, openstack_control_plane.yaml, on your workstation.
- Add the extraMounts attribute to the OpenStackControlPlane CR service definition. The following example demonstrates adding the extraMounts attribute:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  extraMounts:
    - name: v1
      region: r1
      extraVol:
        extraVolType: Ceph
- Add the propagation field to specify where in the service definition the extraMounts attribute applies. The following example adds the propagation field to the previous example:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  glance:
    ...
  extraMounts:
    - name: v1
      region: r1
      extraVol:
        - propagation:
            - Glance
          extraVolType: Ceph
The propagation field can have one of the following values:
Service level propagations:
- Glance
- Cinder
- Manila
- Horizon
- Neutron
Component level propagations:
- CinderAPI
- CinderScheduler
- CinderVolume
- CinderBackup
- GlanceAPI
- ManilaAPI
- ManilaScheduler
- ManilaShare
- NeutronAPI
Back-end propagation:
- Any back end in the CinderVolume, ManilaShare, or GlanceAPI maps.
- Define the volume sources. The following example demonstrates adding the volumes field to the previous example to provide a Red Hat Ceph Storage secret to the Image service (glance):

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  extraMounts:
    - name: v1
      region: r1
      extraVol:
        extraVolType: Ceph
        volumes:
          - name: ceph
            secret:
              secretName: ceph-conf-files

The volumes field contains the Red Hat Ceph Storage secret name.
- Define where the different volumes are mounted within the Pod. The following example demonstrates adding the mounts field to the previous example to provide the location and name of the file that contains the Red Hat Ceph Storage secret:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  extraMounts:
    - name: v1
      region: r1
      extraVol:
        extraVolType: Ceph
        volumes:
          - name: ceph
            secret:
              secretName: ceph-conf-files
        mounts:
          - name: ceph
            mountPath: "/etc/ceph"
            readOnly: true

The mounts field contains the location of the secret file.
- Update the control plane:

$ oc apply -f openstack_control_plane.yaml -n openstack

- Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

$ oc get openstackcontrolplane -n openstack

Tip: Append the -w option to the end of the oc get command to track deployment progress.
The OpenStackControlPlane resources are created when the status is "Setup complete".
- Confirm that the control plane is deployed by reviewing the pods in the openstack namespace:

$ oc get pods -n openstack

The control plane is deployed when all the pods are either completed or running.
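If you prefer to block until the control plane reports readiness instead of polling, you can use oc wait. This is a sketch that assumes your OpenStackControlPlane CR is named openstack and exposes a Ready condition; adjust the name and timeout for your environment:

$ oc wait openstackcontrolplane/openstack --for=condition=Ready --timeout=30m -n openstack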
2.2. Mounting external files configuration examples
The following configuration examples demonstrate how the extraMounts attribute is used to mount external files. The extraMounts attribute is defined at either the top-level custom resource (spec) or the service definition.
Dashboard service (horizon)
This configuration example demonstrates using an external file to provide configuration to the Dashboard service.
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  horizon:
    enabled: true
    template:
      customServiceConfig: '# add your customization here'
      extraMounts:
        - extraVol:
            - extraVolType: HorizonSettings
              mounts:
                - mountPath: /etc/openstack-dashboard/local_settings.d/_66_help_link.py
                  name: horizon-config
                  readOnly: true
                  subPath: _66_help_link.py
              volumes:
                - name: horizon-config
                  configMap:
                    name: horizon-config
Red Hat Ceph Storage
This configuration example defines the services that require access to the Red Hat Ceph Storage secret.
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  extraMounts:
    - name: v1
      region: r1
      extraVol:
        - propagation:
            - CinderVolume
            - CinderBackup
            - GlanceAPI
            - ManilaShare
          extraVolType: Ceph
          volumes:
            - name: ceph
              secret:
                secretName: ceph-conf-files
          mounts:
            - name: ceph
              mountPath: "/etc/ceph"
              readOnly: true
Shared File Systems service (manila)
This configuration example provides external configuration files to the Shared File Systems service so that it can connect to a Red Hat Ceph Storage back end.
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  manila:
    template:
      manilaShares:
        share1:
          ...
  extraMounts:
    - name: v1
      region: r1
      extraVol:
        - propagation:
            - share1
          extraVolType: Ceph
          volumes:
            - name: ceph
              secret:
                secretName: ceph-conf-files
          mounts:
            - name: ceph
              mountPath: "/etc/ceph"
              readOnly: true
Image service (glance)
This configuration example connects three glanceAPI instances, api0, api1, and api2, each to a different Red Hat Ceph Storage back end. The instances are connected to three different Red Hat Ceph Storage clusters named ceph0, ceph1, and ceph2.
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  extraMounts:
    - name: api0
      region: r1
      extraVol:
        - propagation:
            - api0
          volumes:
            - name: ceph0
              secret:
                secretName: <secret_name>
          mounts:
            - name: ceph0
              mountPath: "/etc/ceph"
              readOnly: true
    - name: api1
      region: r1
      extraVol:
        - propagation:
            - api1
          volumes:
            - name: ceph1
              secret:
                secretName: <secret_name>
          mounts:
            - name: ceph1
              mountPath: "/etc/ceph"
              readOnly: true
    - name: api2
      region: r1
      extraVol:
        - propagation:
            - api2
          volumes:
            - name: ceph2
              secret:
                secretName: <secret_name>
          mounts:
            - name: ceph2
              mountPath: "/etc/ceph"
              readOnly: true
Chapter 3. Integrating Red Hat Ceph Storage
You can configure Red Hat OpenStack Services on OpenShift (RHOSO) to integrate with an external Red Hat Ceph Storage cluster. This configuration connects the following services to a Red Hat Ceph Storage cluster:
- Block Storage service (cinder)
- Image service (glance)
- Object Storage service (swift)
- Compute service (nova)
- Shared File Systems service (manila)
To configure Red Hat Ceph Storage as the back end for RHOSO storage, complete the following tasks:
- Verify that Red Hat Ceph Storage is deployed and all the required services are running.
- Create the Red Hat Ceph Storage pools on the Red Hat Ceph Storage cluster.
- Create a Red Hat Ceph Storage secret on the Red Hat Ceph Storage cluster to provide RHOSO services access to the Red Hat Ceph Storage cluster.
- Obtain the Ceph File System Identifier.
- Configure the OpenStackControlPlane CR to use the Red Hat Ceph Storage cluster as the back end.
- Configure the OpenStackDataPlane CR to use the Red Hat Ceph Storage cluster as the back end.
Prerequisites
- Access to a Red Hat Ceph Storage cluster.
- The RHOSO control plane is installed on an operational RHOSO cluster.
3.1. Creating Red Hat Ceph Storage pools
Create pools on the Red Hat Ceph Storage cluster server for each RHOSO service that uses the cluster.
Run the commands in this procedure from the Ceph node.
Procedure
- Enter the cephadm container client:

$ sudo cephadm shell

- Create pools for the Compute service (vms), the Block Storage service (volumes), and the Image service (images):

$ for P in vms volumes images; do
    ceph osd pool create $P;
    ceph osd pool application enable $P rbd;
  done

Note: When you create the pools, set the appropriate placement group (PG) number, as described in Placement Groups in the Red Hat Ceph Storage Storage Strategies Guide.
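If you prefer to size the placement groups explicitly rather than rely on the autoscaler, you can pass a PG count when you create each pool. This is a sketch; the count of 128 is illustrative only, so derive real values from the guidance in the Storage Strategies Guide:

$ ceph osd pool create volumes 128
$ ceph osd pool application enable volumes rbd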
- Optional: Create the cephfs volume if the Shared File Systems service (manila) is enabled in the control plane. This automatically enables the CephFS Metadata Server (MDS) and creates the necessary data and metadata pools on the Ceph cluster:

$ ceph fs volume create cephfs

- Optional: Deploy an NFS service on the Red Hat Ceph Storage cluster to use CephFS with NFS:

$ ceph nfs cluster create cephfs \
    --ingress --virtual-ip=<vip> \
    --ingress-mode=haproxy-protocol

Replace <vip> with the IP address assigned to the NFS service. The NFS service should be isolated on a network that can be shared with all Red Hat OpenStack users. For more information about customizing the NFS service, see NFS cluster and export management.
Important: When you deploy an NFS service for the Shared File Systems service, do not select a custom port to expose NFS. Only the default NFS port of 2049 is supported. You must enable the Red Hat Ceph Storage ingress service and set the ingress-mode to haproxy-protocol. Otherwise, you cannot use IP-based access rules with the Shared File Systems service. For security in production environments, do not provide access to 0.0.0.0/0 on shares to mount them on client machines.
- Create a cephx key for RHOSO to use to access pools:

$ ceph auth add client.openstack \
    mgr 'allow *' \
    mon 'profile rbd' \
    osd 'profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images'

Important: If the Shared File Systems service is enabled in the control plane, replace the osd caps with the following:

$ ceph auth add client.openstack \
    mgr 'allow *' \
    mon 'profile rbd' \
    osd 'profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.data'

- Export the cephx key:

$ ceph auth get client.openstack > /etc/ceph/ceph.client.openstack.keyring

- Export the configuration file:

$ ceph config generate-minimal-conf > /etc/ceph/ceph.conf
3.2. Creating a Red Hat Ceph Storage secret
Create a secret so that services can access the Red Hat Ceph Storage cluster.
Procedure
- Transfer the cephx key and configuration file created in the Creating Red Hat Ceph Storage pools procedure to a host that can create resources in the openstack namespace.
- Base64 encode these files and store them in the KEY and CONF environment variables:

$ KEY=$(cat /etc/ceph/ceph.client.openstack.keyring | base64 -w 0)
$ CONF=$(cat /etc/ceph/ceph.conf | base64 -w 0)

- Create a YAML file to create the Secret resource.
- Using the environment variables, add the Secret configuration to the YAML file:

apiVersion: v1
data:
  ceph.client.openstack.keyring: $KEY
  ceph.conf: $CONF
kind: Secret
metadata:
  name: ceph-conf-files
  namespace: openstack
type: Opaque

- Save the YAML file.
- Create the Secret resource:

$ oc create -f <secret_configuration_file>

Replace <secret_configuration_file> with the name of the YAML file you created.
The examples in this section use openstack as the name of the Red Hat Ceph Storage user. The file name in the Secret resource must match this user name. For example, if the file name used for the user name openstack2 is /etc/ceph/ceph.client.openstack2.keyring, then the secret data line should be ceph.client.openstack2.keyring: $KEY.
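As an alternative to writing the Secret manifest by hand, you can create an equivalent secret directly from the transferred files. This is a sketch that assumes the keyring and configuration files are in the current directory and that you keep the secret name ceph-conf-files used in the example above:

$ oc create secret generic ceph-conf-files \
    --from-file=ceph.client.openstack.keyring=./ceph.client.openstack.keyring \
    --from-file=ceph.conf=./ceph.conf \
    -n openstack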
3.3. Obtaining the Red Hat Ceph Storage File System Identifier
The Red Hat Ceph Storage File System Identifier (FSID) is a unique identifier for the cluster. The FSID is used in configuration and verification of cluster interoperability with RHOSO.
Procedure
Extract the FSID from the Red Hat Ceph Storage secret:
$ FSID=$(oc get secret ceph-conf-files -o json | jq -r '.data."ceph.conf"' | base64 -d | grep fsid | sed -e 's/fsid = //')
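As a quick sanity check, confirm that the variable is populated with a UUID before you use it in later configuration steps:

$ echo "$FSID"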
3.4. Configuring the control plane to use the Red Hat Ceph Storage cluster
You must configure the OpenStackControlPlane CR to use the Red Hat Ceph Storage cluster. Configuration includes the following tasks:
- Confirming the Red Hat Ceph Storage cluster and the associated services have the correct network configuration.
- Configuring the control plane to use the Red Hat Ceph Storage secret.
- Configuring the Image service (glance) to use the Red Hat Ceph Storage cluster.
- Configuring the Block Storage service (cinder) to use the Red Hat Ceph Storage cluster.
- Optional: Configuring the Shared File Systems service (manila) to use native CephFS or CephFS-NFS with the Red Hat Ceph Storage cluster.
This example does not include configuring the Block Storage backup service (cinder-backup) with Red Hat Ceph Storage.
Procedure
- Check the storage interface defined in your NodeNetworkConfigurationPolicy (nncp) custom resource to confirm that it has the same network configuration as the public_network of the Red Hat Ceph Storage cluster. This is required to enable access to the Red Hat Ceph Storage cluster through the Storage network. The Storage network should have the same network configuration as the public_network of the Red Hat Ceph Storage cluster. It is not necessary for RHOSO to access the cluster_network of the Red Hat Ceph Storage cluster.
Note: If it does not impact workload performance, the Storage network can differ from the external Red Hat Ceph Storage cluster public_network by using routed (L3) connectivity, as long as the appropriate routes are added to the Storage network to reach the external Red Hat Ceph Storage cluster public_network.
- Check the networkAttachments for the default Image service instance in the OpenStackControlPlane CR to confirm that the default Image service is configured to access the Storage network:

glance:
  enabled: true
  template:
    databaseInstance: openstack
    storage:
      storageRequest: 10G
    glanceAPIs:
      default:
        replicas: 3
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.80
              spec:
                type: LoadBalancer
        networkAttachments:
          - storage
- Confirm the Block Storage service is configured to access the Storage network through MetalLB.
- Optional: Confirm the Shared File Systems service is configured to access the Storage network through ManilaShare.
- Confirm the Compute service (nova) is configured to access the Storage network.
- Confirm the Red Hat Ceph Storage configuration file, /etc/ceph/ceph.conf, contains the IP addresses of the Red Hat Ceph Storage cluster monitors. These IP addresses must be within the Storage network IP address range.
- Open your openstack_control_plane.yaml file to edit the OpenStackControlPlane CR.
- Add the extraMounts parameter to define the services that require access to the Red Hat Ceph Storage secret. The following is an example of using the extraMounts parameter for this purpose. Only include ManilaShare in the propagation list if you are using the Shared File Systems service (manila):

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  extraMounts:
    - name: v1
      region: r1
      extraVol:
        - propagation:
            - CinderVolume
            - GlanceAPI
            - ManilaShare
          extraVolType: Ceph
          volumes:
            - name: ceph
              projected:
                sources:
                  - secret:
                      name: <ceph-conf-files>
          mounts:
            - name: ceph
              mountPath: "/etc/ceph"
              readOnly: true

Replace <ceph-conf-files> with the name of your Secret CR created in Creating a Red Hat Ceph Storage secret.
- Add the customServiceConfig parameter to the glance template to configure the Image service to use the Red Hat Ceph Storage cluster:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  glance:
    template:
      customServiceConfig: |
        [DEFAULT]
        enabled_backends = default_backend:rbd
        [glance_store]
        default_backend = default_backend
        [default_backend]
        rbd_store_ceph_conf = /etc/ceph/ceph.conf
        store_description = "RBD backend"
        rbd_store_pool = images
        rbd_store_user = openstack
      databaseInstance: openstack
      databaseAccount: glance
      secret: osp-secret
      storage:
        storageRequest: 10G
  extraMounts:
    - name: v1
      region: r1
      extraVol:
        - propagation:
            - GlanceAPI
          extraVolType: Ceph
          volumes:
            - name: ceph
              secret:
                secretName: ceph-conf-files
          mounts:
            - name: ceph
              mountPath: "/etc/ceph"
              readOnly: true

When you use Red Hat Ceph Storage as a back end for the Image service, image-conversion is enabled by default. For more information, see Planning storage and shared file systems in Planning your deployment.
- Add the customServiceConfig parameter to the cinder template to configure the Block Storage service to use the Red Hat Ceph Storage cluster. For information about using Block Storage backups, see Configuring the Block Storage backup service.

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  extraMounts:
    ...
  cinder:
    template:
      cinderVolumes:
        ceph:
          customServiceConfig: |
            [DEFAULT]
            enabled_backends=ceph
            [ceph]
            volume_backend_name=ceph
            volume_driver=cinder.volume.drivers.rbd.RBDDriver
            rbd_ceph_conf=/etc/ceph/ceph.conf
            rbd_user=openstack
            rbd_pool=volumes
            rbd_flatten_volume_from_snapshot=False
            rbd_secret_uuid=$FSID

Replace $FSID with the actual FSID. The FSID itself does not need to be considered secret. For more information, see Obtaining the Red Hat Ceph Storage File System Identifier.
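If you exported the FSID into the FSID environment variable in Obtaining the Red Hat Ceph Storage File System Identifier, one way to substitute it into the CR before applying it is with sed. This is a sketch; it assumes the literal placeholder $FSID appears in your openstack_control_plane.yaml exactly as shown above:

$ sed -i "s/\$FSID/${FSID}/g" openstack_control_plane.yaml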
- Optional: Add the customServiceConfig parameter to the manila template to configure the Shared File Systems service to use native CephFS or CephFS-NFS with the Red Hat Ceph Storage cluster. For more information, see Configuring the Shared File Systems service (manila).
The following example exposes native CephFS:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  extraMounts:
    ...
  manila:
    template:
      manilaAPI:
        customServiceConfig: |
          [DEFAULT]
          enabled_share_protocols=cephfs
      manilaShares:
        share1:
          customServiceConfig: |
            [DEFAULT]
            enabled_share_backends=cephfs
            [cephfs]
            driver_handles_share_servers=False
            share_backend_name=cephfs
            share_driver=manila.share.drivers.cephfs.driver.CephFSDriver
            cephfs_conf_path=/etc/ceph/ceph.conf
            cephfs_auth_id=openstack
            cephfs_cluster_name=ceph
            cephfs_volume_mode=0755
            cephfs_protocol_helper_type=CEPHFS

The following example exposes CephFS with NFS:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
  extraMounts:
    ...
  manila:
    template:
      manilaAPI:
        customServiceConfig: |
          [DEFAULT]
          enabled_share_protocols=nfs
      manilaShares:
        share1:
          customServiceConfig: |
            [DEFAULT]
            enabled_share_backends=cephfsnfs
            [cephfsnfs]
            driver_handles_share_servers=False
            share_backend_name=cephfsnfs
            share_driver=manila.share.drivers.cephfs.driver.CephFSDriver
            cephfs_conf_path=/etc/ceph/ceph.conf
            cephfs_auth_id=openstack
            cephfs_cluster_name=ceph
            cephfs_volume_mode=0755
            cephfs_protocol_helper_type=NFS
            cephfs_nfs_cluster_id=cephfs
- Apply the updates to the OpenStackControlPlane CR:

$ oc apply -f openstack_control_plane.yaml
3.5. Configuring the data plane to use the Red Hat Ceph Storage cluster
Configure the data plane to use the Red Hat Ceph Storage cluster.
Procedure
- Create a ConfigMap with additional content for the Compute service (nova) configuration directory /etc/nova/nova.conf.d/ inside the nova_compute container. This additional content directs the Compute service to use Red Hat Ceph Storage RBD.

apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-nova
data:
  03-ceph-nova.conf: |
    [libvirt]
    images_type=rbd
    images_rbd_pool=vms
    images_rbd_ceph_conf=/etc/ceph/ceph.conf
    images_rbd_glance_store_name=default_backend
    images_rbd_glance_copy_poll_interval=15
    images_rbd_glance_copy_timeout=600
    rbd_user=openstack
    rbd_secret_uuid=$FSID

The file name must follow the naming convention of ##-<name>-nova.conf. Files are evaluated by the Compute service alphabetically. A file name that starts with 01 is evaluated by the Compute service before a file name that starts with 02. When the same configuration option occurs in multiple files, the last one read wins.
The $FSID value should contain the actual FSID, as described in Obtaining the Red Hat Ceph Storage File System Identifier. The FSID itself does not need to be considered secret.
- Create a custom version of the default nova service to use the new ConfigMap, which in this case is called ceph-nova:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneService
metadata:
  name: nova-custom-ceph
spec:
  caCerts: combined-ca-bundle
  edpmServiceType: nova
  dataSources:
    - configMapRef:
        name: ceph-nova
    - secretRef:
        name: nova-cell1-compute-config
    - secretRef:
        name: nova-migration-ssh-key
  playbook: osp.edpm.nova

The custom service is named nova-custom-ceph. It cannot be named nova because nova is an unchangeable default service. Any custom service that has the same name as a default service name will be overwritten during reconciliation.
- Apply the ConfigMap and custom service changes:

$ oc create -f ceph-nova.yaml
- Update the OpenStackDataPlaneNodeSet CR to add the extraMounts parameter, which defines access to the Red Hat Ceph Storage secret, and to modify the services list. In the services list, replace the nova service with the new custom service (in this case called nova-custom-ceph).
Note: The following OpenStackDataPlaneNodeSet CR representation is an example and might not list all of the services in your environment. For a default list of services in your environment, use the following command:

$ oc get -n openstack crd/openstackdataplanenodesets.dataplane.openstack.org -o yaml | yq -r '.spec.versions.[].schema.openAPIV3Schema.properties.spec.properties.services.default'

For more information, see Data plane services.
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
spec:
  ...
  roles:
    edpm-compute:
      ...
  services:
    - configure-network
    - validate-network
    - install-os
    - configure-os
    - run-os
    - ceph-client
    - ovn
    - libvirt
    - nova-custom-ceph
    - telemetry
  nodeTemplate:
    extraMounts:
      - extraVolType: Ceph
        volumes:
          - name: ceph
            secret:
              secretName: ceph-conf-files
        mounts:
          - name: ceph
            mountPath: "/etc/ceph"
            readOnly: true

Note: You must add the ceph-client service before you add the ovn, libvirt, and nova-custom-ceph services. The ceph-client service configures data plane nodes as clients of a Red Hat Ceph Storage server by distributing the Red Hat Ceph Storage client files.
- Save the changes to the services list.
- Create an OpenStackDataPlaneDeployment CR:

$ oc create -f <dataplanedeployment_cr_file>

Replace <dataplanedeployment_cr_file> with the name of your file.
Result
The nova-custom-ceph service Ansible job copies overrides from the ConfigMaps to the Compute service hosts. The Ansible job also uses virsh secret-* commands so that the libvirt service retrieves the cephx secret by FSID.
Run the following command on a data plane node after the job completes to confirm the job results:

$ podman exec libvirt_virtsecretd virsh secret-get-value $FSID
3.6. Configuring an external Ceph Object Gateway back end
You can configure an external Ceph Object Gateway (RGW) to act as an Object Storage service (swift) back end by completing the following high-level tasks:
- Configure the RGW to verify users and their roles in the Identity service (keystone) to authenticate with the external RGW service.
- Deploy and configure an RGW service to handle object storage requests.
You use the openstack client tool to configure the Object Storage service.
3.6.1. Configuring RGW authentication
You must configure RGW to verify users and their roles in the Identity service (keystone) to authenticate with the external RGW service.
Prerequisites
- You have deployed an operational OpenStack control plane.
Procedure
- Create the Object Storage service on the control plane:

$ openstack service create --name swift --description "OpenStack Object Storage" object-store

- Create a user called swift:

$ openstack user create --project service --password <swift_password> swift

Replace <swift_password> with the password to assign to the swift user.
- Create roles for the swift user:

$ openstack role create swiftoperator
$ openstack role create ResellerAdmin

- Add the swift user to system roles:

$ openstack role add --user swift --project service member
$ openstack role add --user swift --project service admin
- Export the RGW endpoint IP addresses to variables and create control plane endpoints:

$ export RGW_ENDPOINT_STORAGE=<rgw_endpoint_ip_address_storage>
$ export RGW_ENDPOINT_EXTERNAL=<rgw_endpoint_ip_address_external>
$ openstack endpoint create --region regionOne object-store public http://$RGW_ENDPOINT_EXTERNAL:8080/swift/v1/AUTH_%\(tenant_id\)s;
$ openstack endpoint create --region regionOne object-store internal http://$RGW_ENDPOINT_STORAGE:8080/swift/v1/AUTH_%\(tenant_id\)s;

Replace <rgw_endpoint_ip_address_storage> with the IP address of the RGW endpoint on the storage network. This is how internal services access RGW.
Replace <rgw_endpoint_ip_address_external> with the IP address of the RGW endpoint on the external network. This is how cloud users write objects to RGW.
Note: Both endpoint IP addresses are the Virtual IP addresses, owned by haproxy and keepalived, that are used to reach the RGW back ends deployed in the Red Hat Ceph Storage cluster in the procedure Configuring and deploying the RGW service.
- Add the swiftoperator role to the control plane admin group:

$ openstack role add --project admin --user admin swiftoperator
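To confirm the authentication configuration before moving on, you can list the object-store endpoints and the role assignments for the swift user. This is an optional verification sketch using standard openstack client commands:

$ openstack endpoint list --service object-store
$ openstack role assignment list --user swift --project service --names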
3.6.2. Configuring and deploying the RGW service
Configure and deploy an RGW service to handle object storage requests.
Procedure
- Log in to a Red Hat Ceph Storage Controller node.
- Create a file called /tmp/rgw_spec.yaml and add the RGW deployment parameters:

service_type: rgw
service_id: rgw
service_name: rgw.rgw
placement:
  hosts:
    - <host_1>
    - <host_2>
    ...
    - <host_n>
networks:
  - <storage_network>
spec:
  rgw_frontend_port: 8082
  rgw_realm: default
  rgw_zone: default
---
service_type: ingress
service_id: rgw.default
service_name: ingress.rgw.default
placement:
  count: 1
spec:
  backend_service: rgw.rgw
  frontend_port: 8080
  monitor_port: 8999
  virtual_ips_list:
    - <storage_network_vip>
    - <external_network_vip>
  virtual_interface_networks:
    - <storage_network>

Replace <host_1>, <host_2>, ..., <host_n> with the names of the Ceph nodes where the RGW instances are deployed.
Replace <storage_network> with the network range used to resolve the interfaces where radosgw processes are bound.
Replace <storage_network_vip> with the virtual IP (VIP) used as the haproxy front end. This is the same address configured as the internal Object Storage service endpoint ($RGW_ENDPOINT_STORAGE) in the Configuring RGW authentication procedure.
Optional: Replace <external_network_vip> with an additional VIP on an external network to use as the haproxy front end. This address is used to connect to RGW from an external network.
- Save the file.
- Enter the cephadm shell and mount the rgw_spec.yaml file:

$ cephadm shell -m /tmp/rgw_spec.yaml

- Add RGW-related configuration to the cluster:

$ ceph config set global rgw_keystone_url "https://<keystone_endpoint>"
$ ceph config set global rgw_keystone_verify_ssl false
$ ceph config set global rgw_keystone_api_version 3
$ ceph config set global rgw_keystone_accepted_roles "member, Member, admin"
$ ceph config set global rgw_keystone_accepted_admin_roles "ResellerAdmin, swiftoperator"
$ ceph config set global rgw_keystone_admin_domain default
$ ceph config set global rgw_keystone_admin_project service
$ ceph config set global rgw_keystone_admin_user swift
$ ceph config set global rgw_keystone_admin_password "$SWIFT_PASSWORD"
$ ceph config set global rgw_keystone_implicit_tenants true
$ ceph config set global rgw_s3_auth_use_keystone true
$ ceph config set global rgw_swift_versioning_enabled true
$ ceph config set global rgw_swift_enforce_content_length true
$ ceph config set global rgw_swift_account_in_url true
$ ceph config set global rgw_trust_forwarded_https true
$ ceph config set global rgw_max_attr_name_len 128
$ ceph config set global rgw_max_attrs_num_in_req 90
$ ceph config set global rgw_max_attr_size 1024

Replace <keystone_endpoint> with the Identity service internal endpoint. The data plane nodes can resolve the internal endpoint but not the public one. Do not omit the URI scheme from the URL; it must be either http:// or https://.
Replace $SWIFT_PASSWORD with the password assigned to the swift user in the previous step.
- Deploy the RGW configuration using the Orchestrator:

$ ceph orch apply -i /mnt/rgw_spec.yaml
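After the Orchestrator applies the specification, you can watch the RGW and ingress daemons come up. This is an optional verification sketch that uses standard Ceph Orchestrator commands from within the cephadm shell:

$ ceph orch ls | grep -E 'rgw|ingress'
$ ceph orch ps --daemon-type rgw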
3.7. Configuring RGW with TLS for an external Red Hat Ceph Storage cluster
Configure RGW with TLS so the control plane services can resolve the external Red Hat Ceph Storage cluster host names.
This procedure configures Ceph RGW to emulate the Object Storage service (swift). It creates a DNS zone and certificate so that a URL such as https://rgw-external.ceph.local:8080
is registered as an Identity service (keystone) endpoint. This enables Red Hat OpenStack Services on OpenShift (RHOSO) clients to resolve the host and trust the certificate.
Because a RHOSO pod needs to securely access an HTTPS endpoint hosted outside of Red Hat OpenShift Container Platform (RHOCP), this process is used to create a DNS domain and certificate for that endpoint.
During this procedure, a DNSData domain is created (ceph.local in the examples) so that pods can map host names to IP addresses for services that are not hosted on RHOCP. DNS forwarding is then configured for the domain with the CoreDNS service. Lastly, a certificate is created using the RHOSO public root certificate authority.
You must copy the certificate and key file created in RHOCP to the nodes hosting RGW so they can become part of the Ceph Orchestrator RGW specification.
Procedure
- Create a DNSData custom resource (CR) for the external Ceph cluster.
Note: Creating a DNSData CR creates a new dnsmasq pod that is able to read and resolve the DNS information in the associated DNSData CR.
The following is an example of a DNSData CR:

apiVersion: network.openstack.org/v1beta1
kind: DNSData
metadata:
  labels:
    component: ceph-storage
    service: ceph
  name: ceph-storage
  namespace: openstack
spec:
  dnsDataLabelSelectorValue: dnsdata
  hosts:
    - hostnames:
        - ceph-rgw-internal-vip.ceph.local
      ip: 172.18.0.2
    - hostnames:
        - ceph-rgw-external-vip.ceph.local
      ip: 10.10.10.2

Note: In this example, it is assumed that the host at the IP address 172.18.0.2 hosts the Ceph RGW endpoint for access on the private storage network. This host is passed in the CR so that DNS A and PTR records are created, which enables the host to be accessed by using the host name ceph-rgw-internal-vip.ceph.local. It is also assumed that the host at the IP address 10.10.10.2 hosts the Ceph RGW endpoint for access on the external network. This host is passed in the CR so that DNS A and PTR records are created, which enables the host to be accessed by using the host name ceph-rgw-external-vip.ceph.local.
The list of hosts in this example is not a definitive list of required hosts. It is provided for demonstration purposes. Substitute the appropriate hosts for your environment.
- Apply the CR to your environment:

$ oc apply -f <ceph_dns_yaml>

Replace <ceph_dns_yaml> with the name of the DNSData CR file.
- Update the CoreDNS CR with a forwarder to the dnsmasq for requests to the ceph.local domain. For more information about DNS forwarding, see Using DNS forwarding in the RHOCP Networking guide.
- List the openstack domain DNS cluster IP:

$ oc get svc dnsmasq-dns

The following is an example output for this command:

$ oc get svc dnsmasq-dns
dnsmasq-dns   LoadBalancer   10.217.5.130   192.168.122.80   53:30185/UDP   160m

- Record the forwarding information from the command output.
- List the CoreDNS CR:

$ oc -n openshift-dns describe dns.operator/default

- Edit the CoreDNS CR and update it with the forwarding information. The following is an example of a CoreDNS CR updated with forwarding information:

apiVersion: operator.openshift.io/v1
kind: DNS
metadata:
  creationTimestamp: "2024-03-25T02:49:24Z"
  finalizers:
    - dns.operator.openshift.io/dns-controller
  generation: 3
  name: default
  resourceVersion: "164142"
  uid: 860b0e61-a48a-470e-8684-3b23118e6083
spec:
  cache:
    negativeTTL: 0s
    positiveTTL: 0s
  logLevel: Normal
  nodePlacement: {}
  operatorLogLevel: Normal
  servers:
    - forwardPlugin:
        policy: Random
        upstreams:
          - 10.217.5.130:53
      name: ceph
      zones:
        - ceph.local
  upstreamResolvers:
    policy: Sequential
    upstreams:
      - port: 53
        type: SystemResolvConf

The following is what has been added to the CR:

....
servers:
  - forwardPlugin:
      policy: Random
      upstreams:
        - 10.217.5.130:53
    name: ceph
    zones:
      - ceph.local
....

The upstream address is the forwarding information recorded from the oc get svc dnsmasq-dns command.
- Create a Certificate CR with the host names from the DNSData CR. The following is an example of a Certificate CR:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: cert-ceph-rgw
  namespace: openstack
spec:
  duration: 43800h0m0s
  issuerRef: {'group': 'cert-manager.io', 'kind': 'Issuer', 'name': 'rootca-public'}
  secretName: cert-ceph-rgw
  dnsNames:
    - ceph-rgw-internal-vip.ceph.local
    - ceph-rgw-external-vip.ceph.local

Note: The certificate issuerRef is set to the root certificate authority (CA) of RHOSO. This CA is automatically created when the control plane is deployed. The default name of the CA is rootca-public. The RHOSO pods trust this new certificate because the root CA is used.
- Apply the CR to your environment:

$ oc apply -f <ceph_cert_yaml>

Replace <ceph_cert_yaml> with the name of the Certificate CR file.
- Extract the certificate and key data from the secret created when the Certificate CR was applied:

$ oc get secret <ceph_cert_secret_name> -o yaml

Replace <ceph_cert_secret_name> with the name used in the secretName field of your Certificate CR.
Note: This command outputs YAML with a data section that looks like the following:

$ oc get secret cert-ceph-rgw -o yaml
apiVersion: v1
data:
  ca.crt: <CA>
  tls.crt: <b64cert>
  tls.key: <b64key>
kind: Secret

The <b64cert> and <b64key> values are the base64-encoded certificate and key strings that you must use in the next step.
- Extract and base64 decode the certificate and key information obtained in the previous step and save a concatenation of them in the Ceph Object Gateway service specification. (A scripted sketch of this extraction appears at the end of this section.)
The rgw section of the specification file looks like the following:

service_type: rgw
service_id: rgw
service_name: rgw.rgw
placement:
  hosts:
    - host1
    - host2
networks:
  - 172.18.0.0/24
spec:
  rgw_frontend_port: 8082
  rgw_realm: default
  rgw_zone: default
  ssl: true
  rgw_frontend_ssl_certificate: |
    -----BEGIN CERTIFICATE-----
    MIIDkzCCAfugAwIBAgIRAKNgGd++xV9cBOrwDAeEdQUwDQYJKoZIhvcNAQELBQAw
    <redacted>
    -----BEGIN RSA PRIVATE KEY-----
    MIIEpQIBAAKCAQEAyTL1XRJDcSuaBLpqasAuLsGU2LQdMxuEdw3tE5voKUNnWgjB
    <redacted>
    -----END RSA PRIVATE KEY-----

The ingress section of the specification file looks like the following:

service_type: ingress
service_id: rgw.default
service_name: ingress.rgw.default
placement:
  count: 1
spec:
  backend_service: rgw.rgw
  frontend_port: 8080
  monitor_port: 8999
  virtual_interface_networks:
    - 172.18.0.0/24
  virtual_ip: 172.18.0.2/24
  ssl_cert: |
    -----BEGIN CERTIFICATE-----
    MIIDkzCCAfugAwIBAgIRAKNgGd++xV9cBOrwDAeEdQUwDQYJKoZIhvcNAQELBQAw
    <redacted>
    -----BEGIN RSA PRIVATE KEY-----
    MIIEpQIBAAKCAQEAyTL1XRJDcSuaBLpqasAuLsGU2LQdMxuEdw3tE5voKUNnWgjB
    <redacted>
    -----END RSA PRIVATE KEY-----

In the preceding examples, the rgw_frontend_ssl_certificate and ssl_cert values contain the base64-decoded values of both <b64cert> and <b64key> from the previous step, with no spaces in between.
- Use the procedure Deploying the Ceph Object Gateway using the service specification to deploy Ceph RGW with SSL.
Connect to the openstackclient pod. Verify that the forwarding information has been successfully updated.

$ curl --trace - <host_name>

Replace <host_name> with the name of the external host previously added to the DNSData CR.

Note: The following is an example output from this command where the openstackclient pod successfully resolved the host name, and no SSL verification errors were encountered.

sh-5.1$ curl https://rgw-external-vip.ceph.local:8080
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>
sh-5.1$
3.8. Enabling deferred deletion for volumes or images with dependencies
When you use Ceph RBD as a back end for the Block Storage service (cinder) or the Image service (glance), you can enable deferred deletion in the Ceph RBD Clone v2 API.
With deferred deletion, you can delete a volume from the Block Storage service or an image from the Image service even if Ceph RBD volumes or snapshots depend on them, for example, COW clones created in different storage pools by the Block Storage service or the Compute service (nova). The volume is deleted from the Block Storage service or the image is deleted from the Image service, but it is retained in a trash area in Ceph RBD while dependencies remain. The volume or image is only deleted from Ceph RBD when there are no dependencies.
The trash area maintained by deferred deletion does not provide restoration functionality. When volumes or images are moved to the trash area, they cannot be recovered or restored. The trash area serves only as a holding mechanism for the volume or image until all dependencies have been removed. The volume or image will be permanently deleted once no dependencies exist.
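If you want to confirm what deferred deletion is currently holding, you can inspect the RBD trash area directly. The following is a minimal sketch that assumes the Block Storage pool is named volumes; adjust the pool name for your environment:

$ cephadm shell -- rbd trash ls --long --pool volumes

Entries disappear from this listing after their dependencies are removed and a scheduled trash purge runs.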
Limitations
- When you enable Clone v2 deferred deletion in existing environments, the feature only applies to new volumes or images.
Procedure
Verify which Ceph version the clients in your Ceph Storage cluster are running:
$ cephadm shell -- ceph osd get-require-min-compat-client

Example output:

luminous

To set the cluster to use the Clone v2 API and the deferred deletion feature by default, set min-compat-client to mimic. Only clients in the cluster that are running Ceph version 13.2.x (Mimic) can access images with dependencies:

$ cephadm shell -- ceph osd set-require-min-compat-client mimic

Schedule an interval for trash purge in minutes by using the m suffix:

$ rbd trash purge schedule add --pool <pool> <30m>

- Replace <pool> with the name of the associated storage pool, for example, volumes in the Block Storage service.
- Replace <30m> with the interval in minutes that you want to specify for trash purge.

Verify a trash purge schedule has been set for the pool:

$ rbd trash purge schedule list --pool <pool>
3.9. Troubleshooting Red Hat Ceph Storage RBD integration
The Compute (nova), Block Storage (cinder), and Image (glance) services can integrate with Red Hat Ceph Storage RBD to use it as a storage back end. If this integration does not work as expected, you can perform an incremental troubleshooting procedure to progressively eliminate possible causes.
The following example shows how to troubleshoot an Image service integration. You can adapt the same steps to troubleshoot Compute and Block Storage service integrations.
If you discover the cause of your issue before completing this procedure, it is not necessary to do any subsequent steps. You can exit this procedure and resolve the issue.
Procedure
Determine if any parts of the control plane are not properly deployed by assessing whether the Ready condition is not True:

$ oc get -n openstack OpenStackControlPlane \
  -o jsonpath="{range .items[0].status.conditions[?(@.status!='True')]}{.type} is {.status} due to {.message}{'\n'}{end}"

If you identify a service that is not properly deployed, check the status of the service.
The following example checks the status of the Compute service:
$ oc get -n openstack Nova/nova \
  -o jsonpath="{range .status.conditions[?(@.status!='True')]}{.type} is {.status} due to {.message}{'\n'}{end}"

Note: You can check the status of all deployed services with the command oc get pods -n openstack and the logs of a specific service with the command oc logs -n openstack <service_pod_name>. Replace <service_pod_name> with the name of the service pod you want to check.

If you identify an operator that is not properly deployed, check the status of the operator:

$ oc get pods -n openstack-operators -lopenstack.org/operator-name

Note: Check the operator logs with the command oc logs -n openstack-operators -lopenstack.org/operator-name=<operator_name>.

Check the Status of the data plane deployment:

$ oc get -n openstack OpenStackDataPlaneDeployment

If the Status of the data plane deployment is False, check the logs of the associated Ansible job:

$ oc logs -n openstack job/<ansible_job_name>

Replace <ansible_job_name> with the name of the associated job. The job name is listed in the Message field of the oc get -n openstack OpenStackDataPlaneDeployment command output.
Check the Status of the data plane node set deployment:

$ oc get -n openstack OpenStackDataPlaneNodeSet

If the Status of the data plane node set deployment is False, check the logs of the associated Ansible job:

$ oc logs -n openstack job/<ansible_job_name>

- Replace <ansible_job_name> with the name of the associated job. It is listed in the Message field of the oc get -n openstack OpenStackDataPlaneNodeSet command output.
If any pods are in the CrashLoopBackOff state, you can duplicate them for troubleshooting purposes with the oc debug command:

$ oc debug <pod_name>

Replace <pod_name> with the name of the pod to duplicate.

Tip: You can also use the oc debug command in the following object debugging activities:

- To run /bin/sh on a container other than the first one, which is the command's default behavior, use the command form oc debug --container <container_name> <pod_name>. This is useful for pods like the API where the first container is tailing a file and the second container is the one you want to debug. If you use this command form, you must first use the command oc get pods | grep <search_string> to find the container name.
- To route traffic to the pod during the debug process, use the command form oc debug <pod_name> --keep-labels=true.
- To debug any resource that creates pods, such as Deployments, StatefulSets, and Nodes, use the command form oc debug <resource_type>/<resource_name>. An example for a StatefulSet would be oc debug StatefulSet/cinder-scheduler.
Connect to the pod and confirm that the ceph.client.openstack.keyring and ceph.conf files are present in the /etc/ceph directory.

Note: If the pod is in a CrashLoopBackOff state, use the oc debug command as described in the previous step to duplicate the pod and route traffic to it.

$ oc rsh <pod_name>

Replace <pod_name> with the name of the applicable pod.

Tip: If the Ceph configuration files are missing, check the extraMounts parameter in your OpenStackControlPlane CR.
Confirm the pod has a network connection to the Red Hat Ceph Storage cluster by connecting to the IP and port of a Ceph Monitor from the pod. The IP and port information is located in /etc/ceph/ceph.conf.

The following is an example of this process:

$ oc get pods | grep glance | grep external-api-0
glance-06f7a-default-external-api-0   3/3   Running   0   2d3h
$ oc debug --container glance-api glance-06f7a-default-external-api-0
Starting pod/glance-06f7a-default-external-api-0-debug-p24v9, command was: /usr/bin/dumb-init --single-child -- /bin/bash -c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start
Pod IP: 192.168.25.50
If you don't see a command prompt, try pressing enter.
sh-5.1# cat /etc/ceph/ceph.conf
# Ansible managed
[global]
fsid = 63bdd226-fbe6-5f31-956e-7028e99f1ee1
mon host = [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0],[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0]
[client.libvirt]
admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/ceph/qemu-guest-$pid.log
sh-5.1# python3
Python 3.9.19 (main, Jul 18 2024, 00:00:00) [GCC 11.4.1 20231218 (Red Hat 11.4.1-3)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import socket
>>> s = socket.socket()
>>> ip="192.168.122.100"
>>> port=3300
>>> s.connect((ip,port))
>>>

Tip: Troubleshoot the network connection between the cluster and pod if you cannot connect to a Ceph Monitor. The previous example uses a Python socket to connect to the IP and port of the Red Hat Ceph Storage cluster from the ceph.conf file.

There are two potential outcomes from the execution of the s.connect((ip,port)) function:

- If the command executes successfully and returns no error similar to the following example, the network connection between the pod and cluster is functioning correctly. A successful connection produces no output at all.
- If the command takes a long time to execute and returns an error similar to the following example, the network connection between the pod and cluster is not functioning correctly and should be investigated further to troubleshoot the connection.

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TimeoutError: [Errno 110] Connection timed out

Examine the cephx key as shown in the following example:

bash-5.1$ cat /etc/ceph/ceph.client.openstack.keyring
[client.openstack]
    key = "<redacted>"
    caps mgr = allow *
    caps mon = profile rbd
    caps osd = profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images
bash-5.1$
List the contents of a pool from the caps osd parameter as shown in the following example:

$ /usr/bin/rbd --conf /etc/ceph/ceph.conf \
  --keyring /etc/ceph/ceph.client.openstack.keyring \
  --cluster ceph --id openstack \
  ls -l -p <pool_name> | wc -l

Replace <pool_name> with the name of the required Red Hat Ceph Storage pool.

Tip: If this command returns the number 0 or greater, the cephx key provides adequate permissions to connect to, and read information from, the Red Hat Ceph Storage cluster.

If this command does not complete but network connectivity to the cluster was confirmed, work with the Ceph administrator to obtain the correct cephx keyring.

Additionally, it is possible there is an MTU mismatch on the Storage network. If the network is using jumbo frames (an MTU value of 9000), all switch ports between servers using the interface must be updated to support jumbo frames. If this change is not made on the switch, problems can occur at the Ceph application layer. Verify all hosts using the network can communicate at the desired MTU with a command such as ping -M do -s 8972 <ip_address>.
Send test data to the images pool on the Ceph cluster.

The following is an example of performing this task:

# DATA=$(date | md5sum | cut -c-12)
# POOL=images
# RBD="/usr/bin/rbd --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring --cluster ceph --id openstack"
# $RBD create --size 1024 $POOL/$DATA

Tip: It is possible to be able to read data from the cluster but not have permission to write data to it, even if write permission was granted in the cephx keyring. If write permissions have been granted but you cannot write data to the cluster, this may indicate that the cluster is overloaded and not able to write new data.

In the example, the rbd command did not complete successfully and was canceled. It was subsequently confirmed that the cluster itself did not have the resources to write new data. The issue was resolved on the cluster itself. There was nothing incorrect with the client configuration.
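If the write succeeds and you want to confirm that the test image exists and then remove it, a short follow-up such as the following can be used; it reuses the shell variables defined in the example above:

# $RBD ls -l $POOL | grep $DATA
# $RBD rm $POOL/$DATA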
3.10. Troubleshooting Red Hat Ceph Storage clients
Put Red Hat OpenStack Services on OpenShift (RHOSO) Ceph clients in debug mode to troubleshoot their operation.
Procedure
- Locate the Red Hat Ceph Storage configuration file mapped in the Red Hat OpenShift secret created in Creating a Red Hat Ceph Storage secret.
Modify the contents of the configuration file to include troubleshooting-related configuration.
The following is an example of troubleshooting-related configuration:
[client.openstack]
admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/guest-$pid.log
debug ms = 1
debug rbd = 20
log to file = true

Note: This is not an exhaustive example of troubleshooting-related configuration. For more information, see Troubleshooting Red Hat Ceph Storage.
- Update the secret with the new content.
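How you update the secret depends on how you originally created it. The following is a minimal sketch that assumes the secret is named ceph-conf-files and that the modified ceph.conf and keyring files are in the current directory; it regenerates the secret and applies it over the existing object:

$ oc create secret generic ceph-conf-files \
    --from-file=ceph.conf \
    --from-file=ceph.client.openstack.keyring \
    --dry-run=client -o yaml | oc apply -n openstack -f -

If the pods that mount the secret do not pick up the change automatically, restart the affected service pods.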
3.11. Customizing and managing Red Hat Ceph Storage
Red Hat OpenStack Services on OpenShift (RHOSO) 18.0 supports Red Hat Ceph Storage 7 and 8. For information on the customization and management of Red Hat Ceph Storage 7 and 8, refer to the applicable documentation sets:
The following guides contain key information and procedures for these tasks:
Red Hat Ceph Storage 7
Red Hat Ceph Storage 8
Chapter 4. Configuring the Block Storage service (cinder)
The Block Storage service (cinder) provides access to remote block storage devices through volumes to provide persistent storage. The Block Storage service has three mandatory services: api, scheduler, and volume; and one optional service, backup.
As a security hardening measure, the Block Storage services run as the cinder
user.
All Block Storage services use the cinder
section of the OpenStackControlPlane
custom resource (CR) for their configuration:
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
name: openstack
spec:
cinder:
Global configuration options are applied directly under the cinder
and template
sections. Service specific configuration options appear under their associated sections. The following example demonstrates all of the sections where Block Storage service configuration is applied and what type of configuration is applied in each section:
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
name: openstack
spec:
cinder:
<global-options>
template:
<global-options>
cinderAPI:
<cinder-api-options>
cinderScheduler:
<cinder-scheduler-options>
cinderVolumes:
<name1>: <cinder-volume-options>
<name2>: <cinder-volume-options>
cinderBackup:
<cinder-backup-options>
4.1. Terminology
The following terms are important to understanding the Block Storage service (cinder):
- Storage back end: A physical storage system where volume data is stored.
- Cinder driver: The part of the Block Storage service that enables communication with the storage back end. It is configured with the volume_driver and backup_driver options.
- Cinder back end: A logical representation of the grouping of a cinder driver with its configuration. This grouping is used to manage and address the volumes present in a specific storage back end. The name of this logical construct is configured with the volume_backend_name option.
- Storage pool: A logical grouping of volumes in a given storage back end.
- Cinder pool: A representation in the Block Storage service of a storage pool.
- Volume host: The way the Block Storage service addresses volumes. There are two different representations: short (<hostname>@<backend-name>) and full (<hostname>@<backend-name>#<pool-name>).
- Quota: Limits defined per project to constrain the use of Block Storage specific resources.
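As an illustration of the volume host formats, the full form is visible in the os-vol-host-attr:host attribute of a volume. The following sketch assumes a back end and pool both named ceph; the actual host name and pool depend on your deployment:

$ openstack volume show <volume_id> -c os-vol-host-attr:host
+-----------------------+---------------------------------+
| Field                 | Value                           |
+-----------------------+---------------------------------+
| os-vol-host-attr:host | cinder-volume-ceph-0@ceph#ceph  |
+-----------------------+---------------------------------+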
4.2. Block Storage service (cinder) enhancements in Red Hat OpenStack Services on OpenShift (RHOSO)
The following functionality enhancements have been integrated into the Block Storage service:
- Ease of deployment for multiple volume back ends.
- Back end deployment does not affect running volume back ends.
- Back end addition and removal does not affect running back ends.
- Back end configuration changes do not affect other running back ends.
- Each back end can use its own vendor-specific container image. It is no longer necessary to build a custom image that holds dependencies from two drivers.
- Pacemaker has been replaced by Red Hat OpenShift Container Platform (RHOCP) functionality.
- Improved methods for troubleshooting the service code.
4.3. Configuring transport protocols
Deployments use different transport protocols to connect to volumes. The Block Storage service (cinder) supports the following transport protocols:
- iSCSI
- Fibre Channel (FC)
- NVMe over TCP (NVMe-TCP)
- NFS
- Red Hat Ceph Storage RBD
Control plane services that use volumes, such as the Block Storage volume
and backup
services, might require the support of the Red Hat OpenShift Container Platform (RHOCP) cluster to use iscsid
and multipathd
modules, depending on the storage array in use. These modules must be available on all nodes where these volume-dependent services execute. To use these transport protocols, create a MachineConfig
CR to define where these modules execute. For more information on a MachineConfig
, see Understanding the Machine Config operator.
Using a MachineConfig CR to change the configuration of a node causes the node to reboot. Consult with your RHOCP administrator before applying a MachineConfig CR to ensure the integrity of RHOCP workloads.
The procedures in this section provide a general configuration of these protocols and are not vendor-specific.
If your deployment requires multipathing, then you must configure this separately, see Configuring multipathing.
The Block Storage volume
and backup
services are automatically started on data plane nodes.
4.3.1. Configuring the iSCSI protocol
Connecting to iSCSI volumes from the RHOCP nodes requires the iSCSI initiator service. There must be a single instance of the iscsid
service module for the normal RHOCP usage, OpenShift CSI plugins usage, and the RHOSO services. Apply a MachineConfig
to the applicable nodes to configure nodes to use the iSCSI protocol.
If the iscsid
service module is already running, this procedure is not required.
Procedure
Create a MachineConfig CR to configure the nodes for the iscsid module.

The following example starts the iscsid service with a default configuration in all RHOCP worker nodes:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
    service: cinder
  name: 99-worker-cinder-enable-iscsid
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
      - enabled: true
        name: iscsid.service

- Save the file.
Apply the MachineConfig CR file.

$ oc apply -f <machine_config_file> -n openstack

- Replace <machine_config_file> with the name of your MachineConfig CR file.
4.3.2. Configuring the Fibre Channel protocol
There is no additional node configuration required to use the Fibre Channel protocol to connect to volumes. However, all nodes that use Fibre Channel must have a Host Bus Adapter (HBA) card. Unless all worker nodes in your RHOCP deployment have an HBA card, you must use a nodeSelector in your control plane configuration to select which nodes are used for the volume and backup services, as well as for the Image service instances that use the Block Storage service for their storage back end.
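The following is a minimal sketch of such a nodeSelector; the fc-card: "true" label is an assumption and must match a label that you have actually applied to your FC-capable worker nodes, and the back-end name and options are placeholders:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      cinderVolumes:
        fc:
          nodeSelector:
            fc-card: "true"
          customServiceConfig: |
            [fc]
            volume_backend_name=fc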
4.3.3. Configuring the NVMe over TCP (NVMe-TCP) protocol
Connecting to NVMe-TCP volumes from the RHOCP nodes requires the nvme
kernel modules.
Procedure
Create a MachineConfig CR to configure the nodes for the nvme kernel modules.

The following example loads the nvme kernel modules with a default configuration in all RHOCP worker nodes:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
    service: cinder
  name: 99-worker-cinder-load-nvme-fabrics
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - path: /etc/modules-load.d/nvme_fabrics.conf
        overwrite: false
        mode: 420
        user:
          name: root
        group:
          name: root
        contents:
          source: data:,nvme-fabrics%0Anvme-tcp

- Save the file.
Apply the MachineConfig CR file.

$ oc apply -f <machine_config_file> -n openstack

- Replace <machine_config_file> with the name of your MachineConfig CR file.
After the nodes have rebooted, verify that the nvme-fabrics modules are loaded and that the host supports ANA:

$ cat /sys/module/nvme_core/parameters/multipath

Note: Even though ANA does not use the Linux Multipathing Device Mapper, multipathd must be running for the Compute nodes to be able to use multipathing when connecting volumes to instances.
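On a host where the modules are loaded and native NVMe multipathing (ANA) is enabled, the verification command above typically prints Y; an N value indicates that native NVMe multipathing is disabled:

$ cat /sys/module/nvme_core/parameters/multipath
Y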
4.4. Configuring multipathing
You can configure multipathing in Red Hat OpenStack Services on OpenShift (RHOSO) to create redundancy or to improve performance.
- You must configure multipathing on control plane nodes by creating a MachineConfig CR.

Note: In RHOSO deployments, the use_multipath_for_image_xfer configuration option is enabled by default, which affects the control plane only and not the data plane. This setting enables the Block Storage service (cinder) to use multipath, when it is available, for attaching volumes when creating volumes from images and during Block Storage backup and restore procedures.

- Multipathing on data plane nodes is configured by default in RHOSO, which configures the default multipath parameters. You must add and configure any vendor-specific multipath parameters that your production environment requires.
4.4.1. Configuring multipathing on control plane nodes
Configuring multipathing on Red Hat OpenShift Container Platform (RHOCP) control plane nodes requires a MachineConfig
custom resource (CR) that creates a multipath configuration file and starts the service.
In Red Hat OpenStack Services on OpenShift (RHOSO) deployments, the use_multipath_for_image_xfer
configuration option is enabled by default, which affects the control plane only and not the data plane. This setting enables the Block Storage service (cinder) to use multipath, when it is available, for attaching volumes when creating volumes from images and during Block Storage backup and restore procedures.
The example provided in this procedure implements a minimal multipath configuration file, which configures the default multipath parameters. However, your production deployment might also require vendor-specific multipath parameters. In this case, you must consult with the appropriate systems administrators to obtain the values required for your deployment.
Procedure
Create a MachineConfig CR to create a multipath configuration file and to start the multipathd module on all control plane nodes.

The following example creates a MachineConfig CR named 99-worker-cinder-enable-multipathd that implements a multipath configuration file named multipath.conf:

Important: When adding vendor-specific multipath parameters to the contents: of this file, ensure that you do not change the specified values of the following default multipath parameters: user_friendly_names, recheck_wwid, skip_kpartx, and find_multipaths.

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
    service: cinder
  name: 99-worker-cinder-enable-multipathd
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - path: /etc/multipath.conf
        overwrite: false
        mode: 384
        user:
          name: root
        group:
          name: root
        contents:
          source: data:,defaults%20%7B%0A%20%20user_friendly_names%20no%0A%20%20recheck_wwid%20yes%0A%20%20skip_kpartx%20yes%0A%20%20find_multipaths%20yes%0A%7D%0A%0Ablacklist%20%7B%0A%7D
    systemd:
      units:
      - enabled: true
        name: multipathd.service

Note: The contents: data above represents the following literal multipath.conf file contents:

defaults {
  user_friendly_names no
  recheck_wwid yes
  skip_kpartx yes
  find_multipaths yes
}

blacklist {
}

- Save the MachineConfig CR file, for example, 99-worker-cinder-enable-multipathd.yaml.

Apply the MachineConfig CR file.

$ oc apply -f 99-worker-cinder-enable-multipathd.yaml -n openstack
4.4.2. Configuring custom multipath parameters on data plane nodes
Default multipath parameters are configured on all data plane nodes. You must add and configure any vendor-specific multipath parameters. In this case, you must consult with the appropriate systems administrators to obtain the values required for your deployment, to create your custom multipath configuration file.
Ensure that you do not add the following default multipath parameters and overwrite their values: user_friendly_names, recheck_wwid, skip_kpartx, and find_multipaths.
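The following is a minimal sketch of such a custom multipath configuration file; the vendor and product strings and the parameter values are placeholders that must be replaced with the values documented by your storage vendor:

# custom_multipath.conf: vendor-specific parameters only
devices {
  device {
    vendor                 "EXAMPLE"
    product                "EXAMPLE-ARRAY"
    path_grouping_policy   group_by_prio
    path_checker           tur
    no_path_retry          30
  }
}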
You must modify the relevant OpenStackDataPlaneNodeSet
custom resource (CR), to update the data plane node configuration to include your vendor-specific multipath parameters. You create an OpenStackDataPlaneDeployment
CR that deploys and applies the modified OpenStackDataPlaneNodeSet
CR to the data plane.
Prerequisites
- You have created your custom multipath configuration file that contains only the vendor-specific multipath parameters and your deployment-specific values.
Procedure
Create a secret to store your custom multipath configuration file:
oc create secret generic <secret_name> \ --from-file=<configuration_file_name>
$ oc create secret generic <secret_name> \ --from-file=<configuration_file_name>
Copy to Clipboard Copied! -
Replace
<secret_name>
with the name that you want to assign to the secret, for example,custom-multipath-file
. -
Replace
<configuration_file_name>
with the name of the custom multipath configuration file that you created, for example,custom_multipath.conf
.
-
Replace
-
Open the
OpenStackDataPlaneNodeSet
CR file for the node set that you want to update, for example,openstack_data_plane.yaml
. Add an
extraMounts
attribute to theOpenStackDataPlaneNodeSet
CR file to include your vendor-specific multipath parameters:spec: ... nodeTemplate: ... extraMounts: - extraVolType: <optional_volume_type_description> volumes: - name: <mounted_volume_name> secret: secretName: <secret_name> mounts: - name: <mounted_volume_name> mountPath: "/runner/multipath" readOnly: true
spec: ... nodeTemplate: ... extraMounts: - extraVolType: <optional_volume_type_description> volumes: - name: <mounted_volume_name> secret: secretName: <secret_name> mounts: - name: <mounted_volume_name> mountPath: "/runner/multipath" readOnly: true
Copy to Clipboard Copied! -
Optional: Replace
<optional_volume_type_description>
with a description of the type of the mounted volume, for example,multipath-config-file
. Replace
<mounted_volume_name>
with the name of the mounted volume, for example,custom-multipath
.NoteDo not change the value of the
mountPath:
parameter from"/runner/multipath"
.
-
Optional: Replace
-
Save the
OpenStackDataPlaneNodeSet
CR file. Apply the updated
OpenStackDataPlaneNodeSet
CR configuration:oc apply -f openstack_data_plane.yaml
$ oc apply -f openstack_data_plane.yaml
Copy to Clipboard Copied! Verify that the data plane resource has been updated by confirming that the status is
SetupReady
:oc wait openstackdataplanenodeset openstack-data-plane --for condition=SetupReady --timeout=10m
$ oc wait openstackdataplanenodeset openstack-data-plane --for condition=SetupReady --timeout=10m
Copy to Clipboard Copied! When the status is
SetupReady
, the command returns acondition met
message, otherwise it returns a timeout error.For information about the data plane conditions and states, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.
Create a file on your workstation to define the
OpenStackDataPlaneDeployment
CR:apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: <node_set_deployment_name>
apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: <node_set_deployment_name>
Copy to Clipboard Copied! -
Replace
<node_set_deployment_name>
with the name of theOpenStackDataPlaneDeployment
CR. The name must be unique, must consist of lower case alphanumeric characters,-
(hyphen) or.
(period), and must start and end with an alphanumeric character, for example,openstack_data_plane_deploy
.
-
Replace
Add the
OpenStackDataPlaneNodeSet
CR that you modified:spec: nodeSets: - <nodeSet_name>
spec: nodeSets: - <nodeSet_name>
Copy to Clipboard Copied! -
Save the
OpenStackDataPlaneDeployment
CR deployment file, for example,openstack_data_plane_deploy.yaml
. Deploy the modified
OpenStackDataPlaneNodeSet
CR:oc create -f openstack_data_plane_deploy.yaml -n openstack
$ oc create -f openstack_data_plane_deploy.yaml -n openstack
Copy to Clipboard Copied! You can view the Ansible logs while the deployment executes:
oc get pod -l app=openstackansibleee -w oc logs -l app=openstackansibleee -f --max-log-requests 10
$ oc get pod -l app=openstackansibleee -w $ oc logs -l app=openstackansibleee -f --max-log-requests 10
Copy to Clipboard Copied! If the
oc logs
command returns an error similar to the following error, increase the--max-log-requests
value:error: you are attempting to follow 19 log streams, but maximum allowed concurrency is 10, use --max-log-requests to increase the limit
error: you are attempting to follow 19 log streams, but maximum allowed concurrency is 10, use --max-log-requests to increase the limit
Copy to Clipboard Copied!
Verification
Verify that the modified
OpenStackDataPlaneNodeSet
CR is deployed:oc get openstackdataplanedeployment -n openstack oc get openstackdataplanenodeset -n openstack
$ oc get openstackdataplanedeployment -n openstack NAME STATUS MESSAGE openstack-data-plane True Setup Complete $ oc get openstackdataplanenodeset -n openstack NAME STATUS MESSAGE openstack-data-plane True NodeSet Ready
Copy to Clipboard Copied! For information about the meaning of the returned status, see Data plane conditions and states in Deploying Red Hat OpenStack Services on OpenShift.
If the status indicates that the data plane has not been deployed, then troubleshoot the deployment. For information about troubleshooting the deployment, see Troubleshooting the data plane creation and deployment in Deploying Red Hat OpenStack Services on OpenShift.
4.5. Configuring initial defaults
The Block Storage service (cinder) has a set of initial defaults that should be configured when the service is first enabled. They must be defined in the main customServiceConfig
section. Once deployed, these initial defaults are modified using the openstack
client.
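For example, after deployment you can adjust the per-project limits that these initial defaults seed by using the openstack client; the project name and values here are only illustrative:

$ openstack quota set --volumes 30 --snapshots 20 <project>
$ openstack quota show <project>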
Procedure
-
Open your
OpenStackControlPlane
CR file,openstack_control_plane.yaml
. Edit the CR file and add the Block Storage service global configuration.
The following example demonstrates a Block Storage service initial configuration:
apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: enabled: true template: customServiceConfig: | [DEFAULT] quota_volumes = 20 quota_snapshots = 15
apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: enabled: true template: customServiceConfig: | [DEFAULT] quota_volumes = 20 quota_snapshots = 15
Copy to Clipboard Copied! For a complete list of all initial default parameters, see Initial default parameters.
Update the control plane:
oc apply -f openstack_control_plane.yaml -n openstack
$ oc apply -f openstack_control_plane.yaml -n openstack
Copy to Clipboard Copied! Wait until RHOCP creates the resources related to the
OpenStackControlPlane
CR. Run the following command to check the status:oc get openstackcontrolplane -n openstack
$ oc get openstackcontrolplane -n openstack
Copy to Clipboard Copied! The
OpenStackControlPlane
resources are created when the status is "Setup complete".TipAppend the
-w
option to the end of theget
command to track deployment progress.
4.5.1. Initial default parameters
These initial default parameters should be configured when the service is first enabled.
Parameter | Description |
---|---|
|
Provides the default volume type for all users. The default type of any non-default value will not be automatically created. The default value is |
|
Determines if the size of snapshots count against the gigabyte quota in addition to the size of volumes. The default is |
|
Provides the maximum size of each volume in gigabytes. The default is |
|
Provides the number of volumes allowed for each project. The default value is |
|
Provides the number of snapshots allowed for each project. The default value is |
|
Provides the number of volume groups allowed for each project, which includes the consistency groups. The default value is |
|
Provides the total amount of storage for each project, in gigabytes, allowed for volumes, and depending upon the configuration of the |
|
Provides the number of backups allowed for each project. The default value is |
|
Provides the total amount of storage for each project, in gigabytes, allowed for backups. The default is |
4.6. Configuring the API service
The Block Storage service (cinder) provides an API interface for all external interaction with the service for both users and other RHOSO services.
Procedure
-
Open your
OpenStackControlPlane
CR file,openstack_control_plane.yaml
. Edit the CR file and add the configuration for the internal Red Hat OpenShift Container Platform (RHOCP) load balancer.
The following example demonstrates a load balancer configuration:
apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: template: cinderAPI: override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer
apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: template: cinderAPI: override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer
Copy to Clipboard Copied! Edit the CR file and add the configuration for the number of API service replicas. Run the
cinderAPI
service in an Active-Active configuration with three replicas.The following example demonstrates configuring the
cinderAPI
service to use three replicas:apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: template: cinderAPI: replicas: 3
apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: template: cinderAPI: replicas: 3
Copy to Clipboard Copied! Edit the CR file and configure
cinderAPI
options. These options are configured in thecustomServiceConfig
section under thecinderAPI
section.The following example demonstrates configuring
cinderAPI
service options and enabling debugging on all services:apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: template: customServiceConfig: | [DEFAULT] debug = true cinderAPI: customServiceConfig: | [DEFAULT] osapi_volume_workers = 3
apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: template: customServiceConfig: | [DEFAULT] debug = true cinderAPI: customServiceConfig: | [DEFAULT] osapi_volume_workers = 3
Copy to Clipboard Copied! For a listing of commonly used
cinderAPI
service option parameters, see API service option parameters.- Save the file.
Update the control plane:
oc apply -f openstack_control_plane.yaml -n openstack
$ oc apply -f openstack_control_plane.yaml -n openstack
Copy to Clipboard Copied! Wait until RHOCP creates the resources related to the
OpenStackControlPlane
CR. Run the following command to check the status:oc get openstackcontrolplane -n openstack
$ oc get openstackcontrolplane -n openstack
Copy to Clipboard Copied! The
OpenStackControlPlane
resources are created when the status is "Setup complete".TipAppend the
-w
option to the end of theget
command to track deployment progress.
4.6.1. API service option parameters
API service option parameters are provided for the configuration of the cinderAPI
portions of the Block Storage service.
Parameter | Description |
---|---|
|
Provides a value to determine if the API rate limit is enabled. The default is |
|
Provides a value to determine whether the logging level is set to |
|
Provides a value for the maximum number of items a collection resource returns in a single response. The default is |
| Provides a value for the number of workers assigned to the API component. The default is the number of CPUs available. |
4.7. Configuring the scheduler service
The Block Storage service (cinder) has a scheduler service (cinderScheduler) that is responsible for decisions such as selecting which back end receives new volumes, determining whether there is enough free space to perform an operation, and deciding where an existing volume should be moved to for certain operations.
Use only a single instance of cinderScheduler
for scheduling consistency and ease of troubleshooting. While cinderScheduler
can be run with multiple instances, the service default replicas: 1
is the best practice.
Procedure
-
Open your
OpenStackControlPlane
CR file,openstack_control_plane.yaml
. Edit the CR file and add the configuration for the service down detection timeouts.
The following example demonstrates this configuration:
apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: template: customServiceConfig: | [DEFAULT] report_interval = 20 service_down_time = 120
apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: template: customServiceConfig: | [DEFAULT] report_interval = 20
1 service_down_time = 120
2 Copy to Clipboard Copied! NoteConfigure these values at the
cinder
level of the CR instead of thecinderScheduler
so that these values are applied to all components consistently.Edit the CR file and add the configuration for the statistics reporting interval.
The following example demonstrates configuring these values at the
cinder
level to apply them globally to all services:apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: template: customServiceConfig: | [DEFAULT] backend_stats_polling_interval = 120 backup_driver_stats_polling_interval = 120
apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: template: customServiceConfig: | [DEFAULT] backend_stats_polling_interval = 120
1 backup_driver_stats_polling_interval = 120
2 Copy to Clipboard Copied! The following example demonstrates configuring these values at the
cinderVolume
andcinderBackup
level to customize settings at the service level.apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: template: cinderBackup: customServiceConfig: | [DEFAULT] backup_driver_stats_polling_interval = 120 < rest of the config > cinderVolumes: nfs: customServiceConfig: | [DEFAULT] backend_stats_polling_interval = 120
apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: template: cinderBackup: customServiceConfig: | [DEFAULT] backup_driver_stats_polling_interval = 120
1 < rest of the config > cinderVolumes: nfs: customServiceConfig: | [DEFAULT] backend_stats_polling_interval = 120
2 Copy to Clipboard Copied! NoteThe generation of usage statistics can be resource intensive for some back ends. Setting these values too low can affect back end performance. You may need to tune the configuration of these settings to better suit individual back ends.
Perform any additional configuration necessary to customize the
cinderScheduler
service.For more configuration options for the customization of the
cinderScheduler
service, see Scheduler service parameters.- Save the file.
Update the control plane:
oc apply -f openstack_control_plane.yaml -n openstack
$ oc apply -f openstack_control_plane.yaml -n openstack
Copy to Clipboard Copied! Wait until RHOCP creates the resources related to the
OpenStackControlPlane
CR. Run the following command to check the status:oc get openstackcontrolplane -n openstack
$ oc get openstackcontrolplane -n openstack
Copy to Clipboard Copied! The
OpenStackControlPlane
resources are created when the status is "Setup complete".TipAppend the
-w
option to the end of theget
command to track deployment progress.
4.7.1. Scheduler service parameters
Scheduler service parameters are provided for the configuration of the cinderScheduler
portions of the Block Storage service
Parameter | Description |
---|---|
|
Provides a setting for the logging level. When this parameter is |
|
Provides a setting for the maximum number of attempts to schedule a volume. The default is |
|
Provides a setting for filter class names to use for filtering hosts when not specified in the request. This is a comma separated list. The default is |
|
Provide a setting for weigher class names to use for weighing hosts. This is a comma separated list. The default is |
|
Provides a setting for a handler to use for selecting the host or pool after weighing. The value |
The following is an explanation of the filter class names from the parameter table:
AvailabilityZoneFilter
- Filters out all back ends that do not meet the availability zone requirements of the requested volume.
CapacityFilter
- Selects only back ends with enough space to accommodate the volume.
CapabilitiesFilter
- Selects only back ends that can support any specified settings in the volume.
InstanceLocality
- Configures clusters to use volumes local to the same node.
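For example, the filter list can be changed in the customServiceConfig of the scheduler. The following is a sketch that restricts scheduling decisions to capacity and capability checks; the default filter list is usually sufficient and this is only an illustration:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      cinderScheduler:
        customServiceConfig: |
          [DEFAULT]
          scheduler_default_filters = CapacityFilter,CapabilitiesFilter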
4.8. Configuring the volume service
The Block Storage service (cinder) has a volume service (cinderVolumes
section) that is responsible for managing operations related to volumes, snapshots, and groups. These operations include creating, deleting, and cloning volumes and making snapshots.
This service requires access to the storage back end (storage) and storage management (storageMgmt) networks in the networkAttachments of the OpenStackControlPlane CR. Some operations, such as creating an empty volume or a snapshot, do not require any data movement between the volume service and the storage back end. Other operations, such as migrating data from one storage back end to another, require the data to pass through the volume service and therefore do require this network access.
Volume service configuration is performed in the cinderVolumes
section with parameters set in the customServiceConfig
, customServiceConfigSecrets
, networkAttachments
, replicas
, and the nodeSelector
sections.
The volume service cannot have multiple replicas.
Procedure
-
Open your
OpenStackControlPlane
CR file,openstack_control_plane.yaml
. Edit the CR file and add the configuration for your back end.
The following example demonstrates the service configuration for a Red Hat Ceph Storage back end:
apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: template: customServiceConfig: | [DEFAULT] debug = true cinderVolumes: ceph: networkAttachments: - storage customServiceConfig: | [ceph] volume_backend_name = ceph volume_driver = cinder.volume.drivers.rbd.RBDDriver
apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: template: customServiceConfig: | [DEFAULT] debug = true cinderVolumes: ceph:
1 networkAttachments:
2 - storage customServiceConfig: | [ceph] volume_backend_name = ceph
3 volume_driver = cinder.volume.drivers.rbd.RBDDriver
4 Copy to Clipboard Copied! - 1
- The configuration area for the individual back end. Each unique back end requires an individual configuration area. No back end is deployed by default. The Block Storage service volume service will not run unless at least one back end is configured during deployment. For more information about configuring back ends, see Block Storage service (cinder) back ends and Multiple Block Storage service (cinder) back ends.
- 2
- The configuration area for the back end network connections.
- 3
- The name assigned to this back end.
- 4
- The driver used to connect to this back end.
For a list of commonly used volume service parameters, see Volume service parameters.
- Save the file.
Update the control plane:
oc apply -f openstack_control_plane.yaml -n openstack
$ oc apply -f openstack_control_plane.yaml -n openstack
Copy to Clipboard Copied! Wait until RHOCP creates the resources related to the
OpenStackControlPlane
CR. Run the following command to check the status:oc get openstackcontrolplane -n openstack
$ oc get openstackcontrolplane -n openstack
Copy to Clipboard Copied! The
OpenStackControlPlane
resources are created when the status is "Setup complete".TipAppend the
-w
option to the end of theget
command to track deployment progress.
4.8.1. Volume service parameters
Volume service parameters are provided for the configuration of the cinderVolumes
portions of the Block Storage service
Parameter | Description |
---|---|
|
Provides a setting for the availability zone of the back end. This is set in the |
| Provides a setting for the back end name for a given driver implementation. There is no default value. |
| Provides a setting for the driver to use for volume creation. It is provided in the form of Python namespace for the specific class. There is no default value. |
|
Provides a setting for a list of back end names to use. These back end names should be backed by a unique [CONFIG] group with its options. This is a comma-separated list of values. The default value is the name of the section with a |
|
Provides a setting for a directory used for temporary storage during image conversion. The default value is |
|
Provides a setting for the number of seconds between the volume requests for usage statistics from the storage back end. The default is |
4.8.2. Block Storage service (cinder) back ends
Each Block Storage service back end should have an individual configuration section in the cinderVolumes
section. This ensures each back end runs in a dedicated pod. This approach has the following benefits:
- Increased isolation.
- Adding and removing back ends is fast and does not affect other running back ends.
- Configuration changes do not affect other running back ends.
- Automatically spreads the Volume pods into different nodes.
Each Block Storage service back end uses a storage transport protocol to access data in the volumes. Each storage transport protocol has individual requirements as described in Configuring transport protocols. Storage protocol information should also be provided in individual vendor installation guides.
Configure each back end with an independent pod. In director-based releases of RHOSP, all back ends run in a single cinder-volume container. This is no longer the best practice.
No back end is deployed by default. The Block Storage service volume service will not run unless at least one back end is configured during deployment.
All storage vendors provide an installation guide with best practices, deployment configuration, and configuration options for vendor drivers. These installation guides provide the specific configuration information required to properly configure the volume service for deployment. Installation guides are available in the Red Hat Ecosystem Catalog.
For more information on integrating and certifying vendor drivers, see Integrating partner content.
For information on Red Hat Ceph Storage back end configuration, see Integrating Red Hat Ceph Storage and Deploying a Hyperconverged Infrastructure environment.
For information on configuring a generic (non-vendor specific) NFS back end, see Configuring a generic NFS back end.
Use a certified storage back end and driver. If you use NFS storage that comes from the generic NFS back end, its capabilities are limited compared to a certified storage back end and driver.
4.8.3. Multiple Block Storage service (cinder) back ends
Multiple Block Storage service back ends are deployed by adding multiple, independent entries in the cinderVolumes
configuration section. Each back end runs in an independent pod.
The following configuration example deploys two independent back ends: one for iSCSI and another for NFS:
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
name: openstack
spec:
cinder:
template:
cinderVolumes:
nfs:
networkAttachments:
- storage
customServiceConfigSecrets:
- cinder-volume-nfs-secrets
customServiceConfig: |
[nfs]
volume_backend_name=nfs
iSCSI:
networkAttachments:
- storage
- storageMgmt
customServiceConfig: |
[iscsi]
volume_backend_name=iscsi
4.9. Configuring back end availability zones
Configure back end availability zones (AZs) for Volume service back ends and the Backup service to group cloud infrastructure services for users. AZs are mapped to failure domains and Compute resources for high availability, fault tolerance, and resource scheduling.
For example, you could create an AZ of Compute nodes with specific hardware that users can select when they create an instance that requires that hardware.
Post-deployment, AZs are created using the RESKEY:availability_zones
volume type extra specification.
Users can create a volume directly in an AZ as long as the volume type does not restrict the AZ.
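For example, the following sketch ties a volume type to the zone1 AZ with the RESKEY:availability_zones extra specification and then creates volumes in that AZ; the type and volume names are only illustrative:

$ openstack volume type create zone1-type
$ openstack volume type set --property RESKEY:availability_zones=zone1 zone1-type
$ openstack volume create --type zone1-type --size 10 zone1-volume
$ openstack volume create --availability-zone zone1 --size 10 another-zone1-volume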
Procedure
-
Open your
OpenStackControlPlane
CR file,openstack_control_plane.yaml
. Edit the CR file and add the AZ configuration.
The following example demonstrates an AZ configuration:
  apiVersion: core.openstack.org/v1beta1
  kind: OpenStackControlPlane
  metadata:
    name: openstack
  spec:
    cinder:
      template:
        cinderVolumes:
          nfs:
            networkAttachments:
            - storage
            - storageMgmt
            customServiceConfigSecrets:
            - cinder-volume-nfs-secrets
            customServiceConfig: |
              [nfs]
              volume_backend_name=nfs
              backend_availability_zone=zone1
          iSCSI:
            networkAttachments:
            - storage
            - storageMgmt
            customServiceConfig: |
              [iscsi]
              volume_backend_name=iscsi
              backend_availability_zone=zone2

  The backend_availability_zone parameter sets the availability zone associated with the back end.
- Save the file.
- Update the control plane:

  $ oc apply -f openstack_control_plane.yaml -n openstack

- Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  $ oc get openstackcontrolplane -n openstack

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.
4.10. Configuring a generic NFS back end
The Block Storage service (cinder) can be configured with a generic NFS back end to provide an alternative storage solution for volumes and backups.
The Block Storage service supports a generic NFS solution with the following caveats:
- Use a certified storage back end and driver. If you use NFS storage that comes from the generic NFS back end, its capabilities are limited compared to a certified storage back end and driver. For example, the generic NFS back end does not support features such as volume encryption and volume multi-attach. For information about supported drivers, see the Red Hat Ecosystem Catalog.
- For Block Storage (cinder) and Compute (nova) services, you must use NFS version 4.0 or later. RHOSO does not support earlier versions of NFS.
- RHOSO does not support the NetApp NAS secure feature because it interferes with normal volume operations. You must disable this feature in the customServiceConfig of the specific back-end configuration with the following parameters:

  nas_secure_file_operations=false
  nas_secure_file_permissions=false

- Do not configure the nfs_mount_options option. The default value provides the best NFS options for RHOSO environments. If you experience issues when you configure multiple services to share the same NFS server, contact Red Hat Support.
Procedure
- Create a Secret CR to store the volume connection information. The following is an example of a Secret CR:

  apiVersion: v1
  kind: Secret
  metadata:
    name: cinder-volume-nfs-secrets
  type: Opaque
  stringData:
    cinder-volume-nfs-secrets: |
      [nfs]
      nas_host=192.168.130.1
      nas_share_path=/var/nfs/cinder

  The metadata name is the name that you use when you include the secret in the cinderVolumes back-end configuration.
- Save the file.
- Update the control plane:

  $ oc apply -f <secret_file_name> -n openstack

  Replace <secret_file_name> with the name of the file that contains your Secret CR.
- Open your OpenStackControlPlane CR file, openstack_control_plane.yaml. Edit the CR file and add the configuration for the generic NFS back end. The following example demonstrates this configuration:
  apiVersion: core.openstack.org/v1beta1
  kind: OpenStackControlPlane
  metadata:
    name: openstack
  spec:
    cinder:
      template:
        cinderVolumes:
          nfs:
            networkAttachments:
            - storage
            customServiceConfig: |
              [nfs]
              volume_backend_name=nfs
              volume_driver=cinder.volume.drivers.nfs.NfsDriver
              nfs_snapshot_support=true
              nas_secure_file_operations=false
              nas_secure_file_permissions=false
            customServiceConfigSecrets:
            - cinder-volume-nfs-secrets

  Note: If you are configuring multiple generic NFS back ends, ensure each is in an individual configuration section so that one pod is devoted to each back end.
- Save the file.
- Update the control plane:

  $ oc apply -f openstack_control_plane.yaml -n openstack

- Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  $ oc get openstackcontrolplane -n openstack

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.
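After the control plane update completes, you can verify that the new back end is running by checking the Block Storage service list from the OpenStackClient pod. This is a minimal verification sketch; the exact host names in the output depend on your back-end keys:

$ oc rsh -n openstack openstackclient
$ openstack volume service list

The cinder-volume service for the nfs back end should be reported with a state of up.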
4.11. Configuring an NFS conversion directory
When the Block Storage service (cinder) performs image format conversion and space is limited, the conversion of large Image service (glance) images can completely fill the root disk of the node. You can use an external NFS share for the conversion to prevent the space on the node from being completely filled.
Procedure
- Open your OpenStackControlPlane CR file, openstack_control_plane.yaml. Edit the CR file and add the configuration for the directory for converting large Image service (glance) images. The following example demonstrates how to configure this conversion directory:
  apiVersion: core.openstack.org/v1beta1
  kind: OpenStackControlPlane
  spec:
    extraMounts:
    - extraVol:
      - propagation:
        - CinderVolume
        volumes:
        - name: cinder-conversion
          nfs:
            path: <nfs_share_path>
            server: <nfs_server>
        mounts:
        - name: cinder-conversion
          mountPath: /var/lib/cinder/conversion
          readOnly: true

  - Replace <nfs_share_path> with the path to the conversion directory.

    Note: The Block Storage volume service (cinder-volume) runs as the cinder user. The cinder user requires write permission for <nfs_share_path>. You can configure this by running the following command on the NFS server: $ chown 42407:42407 <nfs_share_path>.

  - Replace <nfs_server> with the IP address of the NFS server that hosts the conversion directory.
  Note: This example demonstrates how to create a common conversion directory that all the volume service pods use. You can also define a conversion directory for each volume service pod:
  - Define each conversion directory by using an extraMounts section, as demonstrated above, in the cinder section of the OpenStackControlPlane CR file.
  - Set the propagation value to the name of the specific Volume section instead of CinderVolume, as shown in the sketch after this list.
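  The following sketch shows what a per-back-end conversion directory might look like, assuming a back end key named nfs as in the earlier examples; the propagation name and the NFS details are placeholders that must match your own configuration:

  spec:
    extraMounts:
    - extraVol:
      - propagation:
        - nfs
        volumes:
        - name: cinder-conversion-nfs
          nfs:
            path: <nfs_share_path>
            server: <nfs_server>
        mounts:
        - name: cinder-conversion-nfs
          mountPath: /var/lib/cinder/conversion
          readOnly: true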
- Save the file.
- Update the control plane:

  $ oc apply -f openstack_control_plane.yaml -n openstack

- Wait until Red Hat OpenShift Container Platform (RHOCP) creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  $ oc get openstackcontrolplane -n openstack

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.
4.12. Configuring automatic database cleanup
The Block Storage service (cinder) performs a soft-deletion of database entries. This means that database entries are marked for deletion but are not actually deleted from the database. This allows for the auditing of deleted resources.
If they are not purged, these rows marked for deletion grow endlessly and consume resources. RHOSO automatically purges database entries that have been marked for deletion for a set number of days. By default, records that have been marked for deletion for 30 days are purged. You can configure a different record age and schedule for purge jobs.
Procedure
- Open your openstack_control_plane.yaml file to edit the OpenStackControlPlane CR. Add the dbPurge parameter to the cinder template to configure database cleanup for the Block Storage service. The following is an example of using the dbPurge parameter to configure the Block Storage service:

  apiVersion: core.openstack.org/v1beta1
  kind: OpenStackControlPlane
  metadata:
    name: openstack
  spec:
    cinder:
      template:
        dbPurge:
          age: 20
          schedule: 1 0 * * 0

  The age value is the number of days that records are marked for deletion before they are purged, and the schedule value defines when the purge job runs, in crontab format.

- Update the control plane:

  $ oc apply -f openstack_control_plane.yaml
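The purge runs as a scheduled job in the openstack namespace. To confirm that the schedule was applied, you can list the cron jobs; this is a minimal verification sketch and the exact job name depends on your deployment:

$ oc get cronjobs -n openstack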
4.13. Preserving jobs
The Block Storage service (cinder) requires maintenance operations that are run automatically. Some operations are one-off and some are periodic. These operations are run using OpenShift Jobs.
If jobs and their pods are automatically removed on completion, you cannot check the logs of these operations. However, you can use the preserveJobs field in your OpenStackControlPlane CR to stop the automatic removal of jobs and preserve them.
Example:
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      preserveJobs: true
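With preserved jobs, you can list the completed jobs and read their logs. This is a minimal sketch; <job_name> is a placeholder for one of the listed jobs:

$ oc get jobs -n openstack
$ oc logs job/<job_name> -n openstack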
4.14. Resolving hostname conflicts
Most storage back ends in the Block Storage service (cinder) require the hosts that connect to them to have unique hostnames. These hostnames are used to identify permissions and addresses, such as iSCSI initiator name, HBA WWN and WWPN.
Because you deploy in OpenShift, the hostnames that the Block Storage service volumes and backups report are not the OpenShift hostnames but the pod names instead.
These pod names are formed using a predetermined template:
- For volumes: cinder-volume-<backend_key>-0
- For backups: cinder-backup-<replica-number>
If you use the same storage back end in multiple deployments, the unique hostname requirement may not be honored, resulting in operational problems. To address this issue, you can request the installer to have unique pod names, and hence unique hostnames, by using the uniquePodNames
field.
When you set the uniquePodNames
field to true
, a short hash is added to the pod names, which addresses hostname conflicts.
Example:
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    uniquePodNames: true
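After you apply this setting, you can confirm that a hash is appended to the pod names by listing the Block Storage pods; a minimal verification sketch:

$ oc get pods -n openstack | grep cinder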
4.15. Using other container images
Red Hat OpenStack Services on OpenShift (RHOSO) services are deployed using a container image for a specific release and version. There are times when a deployment requires a container image other than the one produced for that release and version. The most common reasons for this are:
- Deploying a hotfix.
- Using a certified, vendor-provided container image.
The container images used by the installer are controlled through the OpenStackVersion
CR. An OpenStackVersion
CR is automatically created by the openstack
operator during the deployment of services. Alternatively, it can be created manually before the application of the OpenStackControlPlane
CR but after the openstack
operator is installed. This allows for the container image for any service and component to be individually designated.
The granularity of this designation depends on the service. For example, in the Block Storage service (cinder) all the cinderAPI
, cinderScheduler
, and cinderBackup
pods must have the same image. However, for the Volume service, the container image is defined for each of the cinderVolumes
.
The following example demonstrates an OpenStackControlPlane configuration with two back ends: one called ceph and one called custom-fc. The custom-fc back end requires a certified, vendor-provided container image. Additionally, the other service images must use a non-standard image from a hotfix.
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      cinderVolumes:
        ceph:
          networkAttachments:
          - storage
          < . . . >
        custom-fc:
          networkAttachments:
          - storage
The following example demonstrates an OpenStackVersion CR that sets up the container images for this scenario.
apiVersion: core.openstack.org/v1beta1
kind: OpenStackVersion
metadata:
  name: openstack
spec:
  customContainerImages:
    cinderAPIImages: <custom-api-image>
    cinderBackupImages: <custom-backup-image>
    cinderSchedulerImages: <custom-scheduler-image>
    cinderVolumeImages:
      custom-fc: <vendor-volume-volume-image>
- Replace <custom-api-image> with the name of the API service image to use.
- Replace <custom-backup-image> with the name of the Backup service image to use.
- Replace <custom-scheduler-image> with the name of the Scheduler service image to use.
- Replace <vendor-volume-volume-image> with the name of the certified, vendor-provided image to use.
The name attribute in your OpenStackVersion CR must match the same attribute in your OpenStackControlPlane CR.
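To review the container images that are currently in effect, you can inspect the OpenStackVersion resource; a minimal sketch, assuming the resource is named openstack as in the examples above:

$ oc get openstackversion openstack -n openstack -o yaml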
Chapter 5. Configuring the Block Storage backup service
The Block Storage service (cinder) provides an optional backup service that you can deploy in your Red Hat OpenStack Services on OpenShift (RHOSO) environment.
Users can use the Block Storage backup service to create and restore full or incremental backups of their Block Storage volumes.
A volume backup is a persistent copy of the contents of a Block Storage volume that is saved to a backup repository.
You can configure the backup service under the cinderBackup section of the cinder template in your OpenStackControlPlane CR.
Prerequisites
- You have the oc command line tool installed on your workstation.
- You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.
- You have enabled the backup service for the Block Storage service in your OpenStackControlPlane CR.
5.1. Storage back ends for backups
You can use the following storage back ends for Block Storage backups:
- Red Hat Ceph Storage RBD is the default back end when you use Red Hat Ceph Storage. For more information, see Configuring the control plane to use the Red Hat Ceph Storage cluster.
- Object Storage service (swift)
- NFS
- S3
For information about other back end options for backups, see OSP18 Cinder Alternative Storage.
You can use the backup service to back up volumes that are on any back end that the Block Storage service (cinder) supports, regardless of which back end you choose to use for backups. You can only configure one back end for backups, whereas you can configure multiple back ends for volumes.
Back ends for backups do not have transport protocol requirements for the RHOCP node. However, the backup pods need to connect to the volumes, and the back ends for volumes have transport protocol requirements.
5.2. Setting the number of replicas for backups
You can run multiple instances of the Block Storage backup component in active-active mode by setting replicas
to a value greater than 1
. The default value is 0
.
Procedure
- Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the cinder template to set the number of replicas for the cinderBackup service:

  apiVersion: core.openstack.org/v1beta1
  kind: OpenStackControlPlane
  metadata:
    name: openstack
  spec:
    …
    cinder:
      template:
        cinderBackup:
          replicas: <number_of_replicas>
        ...

  Replace <number_of_replicas> with a value greater than 1.
- Update the control plane:

  $ oc apply -f openstack_control_plane.yaml -n openstack

- Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  $ oc get openstackcontrolplane -n openstack

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.
5.3. Backup performance considerations
Some features of the Block Storage backup service, such as incremental backups, the creation of backups from snapshots, and data compression, can reduce the performance of backup operations.
By only capturing the periodic changes to volumes, incremental backup operations can minimize resource usage. However, incremental backup operations have a lower performance than full backup operations. When you create an incremental backup, all of the data in the volume must first be read and compared with the data in both the full backup and each subsequent incremental backup.
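For example, a user creates a full backup first and then incremental backups of the same volume. This is a minimal sketch run from the OpenStackClient pod; the volume and backup names are examples:

$ openstack volume backup create --name full-backup-1 my-volume
$ openstack volume backup create --name incr-backup-1 --incremental my-volume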
Some back ends for volumes support the creation of a backup from a snapshot by directly attaching the snapshot to the backup host, which is faster than cloning the snapshot into a volume. If the back end you use for volumes does not support this feature, you can create a volume from a snapshot and use the volume as backup. However, the extra step of creating the volume from a snapshot can affect the performance of the backup operation.
You can configure the Block Storage backup service to enable or disable data compression of the storage back end for your backups. If you enable data compression, backup operations require additional CPU power, but they use less network bandwidth and storage space overall.
You cannot use data compression with a Red Hat Ceph Storage back end.
5.4. Setting options for backups
The cinderBackup
parameter inherits the configuration from the top level customServiceConfig
section of the cinder
template in your OpenStackControlPlane
CR. However, the cinderBackup
parameter also has its own customServiceConfig
section.
The following table describes configuration options that apply to all back-end drivers.
Option | Description | Value type | Default value
---|---|---|---
 | When set to ... | Boolean |
backup_service_inithost_offload | Offload pending backup delete during backup service startup. If set to false, the backup service remains down until all pending backups are deleted. | Boolean |
backup_availability_zone | Availability zone of the backup service. | String |
backup_workers | Number of processes to launch in the backup pod. Improves performance with concurrent backups. | Integer |
backup_max_operations | Maximum number of concurrent memory, and possibly CPU, heavy operations (backup and restore) that can be executed on each pod. The number limits all workers within a pod but not across pods. A value of 0 means unlimited. | Integer |
backup_native_threads_pool_size | Size of the native threads pool used for backup data-related operations. Most backup drivers rely heavily on this option, and you can decrease the value for specific drivers that do not rely on it. | Integer |
Procedure
- Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the cinder template to set configuration options. In this example, you enable debug logs, double the number of processes, and increase the maximum number of operations per pod to 20.

  Example:

  apiVersion: core.openstack.org/v1beta1
  kind: OpenStackControlPlane
  metadata:
    name: openstack
  spec:
    …
    cinder:
      template:
        customServiceConfig: |
          [DEFAULT]
          debug = true
        cinderBackup:
          customServiceConfig: |
            [DEFAULT]
            backup_workers = 2
            backup_max_operations = 20
        ...

- Update the control plane:
  $ oc apply -f openstack_control_plane.yaml -n openstack

- Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  $ oc get openstackcontrolplane -n openstack

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.
5.5. Enabling data compression
Backups are compressed by default with the zlib
compression algorithm.
Data compression requires additional CPU power but uses less network bandwidth and storage space.
You can change the data compression algorithm of your backups or disable data compression by using the backup_compression_algorithm
parameter in your OpenStackControlPlane
CR.
The following options are available for data compression.
Option | Description
---|---
none | Do not use compression.
zlib | Use the Deflate compression algorithm.
bz2 | Use Burrows-Wheeler transform compression.
zstd | Use the Zstandard compression algorithm.
You cannot specify the data compression algorithm for the Red Hat Ceph Storage back end driver.
Procedure
- Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameter to the cinder template to enable data compression. In this example, you enable data compression with an Object Storage service (swift) back end:

  apiVersion: core.openstack.org/v1beta1
  kind: OpenStackControlPlane
  spec:
    cinder:
      template:
        cinderBackup:
          customServiceConfig: |
            [DEFAULT]
            backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
            backup_compression_algorithm = zstd
          networkAttachments:
          - storage

- Update the control plane:
  $ oc apply -f openstack_control_plane.yaml -n openstack

- Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  $ oc get openstackcontrolplane -n openstack

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.
5.6. Configuring a Ceph RBD back end for Block Storage backups
You can configure the Block Storage service (cinder) backup service with Red Hat Ceph Storage RADOS Block Device (RBD) as the storage back end.
If you use Ceph RBD as the back end for backups together with Ceph RBD volumes, the performance for incremental backups is efficient.
For more information about Ceph RBD, see Configuring the control plane to use the Red Hat Ceph Storage cluster.
Prerequisites
- You have planned networking for storage to ensure connectivity between the storage back end, the control plane, and the Compute nodes on the data plane. For more information, see Storage networks in Planning your deployment and Preparing networks for Red Hat OpenStack Services on OpenShift in Deploying Red Hat OpenStack Services on OpenShift.
Procedure
- Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the cinder template to configure Ceph RBD as the back end for backups:

  apiVersion: core.openstack.org/v1beta1
  kind: OpenStackControlPlane
  spec:
    cinder:
      template:
        cinderBackup:
          customServiceConfig: |
            [DEFAULT]
            backup_driver = cinder.backup.drivers.ceph.CephBackupDriver
            backup_ceph_pool = backups
            backup_ceph_user = openstack
          networkAttachments:
          - storage
          replicas: 1

- Update the control plane:
  $ oc apply -f openstack_control_plane.yaml -n openstack

- Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  $ oc get openstackcontrolplane -n openstack

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.
5.7. Configuring an Object Storage service (swift) back end for backups
You can configure the Block Storage service (cinder) backup service with the Object Storage service (swift) as the storage back end.
Prerequisites
- You have planned networking for storage to ensure connectivity between the storage back end, the control plane, and the Compute nodes on the data plane. For more information, see Storage networks in Planning your deployment and Preparing networks for Red Hat OpenStack Services on OpenShift in Deploying Red Hat OpenStack Services on OpenShift.
- Verify that the Object Storage service is active in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment.
The default container for Object Storage service back ends is volumebackups
. You can change the default container by using the backup_swift_container
configuration option.
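For example, to store backups in a container other than the default, you might set backup_swift_container in the customServiceConfig section for the backup service. This is a minimal sketch and the container name is an example:

cinderBackup:
  customServiceConfig: |
    [DEFAULT]
    backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
    backup_swift_container = my_volume_backups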
Procedure
- Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the cinder template to configure the Object Storage service as the back end for backups:

  apiVersion: core.openstack.org/v1beta1
  kind: OpenStackControlPlane
  spec:
    cinder:
      template:
        cinderBackup:
          customServiceConfig: |
            [DEFAULT]
            backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
          networkAttachments:
          - storage
          replicas: 1

- Update the control plane:
  $ oc apply -f openstack_control_plane.yaml -n openstack

- Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  $ oc get openstackcontrolplane -n openstack

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.
5.8. Configuring an NFS back end for backups
You can configure the Block Storage service (cinder) backup service with NFS as the storage back end.
Prerequisites
- You have planned networking for storage to ensure connectivity between the storage back end, the control plane, and the Compute nodes on the data plane. For more information, see Storage networks in Planning your deployment and Preparing networks for Red Hat OpenStack Services on OpenShift in Deploying Red Hat OpenStack Services on OpenShift.
Procedure
- Create a Secret CR file, for example, cinder-backup-nfs-secrets.yaml, and add the following configuration for your NFS share:

  apiVersion: v1
  kind: Secret
  metadata:
    labels:
      service: cinder
      component: cinder-backup
    name: cinder-backup-nfs-secrets
  type: Opaque
  stringData:
    nfs-secrets.conf: |
      [DEFAULT]
      backup_share = <192.168.1.2:/Backups>
      backup_mount_options = <optional>

  - Replace <192.168.1.2:/Backups> with the IP address and export path of your NFS share.
  - Replace <optional> with the mount options for your NFS share.
- Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the cinder template to add the secret for the NFS share and configure NFS as the back end for backups:

  apiVersion: core.openstack.org/v1beta1
  kind: OpenStackControlPlane
  spec:
    cinder:
      template:
        cinderBackup:
          customServiceConfig: |
            [DEFAULT]
            backup_driver = cinder.backup.drivers.nfs.NFSBackupDriver
          customServiceConfigSecrets:
          - cinder-backup-nfs-secrets
          networkAttachments:
          - storage
          replicas: 1

- Update the control plane:
  $ oc apply -f openstack_control_plane.yaml -n openstack

- Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  $ oc get openstackcontrolplane -n openstack

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.
5.9. Configuring an S3 back end for backups
You can configure the Block Storage service (cinder) backup service with S3 as the storage back end.
Prerequisites
- You have planned networking for storage to ensure connectivity between the storage back end, the control plane, and the Compute nodes on the data plane. For more information, see Storage networks in Planning your deployment and Preparing networks for Red Hat OpenStack Services on OpenShift in Deploying Red Hat OpenStack Services on OpenShift.
Procedure
- Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the cinder template to configure S3 as the back end for backups:

  apiVersion: core.openstack.org/v1beta1
  kind: OpenStackControlPlane
  spec:
    cinder:
      template:
        cinderBackup:
          customServiceConfig: |
            [DEFAULT]
            backup_driver = cinder.backup.drivers.s3.S3BackupDriver
            backup_s3_endpoint_url = <user supplied>
            backup_s3_store_access_key = <user supplied>
            backup_s3_store_secret_key = <user supplied>
            backup_s3_store_bucket = volumebackups
            backup_s3_ca_cert_file = /etc/pki/tls/certs/ca-bundle.crt
          networkAttachments:
          - storage

- Update the control plane:
  $ oc apply -f openstack_control_plane.yaml -n openstack

- Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  $ oc get openstackcontrolplane -n openstack

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.
5.10. Block Storage volume backup metadata
When you create a backup of a Block Storage volume, the metadata for this backup is stored in the Block Storage service database. The Block Storage backup service uses this metadata when it restores the volume from the backup.
To ensure that a backup survives a catastrophic loss of the Block Storage service database, you can manually export and store the metadata of this backup. After a catastrophic database loss, you need to create a new Block Storage database and then manually re-import this backup metadata into it.
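For example, you can export the metadata of a backup and later re-import it from the OpenStackClient pod. This is a minimal sketch; the backup ID, backup service, and backup metadata values are placeholders, and the last two come from the output of the export command:

$ openstack volume backup record export <backup_id>
$ openstack volume backup record import <backup_service> <backup_metadata>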
Chapter 6. Configuring the Image service (glance)
The Image service (glance) provides discovery, registration, and delivery services for disk and server images. It provides the ability to copy or store a snapshot of a server image. You can use stored images as templates to commission new servers quickly and more consistently than installing a server operating system and individually configuring services.
You can configure the following back ends (stores) for the Image service:
- RADOS Block Device (RBD) is the default back end when you use Red Hat Ceph Storage. For more information, see Configuring the control plane to use the Red Hat Ceph Storage cluster.
- Block Storage (cinder).
- Object Storage (swift).
- S3.
- NFS.
- RBD multistore. You can use multiple stores with distributed edge architecture so that you can have an image pool at every edge site.
Prerequisites
- You have the oc command line tool installed on your workstation.
- You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.
6.1. Configuring a Block Storage back end for the Image service
You can configure the Image service (glance) with the Block Storage service (cinder) as the storage back end.
Prerequisites
- You have planned networking for storage to ensure connectivity between the storage back end, the control plane, and the Compute nodes on the data plane. For more information, see Storage networks in Planning your deployment and Preparing networks for Red Hat OpenStack Services on OpenShift in Deploying Red Hat OpenStack Services on OpenShift.
- Ensure that placement, network, and transport protocol requirements are met. For example, if your Block Storage service back end is Fibre Channel (FC), the nodes on which the Image service API (glanceAPI) is running must have a host bus adapter (HBA). For FC, iSCSI, and NVMe over Fabrics (NVMe-oF), configure the nodes to support the protocol and use multipath. For more information, see Configuring transport protocols.
Procedure
- Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the glance template to configure the Block Storage service as the back end:

  apiVersion: core.openstack.org/v1beta1
  kind: OpenStackControlPlane
  spec:
    ...
    glance:
      template:
        glanceAPIs:
          default:
            replicas: 3 # Configure back end; set to 3 when deploying service
            ...
        customServiceConfig: |
          [DEFAULT]
          enabled_backends = default_backend:cinder
          [glance_store]
          default_backend = default_backend
          [default_backend]
          description = Default cinder backend
          cinder_store_auth_address = {{ .KeystoneInternalURL }}
          cinder_store_user_name = {{ .ServiceUser }}
          cinder_store_password = {{ .ServicePassword }}
          cinder_store_project_name = service
          cinder_catalog_info = volumev3::internalURL
          cinder_use_multipath = true
        ...

  Set replicas to 3 for high availability across APIs.
- Update the control plane:
  $ oc apply -f openstack_control_plane.yaml -n openstack

- Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  $ oc get openstackcontrolplane -n openstack

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.
Additional resources
6.1.1. Enabling the creation of multiple instances or volumes from a volume-backed image
When using the Block Storage service (cinder) as the back end for the Image service (glance), each image is stored as a volume (image volume) ideally in the Block Storage service project owned by the glance user.
When a user wants to create multiple instances or volumes from a volume-backed image, the Image service host must attach to the image volume to copy the data multiple times. This causes performance issues, and some of these instances or volumes are not created because, by default, Block Storage volumes cannot be attached multiple times to the same host. However, most Block Storage back ends support the volume multi-attach property, which enables a volume to be attached multiple times to the same host. Therefore, you can prevent these performance issues by creating a Block Storage volume type for the Image service back end that enables the multi-attach property, and then configuring the Image service to use this multi-attach volume type.
By default, only the Block Storage project administrator can create volume types.
Procedure
- Access the remote shell for the OpenStackClient pod from your workstation:

  $ oc rsh -n openstack openstackclient

- Create a Block Storage volume type for the Image service back end that enables the multi-attach property, as follows:

  $ openstack volume type create glance-multiattach
  $ openstack volume type set --property multiattach="<is> True" glance-multiattach
  If you do not specify a back end for this volume type, the Block Storage scheduler service determines which back end to use when creating each image volume, so these volumes might be saved on different back ends. You can specify the name of the back end by adding the volume_backend_name property to this volume type. You might need to ask your Block Storage administrator for the correct volume_backend_name for your multi-attach volume type. This example uses iscsi as the back-end name.

  $ openstack volume type set glance-multiattach --property volume_backend_name=iscsi

- Exit the openstackclient pod:

  $ exit

- Open your OpenStackControlPlane CR file, openstack_control_plane.yaml. In the glance template, add the following parameter to the end of the customServiceConfig, [default_backend] section to configure the Image service to use the Block Storage multi-attach volume type:

  apiVersion: core.openstack.org/v1beta1
  kind: OpenStackControlPlane
  spec:
    ...
    glance:
      template:
        ...
        customServiceConfig: |
          ...
          [default_backend]
          ...
          cinder_volume_type = glance-multiattach
        ...

- Update the control plane:
  $ oc apply -f openstack_control_plane.yaml -n openstack

- Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  $ oc get openstackcontrolplane -n openstack

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.
Additional resources
6.1.2. Parameters for configuring the Block Storage back end
You can add the following parameters to the end of the customServiceConfig
, [default_backend]
section of the glance
template in your OpenStackControlPlane
CR file.
Parameter = Default value | Type | Description of use |
---|---|---
 | boolean value | Set to ...
 | boolean value | Set to ...
 | string value | Specify a string representing the absolute path of the mount point, the directory where the Image service mounts the NFS share. Note: This parameter is only applicable when using an NFS Block Storage back end for the Image service.
 | boolean value | Set to ... The Block Storage service creates an initial 1 GB volume and extends the volume size in 1 GB increments until it contains the data of the entire image. When this parameter is either not added or set to ... Note: This parameter requires your Block Storage back end to support the extension of attached (in-use) volumes. See your back-end driver documentation for information on which features are supported.
 | string value | Specify the name of the Block Storage volume type that can be optimized for creating volumes for images. For example, you can create a volume type that enables the creation of multiple instances or volumes from a volume-backed image. For more information, see Creating a multi-attach volume type. When this parameter is not used, volumes are created by using the default Block Storage volume type.
6.2. Configuring an Object Storage back end
You can configure the Image service (glance) with the Object Storage service (swift) as the storage back end.
Prerequisites
- You have planned networking for storage to ensure connectivity between the storage back end, the control plane, and the Compute nodes on the data plane. For more information, see Storage networks in Planning your deployment and Preparing networks for Red Hat OpenStack Services on OpenShift in Deploying Red Hat OpenStack Services on OpenShift.
Procedure
- Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the glance template to configure the Object Storage service as the back end:

  apiVersion: core.openstack.org/v1beta1
  kind: OpenStackControlPlane
  spec:
    ...
    glance:
      template:
        glanceAPIs:
          default:
            replicas: 3 # Configure back end; set to 3 when deploying service
            ...
        customServiceConfig: |
          [DEFAULT]
          enabled_backends = default_backend:swift
          [glance_store]
          default_backend = default_backend
          [default_backend]
          swift_store_create_container_on_put = True
          swift_store_auth_version = 3
          swift_store_auth_address = {{ .KeystoneInternalURL }}
          swift_store_key = {{ .ServicePassword }}
          swift_store_user = service:glance
          swift_store_endpoint_type = internalURL
        ...

  Set replicas to 3 for high availability across APIs.
- Update the control plane:

  $ oc apply -f openstack_control_plane.yaml -n openstack

- Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  $ oc get openstackcontrolplane -n openstack

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.
6.3. Configuring an S3 back end
To configure the Image service (glance) with S3 as the storage back end, you require the following details:
- S3 access key
- S3 secret key
- S3 endpoint
For security, these details are stored in a Kubernetes secret.
Prerequisites
- You have planned networking for storage to ensure connectivity between the storage back end, the control plane, and the Compute nodes on the data plane. For more information, see Storage networks in Planning your deployment and Preparing networks for Red Hat OpenStack Services on OpenShift in Deploying Red Hat OpenStack Services on OpenShift.
Procedure
- Create a configuration file, for example, glance-s3.conf, where you can store the S3 configuration details.
- Generate the secret and access keys for your S3 storage.

  If your S3 storage is provisioned by the Ceph Object Gateway (RGW), run the following command to generate the secret and access keys:

  $ radosgw-admin user create --uid="<user_1>" \
    --display-name="<Jane Doe>"

  - Replace <user_1> with the user ID.
  - Replace <Jane Doe> with a display name for the user.
  If your S3 storage is provisioned by the Object Storage service (swift), run the following command to generate the secret and access keys:

  $ openstack credential create --type ec2 \
    --project admin admin \
    '{"access": "<access_key>", "secret": "<secret_key>"}'

  - Replace <access_key> with the access key to use for the credential.
  - Replace <secret_key> with the secret key to use for the credential.
- Add the S3 configuration details to your glance-s3.conf configuration file:

  [default_backend]
  s3_store_host = <_s3_endpoint_>
  s3_store_access_key = <_s3_access_key_>
  s3_store_secret_key = <_s3_secret_key_>
  s3_store_bucket = <_s3_bucket_>

  - Replace <_s3_endpoint_> with the host where the S3 server is listening. This option can contain a DNS name, for example, _s3.amazonaws.com_, or an IP address.
  - Replace <_s3_access_key_> and <_s3_secret_key_> with the data generated by the S3 back end.
  - Replace <_s3_bucket_> with the name of the bucket where you want to store images in the S3 back end. If you set s3_store_create_bucket_on_put to True in your OpenStackControlPlane CR file, the bucket is created automatically if it does not already exist.
- Create a secret from the glance-s3.conf file:

  $ oc create secret generic glances3 \
    --from-file glance-s3.conf

- Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the glance template to configure S3 as the back end:

  apiVersion: core.openstack.org/v1beta1
  kind: OpenStackControlPlane
  spec:
    ...
    glance:
      template:
        ...
        customServiceConfig: |
          [DEFAULT]
          enabled_backends = default_backend:s3
          [glance_store]
          default_backend = default_backend
          [default_backend]
          s3_store_create_bucket_on_put = True
          s3_store_bucket_url_format = "path"
          s3_store_cacert = "/etc/pki/tls/certs/ca-bundle.crt"
        glanceAPIs:
          default:
            customServiceConfigSecrets:
            - glances3
        ...

  Optional: If your S3 storage is accessed by HTTPS, you must set the s3_store_cacert field and point it to the ca-bundle.crt path. The OpenStack control plane is deployed by default with TLS enabled, and a CA certificate is mounted to the pod in /etc/pki/tls/certs/ca-bundle.crt.
- Update the control plane:

  $ oc apply -f openstack_control_plane.yaml -n openstack

- Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  $ oc get openstackcontrolplane -n openstack

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.
6.4. Configuring an NFS back end
You can configure the Image service (glance) with NFS as the storage back end. NFS is not native to the Image service. When you mount an NFS share to use for the Image service, the Image service writes data to the file system but does not validate the availability of the NFS share.
If you use NFS as a back end for the Image service, refer to the following best practices to mitigate risk:
- Use a reliable production-grade NFS back end.
- Make sure the network is available to the Red Hat OpenStack Services on OpenShift (RHOSO) control plane where the Image service is deployed, and that the Image service has a NetworkAttachmentDefinition custom resource (CR) that points to the network. This configuration ensures that the Image service pods can reach the NFS server.
- Set export permissions. Write permissions must be present in the shared file system that you use as a store.
Limitations
In Red Hat OpenStack Services on OpenShift (RHOSO), you cannot set client-side NFS mount options in a pod spec. You can set NFS mount options in one of the following ways:
- Set server-side mount options.
- Use /etc/nfsmount.conf.
- Mount NFS volumes by using PersistentVolumes, which have mount options, as shown in the sketch after this list.
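For example, a minimal sketch of a PersistentVolume that sets client-side NFS mount options; the name, capacity, mount options, server, and path are placeholders:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: glance-nfs-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteMany
  mountOptions:
  - nfsvers=4.1
  nfs:
    path: <nfs_export_path>
    server: <nfs_ip_address>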
Procedure
- Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the extraMounts parameter in the spec section to add the export path and IP address of the NFS share. The path is mapped to /var/lib/glance/images, where the Image service API (glanceAPI) stores and retrieves images:

  apiVersion: core.openstack.org/v1beta1
  kind: OpenStackControlPlane
  metadata:
    name: openstack
  ...
  spec:
    extraMounts:
    - extraVol:
      - extraVolType: Nfs
        mounts:
        - mountPath: /var/lib/glance/images
          name: nfs
        propagation:
        - Glance
        volumes:
        - name: nfs
          nfs:
            path: <nfs_export_path>
            server: <nfs_ip_address>
      name: r1
      region: r1
  ...

  - Replace <nfs_export_path> with the export path of your NFS share.
  - Replace <nfs_ip_address> with the IP address of your NFS share. This IP address must be part of the overlay network that is reachable by the Image service.
- Add the following parameters to the glance template to configure NFS as the back end:

  ...
  spec:
    extraMounts:
    ...
    glance:
      template:
        glanceAPIs:
          default:
            type: single
            replicas: 3 # Configure back end; set to 3 when deploying service
            ...
        customServiceConfig: |
          [DEFAULT]
          enabled_backends = default_backend:file
          [glance_store]
          default_backend = default_backend
          [default_backend]
          filesystem_store_datadir = /var/lib/glance/images
        databaseInstance: openstack
  ...

  Set replicas to 3 for high availability across APIs.

  Note: When you configure an NFS back end, you must set the type to single. By default, the Image service has a split deployment type for an external API service, which is accessible through the public and administrator endpoints for the Identity service (keystone), and an internal API service, which is accessible only through the internal endpoint for the Identity service. The split deployment type is invalid for a file back end because different pods access the same file share.
- Update the control plane:

  $ oc apply -f openstack_control_plane.yaml -n openstack

- Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

  $ oc get openstackcontrolplane -n openstack

  The OpenStackControlPlane resources are created when the status is "Setup complete".

  Tip: Append the -w option to the end of the get command to track deployment progress.
6.5. Configuring multistore for a single Image service API instance
You can configure the Image service (glance) with multiple storage back ends.
To configure multiple back ends for a single Image service API (glanceAPI
) instance, you set the enabled_backends
parameter with key-value pairs. The key is the identifier for the store and the value is the type of store. The following values are valid:
- file
- http
- rbd
- swift
- cinder
- s3
Prerequisites
- You have planned networking for storage to ensure connectivity between the storage back ends, the control plane, and the Compute nodes on the data plane. For more information, see Storage networks in Planning your deployment and Preparing networks for Red Hat OpenStack Services on OpenShift in Deploying Red Hat OpenStack Services on OpenShift.
Procedure
- Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the parameters to the glance template to configure the back ends. In the following example, there are two Ceph RBD stores and one Object Storage service (swift) store:

  ...
  spec:
    glance:
      template:
        customServiceConfig: |
          [DEFAULT]
          debug=True
          enabled_backends = ceph-0:rbd,ceph-1:rbd,swift-0:swift
  ...

- Specify the back end to use as the default back end. In the following example, the default back end is ceph-1:

  ...
        customServiceConfig: |
          [DEFAULT]
          debug=True
          enabled_backends = ceph-0:rbd,ceph-1:rbd,swift-0:swift
          [glance_store]
          default_backend = ceph-1
  ...

- Add the configuration for each back end type you want to use:
  - Add the configuration for the first Ceph RBD store, ceph-0:

    ...
        customServiceConfig: |
          [DEFAULT]
          ...
          [ceph-0]
          rbd_store_ceph_conf = /etc/ceph/ceph-0.conf
          store_description = "RBD backend"
          rbd_store_pool = images
          rbd_store_user = openstack
    ...

  - Add the configuration for the second Ceph RBD store, ceph-1:

    ...
        customServiceConfig: |
          [DEFAULT]
          ...
          [ceph-0]
          ...
          [ceph-1]
          rbd_store_ceph_conf = /etc/ceph/ceph-1.conf
          store_description = "RBD backend 1"
          rbd_store_pool = images
          rbd_store_user = openstack
    ...

  - Add the configuration for the Object Storage service store, swift-0:

    ...
        customServiceConfig: |
          [DEFAULT]
          ...
          [ceph-0]
          ...
          [ceph-1]
          ...
          [swift-0]
          swift_store_create_container_on_put = True
          swift_store_auth_version = 3
          swift_store_auth_address = {{ .KeystoneInternalURL }}
          swift_store_key = {{ .ServicePassword }}
          swift_store_user = service:glance
          swift_store_endpoint_type = internalURL
    ...
Update the control plane:

$ oc apply -f openstack_control_plane.yaml -n openstack

Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

$ oc get openstackcontrolplane -n openstack

The OpenStackControlPlane resources are created when the status is "Setup complete".

Tip: Append the -w option to the end of the get command to track deployment progress.
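After the control plane reports "Setup complete", you can optionally confirm that the Image service exposes all of the configured stores. This is a sketch that assumes the glance command-line client is installed and your cloud credentials are loaded; output formatting can vary by client version.

$ glance stores-info

The output lists each store identifier, for example ceph-0, ceph-1, and swift-0, and indicates which store is the default.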
6.6. Configuring multiple Image service API instances
You can deploy multiple Image service API (glanceAPI) instances to serve different workloads, for example in an edge deployment. When you deploy multiple glanceAPI instances, they are orchestrated by the same glance-operator, but you can connect them to a single back end or to different back ends.
Multiple glanceAPI instances inherit the same configuration from the main customServiceConfig parameter in your OpenStackControlPlane CR file. You use the extraMounts parameter to connect each instance to a back end. For example, you can connect each instance to a single Red Hat Ceph Storage cluster or to different Red Hat Ceph Storage clusters.
You can also deploy multiple glanceAPI instances in an availability zone (AZ) to serve different workloads in that AZ.
You can only register one glanceAPI instance as an endpoint for OpenStack CLI operations in the Keystone catalog, but you can change the default endpoint by updating the keystoneEndpoint parameter in your OpenStackControlPlane CR file.
For information about adding and decommissioning glanceAPIs, see Performing operations with the Image service (glance).
Procedure
Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the glanceAPIs parameter to the glance template to configure multiple glanceAPI instances. In the following example, you create three glanceAPI instances that are named api0, api1, and api2:

...
spec:
  glance:
    template:
      customServiceConfig: |
        [DEFAULT]
        enabled_backends = default_backend:rbd
        [glance_store]
        default_backend = default_backend
        [default_backend]
        rbd_store_ceph_conf = /etc/ceph/ceph.conf
        store_description = "RBD backend"
        rbd_store_pool = images
        rbd_store_user = openstack
      databaseInstance: openstack
      databaseUser: glance
      keystoneEndpoint: api0
      glanceAPIs:
        api0:
          replicas: 1
        api1:
          replicas: 1
        api2:
          replicas: 1
...
- api0 is registered in the Keystone catalog and is the default endpoint for OpenStack CLI operations.
- api1 and api2 are not default endpoints, but they are active APIs that users can use for image uploads by specifying the --os-image-url parameter when they upload an image, as shown in the sketch after this list.
- You can update the keystoneEndpoint parameter to change the default endpoint in the Keystone catalog.
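The following is a minimal sketch of an upload that targets api1 instead of the default endpoint. It assumes the glance command-line client and a hypothetical endpoint URL; replace the URL with the public endpoint of the glanceAPI instance that you want to use.

$ glance --os-image-url https://glance-api1-public-openstack.apps.example.com image-create --name my-image --disk-format qcow2 --container-format bare --file ./my-image.qcow2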
Add the extraMounts parameter to connect the three glanceAPI instances to different back ends. In the following example, you connect api0, api1, and api2 to three different Ceph Storage clusters that are named ceph0, ceph1, and ceph2:

spec:
  glance:
    template:
      customServiceConfig: |
        [DEFAULT]
        ...
      extraMounts:
        - name: api0
          region: r1
          extraVol:
            - propagation:
                - api0
              volumes:
                - name: ceph0
                  secret:
                    secretName: <secret_name>
              mounts:
                - name: ceph0
                  mountPath: "/etc/ceph"
                  readOnly: true
        - name: api1
          region: r1
          extraVol:
            - propagation:
                - api1
              volumes:
                - name: ceph1
                  secret:
                    secretName: <secret_name>
              mounts:
                - name: ceph1
                  mountPath: "/etc/ceph"
                  readOnly: true
        - name: api2
          region: r1
          extraVol:
            - propagation:
                - api2
              volumes:
                - name: ceph2
                  secret:
                    secretName: <secret_name>
              mounts:
                - name: ceph2
                  mountPath: "/etc/ceph"
                  readOnly: true
...
- Replace <secret_name> with the name of the secret associated with the Ceph Storage cluster that you are using as the back end for the specific glanceAPI, for example, ceph-conf-files-0 for the ceph0 cluster. If you have not created these secrets yet, see the sketch that follows.
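The following sketch shows one way to create such a secret from the Ceph configuration and keyring files for a cluster. The secret name, file names, and paths are assumptions; match them to your environment.

$ oc create secret generic ceph-conf-files-0 --from-file=ceph.conf=/tmp/ceph0/ceph.conf --from-file=ceph.client.openstack.keyring=/tmp/ceph0/ceph.client.openstack.keyring -n openstack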
Update the control plane:

$ oc apply -f openstack_control_plane.yaml -n openstack

Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

$ oc get openstackcontrolplane -n openstack

The OpenStackControlPlane resources are created when the status is "Setup complete".

Tip: Append the -w option to the end of the get command to track deployment progress.
6.7. Split and single Image service API layouts
By default, the Image service (glance) has a split deployment type:
- An external API service, which is accessible through the public and administrator endpoints for the Identity service (keystone)
- An internal API service, which is accessible only through the internal endpoint for the Identity service
The split deployment type is invalid for an NFS or file back end because different pods access the same file share. When you configure an NFS or file back end, you must set the type to single in your OpenStackControlPlane CR.
Split layout example
In the following example of a split layout type in an edge deployment, two glanceAPI instances are deployed in an availability zone (AZ) to serve different workloads in that AZ.
...
spec:
glance:
template:
customServiceConfig: |
[DEFAULT]
...
keystoneEndpoint: api0
glanceAPIs:
api0:
customServiceConfig: |
[DEFAULT]
enabled_backends = default_backend:rbd
replicas: 1
type: split
api1:
customServiceConfig: |
[DEFAULT]
enabled_backends = default_backend:swift
replicas: 1
type: split
...
Single layout example
In the following example of a single layout type in an NFS back-end configuration, different pods access the same file share:
...
spec:
extraMounts:
...
glance:
template:
glanceAPIs:
default:
type: single
replicas: 3 # Configure back end; set to 3 when deploying service
...
customServiceConfig: |
[DEFAULT]
enabled_backends = default_backend:file
[glance_store]
default_backend = default_backend
[default_backend]
filesystem_store_datadir = /var/lib/glance/images
databaseInstance: openstack
glanceAPIs:
...
- Set replicas to 3 for high availability across APIs.
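The extraMounts section is elided in the single layout example above. The following is a hypothetical sketch of how an NFS export might be mounted at the Image service data directory; the server address, export path, volume name, and propagation target are assumptions, so adapt them to your deployment and verify them against your glance-operator version.

spec:
  extraMounts:
    - name: r1
      region: r1
      extraVol:
        - propagation:
            - Glance          # assumed propagation target for the default glanceAPI
          volumes:
            - name: nfs
              nfs:
                server: 192.168.122.11   # assumed NFS server address
                path: /var/nfs/glance    # assumed export path
          mounts:
            - name: nfs
              mountPath: /var/lib/glance/images
              readOnly: false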
6.8. Configuring multistore with edge architecture
When you use multiple stores with distributed edge architecture, you can have a Ceph RADOS Block Device (RBD) image pool at every edge site. You can copy images between the central site, which is also known as the hub site, and the edge sites.
The image metadata contains the location of each copy. For example, an image present on two edge sites is exposed as a single UUID with three locations: the central site plus the two edge sites. This means you can have copies of image data that share a single UUID on many stores.
With an RBD image pool at every edge site, you can launch instances quickly by using Ceph RBD copy-on-write (COW) and snapshot layering technology. This means that you can launch instances from volumes and have live migration. For more information about layering with Ceph RBD, see Ceph block device layering in the Red Hat Ceph Storage Block Device Guide.
When you launch an instance at an edge site, the required image is copied to the local Image service (glance) store automatically. However, you can copy images in advance from the central Image service store to edge sites to save time during instance launch.
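For example, you can trigger the copy with the copy-image import method. This is a sketch that assumes the glance command-line client, an existing image UUID, and an edge store named edge-0; replace both values for your environment.

$ glance image-import <image_id> --import-method copy-image --stores edge-0

You can then run glance image-show <image_id> to confirm that the image metadata lists the additional store.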
Refer to the following requirements to use images with edge sites:
- A copy of each image must exist in the Image service at the central location.
- You must copy images from an edge site to the central location before you can copy them to other edge sites.
- You must use raw images when deploying a Distributed Compute Node (DCN) architecture with Red Hat Ceph Storage.
- RBD must be the storage driver for the Image, Compute, and Block Storage services.
For more information about using images with DCN, see Deploying a Distributed Compute Node (DCN) architecture.
Chapter 7. Configuring the Object Storage service (swift)
You can configure the Object Storage service (swift) to use PersistentVolumes (PVs) on OpenShift nodes or disks on external data plane nodes.
When you use PVs on OpenShift nodes, this configuration is limited to a single PV per node. The Object Storage service requires multiple PVs. To maximize availability and data durability, you create these PVs on different nodes, and only use one PV per node.
You can use external data plane nodes for more flexibility in larger storage deployments, where you can use multiple disks per node to deploy a larger Object Storage cluster.
For information about configuring the Object Storage service as an endpoint for the Red Hat Ceph Storage Object Gateway (RGW), see Configuring an external Ceph Object Gateway back end.
Prerequisites
- You have the oc command line tool installed on your workstation.
- You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.
7.1. Deploying the Object Storage service on OpenShift nodes by using PersistentVolumes
You use at least two swiftProxy replicas and three swiftStorage replicas in a default Object Storage service (swift) deployment. You can increase these values to distribute storage across more nodes and disks.
The ringReplicas value defines the number of object copies in the cluster. For example, if you set ringReplicas: 3 and swiftStorage/replicas: 5, every object is stored on 3 different PersistentVolumes (PVs), and there are 5 PVs in total.
Procedure
Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the swift template:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack-control-plane
  namespace: openstack
spec:
  ...
  swift:
    enabled: true
    template:
      swiftProxy:
        replicas: 2
      swiftRing:
        ringReplicas: 3
      swiftStorage:
        replicas: 3
        storageClass: <swift-storage>
        storageRequest: 100Gi
...
- Increase the swiftProxy/replicas value to distribute proxy instances across more nodes.
- Set the ringReplicas value to the number of object copies that you want in your cluster.
- Increase the swiftStorage/replicas value to define the number of PVs in your cluster.
- Replace <swift-storage> with the name of the storage class you want the Object Storage service to use.
Update the control plane:

$ oc apply -f openstack_control_plane.yaml -n openstack

Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

$ oc get openstackcontrolplane -n openstack

The OpenStackControlPlane resources are created when the status is "Setup complete".

Tip: Append the -w option to the end of the get command to track deployment progress.
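After the deployment settles, you can optionally verify that one PersistentVolumeClaim is bound for each swiftStorage replica. This is a sketch; the exact claim names depend on your deployment.

$ oc get pvc -n openstack | grep swift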
7.2. Object Storage rings
The Object Storage service (swift) uses a data structure called the ring to distribute partition space across the cluster. This partition space is core to the data durability engine in the Object Storage service. With rings, the Object Storage service can quickly and easily synchronize each partition across the cluster.
Rings contain information about Object Storage partitions and how partitions are distributed among the different nodes and disks in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment. When any Object Storage component interacts with data, a quick lookup is performed locally in the ring to determine the possible partitions for each object.
The Object Storage service has three rings to store the following types of data:
- Account information
- Containers, to facilitate organizing objects under an account
- Object replicas
7.3. Ring partition power
The partition power of a ring determines the partition to which a resource, such as an account, container, or object, is mapped. The partition is included in the path under which the resource is stored in a back-end file system. Therefore, changing the partition power requires relocating resources to new paths in the back-end file systems.
In a heavily populated cluster, a relocation process is time consuming. To avoid downtime, relocate resources while the cluster is still operating. You must do this without temporarily losing access to data or compromising the performance of processes, such as replication and auditing. For assistance with increasing ring partition power, contact Red Hat Support.
When you use separate nodes for the Object Storage service (swift), use a higher partition power value.
The Object Storage service distributes data across disks and nodes using modified hash rings. There are three rings by default: one for accounts, one for containers, and one for objects. Each ring uses a fixed parameter called partition power. This parameter sets the maximum number of partitions that can be created.
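As a point of reference, the partition power is an exponent: a ring with partition power P contains 2^P partitions. For example, the default value of 10 yields 1,024 partitions per ring, and a value of 12 yields 4,096.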
7.4. Increasing ring partition power
You can only change the partition power parameter for new containers and their objects, so you must set this value before initial deployment.
The default partition power value is 10. Refer to the following table to select an appropriate partition power if you use three replicas:
Partition Power | Maximum number of disks |
10 | ~ 35 |
11 | ~ 75 |
12 | ~ 150 |
13 | ~ 250 |
14 | ~ 500 |
Setting an excessively high partition power value (for example, 14 for only 40 disks) negatively impacts replication times.
Procedure
Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and change the value for partPower under the swiftRing parameter in the swift template:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack-control-plane
  namespace: openstack
spec:
  ...
  swift:
    enabled: true
    template:
      swiftProxy:
        replicas: 2
      swiftRing:
        partPower: 12
        ringReplicas: 3
...
Replace 12 with the value that you want to set for partition power.

Tip: You can also configure an additional object server ring for new containers. This is useful if you want to add more disks to an Object Storage service deployment that initially uses a low partition power.