Chapter 7. Configuring the Object Storage service (swift)
Configure the Object Storage service (swift) to use PersistentVolumes (PVs) on OpenShift nodes or disks on external data plane nodes.
OpenShift deployments are limited to one PV per node. However, the Object Storage service requires multiple PVs. To maximize availability and data durability, you create these PVs on different nodes, and only use one PV per node. External data plane nodes offer more flexibility for larger deployments with multiple disks per node.
The Object Storage service (swift) operator is designed to deploy and manage the OpenStack Object Storage service. To use a third-party external object storage back end, such as Dell ObjectScale, you must:
- Disable the Object Storage service in the OpenStackControlPlane custom resource by setting spec.swift.enabled to false.
- Manually provision the required Identity service (keystone) resources (service, user, roles, and endpoints) by using the OpenStack CLI.
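For example, the following commands are a minimal sketch of the Identity service resources you might create for an external back end. The service user password, region name, and endpoint URLs are placeholders that depend on your environment; replace them with the values that your external object storage back end provides:

$ openstack service create --name swift --description "Object Storage" object-store
$ openstack user create --project service --password <password> swift
$ openstack role add --project service --user swift admin
$ openstack endpoint create --region <region> object-store public 'https://objectstore.example.com/v1/AUTH_%(project_id)s'
$ openstack endpoint create --region <region> object-store internal 'https://objectstore.example.com/v1/AUTH_%(project_id)s'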
7.1. Prerequisites
- You have the oc command line tool installed on your workstation.
- You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.
7.3. Deploying the Object Storage service on OpenShift nodes by using PersistentVolumes
A default Object Storage service (swift) deployment uses at least two swiftProxy replicas and three swiftStorage replicas. You can increase these values to distribute storage across more nodes and disks.
The ringReplicas value defines the number of object copies in the cluster. For example, if you set ringReplicas: 3 and swiftStorage/replicas: 5, every object is stored on 3 different PersistentVolumes (PVs), and there are 5 PVs in total.
The PersistentVolumes used for Object Storage must be formatted with the XFS file system. The Object Storage service stores metadata in XFS extended attributes (xattrs), which require XFS for reliable support. Other file systems are not supported.
Use only local disks for Object Storage PersistentVolumes. Network-attached storage adds latency and reduces performance, and is not recommended.
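As a quick check of the file system requirement, the following sketch formats a local disk with XFS and confirms the result. The device name /dev/vdb and mount point /mnt/swift-pv are examples; how you actually provision the PVs depends on your storage class and provisioner:

$ sudo mkfs.xfs /dev/vdb
$ sudo mkdir -p /mnt/swift-pv
$ sudo mount /dev/vdb /mnt/swift-pv
$ xfs_info /mnt/swift-pv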
Procedure
Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the swift template:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack-control-plane
  namespace: openstack
spec:
  ...
  swift:
    enabled: true
    template:
      swiftProxy:
        replicas: 2
      swiftRing:
        ringReplicas: 3
      swiftStorage:
        replicas: 3
        storageClass: <swift-storage>
        storageRequest: 100Gi
  ...

- Increase the swiftProxy/replicas: value to distribute proxy instances across more nodes.
- Replace the ringReplicas: value to define the number of object copies you want in your cluster.
- Increase the swiftStorage/replicas: value to define the number of PVs in your cluster.
- Replace <swift-storage> with the name of the storage class you want the Object Storage service to use.
Update the control plane:
$ oc apply -f openstack_control_plane.yaml -n openstack

Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

$ oc get openstackcontrolplane -n openstack

The OpenStackControlPlane resources are created when the status is "Setup complete".

Tip: Append the -w option to the end of the get command to track deployment progress.
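To confirm that the Object Storage pods are running and have claimed their PersistentVolumes, a quick spot check such as the following can help. The exact pod and claim names depend on your deployment:

$ oc get pods -n openstack | grep swift
$ oc get pvc -n openstack | grep swift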
When you increase the number of swiftStorage replicas, the swift-operator automatically rebalances the rings to distribute data across the new PersistentVolumes. Verify that the ring rebalance has completed successfully and all data has been moved to its new location before performing another rebalance. For more information, see Rebalancing Object Storage rings.
7.4. Deploying the Object Storage service on external data plane nodes
If you operate large clusters with a lot of storage in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment, you can deploy the Object Storage service (swift) on external data plane nodes. With this configuration, the Object Storage proxy service continues to run on the control plane and the Object Storage services run on the data plane nodes.
If you do not want to use persistent volumes for data storage, set swiftStorage replicas to 0 in the OpenStackControlPlane CR. When you initially create the OpenStackControlPlane CR, you must also set swiftProxy replicas to 0, because ring building is not possible until data plane nodes are defined, and the Object Storage proxies require properly built rings with at least the configured number of replica devices to start. After the data plane is deployed, you can scale the swiftProxy replicas to the number you want, as shown in the sketch after the following example.
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
name: openstack-control-plane
namespace: openstack
spec:
...
swift:
enabled: true
template:
swiftProxy:
replicas: 0
swiftRing:
ringReplicas: 3
swiftStorage:
replicas: 0
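After the data plane is deployed and the rings are built, one way to bring up the proxies is to patch the control plane CR. This is a minimal sketch; it assumes the CR is named openstack-control-plane in the openstack namespace and that you want two proxy replicas:

$ oc patch openstackcontrolplane openstack-control-plane -n openstack \
    --type merge \
    -p '{"spec":{"swift":{"template":{"swiftProxy":{"replicas":2}}}}}'

You can also edit the replicas value in your openstack_control_plane.yaml file and reapply it with oc apply.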
To deploy and run the Object Storage services on data plane nodes, first you enable DNS forwarding to resolve data plane host names in the control plane pods, and then you create an OpenStackDataPlaneNodeSet CR with the following properties:
- The swift service
- A list of disks to be used for Object Storage service storage
Procedure
Enable DNS forwarding to resolve data plane hostnames in the control plane pods.
Obtain the clusterIP of the resolver:

$ oc get svc dnsmasq-dns -o jsonpath='{.spec.clusterIP}'

Update the default DNS entry to add the clusterIP of the resolver:

apiVersion: operator.openshift.io/v1
kind: DNS
metadata:
  name: default
spec:
  servers:
    - name: swift
      zones:
        - storage.example.com
      forwardPlugin:
        policy: Random
        upstreams:
          - <clusterIP>

- Replace <clusterIP> with the clusterIP of the resolver.
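To confirm that the forwarder was added, you can inspect the DNS operator configuration and check that the swift server entry with your storage zone and upstream clusterIP is present:

$ oc get dns.operator/default -o jsonpath='{.spec.servers}'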
Enable the swift storage service on the data plane nodes by adding the swift service to the end of the list of services for the NodeSet in your OpenStackDataPlaneNodeSet CR. The service runs the playbooks that are required to configure the Object Storage services:

Example:

services:
  - repo-setup
  - bootstrap
  - download-cache
  - configure-network
  - validate-network
  - install-os
  - configure-os
  - ssh-known-hosts
  - run-os
  - reboot-os
  - install-certs
  - swift

Define disks to be used by the Object Storage service on data plane nodes.
When you define disks, you can do the following:

- Define the disks in the global nodeTemplate section in your OpenStackDataPlaneNodeSet CR to use the same type of disks for all nodes.
- Define disks on a per-node basis in the nodes section of your OpenStackDataPlaneNodeSet CR.
- Assign disks to a specific region or zone.
- Enable ring management to distribute replicas.
You must specify a weight for each disk. If you do not have custom weights in your existing rings, you can set the weight to the GiB capacity of the disk.
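If you want to derive the weight from the disk capacity, a small sketch such as the following prints the size of a disk in GiB on the data plane node. The device name /dev/vdb is an example:

$ lsblk -b -d -n -o SIZE /dev/vdb | awk '{ printf "%d\n", $1 / (1024*1024*1024) }'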
The following example shows the OpenStackDataPlaneNodeSet CR for a data plane with three storage nodes. Each node is configured to use two disks in the nodeTemplate section. The first node, edpm-swift-0, is configured to use a third disk in the nodes section:

Example:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-edpm-ipam
  namespace: openstack
spec:
  ...
  networkAttachments:
    - ctlplane
    - storage
  nodeTemplate:
    ansible:
      ansibleVars:
        edpm_swift_disks:
          - device: /dev/vdb
            path: /srv/node/vdb
            region: 0
            weight: 4000
            zone: 0
          - device: /dev/vdc
            path: /srv/node/vdc
            region: 0
            weight: 4000
            zone: 0
  nodes:
    edpm-swift-0:
      ansible:
        ansibleVars:
          edpm_swift_disks:
            - device: /dev/vdd
              path: /srv/node/vdd
              weight: 1000
      hostName: edpm-swift-0
      networks:
        - defaultRoute: true
          fixedIP: 192.168.122.100
          name: ctlplane
          subnetName: subnet1
        - name: internalapi
          subnetName: subnet1
        - name: storage
          subnetName: subnet1
        - name: tenant
          subnetName: subnet1
    edpm-swift-1:
      hostName: edpm-swift-1
      networks:
        - defaultRoute: true
          fixedIP: 192.168.122.101
          name: ctlplane
          subnetName: subnet1
        - name: internalapi
          subnetName: subnet1
        - name: storage
          subnetName: subnet1
        - name: tenant
          subnetName: subnet1
    edpm-swift-2:
      hostName: edpm-swift-2
      networks:
        - defaultRoute: true
          fixedIP: 192.168.122.102
          name: ctlplane
          subnetName: subnet1
        - name: internalapi
          subnetName: subnet1
        - name: storage
          subnetName: subnet1
        - name: tenant
          subnetName: subnet1
  ...
  services:
    - repo-setup
    - bootstrap
    - download-cache
    - configure-network
    - validate-network
    - install-os
    - configure-os
    - ssh-known-hosts
    - run-os
    - reboot-os
    - install-certs
    - swift
7.5. Object Storage rings
The Object Storage service (swift) uses a data structure called the ring to distribute partition space across the cluster. This partition space is core to the data durability engine in the Object Storage service. With rings, the Object Storage service can quickly and easily synchronize each partition across the cluster.
Rings contain information about Object Storage partitions and how partitions are distributed among the different nodes and disks in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment. When any Object Storage component interacts with data, a quick lookup is performed locally in the ring to determine the possible partitions for each object.
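To see how a ring lookup maps an object to its partition and devices, you can run swift-get-nodes against a ring file. This is a sketch: it assumes a shell in a container that includes the swift tools and the ring files under /etc/swift, and the account, container, and object names are hypothetical:

$ swift-get-nodes /etc/swift/object.ring.gz AUTH_test mycontainer myobject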
The Object Storage service has three rings to store the following types of data:
- Account information
- Containers, to facilitate organizing objects under an account
- Object replicas
7.6. Rebalancing Object Storage rings
When you change the Object Storage service (swift) topology, for example by adding or removing PersistentVolumes, changing swiftStorage replicas, or adding nodes or disks on data plane nodes, the swift-operator rebalances the rings. If you use external data plane nodes, you must push the updated rings to the data plane nodes. After any rebalance, you must verify that all data has been moved to its new location before performing another rebalance.
Successive rebalances must respect the minPartHours interval. A rebalance is only possible when swift-ring-builder shows 0:00:00 remaining.
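One way to check the remaining time is to fetch the builder files and inspect them with swift-ring-builder. This sketch assumes that swift-ring-tool get places the builder files, such as object.builder, in the working directory of the debug pod:

$ oc debug --keep-labels=true job/swift-ring-rebalance -- /bin/sh -c 'swift-ring-tool get && swift-ring-builder object.builder'

The output reports the minimum part hours and the time remaining before partitions can be reassigned again.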
Prerequisites
You have populated the dispersion tool before the rebalance. You only need to run this command once per cluster:
$ oc debug --keep-labels=true job/swift-ring-rebalance -- /bin/sh -c 'swift-ring-tool get && swift-dispersion-populate'

This creates a set of test objects and containers that are distributed across all partitions in the ring. The dispersion tool uses these to verify that all replicas are in the expected locations. You must run this command before a rebalance so that the test objects are placed according to the current ring layout.
Procedure
Manually trigger a ring rebalance:
Important: Only rebalance after any previous rebalance is fully replicated. Rebalancing before replication has completed can cause data to be moved multiple times, increasing the risk of temporary data unavailability.

$ oc debug --keep-labels=true job/swift-ring-rebalance -- /bin/sh -c 'swift-ring-tool get && swift-ring-tool rebalance'

This rebalances all Object Storage rings and updates the ring files in the swift-ring-files ConfigMap.

If you use external data plane nodes, push the updated rings to the data plane nodes. Delete any previous ring update deployment and create a new OpenStackDataPlaneDeployment CR that runs only the swift service with the swiftrings Ansible tag. This distributes the updated ring files to all data plane nodes without running a full deployment:

$ oc delete --ignore-not-found --wait openstackdataplanedeployment swift-ring-update -n openstack
$ oc apply -n openstack -f - << EOF
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: swift-ring-update
spec:
  ansibleTags: swiftrings
  servicesOverride:
    - swift
  nodeSets:
    - <nodeset_name>
EOF

- Replace <nodeset_name> with the name of your OpenStackDataPlaneNodeSet CR.
Wait for the deployment to complete:
$ oc wait openstackdataplanedeployment swift-ring-update --for condition=Ready --timeout=300s -n openstack

Verify cluster health by checking that all rings are consistent and replication is progressing:

$ oc debug --keep-labels=true job/swift-ring-rebalance -- /bin/sh -c 'swift-ring-tool get && swift-recon -rT --md5'

- Verify that all ring md5sums are consistent across nodes.
- Check the replication timestamps. The oldest completion time minus the highest replication time gives you the earliest start time of the last replication pass. This earliest start time must be after the ring ConfigMaps have been updated, confirming that all nodes have completed a full replication pass with the new rings.
Alternatively, you can check the object-replicator logs directly to verify that a full replication pass has completed:

$ oc logs --timestamps -l component=swift-storage -c object-replicator --tail 1000 | grep replicated

Look for log entries that show the completion of a replication pass, for example:

$ oc logs --timestamps -c object-replicator swift-storage-0
2026-04-02T10:44:13.863401909Z object-replicator: Object replication complete. (33.00 minutes)

The completion timestamp minus the duration gives you the start time of the pass. For example, a completion at 10:44:13 with a duration of 33.00 minutes means the pass started at 10:11:13. This start time must be after the ring ConfigMaps were updated. If the pass started before the ConfigMap update, it was still using the old rings and you must wait for the next full pass to complete.

Run swift-dispersion-report to verify that all replicas have been placed correctly:

$ oc debug --keep-labels=true job/swift-ring-rebalance -- /bin/sh -c 'swift-ring-tool get && swift-dispersion-report'

The output shows the percentage of objects and containers with all replicas found. All values must be at 100% before you perform another rebalance.
7.7. Ring partition power
The ring partition power determines the partition to which a resource, such as an account, container, or object, is mapped. The partition is included in the path under which the resource is stored in a back-end file system. Therefore, changing the partition power requires relocating resources to new paths in the back-end file systems.
In a heavily populated cluster, a relocation process is time consuming. To avoid downtime, relocate resources while the cluster is still operating. You must do this without temporarily losing access to data or compromising the performance of processes, such as replication and auditing. For assistance with increasing ring partition power, contact Red Hat Support.
When you use separate nodes for the Object Storage service (swift), use a higher partition power value.
The Object Storage service distributes data across disks and nodes using modified hash rings. There are three rings by default: one for accounts, one for containers, and one for objects. Each ring uses a fixed parameter called partition power. This parameter sets the maximum number of partitions that can be created.
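The number of partitions in a ring is 2 raised to the partition power. For example, a partition power of 12 produces 4096 partitions per ring:

$ echo $((2**12))
4096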
7.8. Increasing Object Storage ring partition power values
You can only change the partition power parameter for new containers and their objects, so you must set this value before initial deployment.
The default partition power value is 10. Refer to the following table to select an appropriate partition power if you use three replicas:
| Partition Power | Maximum number of disks |
| 10 | ~ 35 |
| 11 | ~ 75 |
| 12 | ~ 150 |
| 13 | ~ 250 |
| 14 | ~ 500 |
Setting an excessively high partition power value (for example, 14 for only 40 disks) negatively impacts replication times.
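As a rough way to read the table, you can relate partition power, replica count, and disk count: with a partition power of 12 and three replicas, the ring contains 4096 x 3 = 12288 partition replicas, which spread over about 150 disks gives on the order of 80-100 partition replicas per disk. This reading is an approximation, not an exact sizing rule:

$ echo $(( (2**12 * 3) / 150 ))
81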
Procedure
Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and change the value for partPower under the swiftRing parameter in the swift template:

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack-control-plane
  namespace: openstack
spec:
  ...
  swift:
    enabled: true
    template:
      swiftProxy:
        replicas: 2
      swiftRing:
        partPower: 12
        ringReplicas: 3
  ...

Replace the partPower value of 12 with the value you want to set for partition power.

Tip: You can also configure an additional object server ring for new containers. This is useful if you want to add more disks to an Object Storage service deployment that initially uses a low partition power.