Chapter 7. Configuring the Object Storage service (swift)


Configure the Object Storage service (swift) to use PersistentVolumes (PVs) on OpenShift nodes or disks on external data plane nodes.

OpenShift deployments are limited to one PV per node. However, the Object Storage service requires multiple PVs. To maximize availability and data durability, you create these PVs on different nodes, and only use one PV per node. External data plane nodes offer more flexibility for larger deployments with multiple disks per node.

Note

The Object Storage service (swift) operator is designed to deploy and manage the OpenStack Object Storage service. To use a third-party external object storage back end, such as Dell ObjectScale, you must:

  • Disable the Object Storage service in the OpenStackControlPlane custom resource by setting spec.swift.enabled to false.
  • Manually provision the required Identity service (keystone) resources (service, user, roles, and endpoints) by using the OpenStack CLI.
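The manual provisioning can be done with standard OpenStack CLI commands along these lines. The endpoint URL, region, project, and role names here are deployment-specific assumptions for illustration only:

```
$ openstack service create --name swift --description "Object Storage" object-store
$ openstack user create --project service --password-prompt swift
$ openstack role add --project service --user swift admin
$ openstack endpoint create --region regionOne object-store public 'https://objectstore.example.com/v1/AUTH_%(project_id)s'
```

Repeat the endpoint command for the internal interface if your deployment requires it.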

7.1. Prerequisites

  • You have the oc command line tool installed on your workstation.
  • You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.

A default Object Storage service (swift) deployment uses at least two swiftProxy replicas and three swiftStorage replicas. You can increase these values to distribute storage across more nodes and disks.


The ringReplicas value defines the number of object copies in the cluster. For example, if you set ringReplicas: 3 and swiftStorage/replicas: 5, every object is stored on 3 different PersistentVolumes (PVs), and there are 5 PVs in total.
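As a rough capacity check, usable capacity is the total raw PV capacity divided by ringReplicas, because every object is stored ringReplicas times. A minimal shell sketch using the example values above, where the 100Gi per PV matches the storageRequest shown in the procedure below:

```shell
# Hedged capacity estimate: usable capacity ~= raw capacity / ringReplicas.
storage_gi=100      # storageRequest per PV (example value)
pvs=5               # swiftStorage replicas
ring_replicas=3     # ringReplicas
raw=$(( storage_gi * pvs ))
usable=$(( raw / ring_replicas ))
echo "raw=${raw}Gi usable=${usable}Gi"
```

With these values the cluster holds 500Gi of raw storage but roughly 166Gi of unique object data.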

Important

The PersistentVolumes used for Object Storage must be formatted with the XFS file system. The Object Storage service stores metadata in XFS extended attributes (xattrs), which require XFS for reliable support. Other file systems are not supported.

Use only local disks for Object Storage PersistentVolumes. Network-attached storage adds latency and reduces performance, and is not recommended.
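A local PersistentVolume that satisfies both requirements might look like the following sketch. The volume name, mount path, node name, and storage class name are assumptions for illustration, and the backing disk must already be formatted with XFS and mounted at the given path:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: swift-storage-pv-0              # hypothetical name
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: swift-storage       # must match the storageClass in the swift template
  local:
    path: /mnt/swift-disk-0             # hypothetical mount point of an XFS-formatted local disk
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker-0                    # pins the PV to a single node
```

The nodeAffinity stanza is required for local volumes, and it is also what lets you place each PV on a different node.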

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the swift template:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack-control-plane
      namespace: openstack
    spec:
      ...
      swift:
        enabled: true
        template:
          swiftProxy:
            replicas: 2
          swiftRing:
            ringReplicas: 3
          swiftStorage:
            replicas: 3
            storageClass: <swift-storage>
            storageRequest: 100Gi
    ...
    • Increase the swiftProxy/replicas: value to distribute proxy instances across more nodes.
    • Replace the ringReplicas: value to define the number of object copies you want in your cluster.
    • Increase the swiftStorage/replicas: value to define the number of PVs in your cluster.
    • Replace <swift-storage> with the name of the storage class you want the Object Storage service to use.
  2. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  3. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

Important

When you increase the number of swiftStorage replicas, the swift-operator automatically rebalances the rings to distribute data across the new PersistentVolumes. Verify that the ring rebalance has completed successfully and all data has been moved to its new location before performing another rebalance. For more information, see Rebalancing Object Storage rings.

If you operate large clusters with a lot of storage in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment, you can deploy the Object Storage service (swift) on external data plane nodes. With this configuration, the Object Storage proxy service continues to run on the control plane and the Object Storage services run on the data plane nodes.

Note

If you do not want to use persistent volumes for data storage, set swiftStorage replicas to 0 in the OpenStackControlPlane CR. When you initially create the OpenStackControlPlane CR, you must also set swiftProxy replicas to 0: ring building is not possible until data plane nodes are defined, and the Object Storage proxies require properly built rings, with at least the configured number of replica devices, to start. After the data plane is deployed, you can scale the swiftProxy replicas to the number that you want.

apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack-control-plane
  namespace: openstack
spec:
  ...
  swift:
    enabled: true
    template:
      swiftProxy:
        replicas: 0
      swiftRing:
        ringReplicas: 3
      swiftStorage:
        replicas: 0
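After the data plane nodes are deployed and the rings are built, one way to scale the proxies up is a merge patch against the control plane CR. The CR name matches the example above; the target replica count of 2 is an assumption:

```
$ oc patch openstackcontrolplane openstack-control-plane -n openstack \
    --type merge -p '{"spec":{"swift":{"template":{"swiftProxy":{"replicas":2}}}}}'
```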

To deploy and run the Object Storage services on data plane nodes, first you enable DNS forwarding to resolve data plane host names in the control plane pods, and then you create an OpenStackDataPlaneNodeSet CR with the following properties:

  • The swift service
  • A list of disks to be used for Object Storage service storage

Procedure

  1. Enable DNS forwarding to resolve data plane hostnames in the control plane pods.

    1. Obtain the clusterIP of the resolver:

      $ oc get svc dnsmasq-dns -o jsonpath='{.spec.clusterIP}'
    2. Update the default DNS entry to add the clusterIP of the resolver:

      apiVersion: operator.openshift.io/v1
      kind: DNS
      metadata:
        name: default
      spec:
        servers:
        - name: swift
          zones:
          - storage.example.com
          forwardPlugin:
            policy: Random
            upstreams:
            - <clusterIP>
      • Replace <clusterIP> with the clusterIP of the resolver.
  2. Enable the swift storage service on the data plane nodes by adding the swift service to the end of the list of services for the NodeSet in your OpenStackDataPlaneNodeSet CR. The service runs the playbooks that are required to configure the Object Storage services:

    Example:

        services:
        - repo-setup
        - bootstrap
        - download-cache
        - configure-network
        - validate-network
        - install-os
        - configure-os
        - ssh-known-hosts
        - run-os
        - reboot-os
        - install-certs
        - swift
  3. Define disks to be used by the Object Storage service on data plane nodes.

    • When you define disks, you can do the following:

      • Define the disks in the global nodeTemplate section in your OpenStackDataPlaneNodeSet CR to use the same type of disks for all nodes.
      • Define disks on a per-node basis in the nodes section of your OpenStackDataPlaneNodeSet CR.
      • Assign disks to a specific region or zone.
      • Enable ring management to distribute replicas.
    • You must specify a weight for each disk. If you do not have custom weights in your existing rings, you can set the weight to the GiB capacity of the disk.

      The following example shows the OpenStackDataPlaneNodeSet CR for a data plane with three storage nodes. Each node is configured to use two disks in the nodeTemplate section. The first node edpm-swift-0 is configured to use a third disk in the nodes section:

      Example:

      - apiVersion: dataplane.openstack.org/v1beta1
        kind: OpenStackDataPlaneNodeSet
        metadata:
          name: openstack-edpm-ipam
          namespace: openstack
        spec:
          ...
          networkAttachments:
          - ctlplane
          - storage
          nodeTemplate:
            ansible:
              ansibleVars:
                edpm_swift_disks:
                - device: /dev/vdb
                  path: /srv/node/vdb
                  region: 0
                  weight: 4000
                  zone: 0
                - device: /dev/vdc
                  path: /srv/node/vdc
                  region: 0
                  weight: 4000
                  zone: 0
          nodes:
            edpm-swift-0:
              ansible:
                ansibleVars:
                  edpm_swift_disks:
                  - device: /dev/vdd
                    path: /srv/node/vdd
                    weight: 1000
              hostName: edpm-swift-0
              networks:
              - defaultRoute: true
                fixedIP: 192.168.122.100
                name: ctlplane
                subnetName: subnet1
              - name: internalapi
                subnetName: subnet1
              - name: storage
                subnetName: subnet1
              - name: tenant
                subnetName: subnet1
            edpm-swift-1:
              hostName: edpm-swift-1
              networks:
              - defaultRoute: true
                fixedIP: 192.168.122.101
                name: ctlplane
                subnetName: subnet1
              - name: internalapi
                subnetName: subnet1
              - name: storage
                subnetName: subnet1
              - name: tenant
                subnetName: subnet1
            edpm-swift-2:
              hostName: edpm-swift-2
              networks:
              - defaultRoute: true
                fixedIP: 192.168.122.102
                name: ctlplane
                subnetName: subnet1
              - name: internalapi
                subnetName: subnet1
              - name: storage
                subnetName: subnet1
              - name: tenant
                subnetName: subnet1
          ...
          services:
          - repo-setup
          - bootstrap
          - download-cache
          - configure-network
          - validate-network
          - install-os
          - configure-os
          - ssh-known-hosts
          - run-os
          - reboot-os
          - install-certs
          - swift

7.5. Object Storage rings

The Object Storage service (swift) uses a data structure called the ring to distribute partition space across the cluster. This partition space is core to the data durability engine in the Object Storage service. With rings, the Object Storage service can quickly and easily synchronize each partition across the cluster.

Rings contain information about Object Storage partitions and how partitions are distributed among the different nodes and disks in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment. When any Object Storage component interacts with data, a quick lookup is performed locally in the ring to determine the possible partitions for each object.

The Object Storage service has three rings to store the following types of data:

  • Account information
  • Containers, to facilitate organizing objects under an account
  • Object replicas
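Conceptually, the ring lookup maps the MD5 hash of the resource path to a partition by keeping only the top partPower bits of the hash. The following shell sketch illustrates the idea with an empty hash path prefix and suffix; real deployments configure a secret suffix, so actual partition numbers differ:

```shell
# Illustrative partition lookup (assumes an empty hash path prefix/suffix).
path="/AUTH_test/images/photo.jpg"   # hypothetical account/container/object path
part_power=10
# Take the first 4 bytes (8 hex digits) of the MD5 digest of the path,
# then keep only the top part_power bits.
md5hex=$(printf '%s' "$path" | md5sum | cut -c1-8)
partition=$(( 0x$md5hex >> (32 - part_power) ))
echo "partition=$partition"
```

With partPower set to 10, every lookup yields a partition number between 0 and 1023, and the ring records which devices hold each partition.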

7.6. Rebalancing Object Storage rings

When you change the Object Storage service (swift) topology, for example by adding or removing PersistentVolumes, changing swiftStorage replicas, or adding nodes or disks on data plane nodes, the swift-operator rebalances the rings. If you use external data plane nodes, you must push the updated rings to the data plane nodes. After any rebalance, you must verify that all data has been moved to its new location before performing another rebalance.

Note

Successive rebalances must respect the minPartHours interval. A rebalance is only possible when swift-ring-builder shows 0:00:00 remaining.

Prerequisites

  • You have run the dispersion populate tool. You only need to run this command once per cluster:

    $ oc debug --keep-labels=true job/swift-ring-rebalance -- /bin/sh -c 'swift-ring-tool get && swift-dispersion-populate'

    This creates a set of test objects and containers that are distributed across all partitions in the ring. The dispersion tool uses these to verify that all replicas are in the expected locations. You must run this command before a rebalance so that the test objects are placed according to the current ring layout.

Procedure

  1. Manually trigger a ring rebalance:

    Important

    Only rebalance after any previous rebalance is fully replicated. Rebalancing before replication has completed can cause data to be moved multiple times, increasing the risk of temporary data unavailability.

    $ oc debug --keep-labels=true job/swift-ring-rebalance -- /bin/sh -c 'swift-ring-tool get && swift-ring-tool rebalance'

    This rebalances all Object Storage rings and updates the ring files in the swift-ring-files ConfigMap.

  2. If you use external data plane nodes, push the updated rings to the data plane nodes. Delete any previous ring update deployment and create a new OpenStackDataPlaneDeployment CR that runs only the swift service with the swiftrings Ansible tag. This distributes the updated ring files to all data plane nodes without running a full deployment:

    $ oc delete --ignore-not-found --wait openstackdataplanedeployment swift-ring-update -n openstack
    $ oc apply -n openstack -f - << EOF
    apiVersion: dataplane.openstack.org/v1beta1
    kind: OpenStackDataPlaneDeployment
    metadata:
      name: swift-ring-update
    spec:
      ansibleTags: swiftrings
      servicesOverride:
        - swift
      nodeSets:
        - <nodeset_name>
    EOF
    • Replace <nodeset_name> with the name of your OpenStackDataPlaneNodeSet CR.
  3. Wait for the deployment to complete:

    $ oc wait openstackdataplanedeployment swift-ring-update --for condition=Ready --timeout=300s -n openstack
  4. Verify cluster health by checking that all rings are consistent and replication is progressing:

    $ oc debug --keep-labels=true job/swift-ring-rebalance -- /bin/sh -c 'swift-ring-tool get && swift-recon -rT --md5'
    • Verify that all ring md5sums are consistent across nodes.
    • Check the replication timestamps. The oldest completion time minus the highest replication time gives you the earliest start time of the last replication pass. This earliest start time must be after the ring ConfigMaps have been updated, confirming that all nodes have completed a full replication pass with the new rings.
  5. Alternatively, you can check the object-replicator logs directly to verify that a full replication pass has completed:

    $ oc logs --timestamps -l component=swift-storage -c object-replicator --tail 1000 | grep replicated

    Look for log entries that show the completion of a replication pass, for example:

    $ oc logs --timestamps -c object-replicator swift-storage-0
    2026-04-02T10:44:13.863401909Z object-replicator: Object replication complete.  (33.00 minutes)

    The completion timestamp minus the duration gives you the start time of the pass. For example, a completion at 10:44:13 with a duration of 33.00 minutes means the pass started at 10:11:13. This start time must be after the ring ConfigMaps were updated. If the pass started before the ConfigMap update, it was still using the old rings and you must wait for the next full pass to complete.

  6. Run swift-dispersion-report to verify that all replicas have been placed correctly:

    $ oc debug --keep-labels=true job/swift-ring-rebalance -- /bin/sh -c 'swift-ring-tool get && swift-dispersion-report'

    The output shows the percentage of objects and containers with all replicas found. All values must be at 100% before you perform another rebalance.

7.7. Ring partition power

The ring partition power determines the partition to which a resource, such as an account, container, or object, is mapped. The partition is included in the path under which the resource is stored in a back-end file system. Therefore, changing the partition power requires relocating resources to new paths in the back-end file systems.

In a heavily populated cluster, a relocation process is time consuming. To avoid downtime, relocate resources while the cluster is still operating. You must do this without temporarily losing access to data or compromising the performance of processes, such as replication and auditing. For assistance with increasing ring partition power, contact Red Hat Support.

When you use separate nodes for the Object Storage service (swift), use a higher partition power value.

The Object Storage service distributes data across disks and nodes using modified hash rings. There are three rings by default: one for accounts, one for containers, and one for objects. Each ring uses a fixed parameter called partition power. This parameter sets the maximum number of partitions that can be created.

You can only change the partition power parameter for new containers and their objects, so you must set this value before initial deployment.

The default partition power value is 10. Refer to the following table to select an appropriate partition power if you use three replicas:

Table 7.1. Appropriate partition power values per number of available disks

    Partition power    Maximum number of disks
    10                 ~ 35
    11                 ~ 75
    12                 ~ 150
    13                 ~ 250
    14                 ~ 500

Important

Setting an excessively high partition power value (for example, 14 for only 40 disks) negatively impacts replication times.
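The table and this warning follow from keeping the number of partitions per disk reasonable, on the order of 100 with three replicas. A quick sketch of the arithmetic:

```shell
# partitions_per_disk = replicas * 2^part_power / disks
replicas=3
part_power=12
disks=150
partitions_per_disk=$(( replicas * (1 << part_power) / disks ))
echo "$partitions_per_disk"
# The same formula with part_power=14 and only 40 disks gives 1228 partitions
# per disk, an order of magnitude too many, which slows replication.
```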

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and change the value for partPower under the swiftRing parameter in the swift template:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack-control-plane
      namespace: openstack
    spec:
      ...
      swift:
        enabled: true
        template:
          swiftProxy:
            replicas: 2
          swiftRing:
            partPower: 12
            ringReplicas: 3
    ...
    • Replace the partPower value, 12 in this example, with the value that you want to set for partition power.

      Tip

      You can also configure an additional object server ring for new containers. This is useful if you want to add more disks to an Object Storage service deployment that initially uses a low partition power.
