Chapter 7. Configuring the Object Storage service (swift)


Configure the Object Storage service (swift) to use PersistentVolumes (PVs) on OpenShift nodes or disks on external data plane nodes.

OpenShift deployments are limited to one PV per node. However, the Object Storage service requires multiple PVs. To maximize availability and data durability, you create these PVs on different nodes, and only use one PV per node. External data plane nodes offer more flexibility for larger deployments with multiple disks per node.

For information about configuring the Object Storage service as an endpoint for the Red Hat Ceph Storage Object Gateway (RGW), see Configuring an external Ceph Object Gateway back end.

7.1. Prerequisites

  • You have the oc command line tool installed on your workstation.
  • You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges.

7.2. Configuring and deploying the Object Storage service

A default Object Storage service (swift) deployment uses at least two swiftProxy replicas and three swiftStorage replicas. You can increase these values to distribute storage across more nodes and disks.

The ringReplicas value defines the number of object copies in the cluster. For example, if you set ringReplicas: 3 and swiftStorage/replicas: 5, every object is stored on 3 different PersistentVolumes (PVs), and there are 5 PVs in total.
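The arithmetic behind these two settings can be sketched as follows. This is an illustration only, not part of the CR: raw capacity is the number of PVs multiplied by storageRequest, and usable capacity is the raw capacity divided by the number of object copies.

```python
# Illustrative only: capacity math for ringReplicas vs. swiftStorage replicas.
def usable_capacity_gib(pv_count, storage_request_gib, ring_replicas):
    """Raw capacity divided by the number of object copies."""
    if pv_count < ring_replicas:
        raise ValueError("need at least as many PVs as ring replicas")
    return pv_count * storage_request_gib / ring_replicas

# ringReplicas: 3, swiftStorage replicas: 5, storageRequest: 100Gi
print(usable_capacity_gib(5, 100, 3))  # ~166.7 GiB usable
```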

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and add the following parameters to the swift template:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack-control-plane
      namespace: openstack
    spec:
      ...
      swift:
        enabled: true
        template:
          swiftProxy:
            replicas: 2
          swiftRing:
            ringReplicas: 3
          swiftStorage:
            replicas: 3
            storageClass: <swift-storage>
            storageRequest: 100Gi
    ...
    • Increase the swiftProxy/replicas: value to distribute proxy instances across more nodes.
    • Replace the ringReplicas: value to define the number of object copies you want in your cluster.
    • Increase the swiftStorage/replicas: value to define the number of PVs in your cluster.
    • Replace <swift-storage> with the name of the storage class you want the Object Storage service to use.
  2. Update the control plane:

    $ oc apply -f openstack_control_plane.yaml -n openstack
  3. Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status:

    $ oc get openstackcontrolplane -n openstack

    The OpenStackControlPlane resources are created when the status is "Setup complete".

    Tip

    Append the -w option to the end of the get command to track deployment progress.

7.3. Deploying the Object Storage service on external data plane nodes

If you operate large clusters with a lot of storage in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment, you can deploy the Object Storage service (swift) on external data plane nodes. With this configuration, the Object Storage proxy service continues to run on the control plane and the Object Storage services run on the data plane nodes.

Note

If you do not want to use persistent volumes for data storage, set swiftStorage replicas to 0 in the OpenStackControlPlane CR. When you initially create the OpenStackControlPlane CR, you must also set swiftProxy replicas to 0, because the Object Storage proxies require properly built rings with at least the configured number of replica devices to start. After the data plane is deployed, you can scale the swiftProxy replicas to the number you want.
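The initial settings described in this note might look like the following sketch (values are illustrative, and only the swift template is shown):

```yaml
swift:
  enabled: true
  template:
    swiftProxy:
      replicas: 0   # scale up after the data plane is deployed
    swiftRing:
      ringReplicas: 3
    swiftStorage:
      replicas: 0   # storage runs on data plane nodes instead of PVs
```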

To deploy and run the Object Storage services on data plane nodes, first you enable DNS forwarding to resolve data plane host names in the control plane pods, and then you create an OpenStackDataPlaneNodeSet CR with the following properties:

  • The swift service
  • A list of disks to be used for Object Storage service storage

Procedure

  1. Enable DNS forwarding to resolve data plane hostnames in the control plane pods.

    1. Obtain the clusterIP of the resolver:

      $ oc get svc dnsmasq-dns -o jsonpath='{.spec.clusterIP}'
    2. Update the default DNS entry to add the clusterIP of the resolver:

      apiVersion: operator.openshift.io/v1
      kind: DNS
      metadata:
        name: default
      spec:
        servers:
        - name: swift
          zones:
          - storage.example.com
          forwardPlugin:
            policy: Random
            upstreams:
            - <clusterIP>
      • Replace <clusterIP> with the clusterIP of the resolver.
  2. Enable the swift storage service on the data plane nodes by adding the swift service to the end of the list of services for the NodeSet in your OpenStackDataPlaneNodeSet CR. The service runs the playbooks that are required to configure the Object Storage services:

    Example:

        services:
        - repo-setup
        - bootstrap
        - download-cache
        - configure-network
        - validate-network
        - install-os
        - configure-os
        - ssh-known-hosts
        - run-os
        - reboot-os
        - install-certs
        - swift
  3. Define disks to be used by the Object Storage service on data plane nodes.

    • When you define disks, you can do the following:

      • Define the disks in the global nodeTemplate section in your OpenStackDataPlaneNodeSet CR to use the same type of disks for all nodes.
      • Define disks on a per-node basis in the nodes section of your OpenStackDataPlaneNodeSet CR.
      • Assign disks to a specific region or zone.
      • Enable ring management to distribute replicas.
    • You must specify a weight for each disk. If you do not have custom weights in your existing rings, you can set the weight to the GiB capacity of the disk.

      The following example shows the OpenStackDataPlaneNodeSet CR for a data plane with three storage nodes. Each node is configured to use two disks in the nodeTemplate section. The first node edpm-swift-0 is configured to use a third disk in the nodes section:

      Example:

      - apiVersion: dataplane.openstack.org/v1beta1
        kind: OpenStackDataPlaneNodeSet
        metadata:
          name: openstack-edpm-ipam
          namespace: openstack
        spec:
          ...
          networkAttachments:
          - ctlplane
          - storage
          nodeTemplate:
            ansible:
              ansibleVars:
                edpm_swift_disks:
                - device: /dev/vdb
                  path: /srv/node/vdb
                  region: 0
                  weight: 4000
                  zone: 0
                - device: /dev/vdc
                  path: /srv/node/vdc
                  region: 0
                  weight: 4000
                  zone: 0
          nodes:
            edpm-swift-0:
              ansible:
                ansibleVars:
                  edpm_swift_disks:
                  - device: /dev/vdd
                    path: /srv/node/vdd
                    weight: 1000
              hostName: edpm-swift-0
              networks:
              - defaultRoute: true
                fixedIP: 192.168.122.100
                name: ctlplane
                subnetName: subnet1
              - name: internalapi
                subnetName: subnet1
              - name: storage
                subnetName: subnet1
              - name: tenant
                subnetName: subnet1
            edpm-swift-1:
              hostName: edpm-swift-1
              networks:
              - defaultRoute: true
                fixedIP: 192.168.122.101
                name: ctlplane
                subnetName: subnet1
              - name: internalapi
                subnetName: subnet1
              - name: storage
                subnetName: subnet1
              - name: tenant
                subnetName: subnet1
            edpm-swift-2:
              hostName: edpm-swift-2
              networks:
              - defaultRoute: true
                fixedIP: 192.168.122.102
                name: ctlplane
                subnetName: subnet1
              - name: internalapi
                subnetName: subnet1
              - name: storage
                subnetName: subnet1
              - name: tenant
                subnetName: subnet1
          ...
          services:
          - repo-setup
          - bootstrap
          - download-cache
          - configure-network
          - validate-network
          - install-os
          - configure-os
          - ssh-known-hosts
          - run-os
          - reboot-os
          - install-certs
          - swift

7.4. Object Storage rings

The Object Storage service (swift) uses a data structure called the ring to distribute partition space across the cluster. This partition space is core to the data durability engine in the Object Storage service. With rings, the Object Storage service can quickly and easily synchronize each partition across the cluster.

Rings contain information about Object Storage partitions and how partitions are distributed among the different nodes and disks in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment. When any Object Storage component interacts with data, a quick lookup is performed locally in the ring to determine the possible partitions for each object.
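Conceptually, a ring lookup hashes the resource path and keeps the top bits of the hash as the partition number. The following is a simplified sketch of that idea; real swift rings also mix a per-cluster hash prefix and suffix into the hash and map each partition to devices through lookup tables:

```python
import hashlib
import struct

PART_POWER = 10  # 2**10 = 1024 partitions

def partition_for(account, container=None, obj=None, part_power=PART_POWER):
    """Map a resource path to a partition: hash it, keep the top bits.

    Simplified sketch; real swift also mixes in a per-cluster hash
    prefix/suffix and maps each partition to devices via ring tables.
    """
    path = "/" + "/".join(p for p in (account, container, obj) if p)
    digest = hashlib.md5(path.encode()).digest()
    top32 = struct.unpack(">I", digest[:4])[0]
    return top32 >> (32 - part_power)

part = partition_for("AUTH_test", "photos", "cat.jpg")
assert 0 <= part < 2 ** PART_POWER
```

Because the lookup is a local hash computation plus a table read, no network round trip is needed to find where an object belongs.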

The Object Storage service has three rings to store the following types of data:

  • Account information
  • Containers, to facilitate organizing objects under an account
  • Object replicas

7.5. Ring partition power

The ring power determines the partition to which a resource, such as an account, container, or object, is mapped. The partition is included in the path under which the resource is stored in a back-end file system. Therefore, changing the partition power requires relocating resources to new paths in the back-end file systems.

In a heavily populated cluster, a relocation process is time consuming. To avoid downtime, relocate resources while the cluster is still operating. You must do this without temporarily losing access to data or compromising the performance of processes, such as replication and auditing. For assistance with increasing ring partition power, contact Red Hat Support.
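The relocation cost follows from how partitions are derived from the hash. In a top-bits scheme such as the simplified sketch below, raising the partition power by one splits every partition in two, so every resource maps to a new partition and therefore a new on-disk path (this sketch omits the per-cluster hash prefix/suffix that real swift rings use):

```python
import hashlib
import struct

def partition(path, part_power):
    """Top part_power bits of the path's hash (simplified sketch)."""
    top32 = struct.unpack(">I", hashlib.md5(path.encode()).digest()[:4])[0]
    return top32 >> (32 - part_power)

path = "/AUTH_test/photos/cat.jpg"
old = partition(path, 10)
new = partition(path, 11)
# With one more bit of partition power, each partition splits in two:
# the new partition is always old*2 or old*2 + 1, so every object
# moves to a new partition directory on disk.
assert new in (old * 2, old * 2 + 1)
```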

When you use separate nodes for the Object Storage service (swift), use a higher partition power value.

The Object Storage service distributes data across disks and nodes using modified hash rings. There are three rings by default: one for accounts, one for containers, and one for objects. Each ring uses a fixed parameter called partition power. This parameter sets the maximum number of partitions that can be created.

You can only change the partition power parameter for new containers and their objects, so you must set this value before initial deployment.

The default partition power value is 10. Refer to the following table to select an appropriate partition power if you use three replicas:

Table 7.1. Appropriate partition power values per number of available disks

  Partition power    Maximum number of disks
  10                 ~35
  11                 ~75
  12                 ~150
  13                 ~250
  14                 ~500

Important

Setting an excessively high partition power value (for example, 14 for only 40 disks) negatively impacts replication times.
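The table can be approximated with a simple heuristic, shown below purely as an illustration and not as an official sizing formula: choose the smallest partition power that keeps at least roughly 80 replica-partitions per disk.

```python
# Illustrative heuristic only: smallest partition power that keeps at
# least ~80 replica-partitions per disk (approximates Table 7.1).
def min_part_power(disks, replicas=3, min_parts_per_disk=80):
    power = 1
    while replicas * 2 ** power / disks < min_parts_per_disk:
        power += 1
    return power

for disks in (35, 75, 150, 250, 500):
    print(disks, min_part_power(disks))
# 35 -> 10, 75 -> 11, 150 -> 12, 250 -> 13, 500 -> 14
```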

Procedure

  1. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml, and change the value for partPower under the swiftRing parameter in the swift template:

    apiVersion: core.openstack.org/v1beta1
    kind: OpenStackControlPlane
    metadata:
      name: openstack-control-plane
      namespace: openstack
    spec:
      ...
      swift:
        enabled: true
        template:
          swiftProxy:
            replicas: 2
          swiftRing:
            partPower: 12
            ringReplicas: 3
    ...
    • Replace the partPower value, 12 in this example, with the partition power that you want to set.

      Tip

      You can also configure an additional object server ring for new containers. This is useful if you want to add more disks to an Object Storage service deployment that initially uses a low partition power.
