Chapter 5. Configuring multisite storage replication


Mirroring or replication is enabled on a per CephBlockPool basis within peer managed clusters and can then be configured on a specific subset of images within the pool. The rbd-mirror daemon is responsible for replicating image updates from the local peer cluster to the same image in the remote cluster.

These instructions detail how to create the mirroring relationship between two OpenShift Data Foundation managed clusters.

5.1. Installing OpenShift Data Foundation Multicluster Orchestrator

OpenShift Data Foundation Multicluster Orchestrator is a controller that is installed from OpenShift Container Platform’s OperatorHub on the Hub cluster. This Multicluster Orchestrator controller, along with the MirrorPeer custom resource, creates a bootstrap token and exchanges this token between the managed clusters.

Procedure

  1. Navigate to OperatorHub on the Hub cluster and use the keyword filter to search for ODF Multicluster Orchestrator.
  2. Click the ODF Multicluster Orchestrator tile.
  3. Keep all default settings and click Install.

    The operator resources are installed in openshift-operators and available to all namespaces.

  4. Verify that the ODF Multicluster Orchestrator has installed successfully.

    1. Confirm that the View Operator option is available, which indicates a successful installation.
    2. Verify that the operator pod is in the Running state.

      $ oc get pods -n openshift-operators

      Example output:

      NAME                                        READY   STATUS    RESTARTS   AGE
      odfmo-controller-manager-65946fb99b-779v8   1/1     Running   0          5m3s
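
    As an additional check, you can confirm that the operator's ClusterServiceVersion reached the Succeeded phase. This is a sketch; the CSV name prefix odf-multicluster-orchestrator is an assumption and must be adjusted to match the CSV name in your cluster.

    ```shell
    # List the operator's ClusterServiceVersion and its install phase.
    # The CSV name prefix (odf-multicluster-orchestrator) is an assumption;
    # adjust it to match the CSV name shown by "oc get csv -n openshift-operators".
    oc get csv -n openshift-operators | grep odf-multicluster-orchestrator
    ```

    The PHASE column should show Succeeded before you continue.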

5.2. Creating mirror peer on hub cluster

MirrorPeer is a cluster-scoped resource that holds information about the managed clusters that have a peer-to-peer relationship.

Prerequisites

  • Ensure that ODF Multicluster Orchestrator is installed on the Hub cluster.
  • You must have only two clusters per Mirror Peer.
  • Ensure that each cluster has uniquely identifiable cluster names such as ocp4perf1 and ocp4perf2.

Procedure

  1. Click ODF Multicluster Orchestrator to view the operator details.

    You can also click View Operator after the Multicluster Orchestrator is installed successfully.

  2. In the Mirror Peer API box, click Create instance and then select YAML view.
  3. Copy and save the following YAML to filename mirror-peer.yaml after replacing <cluster1> and <cluster2> with the correct names of your managed clusters in the RHACM console.

    apiVersion: multicluster.odf.openshift.io/v1alpha1
    kind: MirrorPeer
    metadata:
      name: mirrorpeer-<cluster1>-<cluster2>
    spec:
      items:
      - clusterName: <cluster1>
        storageClusterRef:
          name: ocs-storagecluster
          namespace: openshift-storage
      - clusterName: <cluster2>
        storageClusterRef:
          name: ocs-storagecluster
          namespace: openshift-storage
      manageS3: true
      schedulingIntervals:
      - 5m
      - 15m
    Note

    The time values (for example, 5m) in schedulingIntervals configure the interval at which persistent volumes are replicated. These values can be mapped to the Recovery Point Objective (RPO) of your critical applications. Adjust the schedulingIntervals values to match your application requirements. The minimum value is 1m and the default is 5m.

  4. Copy the contents of your unique mirror-peer.yaml file into the YAML view. You must completely replace the original content.
  5. Click Create at the bottom of the YAML view screen.
  6. Verify that the Phase status is ExchangedSecret before proceeding.
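
    The Phase status can also be checked from the CLI. The following sketch assumes that the MirrorPeer status exposes a phase field and uses the resource naming from the mirror-peer.yaml example above; replace the placeholder name with your actual MirrorPeer name.

    ```shell
    # Query the MirrorPeer phase directly; expect "ExchangedSecret".
    # The resource name follows the naming convention used in mirror-peer.yaml
    # and must be replaced with the name of your MirrorPeer resource.
    oc get mirrorpeer mirrorpeer-<cluster1>-<cluster2> -o jsonpath='{.status.phase}{"\n"}'
    ```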

5.3. Validating Ceph mirroring on managed clusters

Perform the following validations on the Primary managed cluster and the Secondary managed cluster to confirm that Ceph mirroring is active:

  1. Verify that mirroring is enabled on the default Ceph block pool.

    $ oc get cephblockpool -n openshift-storage -o=jsonpath='{.items[?(@.metadata.ownerReferences[*].kind=="StorageCluster")].spec.mirroring.enabled}{"\n"}'

    Example output:

    true
  2. Verify that the rbd-mirror pod is up and running.

    $ oc get pods -o name -l app=rook-ceph-rbd-mirror -n openshift-storage

    Example output:

    pod/rook-ceph-rbd-mirror-a-6486c7d875-56v2v
  3. Check the status of the daemon health to ensure it is OK.

    $ oc get cephblockpool ocs-storagecluster-cephblockpool -n openshift-storage -o jsonpath='{.status.mirroringStatus.summary}{"\n"}'

    Example output:

    {"daemon_health":"OK","health":"OK","image_health":"OK","states":{}}
    Note

    It could take up to 10 minutes for the daemon_health and health fields to change from Warning to OK. If the status does not become OK after 10 minutes, use the Advanced Cluster Management console to verify that the Submariner add-on connection is still in a healthy state.

  4. Verify that a VolumeReplicationClass is created on the Primary managed cluster and the Secondary managed cluster for each scheduling interval listed in the MirrorPeer (for example, 5m and 15m).

    $ oc get volumereplicationclass

    Example output:

    NAME                                    PROVISIONER
    rbd-volumereplicationclass-1625360775   openshift-storage.rbd.csi.ceph.com
    rbd-volumereplicationclass-539797778    openshift-storage.rbd.csi.ceph.com
    Note

    The VolumeReplicationClass is used to specify the mirroringMode for each volume to be replicated as well as how often a volume or image is replicated (for example, every 5 minutes) from the local cluster to the remote cluster.
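
To see which scheduling interval a given VolumeReplicationClass carries, you can inspect its parameters. This is a sketch; the .spec.parameters.schedulingInterval key is an assumption based on the csi-addons replication API and should be verified against your cluster's resources.

```shell
# Print each VolumeReplicationClass together with its configured
# scheduling interval. The .spec.parameters.schedulingInterval key is an
# assumption; confirm it with "oc get volumereplicationclass -o yaml".
oc get volumereplicationclass -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.parameters.schedulingInterval}{"\n"}{end}'
```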

5.4. Validating object buckets and S3StoreProfiles

Perform the following validations on the Primary managed cluster and the Secondary managed cluster to confirm that the object buckets and S3 store profiles required for replication were created.

Procedure

  1. Verify that there is a new Object Bucket Claim and corresponding Object Bucket in the Primary managed cluster and the Secondary managed cluster in the openshift-storage namespace.

    $ oc get obc,ob -n openshift-storage

    Example output:

    NAME                                                       STORAGE-CLASS                 PHASE   AGE
    objectbucketclaim.objectbucket.io/odrbucket-21eb5332f6b6   openshift-storage.noobaa.io   Bound   13m
    
    NAME                                                                        STORAGE-CLASS                 CLAIM-NAMESPACE   CLAIM-NAME   RECLAIM-POLICY   PHASE   AGE
    objectbucket.objectbucket.io/obc-openshift-storage-odrbucket-21eb5332f6b6   openshift-storage.noobaa.io                                  Delete         Bound   13m
  2. Verify that there are two new Secrets in the Hub cluster openshift-dr-system namespace that contain the access and secret key for each new Object Bucket Claim.

    $ oc get secrets -n openshift-dr-system | grep Opaque

    Example output:

    8b3fb9ed90f66808d988c7edfa76eba35647092   Opaque		2      16m
    af5f82f21f8f77faf3de2553e223b535002e480   Opaque		2      16m
  3. Verify that the OBCs and Secrets are recorded in the ConfigMap ramen-hub-operator-config on the Hub cluster, in the newly created s3StoreProfiles section.

    $ oc get cm ramen-hub-operator-config -n openshift-dr-system -o yaml | grep -A 14 s3StoreProfiles

    Example output:

    s3StoreProfiles:
    - s3Bucket: odrbucket-21eb5332f6b6
      s3CompatibleEndpoint: https://s3-openshift-storage.apps.perf2.example.com
      s3ProfileName: s3profile-ocp4perf2-ocs-storagecluster
      s3Region: noobaa
      s3SecretRef:
        name: 8b3fb9ed90f66808d988c7edfa76eba35647092
        namespace: openshift-dr-system
    - s3Bucket: odrbucket-21eb5332f6b6
      s3CompatibleEndpoint: https://s3-openshift-storage.apps.perf1.example.com
      s3ProfileName: s3profile-ocp4perf1-ocs-storagecluster
      s3Region: noobaa
      s3SecretRef:
        name: af5f82f21f8f77faf3de2553e223b535002e480
        namespace: openshift-dr-system
    Note

    Record the s3ProfileName values. They are used later in the DRPolicy resource.
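
    A quick way to collect the s3ProfileName values is to filter the ConfigMap output. The sketch below runs against a saved copy of the example s3StoreProfiles section shown above; in a live cluster you would pipe the output of oc get cm ramen-hub-operator-config -n openshift-dr-system -o yaml instead.

    ```shell
    # Save the s3StoreProfiles section (here, a trimmed copy of the example
    # output above) and extract the s3ProfileName values needed for the
    # DRPolicy resource.
    cat <<'EOF' > /tmp/s3profiles.yaml
    s3StoreProfiles:
    - s3Bucket: odrbucket-21eb5332f6b6
      s3ProfileName: s3profile-ocp4perf2-ocs-storagecluster
    - s3Bucket: odrbucket-21eb5332f6b6
      s3ProfileName: s3profile-ocp4perf1-ocs-storagecluster
    EOF
    # Print only the profile names.
    awk '/s3ProfileName:/ {print $2}' /tmp/s3profiles.yaml
    ```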
