Chapter 5. Configuring multisite storage replication
Mirroring or replication is enabled on a per-CephBlockPool basis within peer managed clusters and can then be configured on a specific subset of images within the pool. The rbd-mirror daemon is responsible for replicating image updates from the local peer cluster to the same image in the remote cluster.
These instructions detail how to create the mirroring relationship between two OpenShift Data Foundation managed clusters.
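For context, a pool with mirroring enabled carries a mirroring stanza in its CephBlockPool spec similar to the sketch below. The field names follow the Rook CephBlockPool API; the Multicluster Orchestrator configures this for you during peering, so this is illustrative only and not a step to perform:
spec:
  mirroring:
    enabled: true
    mode: image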
5.1. Installing OpenShift Data Foundation Multicluster Orchestrator
OpenShift Data Foundation Multicluster Orchestrator is a controller that is installed from OpenShift Container Platform’s OperatorHub on the Hub cluster. This Multicluster Orchestrator controller, along with the MirrorPeer custom resource, creates a bootstrap token and exchanges this token between the managed clusters.
Procedure
- Navigate to OperatorHub on the Hub cluster and use the keyword filter to search for ODF Multicluster Orchestrator.
- Click the ODF Multicluster Orchestrator tile.
- Keep all default settings and click Install.
The operator resources are installed in the openshift-operators namespace and are available to all namespaces.
- Verify that the ODF Multicluster Orchestrator installed successfully. A successful installation gives you the option to select View Operator.
- Verify that the operator pod is in the Running state:
$ oc get pods -n openshift-operators
Example output:
NAME                                        READY   STATUS    RESTARTS   AGE
odfmo-controller-manager-65946fb99b-779v8   1/1     Running   0          5m3s
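As an additional check, you can confirm from the CLI that the operator's ClusterServiceVersion reached the Succeeded phase. The CSV name prefix odf-multicluster-orchestrator used below is an assumption and varies with the installed version:
$ oc get csv -n openshift-operators | grep odf-multicluster-orchestrator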
5.2. Creating mirror peer on hub cluster
Mirror Peer is a cluster-scoped resource that holds information about the managed clusters that will have a peer-to-peer relationship.
Prerequisites
- Ensure that ODF Multicluster Orchestrator is installed on the Hub cluster.
- You must have only two clusters per Mirror Peer.
- Ensure that each cluster has a uniquely identifiable cluster name, such as ocp4perf1 and ocp4perf2.
Procedure
- Click ODF Multicluster Orchestrator to view the operator details. You can also click View Operator after the Multicluster Orchestrator is installed successfully.
- On the Mirror Peer API card, click Create instance and then select YAML view.
- Copy and save the following YAML to a file named mirror-peer.yaml after replacing <cluster1> and <cluster2> with the correct names of your managed clusters in the RHACM console.
apiVersion: multicluster.odf.openshift.io/v1alpha1
kind: MirrorPeer
metadata:
  name: mirrorpeer-<cluster1>-<cluster2>
spec:
  items:
  - clusterName: <cluster1>
    storageClusterRef:
      name: ocs-storagecluster
      namespace: openshift-storage
  - clusterName: <cluster2>
    storageClusterRef:
      name: ocs-storagecluster
      namespace: openshift-storage
  manageS3: true
  schedulingIntervals:
  - 5m
  - 15m
Note: The time values (for example, 5m) for schedulingIntervals will be used to configure the desired interval for replicating persistent volumes. These values can be mapped to your Recovery Point Objective (RPO) for critical applications. Modify the values in schedulingIntervals to be correct for your application requirements. The minimum value is 1m and the default is 5m.
- Copy the contents of your unique mirror-peer.yaml file into the YAML view. You must completely replace the original content.
- Click Create at the bottom of the YAML view screen.
- Verify that you can view the Phase status as ExchangedSecret before proceeding.
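If you prefer the CLI over the YAML view, a minimal sketch of the same step is to apply the saved file and poll the phase directly. The .status.phase path is assumed here to carry the same Phase value that the console displays:
$ oc create -f mirror-peer.yaml
$ oc get mirrorpeer mirrorpeer-<cluster1>-<cluster2> -o jsonpath='{.status.phase}{"\n"}'
Example output:
ExchangedSecret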
5.3. Validating Ceph mirroring on managed clusters
Perform the following validations on the Primary managed cluster and the Secondary managed cluster to confirm that Ceph mirroring is active.
Procedure
- Verify that mirroring is enabled on the default Ceph block pool:
$ oc get cephblockpool -n openshift-storage -o=jsonpath='{.items[?(@.metadata.ownerReferences[*].kind=="StorageCluster")].spec.mirroring.enabled}{"\n"}'
Example output:
true
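If you already know the pool name (ocs-storagecluster-cephblockpool in the examples later in this procedure), an equivalent direct query avoids the ownerReferences filter:
$ oc get cephblockpool ocs-storagecluster-cephblockpool -n openshift-storage -o jsonpath='{.spec.mirroring.enabled}{"\n"}'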
- Verify that the rbd-mirror pod is up and running:
$ oc get pods -o name -l app=rook-ceph-rbd-mirror -n openshift-storage
Example output:
pod/rook-ceph-rbd-mirror-a-6486c7d875-56v2v
- Check the status of the daemon health to ensure it is OK:
$ oc get cephblockpool ocs-storagecluster-cephblockpool -n openshift-storage -o jsonpath='{.status.mirroringStatus.summary}{"\n"}'
Example output:
{"daemon_health":"OK","health":"OK","image_health":"OK","states":{}}
Note: It could take up to 10 minutes for the daemon_health and health fields to change from Warning to OK. If the status does not become OK after 10 minutes, use the Advanced Cluster Manager console to verify that the submariner add-on connection is still in a healthy state.
- Verify that a VolumeReplicationClass is created on the Primary managed cluster and the Secondary managed cluster for each schedulingIntervals value listed in the MirrorPeer (for example, 5m and 15m):
$ oc get volumereplicationclass
Example output:
NAME                                    PROVISIONER
rbd-volumereplicationclass-1625360775   openshift-storage.rbd.csi.ceph.com
rbd-volumereplicationclass-539797778    openshift-storage.rbd.csi.ceph.com
Note: The VolumeReplicationClass is used to specify the mirroringMode for each volume to be replicated as well as how often a volume or image is replicated (for example, every 5 minutes) from the local cluster to the remote cluster.
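To see how a given class maps back to a schedulingIntervals value, you can inspect its parameters. The spec.parameters.schedulingInterval path below is a sketch based on the csi-addons VolumeReplicationClass API; verify the exact field layout against your installed version:
$ oc get volumereplicationclass rbd-volumereplicationclass-1625360775 -o jsonpath='{.spec.parameters.schedulingInterval}{"\n"}'
Example output:
5m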
5.4. Validating object buckets and S3StoreProfiles
Perform the following validations on the Primary managed cluster, the Secondary managed cluster, and the Hub cluster to confirm that the object buckets and S3 store profiles required for replication have been created.
Procedure
- Verify that there is a new Object Bucket Claim and a corresponding Object Bucket in the Primary managed cluster and the Secondary managed cluster in the openshift-storage namespace:
$ oc get obc,ob -n openshift-storage
Example output:
NAME                                                       STORAGE-CLASS                 PHASE   AGE
objectbucketclaim.objectbucket.io/odrbucket-21eb5332f6b6   openshift-storage.noobaa.io   Bound   13m

NAME                                                                        STORAGE-CLASS                 CLAIM-NAMESPACE   CLAIM-NAME   RECLAIM-POLICY   PHASE   AGE
objectbucket.objectbucket.io/obc-openshift-storage-odrbucket-21eb5332f6b6   openshift-storage.noobaa.io                                  Delete           Bound   13m
- Verify that there are two new Secrets in the Hub cluster openshift-dr-system namespace that contain the access and secret keys for each new Object Bucket Claim:
$ oc get secrets -n openshift-dr-system | grep Opaque
Example output:
8b3fb9ed90f66808d988c7edfa76eba35647092   Opaque   2   16m
af5f82f21f8f77faf3de2553e223b535002e480   Opaque   2   16m
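If you need to inspect the credentials themselves, for example when troubleshooting S3 connectivity, you can decode a Secret's data. The data key name AWS_ACCESS_KEY_ID below is an assumption based on common S3-style Secrets; confirm the actual key names with oc describe first:
$ oc get secret 8b3fb9ed90f66808d988c7edfa76eba35647092 -n openshift-dr-system -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d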
- The OBC and Secrets are written in the ConfigMap ramen-hub-operator-config on the Hub cluster in the newly created s3StoreProfiles section:
$ oc get cm ramen-hub-operator-config -n openshift-dr-system -o yaml | grep -A 14 s3StoreProfiles
Example output:
s3StoreProfiles:
- s3Bucket: odrbucket-21eb5332f6b6
  s3CompatibleEndpoint: https://s3-openshift-storage.apps.perf2.example.com
  s3ProfileName: s3profile-ocp4perf2-ocs-storagecluster
  s3Region: noobaa
  s3SecretRef:
    name: 8b3fb9ed90f66808d988c7edfa76eba35647092
    namespace: openshift-dr-system
- s3Bucket: odrbucket-21eb5332f6b6
  s3CompatibleEndpoint: https://s3-openshift-storage.apps.perf1.example.com
  s3ProfileName: s3profile-ocp4perf1-ocs-storagecluster
  s3Region: noobaa
  s3SecretRef:
    name: af5f82f21f8f77faf3de2553e223b535002e480
    namespace: openshift-dr-system
Note: Record the s3ProfileName values. They will be used in the DRPolicy resource.
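To record just the profile names, you can narrow the same ConfigMap query from the previous step:
$ oc get cm ramen-hub-operator-config -n openshift-dr-system -o yaml | grep s3ProfileName
Example output:
    s3ProfileName: s3profile-ocp4perf2-ocs-storagecluster
    s3ProfileName: s3profile-ocp4perf1-ocs-storagecluster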