
Chapter 3. Regional-DR solution for OpenShift Data Foundation


3.1. Components of Regional-DR solution

Regional-DR is composed of Red Hat Advanced Cluster Management for Kubernetes and OpenShift Data Foundation components to provide application and data mobility across Red Hat OpenShift Container Platform clusters.

Red Hat Advanced Cluster Management for Kubernetes

Red Hat Advanced Cluster Management (RHACM) provides the ability to manage multiple clusters and application lifecycles. Hence, it serves as a control plane in a multi-cluster environment.

RHACM is split into two parts:

  • RHACM Hub: includes components that run on the multi-cluster control plane.
  • Managed clusters: includes components that run on the clusters that are managed.

For more information about this product, see RHACM documentation and the RHACM “Manage Applications” documentation.

OpenShift Data Foundation

OpenShift Data Foundation provides the ability to provision and manage storage for stateful applications in an OpenShift Container Platform cluster.

OpenShift Data Foundation is backed by Ceph as the storage provider, whose lifecycle is managed by Rook in the OpenShift Data Foundation component stack. Ceph-CSI provides the provisioning and management of Persistent Volumes for stateful applications.

OpenShift Data Foundation stack is now enhanced with the following abilities for disaster recovery:

  • Ability to enable RBD block pools for mirroring across OpenShift Data Foundation instances (clusters)
  • Ability to mirror specific images within an RBD block pool
  • csi-addons to manage per Persistent Volume Claim (PVC) mirroring

OpenShift DR

OpenShift DR is a set of orchestrators to configure and manage stateful applications across a set of peer OpenShift clusters that are managed using RHACM, and it provides cloud-native interfaces to orchestrate the lifecycle of an application’s state on Persistent Volumes. These include:

  • Protecting an application and its state relationship across OpenShift clusters
  • Failing over an application and its state to a peer cluster
  • Relocating an application and its state to the previously deployed cluster

OpenShift DR is split into three components:

  • ODF Multicluster Orchestrator: Installed on the multi-cluster control plane (RHACM Hub), it orchestrates configuration and peering of OpenShift Data Foundation clusters for Metro and Regional DR relationships
  • OpenShift DR Hub Operator: Automatically installed as part of ODF Multicluster Orchestrator installation on the hub cluster to orchestrate failover or relocation of DR enabled applications.
  • OpenShift DR Cluster Operator: Automatically installed on each managed cluster that is part of a Metro and Regional DR relationship to manage the lifecycle of all PVCs of an application.
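
The operators described above expose their configuration as custom resources. After installation (covered in the following sections), you can optionally list these CRDs directly; the command below assumes that the OpenShift DR operators use the Ramen API group ramendr.openshift.io, which may differ between versions:

    $ oc get crd | grep ramendr.openshift.io

The expected entries correspond to the DRPolicy, DRCluster, and DRPlacementControl resources that are used in the procedures that follow.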

3.2. Regional-DR deployment workflow

This section provides an overview of the steps required to configure and deploy Regional-DR capabilities using the latest version of Red Hat OpenShift Data Foundation across two distinct OpenShift Container Platform clusters. In addition to the two managed clusters, a third OpenShift Container Platform cluster is required to deploy Red Hat Advanced Cluster Management (RHACM).

To configure your infrastructure, perform the following steps in the order given:

  1. Ensure that the requirements are met across the three OpenShift Container Platform clusters that are part of the DR solution: the Hub, Primary, and Secondary clusters. See Requirements for enabling Regional-DR.
  2. Install OpenShift Data Foundation operator and create a storage system on Primary and Secondary managed clusters. See Creating OpenShift Data Foundation cluster on managed clusters.
  3. Install the ODF Multicluster Orchestrator on the Hub cluster. See Installing ODF Multicluster Orchestrator on Hub cluster.
  4. Configure SSL access between the Hub, Primary and Secondary clusters. See Configuring SSL access across clusters.
  5. Enable the Multicluster Web Console. See Enabling Multicluster Web Console.
  6. Create a DRPolicy resource for use with applications requiring DR protection across the Primary and Secondary clusters. See Creating Disaster Recovery Policy on Hub cluster.

    Note

    There can be more than a single policy.

For testing your disaster recovery solution, see Create sample application for testing disaster recovery solution, Application failover between managed clusters, and Relocating an application between managed clusters.

3.3. Requirements for enabling Regional-DR

Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites in order to successfully implement a Disaster Recovery solution:

  • You must have three OpenShift clusters that have network reachability between them:

    • Hub cluster where Red Hat Advanced Cluster Management for Kubernetes (RHACM operator) is installed.
    • Primary managed cluster where OpenShift Data Foundation is installed.
    • Secondary managed cluster where OpenShift Data Foundation is installed.
  • Ensure that the RHACM operator and MultiClusterHub are installed on the Hub cluster. See the RHACM installation guide for instructions.

    • Log in to the RHACM console using your OpenShift credentials.
    • Find the Route that has been created for the RHACM console:

      $ oc get route multicloud-console -n open-cluster-management -o jsonpath --template="https://{.spec.host}/multicloud/clusters{'\n'}"

      Example Output:

      https://multicloud-console.apps.perf3.example.com/multicloud/clusters
    • Open the output link in a browser and log in with your OpenShift credentials. You should now see your local-cluster imported.
Important

It is the user’s responsibility to ensure that application traffic routing and redirection are configured appropriately. Configuration and updates to the application traffic routes are currently not supported.

  • Ensure that you have either imported or created the Primary managed cluster and the Secondary managed cluster using the RHACM console. For instructions, see Creating a cluster and Importing a target managed cluster to the hub cluster.
  • The managed clusters must have non-overlapping networks.

    To connect the managed OpenShift cluster and service networks using the Submariner add-ons, validate that the two clusters have non-overlapping networks by running the following command for each of the managed clusters. A condensed version of this check is sketched after this list.

    Note

    Version 0.12 of Submariner, installed using the RHACM 2.5 cluster add-ons, does not support the OpenShift OVNKubernetes CNI plugin. See the RHACM 2.5 Release Notes.

    $ oc get networks.config.openshift.io cluster -o json | jq .spec

    Example output for Primary cluster:

    {
      "clusterNetwork": [
        {
          "cidr": "10.5.0.0/16",
          "hostPrefix": 23
        }
      ],
      "externalIP": {
        "policy": {}
      },
      "networkType": "OpenShiftSDN",
      "serviceNetwork": [
        "10.15.0.0/16"
      ]
    }

    Example output for Secondary cluster:

    {
      "clusterNetwork": [
        {
          "cidr": "10.6.0.0/16",
          "hostPrefix": 23
        }
      ],
      "externalIP": {
        "policy": {}
      },
      "networkType": "OpenShiftSDN",
      "serviceNetwork": [
        "10.16.0.0/16"
      ]
    }

    For more information, see Submariner add-ons documentation.

  • Ensure that the Managed clusters can connect using Submariner add-ons. After identifying and ensuring that the cluster and service networks have non-overlapping ranges, install the Submariner add-ons for each managed cluster using the RHACM console and Cluster sets. For instructions, see Submariner documentation.

    Caution

    Do not select Enable Globalnet, even if the cluster and service networks of the managed clusters overlap. Globalnet is currently not supported with Regional Disaster Recovery. Ensure that the cluster and service networks are non-overlapping before proceeding.
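
As a condensed version of the non-overlapping network check referenced above, the CNI plugin type and the network ranges can be printed one value per line. This is an optional convenience rather than part of the documented procedure, and it assumes jq is available:

    $ oc get networks.config.openshift.io cluster -o json | jq -r '.spec.networkType, .spec.clusterNetwork[].cidr, .spec.serviceNetwork[]'

Run the command against both managed clusters and confirm that none of the printed CIDR ranges overlap.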

3.4. Creating an OpenShift Data Foundation cluster on managed clusters

In order to configure storage replication between the two OpenShift Container Platform clusters, create an OpenShift Data Foundation storage system after you install the OpenShift Data Foundation operator.

Note

Refer to the OpenShift Data Foundation deployment guides and instructions that are specific to your infrastructure (for example, AWS, VMware, Bare metal, Azure).

Procedure

  1. Install and configure the latest OpenShift Data Foundation cluster on each of the managed clusters.

    For information about the OpenShift Data Foundation deployment, refer to your infrastructure specific deployment guides (for example, AWS, VMware, Bare metal, Azure).

  2. Validate the successful deployment of OpenShift Data Foundation on each managed cluster with the following command:

    $ oc get storagecluster -n openshift-storage ocs-storagecluster -o jsonpath='{.status.phase}{"\n"}'

    For the Multicloud Gateway (MCG):

    $ oc get noobaa -n openshift-storage noobaa -o jsonpath='{.status.phase}{"\n"}'

    If the status result is Ready for both queries on the Primary managed cluster and the Secondary managed cluster, then continue with the next step.

Note

In the OpenShift Web Console, navigate to Installed Operators → OpenShift Data Foundation → Storage System → ocs-storagecluster-storagesystem → Resources and verify that the Status of StorageCluster is Ready and has a green tick mark next to it.

3.5. Installing OpenShift Data Foundation Multicluster Orchestrator operator

OpenShift Data Foundation Multicluster Orchestrator is a controller that is installed from OpenShift Container Platform’s OperatorHub on the Hub cluster.

Procedure

  1. On the Hub cluster, navigate to OperatorHub and use the keyword filter to search for ODF Multicluster Orchestrator.
  2. Click ODF Multicluster Orchestrator tile.
  3. Keep all default settings and click Install.

    Ensure that the operator resources are installed in openshift-operators project and available to all namespaces.

    Note

    The ODF Multicluster Orchestrator also installs the Openshift DR Hub Operator on the RHACM hub cluster as a dependency.

  4. Verify that the operator Pods are in a Running state. The OpenShift DR Hub operator is also installed at the same time in openshift-operators namespace.

    $ oc get pods -n openshift-operators

    Example output:

    NAME                                        READY   STATUS       RESTARTS    AGE
    odf-multicluster-console-6845b795b9-blxrn   1/1     Running      0           4d20h
    odfmo-controller-manager-f9d9dfb59-jbrsd    1/1     Running      0           4d20h
    ramen-hub-operator-6fb887f885-fss4w         2/2     Running      0           4d20h
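
As an optional additional check, confirm that the ClusterServiceVersions for the ODF Multicluster Orchestrator and the OpenShift DR Hub Operator report the Succeeded phase; the exact CSV names and versions vary with the installed release:

    $ oc get csv -n openshift-operators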

3.6. Configuring SSL access across clusters

Configure network (SSL) access between the primary and secondary clusters so that metadata can be stored on the alternate cluster in a Multicloud Gateway (MCG) object bucket using a secure transport protocol, and on the Hub cluster for verifying access to the object buckets.

Note

If all of your OpenShift clusters are deployed using a signed and valid set of certificates for your environment then this section can be skipped.

Procedure

  1. Extract the ingress certificate for the Primary managed cluster and save the output to primary.crt.

    $ oc get cm default-ingress-cert -n openshift-config-managed -o jsonpath="{['data']['ca-bundle\.crt']}" > primary.crt
  2. Extract the ingress certificate for the Secondary managed cluster and save the output to secondary.crt.

    $ oc get cm default-ingress-cert -n openshift-config-managed -o jsonpath="{['data']['ca-bundle\.crt']}" > secondary.crt
  3. Create a new ConfigMap file to hold the remote cluster’s certificate bundle with filename cm-clusters-crt.yaml.

    Note

    There could be more or fewer than three certificates for each cluster, as shown in this example file. Also, ensure that the certificate contents are correctly indented after you copy and paste them from the primary.crt and secondary.crt files that you created earlier. A sketch of how to generate this file directly from those certificate files is provided after this procedure.

    apiVersion: v1
    data:
      ca-bundle.crt: |
        -----BEGIN CERTIFICATE-----
        <copy contents of cert1 from primary.crt here>
        -----END CERTIFICATE-----
    
        -----BEGIN CERTIFICATE-----
        <copy contents of cert2 from primary.crt here>
        -----END CERTIFICATE-----
    
        -----BEGIN CERTIFICATE-----
        <copy contents of cert3 from primary.crt here>
        -----END CERTIFICATE-----
    
        -----BEGIN CERTIFICATE-----
        <copy contents of cert1 from secondary.crt here>
        -----END CERTIFICATE-----
    
        -----BEGIN CERTIFICATE-----
        <copy contents of cert2 from secondary.crt here>
        -----END CERTIFICATE-----
    
        -----BEGIN CERTIFICATE-----
        <copy contents of cert3 from secondary.crt here>
        -----END CERTIFICATE-----
    kind: ConfigMap
    metadata:
      name: user-ca-bundle
      namespace: openshift-config
  4. Create the ConfigMap on the Primary managed cluster, Secondary managed cluster, and the Hub cluster.

    $ oc create -f cm-clusters-crt.yaml

    Example output:

    configmap/user-ca-bundle created
  5. Patch default proxy resource on the Primary managed cluster, Secondary managed cluster, and the Hub cluster.

    $ oc patch proxy cluster --type=merge  --patch='{"spec":{"trustedCA":{"name":"user-ca-bundle"}}}'

    Example output:

    proxy.config.openshift.io/cluster patched
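
As an alternative to hand-editing cm-clusters-crt.yaml in step 3, the ConfigMap manifest can be generated from the extracted certificate files, which avoids indentation mistakes. This is a sketch rather than part of the documented procedure, and the intermediate file name combined-ca-bundle.crt is only an example:

    $ cat primary.crt secondary.crt > combined-ca-bundle.crt
    $ oc create configmap user-ca-bundle -n openshift-config \
        --from-file=ca-bundle.crt=combined-ca-bundle.crt \
        --dry-run=client -o yaml > cm-clusters-crt.yaml

Review the generated file, then create it on the Primary managed cluster, the Secondary managed cluster, and the Hub cluster as described in step 4.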

3.7. Enabling Multicluster Web Console

This is a new capability that is required before creating a Data Policy or DRPolicy. It is only needed on the Hub cluster and RHACM 2.5 must be installed.

Important

Multicluster console is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

Procedure

  1. Navigate to Administration → Cluster Settings → Configuration → FeatureGate.
  2. Edit the YAML template as follows:

    [...]
    spec:
      featureSet: TechPreviewNoUpgrade
  3. Click Save to enable the multicluster console for all clusters in the RHACM console. Wait for the Nodes to become Ready.
  4. Refresh the web console and verify that the managed cluster names are listed below All Clusters.
Warning

Do not set this feature gate on production clusters. You will not be able to upgrade your cluster after applying the feature gate, and it cannot be undone.
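
The same feature gate can also be set from the CLI instead of editing the YAML in the web console. The warning above applies equally, because the change cannot be undone; the following is a minimal sketch:

    $ oc patch featuregate cluster --type=merge --patch '{"spec":{"featureSet":"TechPreviewNoUpgrade"}}'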

3.8. Creating Disaster Recovery Policy on Hub cluster

The OpenShift Disaster Recovery Policy (DRPolicy) resource specifies the OpenShift Container Platform clusters participating in the disaster recovery solution and the desired replication interval. DRPolicy is a cluster-scoped resource that users can apply to applications that require a Disaster Recovery solution.

The ODF MultiCluster Orchestrator Operator facilitates the creation of each DRPolicy and the corresponding DRClusters through the Multicluster Web console.
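
For reference, a DRPolicy created through the console is backed by a cluster-scoped custom resource roughly like the following sketch. The API group and field names (ramendr.openshift.io, drClusters, schedulingInterval) are assumptions based on the upstream Ramen project and may differ between OpenShift Data Foundation versions; use the console workflow below to create the actual resource.

    apiVersion: ramendr.openshift.io/v1alpha1
    kind: DRPolicy
    metadata:
      name: ocp4bos1-ocp4bos2-5m        # unique name per replication interval
    spec:
      drClusters:                       # RHACM names of the two managed clusters
      - ocp4bos1
      - ocp4bos2
      schedulingInterval: 5m            # replication interval; the minimum is one minute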

Prerequisites

  • Ensure that there is a minimum set of two managed clusters.
  • Make sure to log in to all the clusters from the Multicluster Web console.

    • Click on All Clusters to expand the list of managed clusters.
    • For each managed cluster listed below All Clusters, click on the <cluster_name> and wait for a login screen to appear, where you can log in using the credentials of the cluster that you have selected.

Procedure

  1. On the OpenShift console, navigate to All Clusters.

    Multicluster console Data policies
  2. Navigate to Data Services and click Data policies.
  3. Click Create DRPolicy.
  4. Enter Policy name. Ensure that each DRPolicy has a unique name (for example: ocp4bos1-ocp4bos2-5m).
  5. Select two clusters from the list of managed clusters with which this new policy will be associated.
  6. The Replication policy is automatically set to Asynchronous (async) based on the OpenShift clusters selected, and a Sync schedule option becomes available.
  7. Set Sync schedule.

    Important

    For every desired replication interval a new DRPolicy must be created with a unique name (such as: ocp4bos1-ocp4bos2-10m). The same clusters can be selected but the Sync schedule can be configured with a different replication interval in minutes/hours/days. The minimum is one minute.

  8. Click Create.
  9. Verify that the DRPolicy is created successfully. Run this command on the Hub cluster for each of the DRPolicy resources created.

    Note

    Replace <drpolicy_name> with your unique name.

    $ oc get drpolicy <drpolicy_name> -o jsonpath='{.status.conditions[].reason}{"\n"}'

    Example output:

    Succeeded
    Note

    When a DRPolicy is created, two DRCluster resources are also created along with it. It could take up to 10 minutes for all three resources to be validated and for the status to show as Succeeded.

  10. Verify the object bucket access from the Hub cluster to both the Primary managed cluster and the Secondary managed cluster.

    1. Get the names of the DRClusters on the Hub cluster.

      $ oc get drclusters

      Example output:

      NAME        AGE
      ocp4bos1   4m42s
      ocp4bos2   4m42s
    2. Check S3 access to each bucket created on each managed cluster using this DRCluster validation command.

      Note

      Replace <drcluster_name> with your unique name.

      $ oc get drcluster <drcluster_name> -o jsonpath='{.status.conditions[2].reason}{"\n"}'

      Example output:

      Succeeded
      Note

      Make sure to run the command for both DRClusters on the Hub cluster.

  11. Verify that the OpenShift DR Cluster operator installation was successful on the Primary managed cluster and the Secondary managed cluster.

    $ oc get csv,pod -n openshift-dr-system

    Example output:

    NAME                                                                      DISPLAY                         VERSION   REPLACES   PHASE
    clusterserviceversion.operators.coreos.com/odr-cluster-operator.v4.11.0   Openshift DR Cluster Operator   4.11.0               Succeeded
    
    NAME                                             READY   STATUS    RESTARTS   AGE
    pod/ramen-dr-cluster-operator-5564f9d669-f6lbc   2/2     Running   0          5m32s

    You can also verify that the OpenShift DR Cluster Operator is installed successfully in the OperatorHub of each managed cluster.

  12. Verify the status of the ODF mirroring daemon health on the Primary managed cluster and the Secondary managed cluster.

    $ oc get cephblockpool ocs-storagecluster-cephblockpool -n openshift-storage -o jsonpath='{.status.mirroringStatus.summary}{"\n"}'

    Example output:

    {"daemon_health":"OK","health":"OK","image_health":"OK","states":{}}
    Caution

    It could take up to 10 minutes for the daemon_health and health to go from Warning to OK. If the status does not become OK eventually then use the RHACM console to verify that the Submariner connection between managed clusters is still in a healthy state. Do not proceed until all values are OK.

3.9. Create sample application for testing disaster recovery solution

OpenShift Data Foundation disaster recovery (DR) solution supports disaster recovery for applications that are managed by RHACM. See Managing Applications for more details.

This solution orchestrates RHACM application placement, using the PlacementRule, when an application is moved between clusters in a DRPolicy for failover or relocation requirements.

The following sections detail how to apply a DRPolicy to an application and how to manage the applications placement life-cycle during and after cluster unavailability.

Note

OpenShift Data Foundation DR solution does not support ApplicationSet, which is required for applications that are deployed via ArgoCD.

3.9.1. Creating a sample application

In order to test failover from the Primary managed cluster to the Secondary managed cluster and relocation, you need a simple application.

Prerequisites

  • When creating an application for general consumption, ensure that:

    • the application is deployed to ONLY one cluster.
    • the application is deployed prior to applying the DRPolicy to the application.
  • Use the sample application called busybox as an example.
  • Ensure all external routes of the application are configured using either the Global Traffic Manager (GTM) or Global Server Load Balancing (GSLB) service for traffic redirection when the application fails over or is relocated.

Procedure

  1. Log in to the RHACM console using your OpenShift credentials if not already logged in.

    $ oc get route multicloud-console -n open-cluster-management -o jsonpath --template="https://{.spec.host}/multicloud/applications{'\n'}"

    Example Output:

    https://multicloud-console.apps.perf3.example.com/multicloud/applications
  2. Navigate to Applications and click Create application.
  3. Select type as Subscription.
  4. Enter your application Name (for example, busybox) and Namespace (for example, busybox-sample).
  5. In the Repository location for resources section, select Repository type Git.
  6. Enter the Git repository URL for the sample application, the GitHub Branch, and the Path where the busybox Pod and PVC resources will be created.

    Use the sample application repository https://github.com/red-hat-storage/ocm-ramen-samples/tree/release-4.11, where the Branch is release-4.11 and the Path is busybox-odr.

  7. Scroll down in the form until you see Deploy application resources only on clusters matching specified labels and then add a label with its value set to the Primary managed cluster name in RHACM cluster list view.

    ACM Select cluster for deployment
  8. Click Create at the top right-hand corner.

    On the follow-on screen, go to the Topology tab. You should see all green checkmarks on the application topology.

    Note

    To get more information, click on any of the topology elements and a window will appear on the right of the topology view.

  9. Validate the sample application deployment.

    Now that the busybox application has been deployed to your preferred cluster, the deployment can be validated.

    Log in to the managed cluster where busybox was deployed by RHACM.

    $ oc get pods,pvc -n busybox-sample

    Example output:

    NAME          READY   STATUS    RESTARTS   AGE
    pod/busybox   1/1     Running   0          6m
    
    NAME                                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
    persistentvolumeclaim/busybox-pvc   Bound    pvc-a56c138a-a1a9-4465-927f-af02afbbff37   5Gi        RWO            ocs-storagecluster-ceph-rbd   6m
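
For reference, the resources deployed by the sample repository are roughly equivalent to the sketch below; the actual manifests in ocm-ramen-samples may differ in image, labels, and structure. The appname=busybox label shown here is the PVC label that is used later when applying the DRPolicy.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: busybox-pvc
      labels:
        appname: busybox                # PVC label later used as the DRPolicy PVC selector
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
      storageClassName: ocs-storagecluster-ceph-rbd
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox
      labels:
        appname: busybox
    spec:
      containers:
      - name: busybox
        image: busybox                  # the actual sample may pin a specific image
        command: ["sh", "-c", "sleep 3600"]
        volumeMounts:
        - name: data
          mountPath: /mnt/test
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: busybox-pvc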

3.9.2. Apply DRPolicy to sample application

  1. On the Hub cluster, go back to the Multicluster Web console and navigate to All Clusters.
  2. Log in to all the clusters listed under All Clusters.
  3. Navigate to Data Services and then click Data policies.
  4. Click the Actions menu at the end of DRPolicy to view the list of available actions.

    Apply DRPolicy
  5. Click Apply DRPolicy.
  6. When the Apply DRPolicy modal is displayed, select busybox application and enter PVC label as appname=busybox.

    Note

    When multiple placement rules under the same application, or more than one application, are selected, all PVCs within the application’s namespace will be protected by default.

  7. Click Apply.
  8. Verify that a DRPlacementControl or DRPC was created in the busybox-sample namespace on the Hub cluster and that its CURRENTSTATE shows as Deployed. This resource is used for both failover and relocate actions for this application.

    $ oc get drpc -n busybox-sample

    Example output:

    NAME                       AGE     PREFERREDCLUSTER   FAILOVERCLUSTER   DESIREDSTATE   CURRENTSTATE
    busybox-placement-1-drpc   6m59s   ocp4bos1                                            Deployed
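
Optionally, on the Primary managed cluster you can also confirm that per-PVC replication resources were created for the protected namespace. The resource kinds used below (VolumeReplicationGroup from the OpenShift DR cluster operator and VolumeReplication from csi-addons) are assumptions and may be named differently in your version:

    $ oc get volumereplicationgroup,volumereplication -n busybox-sample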

3.9.3. Deleting sample application

You can delete the sample application busybox using the RHACM console.

Note

Do not delete the sample application until the failover and relocate testing is completed and the application is ready to be removed from RHACM and the managed clusters.

Procedure

  1. On the RHACM console, navigate to Applications.
  2. Search for the sample application to be deleted (for example, busybox).
  3. Click the Action Menu (⋮) next to the application you want to delete.
  4. Click Delete application.

    When Delete application is selected, a new screen appears asking whether the application-related resources should also be deleted.

  5. Select the Remove application related resources checkbox to delete the Subscription and PlacementRule.
  6. Click Delete. This deletes the busybox application on the Primary managed cluster (or whichever cluster the application was running on).
  7. In addition to the resources deleted using the RHACM console, the DRPlacementControl must also be deleted after deleting the busybox application.

    1. Log in to the OpenShift Web console for the Hub cluster and navigate to Installed Operators for the project busybox-sample.
    2. Click OpenShift DR Hub Operator and then click DRPlacementControl tab.
    3. Click the Action Menu (⋮) next to the busybox application DRPlacementControl that you want to delete.
    4. Click Delete DRPlacementControl.
    5. Click Delete.
Note

This process can be used to delete any application with a DRPlacementControl resource.
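
Equivalently, the DRPlacementControl from step 7 can be deleted from the CLI on the Hub cluster; replace the resource name if your DRPC is named differently:

    $ oc delete drpc busybox-placement-1-drpc -n busybox-sample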

3.10. Application failover between managed clusters

A failover is performed when a managed cluster becomes unavailable for any reason.

This section provides instructions on how to fail over the busybox sample application. The failover method is application-based. Each application that is to be protected in this manner must have a corresponding DRPlacementControl resource in the application namespace.

3.10.1. Modify DRPlacementControl to failover

Prerequisite

  • Before initiating a failover, verify that the PEER READY of DRPlacementControl or DRPC in the busybox-sample namespace on the Hub cluster is True.

    $ oc get drpc -n busybox-sample -o wide

    Example output:

    NAME                       AGE     PREFERREDCLUSTER   FAILOVERCLUSTER   DESIREDSTATE   CURRENTSTATE   PROGRESSION   START TIME    DURATION     PEER READY
    busybox-placement-1-drpc   6m59s   ocp4bos1                                            Deployed       Completed     <timestamp>   <duration>   True
Note

If PEER READY is not true, see disaster recovery related known issues as documented in Release Notes for possible workarounds.

Procedure

  1. On the Hub cluster, navigate to Installed Operators and then click Openshift DR Hub Operator.
  2. Click DRPlacementControl tab.

    Note

    Make sure to be in the busybox-sample namespace.

  3. Click DRPC busybox-placement-1-drpc and then the YAML view.
  4. Add the action and failoverCluster details as shown in the screenshot below.

    DRPlacementControl add action Failover

    Image shows where to add the action Failover in the YAML view

    The failoverCluster should be the RHACM cluster name for the Secondary managed cluster. An illustrative YAML fragment is sketched after this procedure.

  5. Click Save.
  6. Verify that the CURRENTSTATE of DRPlacementControl or DRPC in the busybox-sample namespace on the Hub cluster is FailedOver.

    $ oc get drpc -n busybox-sample

    Example output:

    NAME                       AGE     PREFERREDCLUSTER   FAILOVERCLUSTER   DESIREDSTATE   CURRENTSTATE
    busybox-placement-1-drpc   6m59s   ocp4bos1           ocp4bos2          Failover       FailedOver
  7. Verify that the application busybox is now running in the Secondary managed cluster, which is the failover cluster ocp4bos2 specified in the YAML file.

    $ oc get pods,pvc -n busybox-sample

    Example output:

    NAME          READY   STATUS    RESTARTS   AGE
    pod/busybox   1/1     Running   0          35s
    
    NAME                                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
    persistentvolumeclaim/busybox-pvc   Bound    pvc-79f2a74d-6e2c-48fb-9ed9-666b74cfa1bb   5Gi        RWO            ocs-storagecluster-ceph-rbd   35s
  8. Verify if busybox is running in the Primary managed cluster. The busybox application should no longer be running on this managed cluster.

    $ oc get pods,pvc -n busybox-sample

    Example output:

    No resources found in busybox-sample namespace.
  9. Verify that the DNS routes for the application are configured correctly. If the external routes are not configured, you can use either the Global Traffic Manager (GTM) or Global Server Load Balancing (GSLB) service to reconfigure the routes.
Important

Be aware of known DR issues as documented in Known Issues section of Release Notes.
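
The edit made in step 4 amounts to adding the action and failoverCluster entries to the DRPlacementControl. The fragment below is illustrative (placing the fields directly under spec is an assumption about the resource layout) and uses the cluster names from this example; the resource can also be edited directly on the Hub cluster:

    $ oc edit drpc busybox-placement-1-drpc -n busybox-sample

    spec:
      action: Failover              # becomes the DESIREDSTATE shown by 'oc get drpc'
      failoverCluster: ocp4bos2     # RHACM cluster name of the Secondary managed cluster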

3.11. Relocating an application between managed clusters

A relocation operation is very similar to failover. Relocate is application based and uses the DRPlacementControl to trigger the relocation.

Relocation is performed once the failed cluster is available and the application resources are cleaned up on the failed cluster.

In this case the action is Relocate back to the preferredCluster.

3.11.1. Modify DRPlacementControl to Relocate

Prerequisite

  • Before initiating a relocate, verify that the PEER READY of DRPlacementControl or DRPC in the busybox-sample namespace on the Hub cluster is True.

    $ oc get drpc -n busybox-sample -o wide

    Example output:

    NAME                       AGE     PREFERREDCLUSTER   FAILOVERCLUSTER   DESIREDSTATE   CURRENTSTATE   PROGRESSION   START TIME    DURATION     PEER READY
    busybox-placement-1-drpc   6m59s   ocp4bos1           ocp4bos2          Failover       FailedOver     Completed     <timestamp>   <duration>   True
Note

If PEER READY is not true, see disaster recovery related known issues as documented in Release Notes for possible workarounds.

Procedure

  1. On the Hub cluster, navigate to Installed Operators and then click Openshift DR Hub Operator.
  2. Click DRPlacementControl tab.
  3. Click DRPC busybox-placement-1-drpc and then the YAML view.
  4. Modify the action to Relocate. An illustrative fragment is sketched after this procedure.

    DRPlacementControl modify action to Relocate

    Image shows where to modify the action in the YAML view

  5. Click Save.
  6. Verify that the CURRENTSTATE of DRPlacementControl or DRPC in the busybox-sample namespace on the Hub cluster is Relocated.

    $ oc get drpc -n busybox-sample

    Example output:

    NAME                       AGE     PREFERREDCLUSTER   FAILOVERCLUSTER   DESIREDSTATE   CURRENTSTATE
    busybox-placement-1-drpc   6m59s   ocp4bos1           ocp4bos2          Relocate       Relocated
  7. Verify that the application busybox is now running in the Primary managed cluster. The relocation is to the preferredCluster ocp4bos1, as specified in the YAML file, which is where the application was running before the failover operation.

    $ oc get pods,pvc -n busybox-sample

    Example output:

    NAME          READY   STATUS    RESTARTS   AGE
    pod/busybox   1/1     Running   0          60s
    
    NAME                                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
    persistentvolumeclaim/busybox-pvc   Bound    pvc-79f2a74d-6e2c-48fb-9ed9-666b74cfa1bb   5Gi        RWO            ocs-storagecluster-ceph-rbd   61s
  8. Verify if busybox is running in the Secondary managed cluster. The busybox application should no longer be running on this managed cluster.

    $ oc get pods,pvc -n busybox-sample

    Example output:

    No resources found in busybox-sample namespace.
  9. Verify that the DNS routes for the application are configured correctly. If the external routes are not configured, you can use either the Global Traffic Manager (GTM) or Global Server Load Balancing (GSLB) service to reconfigure the routes.
Important

Be aware of known DR issues as documented in Known Issues section of Release Notes.
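
As with failover, the relocation in step 4 is an edit of the same DRPlacementControl. The fragment below is illustrative (placing the field directly under spec is an assumption about the resource layout); the application moves back to the preferredCluster, ocp4bos1 in this example:

    $ oc edit drpc busybox-placement-1-drpc -n busybox-sample

    spec:
      action: Relocate              # relocates the application back to the preferredCluster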
