Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads
The Red Hat OpenShift Data Foundation Metro-DR feature, together with Red Hat Advanced Cluster Management for Kubernetes 2.7, is now Generally Available. The Regional-DR solution for both block and file storage is offered as a Technology Preview and is subject to Technology Preview support limitations.
Abstract
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Providing feedback on Red Hat documentation
We appreciate your input on our documentation. Do let us know how we can make it better.
To give feedback, create a Bugzilla ticket:
- Go to the Bugzilla website.
- In the Component section, choose documentation.
- Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation.
- Click Submit Bug.
Chapter 1. Introduction to OpenShift Data Foundation Disaster Recovery
Disaster recovery (DR) is the ability to recover and continue business-critical applications from natural or human-created disasters. It is a component of the overall business continuity strategy of any major organization, designed to preserve the continuity of business operations during major adverse events.
The OpenShift Data Foundation DR capability enables DR across multiple Red Hat OpenShift Container Platform clusters, and is categorized as follows:
Metro-DR
Metro-DR ensures business continuity during the unavailability of a data center with no data loss. In the public cloud, this is similar to protecting against an Availability Zone failure.
Regional-DR
Regional-DR ensures business continuity during the unavailability of a geographical region, accepting a predictable amount of data loss. In the public cloud, this is similar to protecting against a region failure.
Zone failure in Metro-DR and region failure in Regional-DR are usually expressed using the terms Recovery Point Objective (RPO) and Recovery Time Objective (RTO).
- RPO is a measure of how frequently you take backups or snapshots of persistent data. In practice, the RPO indicates the amount of data that will be lost or need to be reentered after an outage.
- RTO is the amount of downtime a business can tolerate. The RTO answers the question, “How long can it take for our system to recover after we are notified of a business disruption?”
The intent of this guide is to detail the disaster recovery steps and commands necessary to fail over an application from one OpenShift Container Platform cluster to another and then relocate the same application back to the original primary cluster.
Chapter 2. Disaster recovery subscription requirement
Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution:
- A valid Red Hat OpenShift Data Foundation Advanced entitlement
- A valid Red Hat Advanced Cluster Management for Kubernetes subscription
Any Red Hat OpenShift Data Foundation cluster containing PVs participating in active replication, either as a source or destination, requires an OpenShift Data Foundation Advanced entitlement. This subscription must be active on both the source and destination clusters.
To know how subscriptions for OpenShift Data Foundation work, see the knowledgebase article on OpenShift Data Foundation subscriptions.
Chapter 3. Metro-DR solution for OpenShift Data Foundation
This section of the guide provides details of the Metro Disaster Recovery (Metro-DR) steps and commands necessary to fail over an application from one OpenShift Container Platform cluster to another and then fail back the same application to the original primary cluster. In this case the OpenShift Container Platform clusters are created or imported using Red Hat Advanced Cluster Management (RHACM) and have distance limitations between the OpenShift Container Platform clusters of less than 10 ms RTT latency.
The persistent storage for applications is provided by an external Red Hat Ceph Storage (RHCS) cluster stretched between the two locations with the OpenShift Container Platform instances connected to this storage cluster. An arbiter node with a storage monitor service is required at a third location (different location than where OpenShift Container Platform instances are deployed) to establish quorum for the RHCS cluster in the case of a site outage. This third location can be in the range of ~100ms RTT from the storage cluster connected to the OpenShift Container Platform instances.
This is a general overview of the Metro DR steps required to configure and execute OpenShift Disaster Recovery (ODR) capabilities using OpenShift Data Foundation and RHACM across two distinct OpenShift Container Platform clusters separated by distance. In addition to these two clusters called managed clusters, a third OpenShift Container Platform cluster is required that will be the Red Hat Advanced Cluster Management (RHACM) hub cluster.
3.1. Components of Metro-DR solution
Metro-DR is composed of Red Hat Advanced Cluster Management for Kubernetes, Red Hat Ceph Storage and OpenShift Data Foundation components to provide application and data mobility across OpenShift Container Platform clusters.
Red Hat Advanced Cluster Management for Kubernetes
Red Hat Advanced Cluster Management (RHACM) provides the ability to manage multiple clusters and application lifecycles. Hence, it serves as a control plane in a multi-cluster environment.
RHACM is split into two parts:
- RHACM Hub: components that run on the multi-cluster control plane
- Managed clusters: components that run on the clusters that are managed
For more information about this product, see RHACM documentation and the RHACM “Manage Applications” documentation.
Red Hat Ceph Storage
Red Hat Ceph Storage is a massively scalable, open, software-defined storage platform that combines the most stable version of the Ceph storage system with a Ceph management platform, deployment utilities, and support services. It significantly lowers the cost of storing enterprise data and helps organizations manage exponential data growth. The software is a robust and modern petabyte-scale storage platform for public or private cloud deployments.
For more product information, see Red Hat Ceph Storage.
OpenShift Data Foundation
OpenShift Data Foundation provides the ability to provision and manage storage for stateful applications in an OpenShift Container Platform cluster. It is backed by Ceph as the storage provider, whose lifecycle is managed by Rook in the OpenShift Data Foundation component stack and Ceph-CSI provides the provisioning and management of Persistent Volumes for stateful applications.
OpenShift DR
OpenShift DR is a disaster recovery orchestrator for stateful applications across a set of peer OpenShift clusters which are deployed and managed using RHACM and provides cloud-native interfaces to orchestrate the life-cycle of an application’s state on Persistent Volumes. These include:
- Protecting an application and its state relationship across OpenShift clusters
- Failing over an application and its state to a peer cluster
- Relocating an application and its state to the previously deployed cluster
OpenShift DR is split into three components:
- ODF Multicluster Orchestrator: Installed on the multi-cluster control plane (RHACM Hub), it orchestrates configuration and peering of OpenShift Data Foundation clusters for Metro and Regional DR relationships.
- OpenShift DR Hub Operator: Automatically installed as part of ODF Multicluster Orchestrator installation on the hub cluster to orchestrate failover or relocation of DR enabled applications.
- OpenShift DR Cluster Operator: Automatically installed on each managed cluster that is part of a Metro and Regional DR relationship to manage the lifecycle of all PVCs of an application.
3.2. Metro-DR deployment workflow
This section provides an overview of the steps required to configure and deploy Metro-DR capabilities using the latest versions of Red Hat OpenShift Data Foundation, Red Hat Ceph Storage (RHCS) and Red Hat Advanced Cluster Management for Kubernetes (RHACM) version 2.7 or later, across two distinct OpenShift Container Platform clusters. In addition to two managed clusters, a third OpenShift Container Platform cluster will be required to deploy the Advanced Cluster Management.
To configure your infrastructure, perform the following steps in the order given:
- Ensure requirements across the Hub, Primary and Secondary OpenShift Container Platform clusters that are part of the DR solution are met. See Requirements for enabling Metro-DR.
- Ensure you meet the requirements for deploying Red Hat Ceph Storage stretch cluster with arbiter. See Requirements for deploying Red Hat Ceph Storage.
- Deploy and configure Red Hat Ceph Storage stretch mode. For instructions on enabling Ceph cluster on two different data centers using stretched mode functionality, see Deploying Red Hat Ceph Storage.
- Install OpenShift Data Foundation operator and create a storage system on Primary and Secondary managed clusters. See Installing OpenShift Data Foundation on managed clusters.
- Install the ODF Multicluster Orchestrator on the Hub cluster. See Installing ODF Multicluster Orchestrator on Hub cluster.
- Configure SSL access between the Hub, Primary and Secondary clusters. See Configuring SSL access across clusters.
- Create a DRPolicy resource for use with applications requiring DR protection across the Primary and Secondary clusters. See Creating Disaster Recovery Policy on Hub cluster.
Note: The Metro-DR solution can only have one DRPolicy.
For testing your disaster recovery solution:
- Create a sample application using RHACM console. See Creating sample application.
- Test failover and relocate operations using the sample application between managed clusters. See application failover and relocating an application.
3.3. Requirements for enabling Metro-DR
Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites in order to successfully implement a Disaster Recovery solution:
You must have the following OpenShift clusters that have network reachability between them:
- Hub cluster where Red Hat Advanced Cluster Management for Kubernetes (RHACM operator) is installed.
- Primary managed cluster where OpenShift Data Foundation is installed.
- Secondary managed cluster where OpenShift Data Foundation is installed.
Ensure that the RHACM operator and MultiClusterHub are installed on the Hub cluster. See the RHACM installation guide for instructions.
After the operator is successfully installed, a popover with a message that the Web console update is available appears on the user interface. Click Refresh web console from this popover for the console changes to take effect.
Important: It is the user's responsibility to ensure that application traffic routing and redirection are configured appropriately. Configuration and updates to the application traffic routes are currently not supported.
- On the Hub cluster, navigate to All Clusters → Infrastructure → Clusters. Ensure that you either import or create the Primary managed cluster and the Secondary managed cluster using the RHACM console. Choose the appropriate options for your environment. After the managed clusters are successfully created or imported, you can see the list of clusters that were imported or created on the console. For instructions, see Creating a cluster and Importing a target managed cluster to the hub cluster.
There are distance limitations between the locations where the OpenShift Container Platform managed clusters reside as well as the Red Hat Ceph Storage (RHCS) nodes. The network latency between the sites must be below 10 milliseconds round-trip time (RTT).
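As an additional check that is not part of the official procedure, you can confirm the same state from the Hub cluster CLI; the cluster names used in this guide (for example, ocp4perf1 and ocp4perf2) are illustrative:

$ oc get managedclusters

Both managed clusters should report HUB ACCEPTED, JOINED, and AVAILABLE as True before you continue.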
3.4. Requirements for deploying Red Hat Ceph Storage stretch cluster with arbiter
Red Hat Ceph Storage is an open-source enterprise platform that provides unified software-defined storage on standard, economical servers and disks. With block, object, and file storage combined into one platform, Red Hat Ceph Storage efficiently and automatically manages all your data, so you can focus on the applications and workloads that use it.
This section provides a basic overview of the Red Hat Ceph Storage deployment. For more complex deployments, refer to the official documentation guide for Red Hat Ceph Storage 5.
Only flash media is supported because stretch mode runs with min_size=1 when degraded. Use stretch mode only with all-flash OSDs. Using all-flash OSDs minimizes the time needed to recover once connectivity is restored, thus minimizing the potential for data loss.
Erasure coded pools cannot be used with stretch mode.
3.4.1. Hardware requirements
For information on minimum hardware requirements for deploying Red Hat Ceph Storage, see Minimum hardware recommendations for containerized Ceph.
Table 3.1. Node layout for the Red Hat Ceph Storage stretch cluster
| Node name | Datacenter | Ceph components |
|---|---|---|
| ceph1 | DC1 | OSD+MON+MGR |
| ceph2 | DC1 | OSD+MON |
| ceph3 | DC1 | OSD+MDS+RGW |
| ceph4 | DC2 | OSD+MON+MGR |
| ceph5 | DC2 | OSD+MON |
| ceph6 | DC2 | OSD+MDS+RGW |
| ceph7 | DC3 | MON |
3.4.2. Software requirements
Use the latest software version of Red Hat Ceph Storage 5.
For more information on the supported Operating System versions for Red Hat Ceph Storage, see knowledgebase article on Red Hat Ceph Storage: Supported configurations.
3.4.3. Network configuration requirements
The recommended Red Hat Ceph Storage configuration is as follows:
- You must have two separate networks, one public network and one private network.
- You must have three different datacenters that support VLANs and subnets for the Ceph private and public networks in all datacenters.
Note: You can use different subnets for each of the datacenters.
- The latency between the two datacenters running the Red Hat Ceph Storage Object Storage Devices (OSDs) cannot exceed 10 ms RTT. For the arbiter datacenter, this was tested with values as high as 100 ms RTT to the other two OSD datacenters.
Here is an example of a basic network configuration that we have used in this guide:
- DC1: Ceph public/private network: 10.0.40.0/24
- DC2: Ceph public/private network: 10.0.40.0/24
- DC3: Ceph public/private network: 10.0.40.0/24
For more information on the required network environment, see Ceph network configuration.
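As a quick illustrative sanity check (not part of the official procedure), you can measure the RTT between sites from one of the OSD nodes before deployment; the host name below is an example:

$ ping -c 10 ceph4

The avg value in the rtt min/avg/max/mdev summary line should stay below 10 ms between the two OSD datacenters, and may be up to 100 ms to the arbiter site.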
3.5. Deploying Red Hat Ceph Storage
3.5.1. Node pre-deployment steps
Before installing the Red Hat Ceph Storage cluster, perform the following steps to fulfill all the requirements.
Register all the nodes to the Red Hat Network or Red Hat Satellite and subscribe to a valid pool:
subscription-manager register
subscription-manager subscribe --pool=8a8XXXXXX9e0

Enable access for all the nodes in the Ceph cluster for the following repositories:

- rhel-8-for-x86_64-baseos-rpms
- rhel-8-for-x86_64-appstream-rpms

subscription-manager repos --disable="*" --enable="rhel-8-for-x86_64-baseos-rpms" --enable="rhel-8-for-x86_64-appstream-rpms"
Update the operating system RPMs to the latest version and reboot if needed:

dnf update -y
reboot

Select a node from the cluster to be your bootstrap node. ceph1 is our bootstrap node in this example going forward.

Only on the bootstrap node ceph1, enable the ansible-2.9-for-rhel-8-x86_64-rpms and rhceph-5-tools-for-rhel-8-x86_64-rpms repositories:

subscription-manager repos --enable="ansible-2.9-for-rhel-8-x86_64-rpms" --enable="rhceph-5-tools-for-rhel-8-x86_64-rpms"

Configure the hostname using the bare/short hostname on all the hosts.

hostnamectl set-hostname <short_name>

Verify the hostname configuration for deploying Red Hat Ceph Storage with cephadm.

$ hostname

Example output:

ceph1

Modify the /etc/hosts file and add the fqdn entry to the 127.0.0.1 IP by setting the DOMAIN variable with our DNS domain name.
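The command example for this step is not reproduced in the original; a minimal sketch of such an /etc/hosts update, assuming example.domain.com as the DNS domain, is:

DOMAIN="example.domain.com"
cat <<EOF >/etc/hosts
127.0.0.1 $(hostname).${DOMAIN} $(hostname) localhost localhost.localdomain localhost4 localhost4.localdomain4
::1       $(hostname).${DOMAIN} $(hostname) localhost6 localhost6.localdomain6
EOF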
Check the long hostname with the fqdn using the hostname -f option.

$ hostname -f

Example output:

ceph1.example.domain.com

Note: To know more about why these changes are required, see Fully Qualified Domain Names vs Bare Host Names.
Run the following steps on the bootstrap node. In our example, the bootstrap node is ceph1.

Install the cephadm-ansible RPM package:

$ sudo dnf install -y cephadm-ansible

Important: To run the ansible playbooks, you must have ssh passwordless access to all the nodes that are configured to the Red Hat Ceph Storage cluster. Ensure that the configured user (for example, deployment-user) has root privileges to invoke the sudo command without needing a password.

To use a custom key, configure the selected user (for example, deployment-user) ssh config file to specify the id/key that will be used for connecting to the nodes via ssh:

cat <<EOF > ~/.ssh/config
Host ceph*
   User deployment-user
   IdentityFile ~/.ssh/ceph.pem
EOF

Build the ansible inventory.
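The inventory listing itself is not reproduced in the original; a minimal sketch that is consistent with the note below, listing all seven hosts and placing ceph1 and ceph4 in the [admin] group, is:

cat <<EOF > /usr/share/cephadm-ansible/inventory
ceph1
ceph2
ceph3
ceph4
ceph5
ceph6
ceph7

[admin]
ceph1
ceph4
EOF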
Note: Here, the hosts ceph1 and ceph4, belonging to two different data centers, are configured as part of the [admin] group in the inventory file and are tagged as _admin by cephadm. Each of these admin nodes receives the admin ceph keyring during the bootstrap process so that when one data center is down, we can check the cluster using the other available admin node.

Verify that ansible can access all nodes using the ping module before running the pre-flight playbook.

$ ansible -i /usr/share/cephadm-ansible/inventory -m ping all -b

Navigate to the /usr/share/cephadm-ansible directory and run ansible-playbook with relative file paths.

$ ansible-playbook -i /usr/share/cephadm-ansible/inventory /usr/share/cephadm-ansible/cephadm-preflight.yml --extra-vars "ceph_origin=rhcs"

The preflight playbook configures the RHCS dnf repository and prepares the storage cluster for bootstrapping. It also installs podman, lvm2, chronyd, and cephadm. The default location for cephadm-ansible and cephadm-preflight.yml is /usr/share/cephadm-ansible. For additional information, see Running the preflight playbook.
3.5.2. Cluster bootstrapping and service deployment with Cephadm
The cephadm utility installs and starts a single Ceph Monitor daemon and a Ceph Manager daemon for a new Red Hat Ceph Storage cluster on the local node where the cephadm bootstrap command is run.
In this guide we are going to bootstrap the cluster and deploy all the needed Red Hat Ceph Storage services in one step using a cluster specification yaml file.
If you find issues during the deployment, it may be easier to troubleshoot the errors by dividing the deployment into two steps:
- Bootstrap
- Service deployment
For additional information on the bootstrapping process, see Bootstrapping a new storage cluster.
Procedure
Create a JSON file (for example, /root/registry.json, which the cephadm bootstrap command below references) to authenticate against the container registry, as follows:
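The JSON content is not reproduced in the original; a sketch of the registry file format expected by cephadm, with placeholder credentials for your registry service account, is:

cat <<EOF > /root/registry.json
{
 "url": "registry.redhat.io",
 "username": "<service_account_user_name>",
 "password": "<service_account_password>"
}
EOF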
Create a cluster-spec.yaml file that adds the nodes to the Red Hat Ceph Storage cluster and also sets specific labels for where the services should run, following table 3.1.
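The specification listing is not reproduced in the original; the following truncated sketch shows the general shape, using the labels and datacenter placement from table 3.1 and the service names (cephfs, objectgw) that appear in the verification steps later in this section. Validate the exact fields against the cephadm service specification documentation for your RHCS 5 release.

cat <<EOF > /root/cluster-spec.yaml
service_type: host
addr: 10.0.40.78
hostname: ceph1
location:
  root: default
  datacenter: DC1
labels:
  - osd
  - mon
  - mgr
# ...one host entry per node in table 3.1: ceph2/ceph3 in DC1, ceph4/ceph5/ceph6 in DC2,
# ceph7 (mon label only) in DC3, with mds and rgw labels on ceph3 and ceph6...
---
service_type: mon
placement:
  label: "mon"
---
service_type: mgr
service_name: mgr
placement:
  label: "mgr"
---
service_type: mds
service_id: cephfs
placement:
  label: "mds"
---
service_type: osd
service_id: all-available-devices
placement:
  label: "osd"
spec:
  data_devices:
    all: true
---
service_type: rgw
service_id: objectgw
service_name: rgw.objectgw
placement:
  count: 2
  label: "rgw"
spec:
  rgw_frontend_port: 8080
EOF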
Retrieve the IP for the NIC with the Red Hat Ceph Storage public network configured from the bootstrap node. After substituting 10.0.40.0 with the subnet that you have defined in your ceph public network, execute the following command.

$ ip a | grep 10.0.40

Example output:

10.0.40.78
Run the cephadm bootstrap command as the root user on the node that will be the initial Monitor node in the cluster. The IP_ADDRESS option is the node's IP address that you are using to run the cephadm bootstrap command.

Note: If you have configured a different user instead of root for passwordless SSH access, then use the --ssh-user= flag with the cephadm bootstrap command. If you are using non-default ssh key names (instead of id_rsa), then use the --ssh-private-key and --ssh-public-key options with the cephadm command.

$ cephadm bootstrap --ssh-user=deployment-user --mon-ip 10.0.40.78 --apply-spec /root/cluster-spec.yaml --registry-json /root/registry.json

Important: If the local node uses fully-qualified domain names (FQDN), then add the --allow-fqdn-hostname option to cephadm bootstrap on the command line.

Once the bootstrap finishes, you will see output from the cephadm bootstrap command confirming the deployment.

Verify the status of the Red Hat Ceph Storage cluster deployment using the Ceph CLI client from ceph1:
$ ceph -s

Note: It may take several minutes for all the services to start. It is normal to get a global recovery event while you do not have any OSDs configured. You can use ceph orch ps and ceph orch ls to further check the status of the services.

Verify if all the nodes are part of the cephadm cluster.

$ ceph orch host ls

Note: You can run Ceph commands directly from the host because ceph1 was configured in the cephadm-ansible inventory as part of the [admin] group. The Ceph admin keys were copied to the host during the cephadm bootstrap process.

Check the current placement of the Ceph monitor services on the datacenters.
$ ceph orch ps | grep mon | awk '{print $1 " " $2}'

Example output:

mon.ceph1 ceph1
mon.ceph2 ceph2
mon.ceph4 ceph4
mon.ceph5 ceph5
mon.ceph7 ceph7

Check the current placement of the Ceph manager services on the datacenters.

$ ceph orch ps | grep mgr | awk '{print $1 " " $2}'

Example output:

mgr.ceph2.ycgwyz ceph2
mgr.ceph5.kremtt ceph5

Check the ceph osd crush map layout to ensure that each host has one OSD configured and its status is UP. Also, double-check that each node is under the right datacenter bucket as specified in table 3.1.

$ ceph osd tree

Create and enable a new RBD block pool.

$ ceph osd pool create rbdpool 32 32
$ ceph osd pool application enable rbdpool rbd

Note: The number 32 at the end of the command is the number of PGs assigned to this pool. The number of PGs can vary depending on several factors like the number of OSDs in the cluster, the expected percentage of the pool that will be used, and so on. You can use the Ceph Placement Groups (PGs) per Pool Calculator to determine the number of PGs needed.
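As a rough worked example of the common sizing guideline (total PGs across all pools ≈ number of OSDs × 100 ÷ replica size, rounded to a power of two), and assuming the pool size of 4 that Ceph stretch mode configures (two replicas per data center): 6 OSDs × 100 ÷ 4 = 150, which rounds to roughly 128 PGs for the whole cluster, so 32 PGs for this single pool fits comfortably within that budget. Treat this as an approximation only and prefer the calculator linked above.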
Verify that the RBD pool has been created.
$ ceph osd lspools | grep rbdpool

Example output:

3 rbdpool

Verify that MDS services are active and have located one service on each datacenter.

$ ceph orch ps | grep mds

Example output:

mds.cephfs.ceph3.cjpbqo    ceph3    running (17m)    117s ago    17m    16.1M    -    16.2.9
mds.cephfs.ceph6.lqmgqt    ceph6    running (17m)    117s ago    17m    16.1M    -    16.2.9

Create the CephFS volume.

$ ceph fs volume create cephfs

Note: The ceph fs volume create command also creates the needed data and metadata CephFS pools. For more information, see Configuring and Mounting Ceph File Systems.

Check the Ceph status to verify how the MDS daemons have been deployed. Ensure that the state is active, where ceph6 is the primary MDS for this filesystem and ceph3 is the secondary MDS.

$ ceph fs status

Verify that RGW services are active.

$ ceph orch ps | grep rgw

Example output:

rgw.objectgw.ceph3.kkmxgb    ceph3    *:8080    running (7m)    3m ago    7m    52.7M    -    16.2.9
rgw.objectgw.ceph6.xmnpah    ceph6    *:8080    running (7m)    3m ago    7m    53.3M    -    16.2.9
3.5.3. Configuring Red Hat Ceph Storage stretch mode
Once the Red Hat Ceph Storage cluster is fully deployed using cephadm, use the following procedure to configure the stretch cluster mode. The new stretch mode is designed to handle the 2-site case.
Procedure
Check the current election strategy being used by the monitors with the ceph mon dump command. By default in a Ceph cluster, the election strategy is set to classic.
ceph mon dump | grep election_strategy

Example output:

dumped monmap epoch 9
election_strategy: 1

Change the monitor election strategy to connectivity.

ceph mon set election_strategy connectivity

Run the previous ceph mon dump command again to verify the election_strategy value.

$ ceph mon dump | grep election_strategy

Example output:

dumped monmap epoch 10
election_strategy: 3

To know more about the different election strategies, see Configuring monitor election strategy.
Set the location for all our Ceph monitors:

ceph mon set_location ceph1 datacenter=DC1
ceph mon set_location ceph2 datacenter=DC1
ceph mon set_location ceph4 datacenter=DC2
ceph mon set_location ceph5 datacenter=DC2
ceph mon set_location ceph7 datacenter=DC3

Verify that each monitor has its appropriate location.

$ ceph mon dump

Create a CRUSH rule that makes use of this OSD crush topology by installing the ceph-base RPM package in order to use the crushtool command:

$ dnf -y install ceph-base

To know more about CRUSH rulesets, see Ceph CRUSH ruleset.
Get the compiled CRUSH map from the cluster:

$ ceph osd getcrushmap > /etc/ceph/crushmap.bin

Decompile the CRUSH map and convert it to a text file in order to be able to edit it:

$ crushtool -d /etc/ceph/crushmap.bin -o /etc/ceph/crushmap.txt

Add the following rule to the CRUSH map by editing the text file /etc/ceph/crushmap.txt at the end of the file:

$ vim /etc/ceph/crushmap.txt

This example is applicable for active applications in both OpenShift Container Platform clusters.
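The rule listing did not survive extraction; reassembled from the field-by-field description that follows, the rule to append has this shape:

rule stretch_rule {
        id 1
        type replicated
        min_size 1
        max_size 10
        step take default
        step choose firstn 0 type datacenter
        step chooseleaf firstn 2 type host
        step emit
}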
Note: The rule id has to be unique. In the example, we only have one more crush rule with id 0, hence we are using id 1. If your deployment has more rules created, then use the next free id.

The CRUSH rule declared contains the following information:
Rule name
- Description: A unique whole name for identifying the rule.
- Value: stretch_rule

id
- Description: A unique whole number for identifying the rule.
- Value: 1

type
- Description: Describes a rule for either a storage drive replicated or erasure-coded.
- Value: replicated

min_size
- Description: If a pool makes fewer replicas than this number, CRUSH will not select this rule.
- Value: 1

max_size
- Description: If a pool makes more replicas than this number, CRUSH will not select this rule.
- Value: 10

step take default
- Description: Takes the root bucket called default, and begins iterating down the tree.

step choose firstn 0 type datacenter
- Description: Selects the datacenter bucket, and goes into its subtrees.

step chooseleaf firstn 2 type host
- Description: Selects the number of buckets of the given type. In this case, it is two different hosts located in the datacenter it entered at the previous level.

step emit
- Description: Outputs the current value and empties the stack. Typically used at the end of a rule, but may also be used to pick from different trees in the same rule.
Compile the new CRUSH map from the file /etc/ceph/crushmap.txt and convert it to a binary file called /etc/ceph/crushmap2.bin:

$ crushtool -c /etc/ceph/crushmap.txt -o /etc/ceph/crushmap2.bin

Inject the new crushmap we created back into the cluster:

$ ceph osd setcrushmap -i /etc/ceph/crushmap2.bin

Example output:

17

Note: The number 17 is a counter and it will increase (18, 19, and so on) depending on the changes you make to the crush map.
Verify that the stretched rule created is now available for use.
ceph osd crush rule ls

Example output:

replicated_rule
stretch_rule

Enable the stretch cluster mode.

$ ceph mon enable_stretch_mode ceph7 stretch_rule datacenter

In this example, ceph7 is the arbiter node, stretch_rule is the crush rule we created in the previous step, and datacenter is the dividing bucket.

Verify that all our pools are using the stretch_rule CRUSH rule we have created in our Ceph cluster:

$ for pool in $(rados lspools);do echo -n "Pool: ${pool}; ";ceph osd pool get ${pool} crush_rule;done

This indicates that a working Red Hat Ceph Storage stretched cluster with arbiter mode is now available.
3.6. Installing OpenShift Data Foundation on managed clusters
In order to configure storage replication between the two OpenShift Container Platform clusters, OpenShift Data Foundation operator must be installed first on each managed cluster as follows:
Prerequisites
- Ensure that you have met the hardware requirements for OpenShift Data Foundation external deployments. For a detailed description of the hardware requirements, see External mode requirements.
Refer to the OpenShift Data Foundation deployment guides and instructions that are specific to your infrastructure (AWS, VMware, bare metal, Azure, and so on).
Procedure
- Install and configure the latest OpenShift Data Foundation cluster on each of the managed clusters.
After installing the operator, create a StorageSystem using the option Full deployment type and Connect with external storage platform, where your Backing storage type is Red Hat Ceph Storage. For detailed instructions, refer to Deploying OpenShift Data Foundation in external mode.
At a minimum, you must use the following three flags with the ceph-external-cluster-details-exporter.py script:

- --rbd-data-pool-name: With the name of the RBD pool that was created during RHCS deployment for OpenShift Container Platform. For example, the pool can be called rbdpool.
- --rgw-endpoint: Provide the endpoint in the format <ip_address>:<port>. It is the RGW IP of the RGW daemon running on the same site as the OpenShift Container Platform cluster that you are configuring.
- --run-as-user: With a different client name for each site.

The following flags are optional if default values were used during the RHCS deployment:

- --cephfs-filesystem-name: With the name of the CephFS filesystem we created during RHCS deployment for OpenShift Container Platform; the default filesystem name is cephfs.
- --cephfs-data-pool-name: With the name of the CephFS data pool we created during RHCS deployment for OpenShift Container Platform; the default pool is called cephfs.data.
- --cephfs-metadata-pool-name: With the name of the CephFS metadata pool we created during RHCS deployment for OpenShift Container Platform; the default pool is called cephfs.meta.
Run the following command on the bootstrap node, ceph1, to get the IP for the RGW endpoints in datacenter1 and datacenter2:

ceph orch ps | grep rgw.objectgw

Example output:

rgw.objectgw.ceph3.mecpzm    ceph3    *:8080    running (5d)    31s ago    7w    204M    -    16.2.7-112.el8cp
rgw.objectgw.ceph6.mecpzm    ceph6    *:8080    running (5d)    31s ago    7w    204M    -    16.2.7-112.el8cp

host ceph3
host ceph6

Example output:

ceph3.example.com has address 10.0.40.24
ceph6.example.com has address 10.0.40.66

Execute the ceph-external-cluster-details-exporter.py script with the parameters configured for our first OCP managed cluster, cluster1.

python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name rbdpool --cephfs-filesystem-name cephfs --cephfs-data-pool-name cephfs.cephfs.data --cephfs-metadata-pool-name cephfs.cephfs.meta --rgw-endpoint 10.0.40.24:8080 --run-as-user client.odf.cluster1 > ocp-cluster1.json

Execute the ceph-external-cluster-details-exporter.py script with the parameters configured for our second OCP managed cluster, cluster2.

python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name rbdpool --cephfs-filesystem-name cephfs --cephfs-data-pool-name cephfs.cephfs.data --cephfs-metadata-pool-name cephfs.cephfs.meta --rgw-endpoint 10.0.40.66:8080 --run-as-user client.odf.cluster2 > ocp-cluster2.json

- Save the two files generated on the bootstrap cluster (ceph1), ocp-cluster1.json and ocp-cluster2.json, to your local machine.
- Use the contents of file ocp-cluster1.json on the OCP console on cluster1 where external ODF is being deployed.
- Use the contents of file ocp-cluster2.json on the OCP console on cluster2 where external ODF is being deployed.
- Review the settings and then select Create StorageSystem.
Validate the successful deployment of OpenShift Data Foundation on each managed cluster with the following command:
$ oc get storagecluster -n openshift-storage ocs-external-storagecluster -o jsonpath='{.status.phase}{"\n"}'

For the Multicloud Gateway (MCG):

$ oc get noobaa -n openshift-storage noobaa -o jsonpath='{.status.phase}{"\n"}'

If the status result is Ready for both queries on the Primary managed cluster and the Secondary managed cluster, then continue with the next step.
In the OpenShift Web Console, navigate to Installed Operators → OpenShift Data Foundation → Storage System → ocs-storagecluster-storagesystem → Resources and verify that Status of StorageCluster is Ready and has a green tick mark next to it.
3.7. Installing OpenShift Data Foundation Multicluster Orchestrator operator
OpenShift Data Foundation Multicluster Orchestrator is a controller that is installed from OpenShift Container Platform’s OperatorHub on the Hub cluster.
Procedure
- On the Hub cluster, navigate to OperatorHub and use the keyword filter to search for ODF Multicluster Orchestrator.
- Click ODF Multicluster Orchestrator tile.
- Keep all default settings and click Install.
Ensure that the operator resources are installed in the openshift-operators project and are available to all namespaces.

Note: The ODF Multicluster Orchestrator also installs the Openshift DR Hub Operator on the RHACM hub cluster as a dependency.

Verify that the operator Pods are in a Running state. The OpenShift DR Hub operator is also installed at the same time in the openshift-operators namespace.

$ oc get pods -n openshift-operators

Example output:

NAME                                        READY   STATUS    RESTARTS   AGE
odf-multicluster-console-6845b795b9-blxrn   1/1     Running   0          4d20h
odfmo-controller-manager-f9d9dfb59-jbrsd    1/1     Running   0          4d20h
ramen-hub-operator-6fb887f885-fss4w         2/2     Running   0          4d20h
3.8. Configuring SSL access across clusters
Configure network (SSL) access between the primary and secondary clusters so that metadata can be stored on the alternate cluster in a Multicloud Gateway (MCG) object bucket using a secure transport protocol and in the Hub cluster for verifying access to the object buckets.
If all of your OpenShift clusters are deployed using a signed and valid set of certificates for your environment then this section can be skipped.
Procedure
Extract the ingress certificate for the Primary managed cluster and save the output to primary.crt.

$ oc get cm default-ingress-cert -n openshift-config-managed -o jsonpath="{['data']['ca-bundle\.crt']}" > primary.crt

Extract the ingress certificate for the Secondary managed cluster and save the output to secondary.crt.

$ oc get cm default-ingress-cert -n openshift-config-managed -o jsonpath="{['data']['ca-bundle\.crt']}" > secondary.crt

Create a new ConfigMap file to hold the remote cluster's certificate bundle with filename cm-clusters-crt.yaml.
Note: There could be more or fewer than three certificates for each cluster, as shown in this example file. Also, ensure that the certificate contents are correctly indented after you copy and paste from the primary.crt and secondary.crt files that were created before.
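The example file is not reproduced in the original; a sketch of the ConfigMap structure, using the user-ca-bundle name that the create and patch commands below rely on, is:

apiVersion: v1
kind: ConfigMap
metadata:
  name: user-ca-bundle
  namespace: openshift-config
data:
  ca-bundle.crt: |
    -----BEGIN CERTIFICATE-----
    <contents of primary.crt>
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
    <contents of secondary.crt>
    -----END CERTIFICATE-----

The openshift-config namespace is an assumption based on where the cluster proxy expects its trustedCA ConfigMap; adjust it if your environment differs.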
Create the ConfigMap on the Primary managed cluster, Secondary managed cluster, and the Hub cluster.

$ oc create -f cm-clusters-crt.yaml

Example output:
configmap/user-ca-bundle created

Patch the default proxy resource on the Primary managed cluster, Secondary managed cluster, and the Hub cluster.

$ oc patch proxy cluster --type=merge --patch='{"spec":{"trustedCA":{"name":"user-ca-bundle"}}}'

Example output:
proxy.config.openshift.io/cluster patched
3.9. Creating Disaster Recovery Policy on Hub cluster
The OpenShift Disaster Recovery Policy (DRPolicy) resource specifies the OpenShift Container Platform clusters participating in the disaster recovery solution and the desired replication interval. DRPolicy is a cluster-scoped resource that users can apply to applications that require a Disaster Recovery solution.
The ODF MultiCluster Orchestrator Operator facilitates the creation of each DRPolicy and the corresponding DRClusters through the Multicluster Web console.
Prerequisites
- Ensure that there is a minimum set of two managed clusters.
Procedure
- On the OpenShift console, navigate to All Clusters.
- Navigate to Data Services and click Data policies.
- Click Create DRPolicy.
- Enter the Policy name. Ensure that each DRPolicy has a unique name (for example: ocp4perf1-ocp4perf2).
- Select two clusters from the list of managed clusters with which this new policy will be associated.
- Replication policy is automatically set to sync based on the OpenShift clusters selected.
- Click Create.
Verify that the DRPolicy is created successfully. Run this command on the Hub cluster for each of the DRPolicy resources created.
Note: Replace <drpolicy_name> with your unique name.

$ oc get drpolicy <drpolicy_name> -o jsonpath='{.status.conditions[].reason}{"\n"}'

Example output:

Succeeded

Note: When a DRPolicy is created, along with it, two DRCluster resources are also created. It could take up to 10 minutes for all three resources to be validated and for the status to show as Succeeded.

Verify the object bucket access from the Hub cluster to both the Primary managed cluster and the Secondary managed cluster.
Get the names of the DRClusters on the Hub cluster.
$ oc get drclusters

Example output:

NAME        AGE
ocp4perf1   4m42s
ocp4perf2   4m42s

Check S3 access to each bucket created on each managed cluster using this DRCluster validation command.

Note: Replace <drcluster_name> with your unique name.

$ oc get drcluster <drcluster_name> -o jsonpath='{.status.conditions[2].reason}{"\n"}'

Example output:

Succeeded

Note: Make sure to run this command for both DRClusters on the Hub cluster.
Verify that the OpenShift DR Cluster operator installation was successful on the Primary managed cluster and the Secondary managed cluster.
$ oc get csv,pod -n openshift-dr-system

Example output:

NAME                                                                      DISPLAY                         VERSION   REPLACES   PHASE
clusterserviceversion.operators.coreos.com/odr-cluster-operator.v4.11.0   Openshift DR Cluster Operator   4.11.0               Succeeded

NAME                                             READY   STATUS    RESTARTS   AGE
pod/ramen-dr-cluster-operator-5564f9d669-f6lbc   2/2     Running   0          5m32s

You can also verify that OpenShift DR Cluster Operator is installed successfully on the OperatorHub of each managed cluster.

Verify that the secret is propagated correctly on the Primary managed cluster and the Secondary managed cluster.

oc get secrets -n openshift-dr-system | grep Opaque

Match the output with the s3SecretRef from the Hub cluster:

oc get cm -n openshift-operators ramen-hub-operator-config -oyaml
3.10. Configure DRClusters for fencing automation
This configuration is required for enabling fencing prior to application failover. In order to prevent writes to the persistent volume from the cluster which is hit by a disaster, OpenShift DR instructs Red Hat Ceph Storage (RHCS) to fence the nodes of the cluster from the RHCS external storage. This section guides you on how to add the IPs or the IP Ranges for the nodes of the DRCluster.
3.10.1. Add node IP addresses to DRClusters
Find the IP addresses for all of the OpenShift nodes in the managed clusters by running this command in the Primary managed cluster and the Secondary managed cluster.
$ oc get nodes -o jsonpath='{range .items[*]}{.status.addresses[?(@.type=="ExternalIP")].address}{"\n"}{end}'

Once you have the IP addresses, the DRCluster resources can be modified for each managed cluster.

Find the DRCluster names on the Hub cluster.
$ oc get drcluster

Example output:

NAME        AGE
ocp4perf1   5m35s
ocp4perf2   5m35s

Edit each DRCluster to add your unique IP addresses, after replacing <drcluster_name> with your unique name.

$ oc edit drcluster <drcluster_name>

Example output:

drcluster.ramendr.openshift.io/ocp4perf1 edited
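The spec fields to add are not reproduced in the original; as an illustrative sketch (the field names follow the DRCluster API used by OpenShift DR, so verify them against the CRD installed in your hub cluster), the edited resource can look like:

apiVersion: ramendr.openshift.io/v1alpha1
kind: DRCluster
metadata:
  name: ocp4perf1
spec:
  s3ProfileName: <s3_profile_name>
  ## Add this section with the node IP addresses gathered above
  cidrs:
    - <IP_Address1>/32
    - <IP_Address2>/32
    - <IP_Address3>/32
    - <IP_Address4>/32
    - <IP_Address5>/32
    - <IP_Address6>/32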
There could be more than six IP addresses.
Modify this DRCluster configuration also for IP addresses on the Secondary managed clusters in the peer DRCluster resource (e.g., ocp4perf2).
3.10.2. Add fencing annotations to DRClusters
Add the following annotations to all the DRCluster resources. These annotations include details needed for the NetworkFence resource created later in these instructions (prior to testing application failover).
Replace <drcluster_name> with your unique name.
$ oc edit drcluster <drcluster_name>
Example output:
drcluster.ramendr.openshift.io/ocp4perf1 edited
Make sure to add these annotations for both DRCluster resources (for example: ocp4perf1 and ocp4perf2).
3.11. Create sample application for testing disaster recovery solution
OpenShift Data Foundation disaster recovery (DR) solution supports disaster recovery for applications that are managed by RHACM. See Managing Applications for more details.
OpenShift Data Foundation DR solution does not support ApplicationSet, which is required for applications that are deployed via ArgoCD.
This solution orchestrates RHACM application placement, using the PlacementRule, when an application is moved between clusters in a DRPolicy for failover or relocation requirements. The following sections detail how to apply a DRPolicy to an application and how to manage the applications placement life-cycle during and after cluster unavailability.
For OpenShift users that do not have cluster-admin permissions, see the Knowledge Article on how to assign necessary permissions to an application user for executing disaster recovery actions.
3.11.1. Creating a sample application
In order to test failover from the Primary managed cluster to the Secondary managed cluster and relocate, we need a sample application.
Prerequisites
- When creating an application for general consumption, ensure that the application is deployed to ONLY one cluster.
- Use the sample application called busybox as an example.
- Ensure all external routes of the application are configured using either Global Traffic Manager (GTM) or Global Server Load Balancing (GLSB) service for traffic redirection when the application fails over or is relocated.
As a best practice, group Red Hat Advanced Cluster Management (RHACM) subscriptions that belong together to refer to a single Placement Rule, in order to DR protect them as a group. Further, create them as a single application for a logical grouping of the subscriptions for future DR actions like failover and relocate.
Note: If unrelated subscriptions refer to the same Placement Rule for placement actions, they will also be DR protected, as the DR workflow controls all subscriptions that reference the Placement Rule.
Procedure
- On the Hub cluster, navigate to Applications and click Create application.
- Select type as Subscription.
- Enter your application Name (for example, busybox) and Namespace (for example, busybox-sample).
- In the Repository location for resources section, select Repository type Git. Enter the Git repository URL for the sample application, and the github Branch and Path where the resources busybox Pod and PVC will be created. Use the sample application repository as https://github.com/red-hat-storage/ocm-ramen-samples where the Branch is release-4.12 and Path is busybox-odr-metro.
- Scroll down in the form until you see Deploy application resources only on clusters matching specified labels and then add a label with its value set to the Primary managed cluster name in the RHACM cluster list view.
Click Create, which is at the top right-hand corner.
On the follow-on screen go to the Topology tab. You should see that there are all Green checkmarks on the application topology.

Note: To get more information, click on any of the topology elements and a window will appear on the right of the topology view.
Validating the sample application deployment.
Now that the busybox application has been deployed to your preferred cluster, the deployment can be validated.

Login to your managed cluster where busybox was deployed by RHACM.

$ oc get pods,pvc -n busybox-sample
3.11.2. Apply DRPolicy to sample application
Prerequisites
- Ensure that both managed clusters referenced in the DRPolicy are reachable. If not, the application will not be DR protected until both clusters are online.
Procedure
- On the Hub cluster go back to the Multicluster Web console, navigate to All Clusters.
- Login to all the clusters listed under All Clusters.
- Navigate to Data Services and then click Data policies.
- Click the Actions menu at the end of DRPolicy to view the list of available actions.
- Click Apply DRPolicy.
When the Apply DRPolicy modal is displayed, select the busybox application and enter the PVC label as appname=busybox.

Note: When multiple placement rules under the same application or more than one application are selected, all PVCs within the application's namespace will be protected by default.
- Click Apply.
Verify that a DRPlacementControl or DRPC was created in the busybox-sample namespace on the Hub cluster and that its CURRENTSTATE shows as Deployed. This resource is used for both failover and relocate actions for this application.

$ oc get drpc -n busybox-sample

Example output:

NAME                       AGE     PREFERREDCLUSTER   FAILOVERCLUSTER   DESIREDSTATE   CURRENTSTATE
busybox-placement-1-drpc   6m59s   ocp4perf1                                           Deployed
- After you apply the DRPolicy to the application, confirm that ClusterDataProtected is set to True in the drpc YAML output.
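For example, on the Hub cluster the condition can be checked directly with a jsonpath query. This is a sketch; it assumes the DRPC name shown in the example output above and that the condition is published under status.conditions:
$ oc get drpc busybox-placement-1-drpc -n busybox-sample -o jsonpath='{.status.conditions[?(@.type=="ClusterDataProtected")].status}{"\n"}'
The command should print True once the initial data protection has completed.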
3.11.3. Deleting sample application
You can delete the sample application busybox using the RHACM console.
Do not delete the sample application until failover and relocate testing is completed and the application is ready to be removed from RHACM and the managed clusters.
Procedure
- On the RHACM console, navigate to Applications.
- Search for the sample application to be deleted (for example, busybox).
- Click the Action Menu (⋮) next to the application you want to delete.
Click Delete application.
When Delete application is selected, a new screen appears asking whether the application-related resources should also be deleted.
- Select Remove application related resources checkbox to delete the Subscription and PlacementRule.
- Click Delete. This will delete the busybox application on the Primary managed cluster (or whatever cluster the application was running on).
In addition to the resources deleted using the RHACM console, the DRPlacementControl must also be deleted after deleting the busybox application.
- Log in to the OpenShift Web console for the Hub cluster and navigate to Installed Operators for the project busybox-sample.
- Click OpenShift DR Hub Operator and then click the DRPlacementControl tab.
- Click the Action Menu (⋮) next to the busybox application DRPlacementControl that you want to delete.
- Click Delete DRPlacementControl.
- Click Delete.
This process can be used to delete any application with a DRPlacementControl resource.
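The DRPlacementControl can also be removed from the CLI instead of the web console; this is a sketch that assumes the DRPC name from the earlier example output:
$ oc delete drpc busybox-placement-1-drpc -n busybox-sample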
3.12. Application failover between managed clusters
Perform a failover when a managed cluster becomes unavailable for any reason. This failover method is application based.
Prerequisites
When the primary cluster is in a state other than Ready, check the actual status of the cluster, as it might take some time to update.
- Navigate to the RHACM console → Infrastructure → Clusters → Cluster list tab.
- Check the status of both the managed clusters individually before performing the failover operation.
However, the failover operation can still be performed when the cluster you are failing over to is in a Ready state.
- In order to fail over, all applications on the OpenShift cluster where the application is currently running must be fenced from communicating with the external OpenShift Data Foundation storage cluster. This is required to prevent simultaneous writes to the same persistent volume from both managed clusters. The OpenShift cluster to Fence is the one where the applications are currently running.
Procedure
Enable fencing on the Hub cluster.
Open CLI terminal and edit the DRCluster resource.
Important: Once the managed cluster is fenced, all communication from applications to the OpenShift Data Foundation external storage cluster will fail and some Pods will be in an unhealthy state (for example, CreateContainerError, CrashLoopBackOff) on the cluster that is now fenced.
Note: Replace <drcluster_name> with your unique name.
$ oc edit drcluster <drcluster_name>
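In the editor, set the fencing state in the DRCluster spec. The following is a minimal sketch, assuming the DRCluster resource exposes a clusterFence field under spec (verify the exact field name against the DRCluster CRD installed in your environment):
apiVersion: ramendr.openshift.io/v1alpha1
kind: DRCluster
metadata:
  name: <drcluster_name>
spec:
  # add or update this line to fence the cluster
  clusterFence: Fenced
  ...
Save and exit the editor.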
Example output:
drcluster.ramendr.openshift.io/ocp4perf1 edited
Verify the fencing status on the Hub cluster for the Primary managed cluster.
Note: Replace <drcluster_name> with your unique name.
$ oc get drcluster.ramendr.openshift.io <drcluster_name> -o jsonpath='{.status.phase}{"\n"}'
Example output:
Fenced
Verify that the IPs that belong to the OpenShift Container Platform cluster nodes are now in the blocklist.
$ ceph osd blocklist ls
- On the Hub cluster, navigate to Applications.
- Click the Actions menu at the end of the application row to view the list of available actions.
- Click Failover application.
- When the Failover application popup is shown, select the policy and the target cluster to which the associated application will fail over in case of a disaster.
- By default, the subscription group that will replicate the application resources is selected. Click the Select subscription group dropdown to verify the default selection or modify this setting.
Check the status of the Failover readiness.
- If the status is Ready with a green tick, it indicates that the target cluster is ready for the failover to start. Proceed to step 7.
- If the status is Unknown or Not ready, then wait until the status changes to Ready.
- Click Initiate. The busybox resources are now created on the target cluster.
- Close the modal window and track the status using the Data policy column on the Applications page.
Verify that the activity status shows as FailedOver for the application.
- Navigate to the Applications → Overview tab.
- In the Data policy column, click the policy link for the application you applied the policy to.
- On the Data Policies modal page, click the View more details link.
3.13. Relocating an application between managed clusters
Relocate an application to its preferred location when all managed clusters are available.
Prerequisite
When the primary cluster is in a state other than Ready, check the actual status of the cluster, as it might take some time to update. Relocate can only be performed when both the primary and preferred clusters are up and running.
- Navigate to RHACM console → Infrastructure → Clusters → Cluster list tab.
- Check the status of both the managed clusters individually before performing relocate operation.
- Verify that applications were cleaned up from the cluster before unfencing it.
Procedure
Disable fencing on the Hub cluster.
Edit the DRCluster resource for this cluster.
Note: Replace <drcluster_name> with your unique name.
$ oc edit drcluster <drcluster_name>
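In the editor, change the fencing state back in the DRCluster spec. A minimal sketch, again assuming a clusterFence field under spec (verify against your installed DRCluster CRD):
apiVersion: ramendr.openshift.io/v1alpha1
kind: DRCluster
metadata:
  name: <drcluster_name>
spec:
  # change this line to unfence the cluster
  clusterFence: Unfenced
  ...
Save and exit the editor.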
Example output:
drcluster.ramendr.openshift.io/ocp4perf1 edited
Gracefully reboot the OpenShift Container Platform nodes that were Fenced. A reboot is required to resume I/O operations after unfencing and to avoid any further recovery orchestration failures. Reboot all nodes of the cluster by following the steps in the procedure Rebooting a node gracefully.
Note: Make sure that all the nodes are initially cordoned and drained before you reboot, and perform uncordon operations on the nodes afterwards.
After all OpenShift nodes are rebooted and are in a Ready status, verify that all Pods are in a healthy state by running this command on the Primary managed cluster (or whichever cluster has been Unfenced):
$ oc get pods -A | egrep -v 'Running|Completed'
Example output:
NAMESPACE   NAME   READY   STATUS   RESTARTS   AGE
The output of this query should show zero Pods before proceeding to the next step.
Important: If there are Pods still in an unhealthy status because of severed storage communication, troubleshoot and resolve them before continuing. Because the storage cluster is external to OpenShift, it also has to be properly recovered after a site outage for OpenShift applications to be healthy.
Alternatively, you can use the OpenShift Web Console dashboards and Overview tab to assess the health of applications and the external ODF storage cluster. The detailed OpenShift Data Foundation dashboard is found by navigating to Storage → Data Foundation.
Verify that the Unfenced cluster is in a healthy state. Validate the fencing status on the Hub cluster for the Primary managed cluster.
Note: Replace <drcluster_name> with your unique name.
$ oc get drcluster.ramendr.openshift.io <drcluster_name> -o jsonpath='{.status.phase}{"\n"}'
Example output:
Unfenced
Verify that the IPs that belong to the OpenShift Container Platform cluster nodes are NOT in the blocklist.
$ ceph osd blocklist ls
Ensure that you do not see the IPs added during fencing.
- On the Hub cluster, navigate to Applications.
- Click the Actions menu at the end of the application row to view the list of available actions.
- Click Relocate application.
- When the Relocate application popup is shown, select the policy and the target cluster to which the associated application will relocate in case of a disaster.
- By default, the subscription group that will deploy the application resources is selected. Click the Select subscription group dropdown to verify the default selection or modify this setting.
Check the status of the Relocation readiness.
- If the status is Ready with a green tick, it indicates that the target cluster is ready for the relocation to start. Proceed to step 7.
- If the status is Unknown or Not ready, then wait until the status changes to Ready.
- Click Initiate. The busybox resources are now created on the target cluster.
- Close the modal window and track the status using the Data policy column on the Applications page.
Verify that the activity status shows as Relocated for the application.
- Navigate to the Applications → Overview tab.
- In the Data policy column, click the policy link for the application you applied the policy to.
- On the Data Policies modal page, click the View more details link.
Chapter 4. Regional-DR solution for OpenShift Data Foundation [Technology Preview]
Configuring OpenShift Data Foundation for Regional-DR with Advanced Cluster Management is a Technology Preview feature and is subject to Technology Preview support limitations. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information, see Technology Preview Features Support Scope.
4.1. Components of Regional-DR solution
Regional-DR is composed of Red Hat Advanced Cluster Management for Kubernetes and OpenShift Data Foundation components to provide application and data mobility across Red Hat OpenShift Container Platform clusters.
Red Hat Advanced Cluster Management for Kubernetes
Red Hat Advanced Cluster Management (RHACM) provides the ability to manage multiple clusters and application lifecycles. Hence, it serves as a control plane in a multi-cluster environment.
RHACM is split into two parts:
- RHACM Hub: includes components that run on the multi-cluster control plane.
- Managed clusters: includes components that run on the clusters that are managed.
For more information about this product, see RHACM documentation and the RHACM “Manage Applications” documentation.
OpenShift Data Foundation
OpenShift Data Foundation provides the ability to provision and manage storage for stateful applications in an OpenShift Container Platform cluster.
OpenShift Data Foundation is backed by Ceph as the storage provider, whose lifecycle is managed by Rook in the OpenShift Data Foundation component stack. Ceph-CSI provides the provisioning and management of Persistent Volumes for stateful applications.
OpenShift Data Foundation stack is now enhanced with the following abilities for disaster recovery:
- Enable RBD block pools for mirroring across OpenShift Data Foundation instances (clusters)
- Ability to mirror specific images within an RBD block pool
- Provides csi-addons to manage per Persistent Volume Claim (PVC) mirroring
OpenShift DR
OpenShift DR is a set of orchestrators to configure and manage stateful applications across a set of peer OpenShift clusters which are managed using RHACM and provides cloud-native interfaces to orchestrate the life-cycle of an application’s state on Persistent Volumes. These include:
- Protecting an application and its state relationship across OpenShift clusters
- Failing over an application and its state to a peer cluster
- Relocating an application and its state to the previously deployed cluster
OpenShift DR is split into three components:
- ODF Multicluster Orchestrator: Installed on the multi-cluster control plane (RHACM Hub), it orchestrates configuration and peering of OpenShift Data Foundation clusters for Metro and Regional DR relationships
- OpenShift DR Hub Operator: Automatically installed as part of ODF Multicluster Orchestrator installation on the hub cluster to orchestrate failover or relocation of DR enabled applications.
- OpenShift DR Cluster Operator: Automatically installed on each managed cluster that is part of a Metro and Regional DR relationship to manage the lifecycle of all PVCs of an application.
4.2. Regional-DR deployment workflow
This section provides an overview of the steps required to configure and deploy Regional-DR capabilities using the latest version of Red Hat OpenShift Data Foundation across two distinct OpenShift Container Platform clusters. In addition to the two managed clusters, a third OpenShift Container Platform cluster is required to deploy Red Hat Advanced Cluster Management (RHACM).
To configure your infrastructure, perform the below steps in the order given:
- Ensure that the requirements across the three clusters that are part of the DR solution (the Hub, Primary, and Secondary OpenShift Container Platform clusters) are met. See Requirements for enabling Regional-DR.
- Install OpenShift Data Foundation operator and create a storage system on Primary and Secondary managed clusters. See Creating OpenShift Data Foundation cluster on managed clusters.
- Install the ODF Multicluster Orchestrator on the Hub cluster. See Installing ODF Multicluster Orchestrator on Hub cluster.
- Configure SSL access between the Hub, Primary and Secondary clusters. See Configuring SSL access across clusters.
Create a DRPolicy resource for use with applications requiring DR protection across the Primary and Secondary clusters. See Creating Disaster Recovery Policy on Hub cluster.
Note: There can be more than a single policy.
For testing your disaster recovery solution:
- Create a sample application using RHACM console. See Creating sample application.
- Test failover and relocate operations using the sample application between managed clusters. See application failover and relocating an application.
4.3. Requirements for enabling Regional-DR
Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites in order to successfully implement a Disaster Recovery solution:
You must have three OpenShift clusters that have network reachability between them:
- Hub cluster where Red Hat Advanced Cluster Management for Kubernetes (RHACM operator) is installed.
- Primary managed cluster where OpenShift Data Foundation is installed.
- Secondary managed cluster where OpenShift Data Foundation is installed.
Ensure that the RHACM operator and MultiClusterHub are installed on the Hub cluster. See the RHACM installation guide for instructions.
After the operator is successfully installed, a popover with a message that the Web console update is available appears on the user interface. Click Refresh web console from this popover for the console changes to take effect.
Important: It is the user's responsibility to ensure that application traffic routing and redirection are configured appropriately. Configuration and updates to the application traffic routes are currently not supported.
- On the Hub cluster, navigate to All Clusters → Infrastructure → Clusters. Ensure that you have either imported or created the Primary managed cluster and the Secondary managed cluster using the RHACM console. For instructions, see Creating a cluster and Importing a target managed cluster to the hub cluster.
The managed clusters must have non-overlapping networks.
To connect the managed OpenShift cluster and service networks using the Submariner add-ons, you need to validate that the two clusters have non-overlapping networks by running the following commands for each of the managed clusters.
$ oc get networks.config.openshift.io cluster -o json | jq .spec
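For illustration only, outputs for two managed clusters with non-overlapping networks might look like the following; the CIDR values are assumptions, not captured from a real environment:
Example output for Primary cluster (illustrative):
{
  "clusterNetwork": [
    {
      "cidr": "10.5.0.0/16",
      "hostPrefix": 23
    }
  ],
  "networkType": "OVNKubernetes",
  "serviceNetwork": [
    "172.30.0.0/16"
  ]
}
Example output for Secondary cluster (illustrative):
{
  "clusterNetwork": [
    {
      "cidr": "10.6.0.0/16",
      "hostPrefix": 23
    }
  ],
  "networkType": "OVNKubernetes",
  "serviceNetwork": [
    "172.31.0.0/16"
  ]
}
Because no cluster network or service network range is shared between the two clusters, Submariner can connect them without Globalnet.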
For more information, see the Submariner add-ons documentation.
Ensure that the managed clusters can connect using Submariner add-ons. After identifying and ensuring that the cluster and service networks have non-overlapping ranges, install the Submariner add-ons for each managed cluster using the RHACM console and Cluster sets. For instructions, see the Submariner documentation.
Important: Do not select Enable Globalnet, even if the managed clusters have overlapping cluster and service networks. Using Globalnet is not currently supported with Regional Disaster Recovery. Ensure that the cluster and service networks are non-overlapping before proceeding.
4.4. Creating an OpenShift Data Foundation cluster on managed clusters
In order to configure storage replication between the two OpenShift Container Platform clusters, create an OpenShift Data Foundation storage system after you install the OpenShift Data Foundation operator.
Refer to OpenShift Data Foundation deployment guides and instructions that are specific to your infrastructure (AWS, VMware, BM, Azure, etc.).
Procedure
Install and configure the latest OpenShift Data Foundation cluster on each of the managed clusters.
For information about the OpenShift Data Foundation deployment, refer to your infrastructure specific deployment guides (for example, AWS, VMware, Bare metal, Azure).
Validate the successful deployment of OpenShift Data Foundation on each managed cluster with the following command:
$ oc get storagecluster -n openshift-storage ocs-storagecluster -o jsonpath='{.status.phase}{"\n"}'
For the Multicloud Gateway (MCG):
$ oc get noobaa -n openshift-storage noobaa -o jsonpath='{.status.phase}{"\n"}'
If the status result is Ready for both queries on the Primary managed cluster and the Secondary managed cluster, then continue with the next step.
In the OpenShift Web Console, navigate to Installed Operators → OpenShift Data Foundation → Storage System → ocs-storagecluster-storagesystem → Resources and verify that Status of StorageCluster is Ready and has a green tick mark next to it.
4.5. Installing OpenShift Data Foundation Multicluster Orchestrator operator
OpenShift Data Foundation Multicluster Orchestrator is a controller that is installed from OpenShift Container Platform’s OperatorHub on the Hub cluster.
Procedure
- On the Hub cluster, navigate to OperatorHub and use the keyword filter to search for ODF Multicluster Orchestrator.
- Click ODF Multicluster Orchestrator tile.
Keep all default settings and click Install.
Ensure that the operator resources are installed in the openshift-operators project and are available to all namespaces.
Note: The ODF Multicluster Orchestrator also installs the OpenShift DR Hub Operator on the RHACM hub cluster as a dependency.
Verify that the operator Pods are in a Running state. The OpenShift DR Hub operator is also installed at the same time in the openshift-operators namespace.
$ oc get pods -n openshift-operators
Example output:
NAME                                        READY   STATUS    RESTARTS   AGE
odf-multicluster-console-6845b795b9-blxrn   1/1     Running   0          4d20h
odfmo-controller-manager-f9d9dfb59-jbrsd    1/1     Running   0          4d20h
ramen-hub-operator-6fb887f885-fss4w         2/2     Running   0          4d20h
4.6. Configuring SSL access across clusters
Configure network (SSL) access between the primary and secondary clusters so that metadata can be stored on the alternate cluster in a Multicloud Gateway (MCG) object bucket, using a secure transport protocol, and on the Hub cluster for verifying access to the object buckets.
If all of your OpenShift clusters are deployed using a signed and valid set of certificates for your environment, then this section can be skipped.
Procedure
Extract the ingress certificate for the Primary managed cluster and save the output to primary.crt.
$ oc get cm default-ingress-cert -n openshift-config-managed -o jsonpath="{['data']['ca-bundle\.crt']}" > primary.crt
Extract the ingress certificate for the Secondary managed cluster and save the output to secondary.crt.
$ oc get cm default-ingress-cert -n openshift-config-managed -o jsonpath="{['data']['ca-bundle\.crt']}" > secondary.crt
Create a new ConfigMap file to hold the remote cluster's certificate bundle, with the filename cm-clusters-crt.yaml.
Note: There could be more or fewer than three certificates for each cluster. Also, ensure that the certificate contents are correctly indented after you copy and paste from the primary.crt and secondary.crt files that were created before.
Create the ConfigMap on the Primary managed cluster, Secondary managed cluster, and the Hub cluster.
$ oc create -f cm-clusters-crt.yaml
Example output:
configmap/user-ca-bundle created
Patch the default proxy resource on the Primary managed cluster, Secondary managed cluster, and the Hub cluster.
$ oc patch proxy cluster --type=merge --patch='{"spec":{"trustedCA":{"name":"user-ca-bundle"}}}'
Example output:
proxy.config.openshift.io/cluster patched
4.7. Creating Disaster Recovery Policy on Hub cluster
The OpenShift Disaster Recovery Policy (DRPolicy) resource specifies the OpenShift Container Platform clusters participating in the disaster recovery solution and the desired replication interval. DRPolicy is a cluster-scoped resource that users can apply to applications that require a disaster recovery solution.
The ODF MultiCluster Orchestrator Operator facilitates the creation of each DRPolicy and the corresponding DRClusters through the Multicluster Web console.
On the initial run, the VolSync operator is installed automatically. VolSync is used to set up volume replication between two clusters to protect CephFS-based PVCs. The replication feature is enabled by default.
Prerequisites
- Ensure that there is a minimum set of two managed clusters.
Procedure
On the OpenShift console, navigate to All Clusters.
- Navigate to Data Services and click Data policies.
- Click Create DRPolicy.
- Enter a Policy name. Ensure that each DRPolicy has a unique name (for example, ocp4bos1-ocp4bos2-5m).
- Select two clusters from the list of managed clusters with which this new policy will be associated.
- Replication policy is automatically set to Asynchronous (async) based on the OpenShift clusters selected, and a Sync schedule option becomes available. Set the Sync schedule.
Important: For every desired replication interval, a new DRPolicy must be created with a unique name (such as ocp4bos1-ocp4bos2-10m). The same clusters can be selected, but the Sync schedule can be configured with a different replication interval in minutes, hours, or days. The minimum is one minute.
- Click Create.
Verify that the DRPolicy is created successfully. Run this command on the Hub cluster for each of the DRPolicy resources created.
Note: Replace <drpolicy_name> with your unique name.
$ oc get drpolicy <drpolicy_name> -o jsonpath='{.status.conditions[].reason}{"\n"}'
Example output:
Succeeded
Note: When a DRPolicy is created, two DRCluster resources are also created along with it. It could take up to 10 minutes for all three resources to be validated and for the status to show as Succeeded.
Verify the object bucket access from the Hub cluster to both the Primary managed cluster and the Secondary managed cluster.
Get the names of the DRClusters on the Hub cluster.
$ oc get drclusters
Example output:
NAME       AGE
ocp4bos1   4m42s
ocp4bos2   4m42s
Check S3 access to each bucket created on each managed cluster using this DRCluster validation command.
Note: Replace <drcluster_name> with your unique name.
$ oc get drcluster <drcluster_name> -o jsonpath='{.status.conditions[2].reason}{"\n"}'
Example output:
Succeeded
Note: Make sure to run the command for both DRClusters on the Hub cluster.
Verify that the OpenShift DR Cluster operator installation was successful on the Primary managed cluster and the Secondary managed cluster.
$ oc get csv,pod -n openshift-dr-system
Example output:
NAME                                                                      DISPLAY                         VERSION   REPLACES   PHASE
clusterserviceversion.operators.coreos.com/odr-cluster-operator.v4.11.0   Openshift DR Cluster Operator   4.11.0               Succeeded

NAME                                             READY   STATUS    RESTARTS   AGE
pod/ramen-dr-cluster-operator-5564f9d669-f6lbc   2/2     Running   0          5m32s
You can also verify that OpenShift DR Cluster Operator is installed successfully in the OperatorHub of each managed cluster.
Verify the status of the ODF mirroring daemon health on the Primary managed cluster and the Secondary managed cluster.
daemonhealth on the Primary managed cluster and the Secondary managed cluster.oc get cephblockpool ocs-storagecluster-cephblockpool -n openshift-storage -o jsonpath='{.status.mirroringStatus.summary}{"\n"}'$ oc get cephblockpool ocs-storagecluster-cephblockpool -n openshift-storage -o jsonpath='{.status.mirroringStatus.summary}{"\n"}'Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output:
{"daemon_health":"OK","health":"OK","image_health":"OK","states":{}}{"daemon_health":"OK","health":"OK","image_health":"OK","states":{}}Copy to Clipboard Copied! Toggle word wrap Toggle overflow ImportantIt could take up to 10 minutes for the
daemon_healthandhealthto go from Warning to OK. If the status does not become OK eventually then use the RHACM console to verify that the Submariner connection between managed clusters is still in a healthy state. Do not proceed until all values are OK.When using
VolSyncto protect CephFs-based PVCs, then configure theVolSynccopy method. The default copy method is to use snapshot. A snapshot is taken at the source and synced to the temporary destination PVC. Once the syncronization is complete, another snapshot is taken from this temporary PVC and saved on the destination cluster. On failover, the application PVC is restored from the latest snapshot found on the cluster.Using a snapshot as a copy method may not be desirable when using PVCs that contain thousands of files as CephFS will take a long time to create a writable PVC from snapshot. Furthermore, when using the copy method as snapshot, after a failover or replication, the entire PVC must be syncronized to the other side. This is a very expensive operation on high latency network and big PVC size.
To avoid these issues, a “direct” copy method can be used instead. This method is preferred as synchronization is done directly to the application PVC, and a snapshot is also saved in case manual restoration is required.
You can configure the copy method “direct” as follows:
$ oc edit cm -n openshift-operators ramen-hub-operator-config
Add the following to the ramen_manager_config.yaml section under data:
volsync:
  destinationCopyMethod: Direct
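For orientation, a sketch of where this setting lands inside the ConfigMap; the surrounding structure is assumed from the edit command above, and the key casing should be verified against the live ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ramen-hub-operator-config
  namespace: openshift-operators
data:
  ramen_manager_config.yaml: |
    ...
    volsync:
      destinationCopyMethod: Direct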
4.8. Create sample application for testing disaster recovery solution
OpenShift Data Foundation disaster recovery (DR) solution supports disaster recovery for applications that are managed by RHACM. See Managing Applications for more details.
OpenShift Data Foundation DR solution does not support ApplicationSet, which is required for applications that are deployed via ArgoCD.
ODF DR orchestrates RHACM application placement, using the PlacementRule, when an application is moved between clusters in a DRPolicy for failover or relocation requirements. The following sections detail how to apply a DRPolicy to an application and how to manage the applications placement life-cycle during and after cluster unavailability.
For OpenShift users that do not have cluster-admin permissions, see the Knowledge Article on how to assign the necessary permissions to an application user for executing disaster recovery actions.
4.8.1. Creating a sample application
In order to test failover from the Primary managed cluster to the Secondary managed cluster, and relocation back, a sample application is needed.
Prerequisites
- When creating an application for general consumption, ensure that the application is deployed to ONLY one cluster.
- Use the sample application called busybox as an example.
- Ensure that all external routes of the application are configured using either a Global Traffic Manager (GTM) or Global Server Load Balancing (GSLB) service for traffic redirection when the application fails over or is relocated.
As a best practice, group Red Hat Advanced Cluster Management (RHACM) subscriptions that belong together so that they refer to a single Placement Rule, which DR protects them as a group. In addition, create them as a single application to provide a logical grouping of the subscriptions for future DR actions such as failover and relocate.
Note: If unrelated subscriptions refer to the same Placement Rule for placement actions, they are also DR protected, because the DR workflow controls all subscriptions that reference the Placement Rule.
Procedure
- On the Hub cluster, navigate to Applications and click Create application.
- Select type as Subscription.
- Enter your application Name (for example, busybox) and Namespace (for example, busybox-sample).
- In the Repository location for resources section, select Repository type Git. Enter the Git repository URL for the sample application, and the GitHub branch and path where the busybox Pod and PVC resources will be created.
  - Use the sample application repository https://github.com/red-hat-storage/ocm-ramen-samples.
  - Select Branch as release-4.12.
  - Choose one of the following Paths:
    - busybox-odr to use RBD Regional-DR.
    - busybox-odr-cephfs to use CephFS Regional-DR.
Scroll down in the form until you see Deploy application resources only on clusters matching specified labels and then add a label with its value set to the Primary managed cluster name in RHACM cluster list view.
Click Create, which is at the top right-hand corner.
On the follow-on screen, go to the Topology tab. All checkmarks on the application topology should be green.
Note: To get more information, click any of the topology elements and a window appears on the right of the topology view.
Validating the sample application deployment.
Now that the busybox application has been deployed to your preferred cluster, the deployment can be validated.
Log in to your managed cluster where busybox was deployed by RHACM.
$ oc get pods,pvc -n busybox-sample
4.8.2. Apply DRPolicy to sample application
Prerequisites
- Ensure that both managed clusters referenced in the DRPolicy are reachable. If not, the application will not be DR protected until both clusters are online.
Procedure
- On the Hub cluster, go back to the Multicluster Web console and navigate to All Clusters.
- Log in to all the clusters listed under All Clusters.
- Navigate to Data Services and then click Data policies.
- Click the Actions menu at the end of DRPolicy to view the list of available actions.
- Click Apply DRPolicy.
When the Apply DRPolicy modal is displayed, select the busybox application and enter the PVC label as appname=busybox.
Note: When multiple placement rules under the same application, or more than one application, are selected, all PVCs within the application's namespace are protected by default.
- Click Apply.
Verify that a DRPlacementControl or DRPC was created in the busybox-sample namespace on the Hub cluster and that its CURRENTSTATE shows as Deployed. This resource is used for both failover and relocate actions for this application.
$ oc get drpc -n busybox-sample
Example output:
NAME                       AGE     PREFERREDCLUSTER   FAILOVERCLUSTER   DESIREDSTATE   CURRENTSTATE
busybox-placement-1-drpc   6m59s   ocp4bos1                                            Deployed
[Optional] Verify the Rados block device (RBD) volumereplication and volumereplicationgroup on the primary cluster.
$ oc get volumereplications.replication.storage.openshift.io
Example output:
NAME          AGE     VOLUMEREPLICATIONCLASS                  PVCNAME       DESIREDSTATE   CURRENTSTATE
busybox-pvc   2d16h   rbd-volumereplicationclass-1625360775   busybox-pvc   primary        Primary
$ oc get volumereplicationgroups.ramendr.openshift.io
Example output:
NAME           DESIREDSTATE   CURRENTSTATE
busybox-drpc   primary        Primary
[Optional] Verify that the CephFS VolSync replication source has been set up successfully on the primary cluster and that the VolSync ReplicationDestination has been set up on the failover cluster.
$ oc get replicationsource -n busybox-sample
Example output:
NAME          SOURCE        LAST SYNC              DURATION         NEXT SYNC
busybox-pvc   busybox-pvc   2022-12-20T08:46:07Z   1m7.794661104s   2022-12-20T08:50:00Z
$ oc get replicationdestination -n busybox-sample
Example output:
NAME          LAST SYNC              DURATION         NEXT SYNC
busybox-pvc   2022-12-20T08:46:32Z   4m39.52261108s
4.8.3. Deleting sample application
You can delete the sample application busybox using the RHACM console.
Do not delete the sample application until failover and relocate testing is completed and the application is ready to be removed from RHACM and the managed clusters.
Procedure
- On the RHACM console, navigate to Applications.
- Search for the sample application to be deleted (for example, busybox).
- Click the Action Menu (⋮) next to the application you want to delete.
Click Delete application.
When Delete application is selected, a new screen appears asking whether the application-related resources should also be deleted.
- Select Remove application related resources checkbox to delete the Subscription and PlacementRule.
- Click Delete. This will delete the busybox application on the Primary managed cluster (or whatever cluster the application was running on).
In addition to the resources deleted using the RHACM console, the DRPlacementControl must also be deleted after deleting the busybox application.
- Log in to the OpenShift Web console for the Hub cluster and navigate to Installed Operators for the project busybox-sample.
- Click OpenShift DR Hub Operator and then click the DRPlacementControl tab.
- Click the Action Menu (⋮) next to the busybox application DRPlacementControl that you want to delete.
- Click Delete DRPlacementControl.
- Click Delete.
This process can be used to delete any application with a DRPlacementControl resource.
4.9. Application failover between managed clusters
Perform a failover when a managed cluster becomes unavailable for any reason. This failover method is application based.
Prerequisites
When the primary cluster is in a state other than Ready, check the actual status of the cluster, as it might take some time to update.
- Navigate to the RHACM console → Infrastructure → Clusters → Cluster list tab.
- Check the status of both the managed clusters individually before performing the failover operation.
However, the failover operation can still be performed when the cluster you are failing over to is in a Ready state.
Procedure
- On the Hub cluster, navigate to Applications.
- Click the Actions menu at the end of the application row to view the list of available actions.
- Click Failover application.
- When the Failover application popup is shown, select the policy and the target cluster to which the associated application will fail over in case of a disaster.
- By default, the subscription group that will replicate the application resources is selected. Click the Select subscription group dropdown to verify the default selection or modify this setting.
Check the status of the Failover readiness.
- If the status is Ready with a green tick, it indicates that the target cluster is ready for the failover to start. Proceed to step 7.
- If the status is Unknown or Not ready, then wait until the status changes to Ready.
- Click Initiate. The busybox resources are now created on the target cluster.
- Close the modal window and track the status using the Data policy column on the Applications page.
Verify that the activity status shows as FailedOver for the application.
- Navigate to the Applications → Overview tab.
- In the Data policy column, click the policy link for the application you applied the policy to.
- On the Data Policies modal page, click the View more details link.
- Verify that you can see one or more policy names and the ongoing activities (Last sync time and Activity status) associated with the policy in use with the application.
4.10. Relocating an application between managed clusters
Relocate an application to its preferred location when all managed clusters are available.
Prerequisite
When the primary cluster is in a state other than Ready, check the actual status of the cluster, as it might take some time to update. Relocate can only be performed when both the primary and preferred clusters are up and running.
- Navigate to RHACM console → Infrastructure → Clusters → Cluster list tab.
- Check the status of both the managed clusters individually before performing relocate operation.
- A relocation performed when the last sync time is close to the current time is preferred, because the time taken to relocate is lower when the amount of data changed between the last sync time and now is proportionally smaller.
- Verify that applications were cleaned up from the cluster before unfencing it.
Procedure
- On the Hub cluster, navigate to Applications.
- Click the Actions menu at the end of the application row to view the list of available actions.
- Click Relocate application.
- When the Relocate application popup is shown, select the policy and the target cluster to which the associated application will relocate in case of a disaster.
- By default, the subscription group that will deploy the application resources is selected. Click the Select subscription group dropdown to verify the default selection or modify this setting.
Check the status of the Relocation readiness.
-
If the status is
Readywith a green tick, it indicates that the target cluster is ready for relocation to start. Proceed to step 7. -
If the status is
UnknownorNot ready, then wait until the status changes toReady.
-
If the status is
- Click Initiate. The busybox resources are now created on the target cluster.
- Close the modal window and track the status using the Data policy column on the Applications page.
Verify that the activity status shows as Relocated for the application.
- Navigate to the Applications → Overview tab.
- In the Data policy column, click the policy link for the application you applied the policy to.
- On the Data Policies modal page, click the View more details link.
- Verify that you can see one or more policy names and the ongoing activities (Last sync time and Activity status) associated with the policy in use with the application.
4.11. Viewing Recovery Point Objective values for disaster recovery enabled applications
Recovery Point Objective (RPO) value is the most recent sync time of persistent data from the cluster where the application is currently active to its peer. This sync time helps determine duration of data lost during failover.
This RPO value is applicable only for Regional-DR during failover. Relocation ensures there is no data loss during the operation, as all peer clusters are available.
You can view the Recovery Point Objective (RPO) value of all the protected volumes for their workload on the Hub cluster.
Procedure
- On the Hub cluster, navigate to Applications → Overview tab.
In the Data policy column, click the policy link for the application you applied the policy to.
A Data Policies modal page appears with the number of disaster recovery policies applied to each application along with failover and relocation status.
On the Data Policies modal page, click the View more details link.
A detailed Data Policies modal page is displayed that shows the policy names and the ongoing activities (Last sync, Activity status) associated with the policy that is applied to the application.
The Last sync time reported in the modal page represents the most recent sync time of all volumes that are DR protected for the application.
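If you prefer the CLI, the same information can be approximated from the DRPC status on the Hub cluster. This is a sketch and assumes that your DRPC version publishes a lastGroupSyncTime status field (check oc explain drpc.status if it is absent):
$ oc get drpc -n busybox-sample -o jsonpath='{.items[0].status.lastGroupSyncTime}{"\n"}'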
Chapter 5. Troubleshooting disaster recovery
5.1. Troubleshooting Metro-DR
5.1.1. A statefulset application stuck after failover
- Problem
While relocating to a preferred cluster, DRPlacementControl is stuck reporting PROGRESSION as "MovingToSecondary".
Previously, before Kubernetes v1.23, the Kubernetes control plane never cleaned up the PVCs created for StatefulSets. This activity was left to the cluster administrator or a software operator managing the StatefulSets. Due to this, the PVCs of the StatefulSets were left untouched when their Pods are deleted. This prevents Ramen from relocating an application to its preferred cluster.
- Resolution
If the workload uses StatefulSets, and relocation is stuck with PROGRESSION as "MovingToSecondary", then run:
$ oc get pvc -n <namespace>
For each bound PVC in that namespace that belongs to the StatefulSet, run
$ oc delete pvc <pvcname> -n <namespace>
Once all PVCs are deleted, the Volume Replication Group (VRG) transitions to secondary, and then gets deleted.
Run the following command:
$ oc get drpc -n <namespace> -o wide
After a few seconds to a few minutes, the PROGRESSION reports "Completed" and relocation is complete.
- Result
- The workload is relocated to the preferred cluster
BZ reference: [2118270]
5.1.2. DR policies protect all applications in the same namespace
- Problem
- While only a single application is selected to be used by a DR policy, all applications in the same namespace are protected. This results in PVCs that match the DRPlacementControl spec.pvcSelector across multiple workloads, or all PVCs if the selector is missing across workloads, potentially being managed multiple times by replication management, which can cause data corruption or invalid operations based on individual DRPlacementControl actions.
- Resolution
- Label the PVCs that belong to a workload uniquely, and use the selected label as the DRPlacementControl spec.pvcSelector to disambiguate which DRPlacementControl protects and manages which subset of PVCs within a namespace. It is not possible to specify the spec.pvcSelector field for the DRPlacementControl using the user interface; hence, the DRPlacementControl for such applications must be deleted and created using the command line (see the sketch below).
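A minimal sketch of a DRPlacementControl created from the CLI with an explicit pvcSelector. The policy, placement, and cluster values are placeholders, and the label matches the appname=busybox example used earlier in this guide:
apiVersion: ramendr.openshift.io/v1alpha1
kind: DRPlacementControl
metadata:
  name: busybox-placement-1-drpc
  namespace: busybox-sample
spec:
  drPolicyRef:
    name: <drpolicy_name>
  placementRef:
    kind: PlacementRule
    name: busybox-placement-1
    namespace: busybox-sample
  preferredCluster: <primary_managed_cluster_name>
  pvcSelector:
    matchLabels:
      appname: busybox    # only PVCs carrying this label are protected by this DRPC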
BZ reference: [2111163]
5.1.3. During failback of an application stuck in Relocating state
- Problem
- This issue might occur after performing failover and failback of an application (all nodes or clusters are up). When performing a failback, the application is stuck in the Relocating state with a message of Waiting for PV restore to complete.
- Resolution
- Use an S3 client or equivalent to clean up the duplicate PV objects from the S3 store. Keep only the one that has a timestamp closer to the failover or relocate time.
BZ reference: [2120201]
5.1.4. Unable to apply DRPolicy to the subscription workload with RHACM 2.8
- Problem
- The Red Hat Advanced Cluster Management (RHACM) 2.8 console has deprecated the PlacementRule type and moved to the Placement type for Subscription applications. So, when a user creates a Subscription application using the RHACM 2.8 console, the application is created with Placement only. Since the OpenShift Data Foundation 4.12 disaster recovery user interface and the Ramen operator do not support Placement for Subscription applications, the disaster recovery user interface is unable to detect these applications and display the details for assigning a policy.
- Resolution
Since the RHACM 2.8 console is still able to detect a PlacementRule that is created using the command-line interface (CLI), do the following steps to create the Subscription application in RHACM 2.8 with a PlacementRule:
- Create a new project with the application namespace (for example, busybox-application).
- Find the label for the managed cluster where you want to deploy the application (for example, drcluster1-jul-6).
- Create a PlacementRule CR in the application namespace with the managed cluster label found in the previous step (see the sketch below):
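A minimal sketch of such a PlacementRule, assuming the managed cluster is selected by a name label that matches the cluster name found in the previous step (adjust names and labels to your environment):
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: busybox-placementrule
  namespace: busybox-application
spec:
  clusterSelector:
    matchLabels:
      name: drcluster1-jul-6    # label of the managed cluster to deploy to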
- While creating the application using the RHACM console on the Subscription application page, choose this new PlacementRule.
- Delete the PlacementRule from the YAML editor, so that it reuses the chosen one.
BZ reference: [2216190]
5.2. Troubleshooting Regional-DR
5.2.1. RBD mirroring scheduling is getting stopped for some images
- Problem
There are a few common causes for RBD mirroring scheduling getting stopped for some images.
After marking the applications for mirroring, if for some reason replication does not happen, use the toolbox pod and run the following command to see for which images scheduling has stopped.
$ rbd snap ls <poolname/imagename> --all
- Restart the manager daemon on the primary cluster
- Disable and immediately re-enable mirroring on the affected images on the primary cluster
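As a sketch, both actions can be performed from the toolbox pod on the primary cluster; the pool and image names are the ones identified with the rbd snap ls command above, and snapshot-based mirroring is assumed (verify the mirroring mode in use before re-enabling):
$ ceph mgr stat                                              # find the name of the active manager
$ ceph mgr fail <active_mgr_name>                            # restart the active manager daemon
$ rbd mirror image disable <poolname>/<imagename>            # disable mirroring for the affected image
$ rbd mirror image enable <poolname>/<imagename> snapshot    # immediately re-enable it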
5.2.2. rbd-mirror daemon health is in warning state
- Problem
There appear to be numerous cases where WARNING gets reported if the mirror service ::get_mirror_service_status calls the Ceph monitor to get the service status for rbd-mirror.
Following a network disconnection, the rbd-mirror daemon health is in the warning state while the connectivity between both the managed clusters is fine.
- Resolution
Run the following command in the toolbox and look for leader: false
rbd mirror pool status --verbose ocs-storagecluster-cephblockpool | grep 'leader:'
If you see leader: false in the output, it indicates that there is a daemon startup issue and the most likely root cause could be problems reliably connecting to the secondary cluster.
Workaround: Move the rbd-mirror pod to a different node by simply deleting the pod, and verify that it has been rescheduled on another node.
leader: true or no output
BZ reference: [2118627]
5.2.3. A statefulset application stuck after failover
- Problem
While relocating to a preferred cluster, DRPlacementControl is stuck reporting PROGRESSION as "MovingToSecondary".
Previously, before Kubernetes v1.23, the Kubernetes control plane never cleaned up the PVCs created for StatefulSets. This activity was left to the cluster administrator or a software operator managing the StatefulSets. Due to this, the PVCs of the StatefulSets were left untouched when their Pods are deleted. This prevents Ramen from relocating an application to its preferred cluster.
- Resolution
If the workload uses StatefulSets, and relocation is stuck with PROGRESSION as "MovingToSecondary", then run:
$ oc get pvc -n <namespace>
For each bound PVC in that namespace that belongs to the StatefulSet, run
$ oc delete pvc <pvcname> -n <namespace>
Once all PVCs are deleted, the Volume Replication Group (VRG) transitions to secondary, and then gets deleted.
Run the following command:
$ oc get drpc -n <namespace> -o wide
After a few seconds to a few minutes, the PROGRESSION reports "Completed" and relocation is complete.
- Result
- The workload is relocated to the preferred cluster
BZ reference: [2118270]
5.2.4. Application is not running after failover
- Problem
-
After failing over an application, workload pods do not reach the Running state, with errors such as:
MountVolume.MountDevice failed for volume <PV name> : rpc error: code = Internal desc = fail to check rbd image status: (cannot map image <image description> it is not primary)
Execute these steps on the cluster where the workload is being failed over to.
- Resolution
Scale down the RBD mirror daemon deployment to 0 until the application pods can recover from the above error.
$ oc scale deployment rook-ceph-rbd-mirror-a -n openshift-storage --replicas=0
Post recovery, scale the RBD mirror daemon deployment back to 1.
$ oc scale deployment rook-ceph-rbd-mirror-a -n openshift-storage --replicas=1
BZ reference: [2134936]
5.2.5. volsync-rsync-src pods are in error state
- Problem
volsync-rsync-src pods are in an error state as they are unable to connect to volsync-rsync-dst. The VolSync source pod logs might exhibit persistent error messages over an extended duration, similar to the following log snippet.
Run the following command to check the logs.
$ oc logs volsync-rsync-src-<app pvc name>-<suffix>
Example output:
VolSync rsync container version: ACM-0.6.0-ce9a280
Syncing data to volsync-rsync-dst-busybox-pvc-9.busybox-workloads-1.svc.clusterset.local:22
Syncronization failed. Retrying in 2 seconds. Retry 1/5.
rsync: connection unexpectedly closed (7 bytes received so far) [sender]
rsync error: unexplained error (code 255) at io.c(226) [sender=3.1.3]
You can reconfigure the Maximum Transmission Unit (MTU) size to fix this issue using the following steps:
Annotate the nodes which have submariner gateway labels.
oc annotate node -l submariner.io/gateway submariner.io/tcp-clamp-mss=1340 --overwrite
$ oc annotate node -l submariner.io/gateway submariner.io/tcp-clamp-mss=1340 --overwriteCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
node/compute-0 annotated node/compute-2 annotated
node/compute-0 annotated node/compute-2 annotatedCopy to Clipboard Copied! Toggle word wrap Toggle overflow Delete submariner route agent pods.
Delete the Submariner route agent pods:

$ oc delete pods -n submariner-operator -l app=submariner-routeagent

Check for any remaining errors in the volsync-rsync-src pod:

$ oc logs volsync-rsync-src-dd-io-pvc-3-nwn8h

Example output
VolSync rsync container version: ACM-0.6.0-ce9a280
Syncing data to volsync-rsync-dst-dd-io-pvc-3.busybox-workloads-8.svc.clusterset.local:22
…
.d..tp..... ./
<f+++++++++ 07-12-2022_13-03-04-dd-io-3-5d6b4b84df-v9bhc
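After the route agent pods are recreated, you can confirm that the remaining VolSync source pods complete as well, rather than checking them one at a time; a minimal sketch (the namespace placeholder is illustrative):

$ oc get pods -n <application namespace> | grep volsync-rsync-src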
BZ reference: [2136864]
5.2.6. volsync-rsync-src pod is in error state as it is unable to resolve the destination hostname
- Problem
The VolSync source pod is unable to resolve the hostname of the VolSync destination pod. The log of the VolSync pod consistently shows an error message over an extended period of time, similar to the following log snippet:

$ oc logs -n busybox-workloads-3-2 volsync-rsync-src-dd-io-pvc-1-p25rz

Example output
VolSync rsync container version: ACM-0.6.0-ce9a280
Syncing data to volsync-rsync-dst-dd-io-pvc-1.busybox-workloads-3-2.svc.clusterset.local:22
...
ssh: Could not resolve hostname volsync-rsync-dst-dd-io-pvc-1.busybox-workloads-3-2.svc.clusterset.local: Name or service not known

- Resolution
Restart submariner-lighthouse-agent on both nodes:

$ oc delete pod -l app=submariner-lighthouse-agent -n submariner-operator
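To confirm that the agent pods were recreated and that the destination hostname now resolves, you can check the pod status and then re-read the source pod logs from the example above; a minimal sketch:

$ oc get pods -n submariner-operator -l app=submariner-lighthouse-agent
$ oc logs -n busybox-workloads-3-2 volsync-rsync-src-dd-io-pvc-1-p25rz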
5.2.7. Unable to apply DRPolicy to the subscription workload with RHACM 2.8
- Problem
-
The Red Hat Advanced Cluster Management (RHACM) 2.8 console has deprecated the PlacementRule type and moved to the Placement type for Subscription applications. As a result, when a user creates a Subscription application using the RHACM 2.8 console, the application is created with Placement only. Because the OpenShift Data Foundation 4.12 disaster recovery user interface and the Ramen operator do not support Placement for Subscription applications, the Disaster Recovery user interface is unable to detect such applications and display the details for assigning a policy.
- Resolution
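Before applying the workaround, you can confirm which placement type the application was created with; a minimal sketch that lists any PlacementRule resources in the application namespace (an empty result suggests the application was created with Placement only):

$ oc get placementrules.apps.open-cluster-management.io -n <application namespace>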
Since the RHACM 2.8 console can still detect a PlacementRule that is created using the command-line interface (CLI), perform the following steps to create the Subscription application in RHACM 2.8 with a PlacementRule:
- Create a new project with the application namespace (for example, busybox-application).
- Find the label of the managed cluster where you want to deploy the application (for example, drcluster1-jul-6).
- Create a PlacementRule CR in the application namespace with the managed cluster label from the previous step (a hedged example is sketched after this list).
- While creating the application using the RHACM console on the Subscription application page, choose this new PlacementRule.
- Delete the PlacementRule from the YAML editor, so that it reuses the chosen one.
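The exact code sample for the PlacementRule CR is not reproduced above; the following is a minimal sketch of what it could look like, assuming the example namespace busybox-application and a managed cluster labeled name: drcluster1-jul-6 from the steps above. The resource name busybox-placementrule is illustrative; adjust the metadata and the label selector to match your environment.

apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: busybox-placementrule   # illustrative name
  namespace: busybox-application
spec:
  clusterSelector:
    matchLabels:
      name: drcluster1-jul-6    # label of the managed cluster found in the earlier step

Apply the CR with oc apply -f <file>.yaml on the hub cluster, then continue with the application creation steps above.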
BZ reference: [2216190]