OpenShift Container Storage is now OpenShift Data Foundation starting with version 4.9.
Configuring OpenShift Data Foundation for Metro-DR with Advanced Cluster Management
DEVELOPER PREVIEW: Instructions about setting up OpenShift Data Foundation with Metro-DR capabilities. This solution is a Developer Preview feature and is not intended to be run in production environments.
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Providing feedback on Red Hat documentation
We appreciate your input on our documentation. Do let us know how we can make it better.
To give feedback, create a Bugzilla ticket:
- Go to the Bugzilla website.
- In the Component section, choose documentation.
- Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation.
- Click Submit Bug.
Chapter 1. Introduction to Metro-DR
Disaster recovery is the ability to recover and continue business-critical applications from natural or human-created disasters. It is a component of the overall business continuity strategy of any major organization, designed to preserve the continuity of business operations during major adverse events.
Metro-DR capability provides volume persistent data and metadata replication across sites that are in the same geographical area. In the public cloud, this is similar to protecting against an Availability Zone failure. Metro-DR ensures business continuity with no data loss during the unavailability of a data center. This is usually expressed as Recovery Point Objective (RPO) and Recovery Time Objective (RTO).
- RPO is a measure of how frequently you take backups or snapshots of persistent data. In practice, the RPO indicates the amount of data that will be lost or need to be reentered after an outage. The Metro-DR solution ensures your RPO is zero because data is replicated synchronously.
- RTO is the amount of downtime a business can tolerate. The RTO answers the question, “How long can it take for our system to recover after we were notified of a business disruption?”
The intent of this guide is to detail the Metro Disaster Recovery (Metro-DR) steps and commands necessary to fail over an application from one Red Hat OpenShift Container Platform (RHOCP) cluster to another and then fail back the same application to the original primary cluster. In this case the RHOCP clusters are created or imported using Red Hat Advanced Cluster Management (RHACM), with a distance limitation of less than 10 ms RTT latency between the RHOCP clusters.
The persistent storage for applications is provided by an external Red Hat Ceph Storage cluster stretched between the two locations, with the RHOCP instances connected to this storage cluster. An arbiter node with a storage monitor service is required at a third location (a different location from where the RHOCP instances are deployed) to establish quorum for the Red Hat Ceph Storage cluster in the case of a site outage. The third location has relaxed latency requirements, supporting values as high as 100 ms RTT from the storage cluster connected to the RHOCP instances.
1.1. Components of Metro-DR solution
Metro-DR is composed of Red Hat Advanced Cluster Management for Kubernetes, Red Hat Ceph Storage and OpenShift Data Foundation components to provide application and data mobility across OpenShift Container Platform clusters.
Red Hat Advanced Cluster Management for Kubernetes
Red Hat Advanced Cluster Management (RHACM) provides the ability to manage multiple clusters and application lifecycles. Hence, it serves as a control plane in a multi-cluster environment.
RHACM is split into two parts:
- RHACM Hub: components that run on the multi-cluster control plane
- Managed clusters: components that run on the clusters that are managed
For more information about this product, see RHACM documentation and the RHACM “Manage Applications” documentation.
Red Hat Ceph Storage
Red Hat Ceph Storage is a massively scalable, open, software-defined storage platform that combines the most stable version of the Ceph storage system with a Ceph management platform, deployment utilities, and support services. It significantly lowers the cost of storing enterprise data and helps organizations manage exponential data growth. The software is a robust and modern petabyte-scale storage platform for public or private cloud deployments.
OpenShift Data Foundation
OpenShift Data Foundation provides the ability to provision and manage storage for stateful applications in an OpenShift Container Platform cluster. It is backed by Ceph as the storage provider, whose lifecycle is managed by Rook in the OpenShift Data Foundation component stack; Ceph-CSI provides the provisioning and management of Persistent Volumes for stateful applications.
The OpenShift Data Foundation stack is enhanced with csi-addons to manage mirroring on a per-Persistent Volume Claim basis.
OpenShift DR
OpenShift DR is a disaster recovery orchestrator for stateful applications across a set of peer OpenShift clusters that are deployed and managed using RHACM. It provides cloud-native interfaces to orchestrate the life cycle of an application's state on Persistent Volumes. These include:
- Protecting an application state relationship across OpenShift clusters
- Failing over an application’s state to a peer cluster
- Relocating an application's state to the previously deployed cluster
OpenShift DR is split into two components:
- OpenShift DR Hub Operator: Installed on the hub cluster to manage failover and relocation for applications.
- OpenShift DR Cluster Operator: Installed on each managed cluster to manage the lifecycle of all PVCs of an application.
1.2. Metro-DR deployment workflow
This section provides an overview of the steps required to configure and deploy Metro-DR capabilities using OpenShift Data Foundation version 4.10, RHCS 5, and the latest version of RHACM across two distinct OpenShift Container Platform clusters. In addition to the two managed clusters, a third OpenShift Container Platform cluster is required to deploy Advanced Cluster Management.
To configure your infrastructure, perform the following steps in the order given:
- Ensure you meet each of the Metro-DR requirements, which include RHACM operator installation, creation or importing of OpenShift Container Platform clusters into the RHACM hub, and network configuration. See Requirements for enabling Metro-DR.
- Ensure you meet the requirements for deploying Red Hat Ceph Storage stretch cluster with arbiter. See Requirements for deploying Red Hat Ceph Storage.
- Configure Red Hat Ceph Storage stretch cluster mode. For instructions on enabling Ceph cluster on two different data centers using stretched mode functionality, see Configuring Red Hat Ceph Storage stretch cluster.
- Install OpenShift Data Foundation 4.10 on Primary and Secondary managed clusters. See Installing OpenShift Data Foundation on managed clusters.
- Install the Openshift DR Hub Operator on the Hub cluster. See Installing OpenShift DR Hub Operator on Hub cluster.
- Configure the managed and Hub cluster. See Configuring managed and hub clusters.
- Create the DRPolicy resource on the hub cluster which is used to deploy, failover, and relocate the workloads across managed clusters. See Creating Disaster Recovery Policy on Hub cluster.
- Enable automatic installation of the OpenShift DR Cluster operator and automatic transfer of S3 secrets on the managed clusters. For instructions, see Enabling automatic install of OpenShift DR cluster operator and Enabling automatic transfer of S3 secrets on managed clusters.
- Create a sample application using the RHACM console to test failover and relocation. For instructions, see Creating sample application, application failover and relocating an application between managed clusters.
Chapter 2. Requirements for enabling Metro-DR
Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites in order to successfully implement a Disaster Recovery solution:
Subscription requirements
- A valid Red Hat OpenShift Data Foundation Advanced entitlement
- A valid Red Hat Advanced Cluster Management for Kubernetes subscription
To know how subscriptions for OpenShift Data Foundation work, see the knowledgebase article on OpenShift Data Foundation subscriptions.
You must have three OpenShift clusters that have network reachability between them:
- Hub cluster where Advanced Cluster Management for Kubernetes (RHACM operator) and OpenShift DR Hub controllers are installed.
- Primary managed cluster where OpenShift Data Foundation, OpenShift DR Cluster controller, and applications are installed.
- Secondary managed cluster where OpenShift Data Foundation, OpenShift DR Cluster controller, and applications are installed.
Ensure that the RHACM operator and MultiClusterHub are installed on the Hub cluster. See the RHACM installation guide for instructions.
- Once deployment is completed, login to the RHACM console using your OpenShift credentials.
Find the Route that has been created for the Advanced Cluster Manager console:
$ oc get route multicloud-console -n open-cluster-management -o jsonpath --template="https://{.spec.host}/multicloud/clusters{'\n'}"

Example output:
https://multicloud-console.apps.perf3.example.com/multicloud/clusters
After logging in using your OpenShift credentials, you should see your local cluster imported.
- Ensure that you either import or create the Primary managed cluster and the Secondary managed cluster using the RHACM console. Choose the appropriate options for your environment. After the managed clusters are successfully created or imported, you can see the list of clusters that were imported or created on the console.
Chapter 3. Requirements for deploying Red Hat Ceph Storage stretch cluster with arbiter
Red Hat Ceph Storage is an open-source enterprise platform that provides unified software-defined storage on standard, economical servers and disks. With block, object, and file storage combined into one platform, Red Hat Ceph Storage efficiently and automatically manages all your data, so you can focus on the applications and workloads that use it.
This section provides a basic overview of the Red Hat Ceph Storage deployment. For more complex deployments, refer to the official documentation guide for RHCS 5.
Only Flash media is supported since it runs with min_size=1 when degraded. Use stretch mode only with all-flash OSDs. Using all-flash OSDs minimizes the time needed to recover once connectivity is restored, thus minimizing the potential for data loss.
Erasure coded pools cannot be used with stretch mode.
3.1. Hardware requirements
For information on minimum hardware requirements for deploying Red Hat Ceph Storage, see Minimum hardware recommendations for containerized Ceph.
Table 3.1. Ceph cluster host layout

| Node name | Datacenter | Ceph components |
|---|---|---|
| ceph1 | DC1 | OSD+MON+MGR |
| ceph2 | DC1 | OSD+MON |
| ceph3 | DC1 | OSD+MDS+RGW |
| ceph4 | DC2 | OSD+MON+MGR |
| ceph5 | DC2 | OSD+MON |
| ceph6 | DC2 | OSD+MDS+RGW |
| ceph7 | DC3 | MON |
3.2. Software requirements
Use the latest software version of Red Hat Ceph Storage 5.
For more information on the supported Operating System versions for Red Hat Ceph Storage, see knowledgebase article on Red Hat Ceph Storage: Supported configurations.
3.3. Network configuration requirements
The recommended Red Hat Ceph Storage configuration is as follows:
- You must have two separate networks, one public network and one private network.
- You must have three different datacenters that support VLANs and subnets for Ceph's private and public networks.
Note: You can use different subnets for each of the datacenters.
- The latencies between the two datacenters running the Red Hat Ceph Storage Object Storage Devices (OSDs) cannot exceed 10 ms RTT. For the arbiter datacenter, this was tested with values as high as 100 ms RTT to the other two OSD datacenters.
Here is an example of the basic network configuration used in this guide:
- DC1: Ceph public/private network: 10.0.40.0/24
- DC2: Ceph public/private network: 10.0.40.0/24
- DC3: Ceph public/private network: 10.0.40.0/24
For more information on the required network environment, see Ceph network configuration.
3.4. Node pre-deployment requirements
Before installing the Red Hat Ceph Storage cluster, perform the following steps to fulfill all the requirements needed.
Register all the nodes to the Red Hat Network or Red Hat Satellite and subscribe to a valid pool:
subscription-manager register
subscription-manager subscribe --pool=8a8XXXXXX9e0

Enable access for all the nodes in the Ceph cluster for the following repositories:
- rhel-8-for-x86_64-baseos-rpms
- rhel-8-for-x86_64-appstream-rpms

subscription-manager repos --disable="*" --enable="rhel-8-for-x86_64-baseos-rpms" --enable="rhel-8-for-x86_64-appstream-rpms"
Update the operating system RPMs to the latest version and reboot if needed:

dnf update -y
reboot

Select a node from the cluster to be your bootstrap node.
ceph1 is our bootstrap node in this example going forward.

Only on the bootstrap node ceph1, enable the ansible-2.9-for-rhel-8-x86_64-rpms and rhceph-5-tools-for-rhel-8-x86_64-rpms repositories:

subscription-manager repos --enable="ansible-2.9-for-rhel-8-x86_64-rpms" --enable="rhceph-5-tools-for-rhel-8-x86_64-rpms"
Configure the hostname using the bare/short hostname on all the hosts.

hostnamectl set-hostname <short_name>

Verify the hostname configuration for deploying Red Hat Ceph Storage with cephadm.
$ hostname

Example output:

ceph1

Modify the /etc/hosts file and add the fqdn entry to the 127.0.0.1 IP by setting the DOMAIN variable to our DNS domain name.
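A sketch of that edit, assuming a DNS domain of example.domain.com (substitute your own domain):

DOMAIN="example.domain.com"
cat <<EOF >/etc/hosts
127.0.0.1 $(hostname).${DOMAIN} $(hostname) localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 $(hostname).${DOMAIN} $(hostname) localhost6 localhost6.localdomain6
EOF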
Check the long hostname with the fqdn using the hostname -f option.

$ hostname -f

Example output:

ceph1.example.domain.com

Note: To know more about why these changes are required, see Fully Qualified Domain Names vs Bare Host Names.
Run the following steps on the bootstrap node. In our example, the bootstrap node is ceph1.

Install the cephadm-ansible RPM package:

$ sudo dnf install -y cephadm-ansible

Important: To run the ansible playbooks, you must have passwordless ssh access to all the nodes that are configured in the Red Hat Ceph Storage cluster. Ensure that the configured user (for example, deployment-user) has root privileges to invoke the sudo command without needing a password.

To use a custom key, configure the selected user (for example, deployment-user) ssh config file to specify the id/key that will be used for connecting to the nodes via ssh:

cat <<EOF > ~/.ssh/config
Host ceph*
  User deployment-user
  IdentityFile ~/.ssh/ceph.pem
EOF

Build the ansible inventory:
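A sketch of the inventory file, assuming the seven hosts from Table 3.1 with ceph1 as the admin host:

cat <<EOF > /usr/share/cephadm-ansible/inventory
ceph1
ceph2
ceph3
ceph4
ceph5
ceph6
ceph7
[admin]
ceph1
EOF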
Note: Hosts configured as part of the [admin] group in the inventory file are tagged as _admin by cephadm, so they receive the admin ceph keyring during the bootstrap process.

Verify that ansible can access all nodes using the ping module before running the pre-flight playbook.

$ ansible -i /usr/share/cephadm-ansible/inventory -m ping all -b
Run the following ansible playbook.

$ ansible-playbook -i /usr/share/cephadm-ansible/inventory /usr/share/cephadm-ansible/cephadm-preflight.yml --extra-vars "ceph_origin=rhcs"

The preflight Ansible playbook configures the Red Hat Ceph Storage dnf repository and prepares the storage cluster for bootstrapping. It also installs podman, lvm2, chronyd, and cephadm. The default location for cephadm-ansible and cephadm-preflight.yml is /usr/share/cephadm-ansible.
3.5. Cluster bootstrapping and service deployment with Cephadm
The cephadm utility installs and starts a single Ceph Monitor daemon and a Ceph Manager daemon for a new Red Hat Ceph Storage cluster on the local node where the cephadm bootstrap command is run.
For additional information on the bootstrapping process, see Bootstrapping a new storage cluster.
Procedure
Create a JSON file to authenticate against the container registry as follows:
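A sketch of the registry.json contents, assuming you authenticate against registry.redhat.io; substitute your own service account credentials:

{
 "url":"registry.redhat.io",
 "username":"<registry_username>",
 "password":"<registry_password>"
}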
Create a cluster-spec.yaml that adds the nodes to the RHCS cluster and also sets specific labels for where the services should run, following Table 3.1.
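A condensed sketch of cluster-spec.yaml, assuming the cephadm host and service spec format; the addresses are placeholders, and ceph2, ceph3, ceph5, and ceph6 follow the same host pattern with their labels and datacenters from Table 3.1:

service_type: host
addr: 10.0.40.78
hostname: ceph1
location:
  root: default
  datacenter: DC1
labels:
  - osd
  - mon
  - mgr
---
service_type: host
addr: 10.0.40.81
hostname: ceph4
location:
  root: default
  datacenter: DC2
labels:
  - osd
  - mon
  - mgr
---
# ...repeat host entries for ceph2, ceph3, ceph5, ceph6 per Table 3.1...
service_type: host
addr: 10.0.40.84
hostname: ceph7
location:
  root: default
  datacenter: DC3
labels:
  - mon
---
service_type: mon
placement:
  label: "mon"
---
service_type: mgr
service_name: mgr
placement:
  label: "mgr"
---
service_type: osd
service_id: all-available-devices
placement:
  label: "osd"
spec:
  data_devices:
    all: true
---
service_type: mds
service_id: cephfs
placement:
  label: "mds"
---
service_type: rgw
service_id: objectgw
service_name: rgw.objectgw
placement:
  count: 2
  label: "rgw"
spec:
  rgw_frontend_port: 8080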
Retrieve the IP for the NIC with the RHCS public network configured from the bootstrap node. After substituting 10.0.40.0 with the subnet that you have defined in your ceph public network, execute the following command.

$ ip a | grep 10.0.40

Example output:
10.0.40.78
Run the cephadm bootstrap command as the root user on the node that will be the initial Monitor node in the cluster. The IP_ADDRESS option is the node's IP address that you are using to run the cephadm bootstrap command.

Note: If you have configured a different user instead of root for passwordless SSH access, then use the --ssh-user= flag with the cephadm bootstrap command.

$ cephadm bootstrap --ssh-user=deployment-user --mon-ip 10.0.40.78 --apply-spec /root/cluster-spec.yaml --registry-json /root/registry.json

Important: If the local node uses fully-qualified domain names (FQDN), then add the --allow-fqdn-hostname option to cephadm bootstrap on the command line.

Once the bootstrap finishes, the command prints a summary that includes the Ceph Dashboard URL and the admin credentials for the new cluster.
Verify the status of Red Hat Ceph Storage cluster deployment using the Ceph CLI client from ceph1:
$ ceph -s
Note: It may take several minutes for all the services to start. It is normal to get a global recovery event while you don't have any OSDs configured.
You can use ceph orch ps and ceph orch ls to further check the status of the services.

Verify whether all the nodes are part of the cephadm cluster.

$ ceph orch host ls

Example output:
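Representative output, assuming the labels assigned in cluster-spec.yaml (the ADDR column will show your environment's addresses):

HOST   ADDR   LABELS
ceph1  ceph1  _admin osd mon mgr
ceph2  ceph2  osd mon
ceph3  ceph3  osd mds rgw
ceph4  ceph4  osd mon mgr
ceph5  ceph5  osd mon
ceph6  ceph6  osd mds rgw
ceph7  ceph7  mon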
Note: You can run Ceph commands directly from the host because ceph1 was configured in the cephadm-ansible inventory as part of the [admin] group. The Ceph admin keys were copied to the host during the cephadm bootstrap process.

Check the current placement of the Ceph monitor services on the datacenters.
$ ceph orch ps | grep mon | awk '{print $1 " " $2}'

Example output:
mon.ceph1 ceph1
mon.ceph2 ceph2
mon.ceph4 ceph4
mon.ceph5 ceph5
mon.ceph7 ceph7

Check the current placement of the Ceph manager services on the datacenters.
$ ceph orch ps | grep mgr | awk '{print $1 " " $2}'

Example output:
mgr.ceph2.ycgwyz ceph2
mgr.ceph5.kremtt ceph5

Check the ceph osd crush map layout to ensure that each host has one OSD configured and its status is UP. Also, double-check that each node is under the right datacenter bucket, as specified in Table 3.1.

$ ceph osd tree

Example output:
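A representative tree, with illustrative weights, showing one OSD per host and the hosts under their datacenter buckets:

ID   CLASS  WEIGHT   TYPE NAME           STATUS  REWEIGHT  PRI-AFF
 -1         0.87900  root default
-16         0.43950      datacenter DC1
 -3         0.14650          host ceph1
  0    ssd  0.14650              osd.0       up   1.00000  1.00000
 -5         0.14650          host ceph2
  1    ssd  0.14650              osd.1       up   1.00000  1.00000
 -7         0.14650          host ceph3
  2    ssd  0.14650              osd.2       up   1.00000  1.00000
-17         0.43950      datacenter DC2
 -9         0.14650          host ceph4
  3    ssd  0.14650              osd.3       up   1.00000  1.00000
-11         0.14650          host ceph5
  4    ssd  0.14650              osd.4       up   1.00000  1.00000
-13         0.14650          host ceph6
  5    ssd  0.14650              osd.5       up   1.00000  1.00000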
Create and enable a new RBD block pool.
$ ceph osd pool create rbdpool 32 32
$ ceph osd pool application enable rbdpool rbd

Note: The number 32 at the end of the command is the number of PGs assigned to this pool. The number of PGs can vary depending on several factors, like the number of OSDs in the cluster and the expected percentage used of the pool. You can use the following calculator to determine the number of PGs needed: Ceph Placement Groups (PGs) per Pool Calculator.
Verify that the RBD pool has been created.
$ ceph osd lspools | grep rbdpool

Example output:
3 rbdpool
Verify that the MDS services are active and have located one service on each datacenter.
$ ceph orch ps | grep mds

Example output:
mds.cephfs.ceph3.cjpbqo  ceph3  running (17m)  117s ago  17m  16.1M  -  16.2.9
mds.cephfs.ceph6.lqmgqt  ceph6  running (17m)  117s ago  17m  16.1M  -  16.2.9

Create the CephFS volume.
$ ceph fs volume create cephfs

Note: The ceph fs volume create command also creates the needed data and meta CephFS pools. For more information, see Configuring and Mounting Ceph File Systems.

Check the Ceph status to verify how the MDS daemons have been deployed. Ensure that the state is active, where ceph6 is the primary MDS for this filesystem and ceph3 is the secondary MDS.

$ ceph fs status
Verify that RGW services are active.
$ ceph orch ps | grep rgw

Example output:
rgw.objectgw.ceph3.kkmxgb  ceph3  *:8080  running (7m)  3m ago  7m  52.7M  -  16.2.9
rgw.objectgw.ceph6.xmnpah  ceph6  *:8080  running (7m)  3m ago  7m  53.3M  -  16.2.9
Chapter 4. Configuring Red Hat Ceph Storage stretch cluster
Once the Red Hat Ceph Storage cluster is fully deployed using cephadm, use the following procedure to configure the stretch cluster mode. The new stretch mode is designed to handle the 2-site case.
Procedure
Check the current election strategy being used by the monitors with the ceph mon dump command. By default in a Ceph cluster, the election strategy is set to classic.
$ ceph mon dump | grep election_strategy

Example output:
dumped monmap epoch 9
election_strategy: 1

Change the monitor election strategy to connectivity.
$ ceph mon set election_strategy connectivity

Run the previous ceph mon dump command again to verify the election_strategy value.
$ ceph mon dump | grep election_strategy

Example output:
dumped monmap epoch 10
election_strategy: 3

To know more about the different election strategies, see Configuring monitor election strategy.
Set the location for all our Ceph monitors:
$ ceph mon set_location ceph1 datacenter=DC1
$ ceph mon set_location ceph2 datacenter=DC1
$ ceph mon set_location ceph4 datacenter=DC2
$ ceph mon set_location ceph5 datacenter=DC2
$ ceph mon set_location ceph7 datacenter=DC3

Verify that each monitor has its appropriate location.
$ ceph mon dump

Example output:
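Representative output (fsid, timestamps, and addresses are placeholders), showing each monitor tagged with its datacenter:

epoch 19
fsid <fsid>
last_changed ...
created ...
min_mon_release 16 (pacific)
election_strategy: 3
0: [v2:10.0.40.78:3300/0,v1:10.0.40.78:6789/0] mon.ceph1; crush_location {datacenter=DC1}
1: [v2:10.0.40.79:3300/0,v1:10.0.40.79:6789/0] mon.ceph2; crush_location {datacenter=DC1}
2: [v2:10.0.40.81:3300/0,v1:10.0.40.81:6789/0] mon.ceph4; crush_location {datacenter=DC2}
3: [v2:10.0.40.82:3300/0,v1:10.0.40.82:6789/0] mon.ceph5; crush_location {datacenter=DC2}
4: [v2:10.0.40.84:3300/0,v1:10.0.40.84:6789/0] mon.ceph7; crush_location {datacenter=DC3}
dumped monmap epoch 19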
Create a CRUSH rule that makes use of this OSD crush topology by installing the ceph-base RPM package, in order to use the crushtool command:

$ dnf -y install ceph-base

To know more about CRUSH rulesets, see Ceph CRUSH ruleset.
Get the compiled CRUSH map from the cluster:
$ ceph osd getcrushmap > /etc/ceph/crushmap.bin

Decompile the CRUSH map and convert it to a text file in order to be able to edit it:
$ crushtool -d /etc/ceph/crushmap.bin -o /etc/ceph/crushmap.txt

Add the following rule to the CRUSH map by editing the text file /etc/ceph/crushmap.txt at the end of the file.

$ vim /etc/ceph/crushmap.txt
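The rule to add, assembled from the field-by-field description that follows:

rule stretch_rule {
        id 1
        type replicated
        min_size 1
        max_size 10
        step take DC1
        step chooseleaf firstn 2 type host
        step emit
        step take DC2
        step chooseleaf firstn 2 type host
        step emit
}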
Note: The rule id has to be unique. In this example, we only have one more crush rule with id 0, hence we are using id 1. If your deployment has more rules created, then use the next free id.

The CRUSH rule declared contains the following information:
Rule name:
- Description: A unique name identifying the rule.
- Value: stretch_rule

id:
- Description: A unique whole number for identifying the rule.
- Value: 1

type:
- Description: Describes a rule for either a storage drive replicated or erasure-coded.
- Value: replicated

min_size:
- Description: If a pool makes fewer replicas than this number, CRUSH will not select this rule.
- Value: 1

max_size:
- Description: If a pool makes more replicas than this number, CRUSH will not select this rule.
- Value: 10

step take DC1:
- Description: Takes the bucket name DC1 and begins iterating down the tree.

step chooseleaf firstn 2 type host:
- Description: Selects the given number of buckets of the given type; in this case, two different hosts located in DC1.

step emit:
- Description: Outputs the current value and empties the stack. Typically used at the end of a rule, but it may also be used to pick from different trees in the same rule.

step take DC2:
- Description: Takes the bucket name DC2 and begins iterating down the tree.

step chooseleaf firstn 2 type host:
- Description: Selects the given number of buckets of the given type; in this case, two different hosts located in DC2.

step emit:
- Description: Outputs the current value and empties the stack.
Compile the new CRUSH map from the file /etc/ceph/crushmap.txt and convert it to a binary file called /etc/ceph/crushmap2.bin:

$ crushtool -c /etc/ceph/crushmap.txt -o /etc/ceph/crushmap2.bin

Inject the new crushmap we created back into the cluster:
$ ceph osd setcrushmap -i /etc/ceph/crushmap2.bin

Example output:
17
Note: The number 17 is a counter; it will increase (18, 19, and so on) depending on the changes you make to the crush map.
Verify that the stretch rule you created is now available for use.
$ ceph osd crush rule ls

Example output:
replicated_rule
stretch_rule

Enable the stretch cluster mode.
$ ceph mon enable_stretch_mode ceph7 stretch_rule datacenter

In this example, ceph7 is the arbiter node, stretch_rule is the crush rule we created in the previous step, and datacenter is the dividing bucket.

Verify that all our pools are using the stretch_rule CRUSH rule we have created in our Ceph cluster:

$ for pool in $(rados lspools);do echo -n "Pool: ${pool}; ";ceph osd pool get ${pool} crush_rule;done

Example output:
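Representative output, assuming the default pool names created for CephFS and RGW plus the rbdpool from earlier:

Pool: device_health_metrics; crush_rule: stretch_rule
Pool: cephfs.cephfs.meta; crush_rule: stretch_rule
Pool: cephfs.cephfs.data; crush_rule: stretch_rule
Pool: .rgw.root; crush_rule: stretch_rule
Pool: default.rgw.log; crush_rule: stretch_rule
Pool: default.rgw.control; crush_rule: stretch_rule
Pool: default.rgw.meta; crush_rule: stretch_rule
Pool: rbdpool; crush_rule: stretch_rule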
This indicates that a working Red Hat Ceph Storage stretched cluster with arbiter mode is now available.
Chapter 5. Installing OpenShift Data Foundation on managed clusters
In order to configure storage replication between the two OpenShift Container Platform clusters, OpenShift Data Foundation must be installed first on each managed cluster as follows:
- Install the latest OpenShift Data Foundation on each of the managed clusters.
After installing the operator, create a StorageSystem using the option Connect with external storage platform.

For detailed instructions, refer to Deploying OpenShift Data Foundation in external mode.

Validate the successful deployment of OpenShift Data Foundation on each managed cluster with the following command:
$ oc get storagecluster -n openshift-storage ocs-external-storagecluster -o jsonpath='{.status.phase}{"\n"}'

For the Multicloud Gateway (MCG):
$ oc get noobaa -n openshift-storage noobaa -o jsonpath='{.status.phase}{"\n"}'
If the status result is Ready for both queries on the Primary managed cluster and the Secondary managed cluster, then continue with the next step.
The successful installation of OpenShift Data Foundation can also be validated in the OpenShift Container Platform Web Console by navigating to Storage and then Data Foundation.
Chapter 6. Installing OpenShift DR Hub Operator on Hub cluster
Procedure
- On the Hub cluster, navigate to OperatorHub and use the search filter for OpenShift DR Hub Operator.
- Follow the screen instructions to install the operator into the project openshift-dr-system.

Verify that the operator Pod is in Running state using the following command:

$ oc get pods -n openshift-dr-system

Example output:
NAME                                 READY   STATUS    RESTARTS   AGE
ramen-hub-operator-898c5989b-96k65   2/2     Running   0          4m14s
Chapter 7. Configuring managed and hub clusters
7.1. Configuring SSL access between S3 endpoints
Configure network (SSL) access between the s3 endpoints so that metadata can be stored on the alternate cluster in an MCG object bucket using a secure transport protocol. In addition, the Hub cluster needs to verify access to the object buckets.
If all of your OpenShift clusters are deployed using a signed and valid set of certificates for your environment then this section can be skipped.
Procedure
Extract the ingress certificate for the Primary managed cluster and save the output to primary.crt.

$ oc get cm default-ingress-cert -n openshift-config-managed -o jsonpath="{['data']['ca-bundle\.crt']}" > primary.crt

Extract the ingress certificate for the Secondary managed cluster and save the output to secondary.crt.

$ oc get cm default-ingress-cert -n openshift-config-managed -o jsonpath="{['data']['ca-bundle\.crt']}" > secondary.crt

Create a new ConfigMap to hold the remote cluster's certificate bundle with filename cm-clusters-crt.yaml on the Primary managed cluster, Secondary managed cluster, and the Hub cluster.

Note: There could be more or fewer than three certificates for each cluster, as shown in this example file. Also, ensure that the certificate contents are correctly indented after you copy and paste from the primary.crt and secondary.crt files that were created before.
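A sketch of cm-clusters-crt.yaml; the ConfigMap name user-ca-bundle and the openshift-config namespace are what the proxy patch below expects, and the number of certificate blocks depends on your clusters:

apiVersion: v1
kind: ConfigMap
metadata:
  name: user-ca-bundle
  namespace: openshift-config
data:
  ca-bundle.crt: |
    -----BEGIN CERTIFICATE-----
    <contents of primary.crt>
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
    <contents of secondary.crt>
    -----END CERTIFICATE-----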
Create the ConfigMap file on the Primary managed cluster, Secondary managed cluster, and the Hub cluster.

$ oc create -f cm-clusters-crt.yaml

Example output:
configmap/user-ca-bundle created
Important: For the Hub cluster to verify access to the object buckets using the DRPolicy resource, the same ConfigMap cm-clusters-crt.yaml must also be created on the Hub cluster.

Patch the default proxy resource on the Primary managed cluster, Secondary managed cluster, and the Hub cluster.
$ oc patch proxy cluster --type=merge --patch='{"spec":{"trustedCA":{"name":"user-ca-bundle"}}}'

Example output:
proxy.config.openshift.io/cluster patched
7.2. Creating object buckets and S3StoreProfiles
OpenShift DR requires S3 stores to store relevant cluster data of a workload from the managed clusters and to orchestrate a recovery of the workload during failover or relocate actions. These instructions are applicable for creating the necessary object bucket(s) using Multicloud Object Gateway (MCG). MCG should already be installed as a result of installing OpenShift Data Foundation.
Procedure
Create an MCG object bucket, or OBC, to be used for storing persistent volume metadata on both the Primary and Secondary managed clusters.
Copy the following YAML file to filename odrbucket.yaml.
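A sketch of odrbucket.yaml, assuming the standard ObjectBucketClaim shape and the MCG object bucket storage class:

apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: odrbucket
  namespace: openshift-storage
spec:
  generateBucketName: "odrbucket"
  storageClassName: openshift-storage.noobaa.io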
Create an MCG bucket odrbucket on both the Primary managed cluster and the Secondary managed cluster.

$ oc create -f odrbucket.yaml

Example output:
objectbucketclaim.objectbucket.io/odrbucket created
Extract the odrbucket OBC access key for each managed cluster as their base-64 encoded values by using the following command.

$ oc get secret odrbucket -n openshift-storage -o jsonpath='{.data.AWS_ACCESS_KEY_ID}{"\n"}'

Example output:
cFpIYTZWN1NhemJjbEUyWlpwN1E=
Extract the odrbucket OBC secret key for each managed cluster as their base-64 encoded values by using the following command.

$ oc get secret odrbucket -n openshift-storage -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}{"\n"}'

Example output:
V1hUSnMzZUoxMHRRTXdGMU9jQXRmUlAyMmd5bGwwYjNvMHprZVhtNw==
The access key and secret key must be retrieved for the odrbucket OBC on both the Primary managed cluster and Secondary managed cluster.
7.3. Creating S3 secrets for Multicloud Object Gateway object buckets
Now that the necessary information has been extracted for the object buckets in the previous section, new Secrets must be created on the Hub cluster. These new Secrets store the MCG object bucket access key and secret key for both managed clusters on the Hub cluster.
Procedure
Copy the following S3 secret YAML format for the Primary managed cluster to filename odr-s3secret-primary.yaml.
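A sketch of odr-s3secret-primary.yaml, assuming the secret lives in the openshift-dr-system namespace; the data values are the base-64 encoded keys extracted earlier from the Primary managed cluster:

apiVersion: v1
kind: Secret
metadata:
  name: odr-s3secret-primary
  namespace: openshift-dr-system
data:
  AWS_ACCESS_KEY_ID: <primary cluster base-64 encoded access key>
  AWS_SECRET_ACCESS_KEY: <primary cluster base-64 encoded secret access key>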
Create this secret on the Hub cluster.

$ oc create -f odr-s3secret-primary.yaml

Example output:
secret/odr-s3secret-primary created
Copy the following S3 secret YAML format for the Secondary managed cluster to filename odr-s3secret-secondary.yaml.
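The secondary secret mirrors the primary one, assuming the same namespace and substituting the Secondary managed cluster's keys:

apiVersion: v1
kind: Secret
metadata:
  name: odr-s3secret-secondary
  namespace: openshift-dr-system
data:
  AWS_ACCESS_KEY_ID: <secondary cluster base-64 encoded access key>
  AWS_SECRET_ACCESS_KEY: <secondary cluster base-64 encoded secret access key>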
Create this secret on the Hub cluster.

$ oc create -f odr-s3secret-secondary.yaml

Example output:
secret/odr-s3secret-secondary created
The values for the access key and secret key must be base-64 encoded. The encoded values for the keys were retrieved in the prior section.
7.4. Configure OpenShift DR Hub operator s3StoreProfiles
To find the s3CompatibleEndpoint or route for MCG, execute the following command on the Primary managed cluster and the Secondary managed cluster:
Procedure
Search for the external S3 endpoint s3CompatibleEndpoint or route for MCG on each managed cluster by using the following command.
$ oc get route s3 -n openshift-storage -o jsonpath --template="https://{.spec.host}{'\n'}"

Example output:
https://s3-openshift-storage.apps.perf1.example.com
Important: The unique s3CompatibleEndpoint routes, s3-openshift-storage.apps.<primary clusterID>.<baseDomain> and s3-openshift-storage.apps.<secondary clusterID>.<baseDomain>, must be retrieved for the Primary managed cluster and Secondary managed cluster respectively.

Search for the odrbucket OBC exact bucket name.

$ oc get configmap odrbucket -n openshift-storage -o jsonpath='{.data.BUCKET_NAME}{"\n"}'

Example output:
odrbucket-2f2d44e4-59cb-4577-b303-7219be809dcd
Important: The unique s3Bucket names odrbucket-<your value1> and odrbucket-<your value2> must be retrieved on the Primary managed cluster and Secondary managed cluster respectively.
Modify the ConfigMap ramen-hub-operator-config on the Hub cluster to add the new content.

$ oc edit configmap ramen-hub-operator-config -n openshift-dr-system

Add the following new content starting at s3StoreProfiles to the ConfigMap on the Hub cluster only.
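A sketch of the added section; the profile names s3-primary and s3-secondary are placeholders (any unique names work, but they must match the names referenced by the DRPolicy in the next chapter):

data:
  ramen_manager_config.yaml: |
    ...
    s3StoreProfiles:
    - s3ProfileName: s3-primary
      s3Bucket: odrbucket-<your value1>
      s3CompatibleEndpoint: https://s3-openshift-storage.apps.<primary clusterID>.<baseDomain>
      s3Region: primary
      s3SecretRef:
        name: odr-s3secret-primary
        namespace: openshift-dr-system
    - s3ProfileName: s3-secondary
      s3Bucket: odrbucket-<your value2>
      s3CompatibleEndpoint: https://s3-openshift-storage.apps.<secondary clusterID>.<baseDomain>
      s3Region: secondary
      s3SecretRef:
        name: odr-s3secret-secondary
        namespace: openshift-dr-system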
Chapter 8. Creating Disaster Recovery Policy on Hub cluster
OpenShift DR uses Disaster Recovery Policy (DRPolicy) resources (cluster scoped) on the RHACM hub cluster to deploy, failover, and relocate workloads across managed clusters.
Prerequisites
- Ensure that there is a set of two clusters.
- Ensure that each cluster in the policy is assigned an S3 profile name, which is configured using the ConfigMap of the OpenShift DR cluster and hub operators.
Procedure
- On the Hub cluster, navigate to Installed Operators in the openshift-dr-system project and click on OpenShift DR Hub Operator. You should see two available APIs, DRPolicy and DRPlacementControl.
- Click Create instance for DRPolicy and click YAML view.
- Save the following YAML to filename drpolicy.yaml after replacing <cluster1> and <cluster2> with the correct names of your managed clusters in RHACM. Replace <string_value> with any value (for example, metro).

Note: There is no need to specify a namespace to create this resource because DRPolicy is a cluster-scoped resource.
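A sketch of drpolicy.yaml, assuming the clusterFence field used during failover in Chapter 12 and the S3 profile names from the hub ConfigMap:

apiVersion: ramendr.openshift.io/v1alpha1
kind: DRPolicy
metadata:
  name: odr-policy
spec:
  drClusterSet:
  - name: <cluster1>
    region: <string_value>
    s3ProfileName: s3-primary
    clusterFence: Unfenced
  - name: <cluster2>
    region: <string_value>
    s3ProfileName: s3-secondary
    clusterFence: Unfenced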
- Copy the contents of your unique drpolicy.yaml file into the YAML view. You must completely replace the original content.
- Click Create on the YAML view screen.
To validate that the DRPolicy is created successfully and that the MCG object buckets can be accessed using the Secrets created earlier, run this command on the Hub cluster:
$ oc get drpolicy odr-policy -n openshift-dr-system -o jsonpath='{.status.conditions[].reason}{"\n"}'

Example output:
Succeeded
Chapter 9. Enabling automatic install of OpenShift DR cluster operator
Once the DRPolicy is created successfully, the OpenShift DR Cluster operator can be installed on the Primary managed cluster and Secondary managed cluster in the openshift-dr-system namespace.
Procedure
Edit the ConfigMap ramen-hub-operator-config on the Hub cluster and modify the value of deploymentAutomationEnabled=false to true as follows:

$ oc edit configmap ramen-hub-operator-config -n openshift-dr-system
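A sketch of the relevant portion of the edited ConfigMap; only deploymentAutomationEnabled changes, and the surrounding drClusterOperator fields are left elided:

data:
  ramen_manager_config.yaml: |
    ...
    drClusterOperator:
      deploymentAutomationEnabled: true
      ...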
Verify that the installation was successful on the Primary managed cluster and the Secondary managed cluster with the following command:

$ oc get csv,pod -n openshift-dr-system

Example output:
NAME                                                                      DISPLAY                         VERSION   REPLACES   PHASE
clusterserviceversion.operators.coreos.com/odr-cluster-operator.v4.10.0   Openshift DR Cluster Operator   4.10.0               Succeeded

NAME                                             READY   STATUS    RESTARTS   AGE
pod/ramen-dr-cluster-operator-5564f9d669-f6lbc   2/2     Running   0          5m32s

You can also go to OperatorHub on each of the managed clusters and verify whether the OpenShift DR Cluster Operator is installed.
Chapter 10. Enabling automatic transfer of s3Secrets to managed clusters
Follow this procedure to enable auto transfer of s3Secrets to the required OpenShift DR cluster components. It updates the OpenShift DR cluster namespace with the s3Secrets that are required to access the s3Profiles in the OpenShift DR config map.
Procedure
Edit the ConfigMap ramen-hub-operator-config on the Hub cluster to add s3SecretDistributionEnabled=true as follows:

$ oc edit configmap ramen-hub-operator-config -n openshift-dr-system
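A sketch of the added setting, assuming it sits in the same drClusterOperator block edited in the previous chapter:

data:
  ramen_manager_config.yaml: |
    ...
    drClusterOperator:
      deploymentAutomationEnabled: true
      s3SecretDistributionEnabled: true
      ...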
Verify that the transfer of secrets was successful by running this command on both managed clusters.

$ oc get secrets -n openshift-dr-system | grep Opaque

Example output:
8b3fb9ed90f66808d988c7edfa76eba35647092   Opaque   2   11m
af5f82f21f8f77faf3de2553e223b535002e480   Opaque   2   11m
Chapter 11. Creating a sample application
In order to test failover from the Primary managed cluster to the Secondary managed cluster and back again, we need a simple application. Use the sample application called busybox as an example.
Procedure
Create a namespace or project on the Hub cluster for a busybox sample application.

$ oc new-project busybox-sample

Note: A different project name other than busybox-sample can be used if desired. Make sure when deploying the sample application via the Advanced Cluster Manager console to use the same project name as what is created in this step.

Create DRPlacementControl resource
DRPlacementControl is an API available after the OpenShift DR Hub Operator is installed on the Hub cluster. It is broadly an Advanced Cluster Manager PlacementRule reconciler that orchestrates placement decisions based on data availability across clusters that are part of a DRPolicy.
- On the Hub cluster, navigate to Installed Operators in the busybox-sample project and click on OpenShift DR Hub Operator. You should see two available APIs, DRPolicy and DRPlacementControl.
- Create an instance for DRPlacementControl and then go to the YAML view. Make sure the busybox-sample project is selected.
- Copy and save the following YAML to filename busybox-drpc.yaml after replacing <cluster1> with the correct name of your managed cluster in Advanced Cluster Manager.
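A sketch of busybox-drpc.yaml, assuming the PVC label selector used by the busybox sample (appname: busybox) and the placement rule created in the next step:

apiVersion: ramendr.openshift.io/v1alpha1
kind: DRPlacementControl
metadata:
  name: busybox-drpc
  labels:
    app: busybox-sample
spec:
  preferredCluster: <cluster1>
  drPolicyRef:
    name: odr-policy
  placementRef:
    kind: PlacementRule
    name: busybox-placement
  pvcSelector:
    matchLabels:
      appname: busybox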
Copy the contents of your unique
busybox-drpc.yamlfile into the YAML view (completely replacing original content). Click Create on the YAML view screen.
You can also create this resource using the following CLI command:
$ oc create -f busybox-drpc.yaml -n busybox-sample

Example output:
drplacementcontrol.ramendr.openshift.io/busybox-drpc created
Important: This resource must be created in the busybox-sample namespace (or whatever namespace you created earlier).
Create Placement Rule resource that defines the target clusters where resource templates can be deployed. Use placement rules to facilitate the multicluster deployment of your applications.
Copy and save the following YAML to filename busybox-placementrule.yaml.
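A sketch of busybox-placementrule.yaml, assuming the ramen scheduler hands placement decisions to OpenShift DR:

apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: busybox-placement
  labels:
    app: busybox-sample
spec:
  clusterConditions:
  - status: "True"
    type: ManagedClusterConditionAvailable
  clusterReplicas: 1
  schedulerName: ramen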
Create the Placement Rule resource for the busybox-sample application.

$ oc create -f busybox-placementrule.yaml -n busybox-sample

Example output:
placementrule.apps.open-cluster-management.io/busybox-placement created
Important: This resource must be created in the busybox-sample namespace (or whatever namespace you created earlier).
Create sample application using RHACM console
Log in to the RHACM console using your OpenShift credentials if not already logged in.
$ oc get route multicloud-console -n open-cluster-management -o jsonpath --template="https://{.spec.host}/multicloud/applications{'\n'}"

Example output:
https://multicloud-console.apps.perf3.example.com/multicloud/applications
- Navigate to Applications and click Create application.
- Select type as Subscription.
- Enter your application Name (for example, busybox) and Namespace (for example, busybox-sample).
In Repository location for resources section, select Repository type
Git. Enter the Git repository URL for the sample application, the github Branch and Path where the resources
busyboxPod and PVC will be created.Use the sample application repository as
https://github.com/RamenDR/ocm-ramen-sampleswhere the Branch ismainand Path isbusybox-odr-metro.- Scroll down the form to the section Select clusters to deploy to and click Select an existing placement configuration.
- Select an Existing Placement Rule (for example, busybox-placement) from the drop-down list.
- Click Save.
On the follow-on screen, scroll to the bottom. You should see all green checkmarks on the application topology.
Note: To get more information, click on any of the topology elements and a window will appear on the right of the topology view.
Validating the sample application deployment and replication.
Now that the busybox application has been deployed to your preferred cluster (specified in the DRPlacementControl), the deployment can be validated.

Login to your managed cluster where busybox was deployed by RHACM.

$ oc get pods,pvc -n busybox-sample

Example output:
NAME          READY   STATUS    RESTARTS   AGE
pod/busybox   1/1     Running   0          6m

NAME                                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
persistentvolumeclaim/busybox-pvc   Bound    pvc-a56c138a-a1a9-4465-927f-af02afbbff37   1Gi        RWO            ocs-storagecluster-ceph-rbd   6m

Verify that the replication resources are also created for the busybox PVC.

$ oc get volumereplicationgroup -n busybox-sample

Example output:
NAME                                                        AGE
volumereplicationgroup.ramendr.openshift.io/busybox-drpc   6m
11.1. Deleting sample application
You can delete the sample application busybox using the RHACM console.
The instructions to delete the sample application should not be executed until the failover and failback (relocate) testing is completed and the application is ready to be removed from RHACM and the managed clusters.
Procedure
- On the RHACM console, navigate to Applications.
- Search for the sample application to be deleted (for example, busybox).
- Click the Action Menu (⋮) next to the application you want to delete.
Click Delete application.
When Delete application is selected, a new screen appears asking if the application-related resources should also be deleted.
- Select Remove application related resources checkbox to delete the Subscription and PlacementRule.
- Click Delete. This will delete the busybox application on the Primary managed cluster (or whatever cluster the application was running on).
In addition to the resources deleted using the RHACM console, the DRPlacementControl must also be deleted immediately after deleting the busybox application.

- Login to the OpenShift Web console for the Hub cluster and navigate to Installed Operators for the project busybox-sample.
- Click OpenShift DR Hub Operator and then click the DRPlacementControl tab.
- Click the Action Menu (⋮) next to the busybox application DRPlacementControl that you want to delete.
- Click Delete DRPlacementControl.
- Click Delete.
This process can be used to delete any application with a DRPlacementControl resource. The DRPlacementControl resource can also be deleted in the application namespace using CLI.
Chapter 12. Application failover between managed clusters
This section provides instructions on how to failover the busybox sample application. The failover method for Metro-DR is application based. Each application that is to be protected in this manner must have a corresponding DRPlacementControl resource and a PlacementRule resource created in the application namespace as shown in the Create Sample Application for DR testing section.
Procedure
Create NetworkFence resource and enable Fencing.
Specify the list of CIDR blocks or IP addresses on which network fencing operation will be performed. In our case, this will be the EXTERNAL-IP of every OpenShift node in the cluster that needs to be fenced from using the external RHCS cluster.
Execute this command to get the IP addresses for the Primary managed cluster.
$ oc get nodes -o jsonpath='{range .items[*]}{.status.addresses[?(@.type=="ExternalIP")].address}{"\n"}{end}'

Example output:
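Representative output; these six addresses are placeholders for the EXTERNAL-IP of each node:

10.70.56.118
10.70.56.193
10.70.56.154
10.70.56.242
10.70.56.136
10.70.56.99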
Note: Collect the current IP addresses of all OpenShift nodes before there is a site outage. A best practice is to create the NetworkFence YAML file in advance and keep it available and up to date for a disaster recovery event.
The IP addresses for all nodes will be added to the NetworkFence example resource as shown below. This example is for six nodes, but there could be more nodes in your cluster.
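A sketch of the NetworkFence resource for six nodes; the driver name, provisioner secret, and clusterID shown are assumptions based on common OpenShift Data Foundation external mode conventions, so verify them against your own deployment:

apiVersion: csiaddons.openshift.io/v1alpha1
kind: NetworkFence
metadata:
  name: network-fence-<cluster1>
spec:
  driver: openshift-storage.rbd.csi.ceph.com   # RBD CSI driver name (assumed)
  cidrs:
    - <IP_Address1>/32
    - <IP_Address2>/32
    - <IP_Address3>/32
    - <IP_Address4>/32
    - <IP_Address5>/32
    - <IP_Address6>/32
  secret:
    name: rook-csi-rbd-provisioner    # provisioner secret (assumed)
    namespace: openshift-storage
  parameters:
    clusterID: openshift-storage      # clusterID (assumed)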
For the YAML example above, modify the IP addresses and replace <cluster1> with the cluster name found in RHACM for the Primary managed cluster. Save this to the filename network-fence-<cluster1>.yaml.

Important: The NetworkFence must be created from the opposite managed cluster from where the application is currently running, prior to failover. In this case, that is the Secondary managed cluster.
$ oc create -f network-fence-<cluster1>.yaml

Example output:
networkfences.csiaddons.openshift.io/network-fence-ocp4perf1 created

Important: After the NetworkFence is created, all communication from applications to the OpenShift Data Foundation storage will fail, and some Pods will be in an unhealthy state (for example, CreateContainerError, CrashLoopBackOff) on the cluster that is now fenced.
On the same cluster where the NetworkFence was created, verify that the status is Succeeded. Replace <cluster1> with the correct cluster name.
$ export NETWORKFENCE=network-fence-<cluster1>
$ oc get networkfences.csiaddons.openshift.io/$NETWORKFENCE -n openshift-dr-system -o jsonpath='{.status.result}{"\n"}'

Example output:
Succeeded
Modify the DRPolicy for the fenced cluster.

Edit the DRPolicy on the Hub cluster and change <cluster1> (for example, ocp4perf1) from Unfenced to ManuallyFenced.

$ oc edit drpolicy odr-policy
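The relevant drClusterSet section after the edit should look similar to the following sketch; the field layout follows the ODR Developer Preview DRPolicy and the surrounding values are illustrative:

spec:
  drClusterSet:
  - clusterFence: ManuallyFenced   # changed from Unfenced
    name: ocp4perf1                # <cluster1>, the Primary managed cluster
    ...
  - clusterFence: Unfenced
    name: ocp4perf2                # the Secondary managed cluster
    ...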
Example output:
drpolicy.ramendr.openshift.io/odr-policy edited
Validate that the DRPolicy status in the Hub cluster has changed to Fenced for the Primary managed cluster.
$ oc get drpolicies.ramendr.openshift.io odr-policy -o yaml | grep -A 6 drClusters
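Example output (an illustrative sketch; exact fields may vary with your ODR version, but the Primary managed cluster should report Fenced):

drClusters:
    ocp4perf1:
      status: Fenced
    ocp4perf2:
      status: Unfenced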
Modify the DRPlacementControl to failover.
- On the Hub cluster, navigate to Installed Operators and then click OpenShift DR Hub Operator.
- Click the DRPlacementControl tab.
- Click DRPC busybox-drpc and then the YAML view.
- Add the action and failoverCluster details as shown in the example below. The failoverCluster should be the ACM cluster name for the Secondary managed cluster.
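A minimal sketch of the fields to add, where ocp4perf2 is the example Secondary managed cluster name used in this guide:

spec:
  ...
  action: Failover
  failoverCluster: ocp4perf2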
- Click Save.
Verify that the application busybox is now running in the Secondary managed cluster, the failover cluster ocp4perf2 specified in the YAML file.

$ oc get pods,pvc -n busybox-sample

Example output:
NAME          READY   STATUS    RESTARTS   AGE
pod/busybox   1/1     Running   0          35s

NAME                                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
persistentvolumeclaim/busybox-pvc   Bound    pvc-79f2a74d-6e2c-48fb-9ed9-666b74cfa1bb   5Gi        RWO            ocs-storagecluster-ceph-rbd   35s

Verify that busybox is no longer running on the Primary managed cluster.

$ oc get pods,pvc -n busybox-sample

Example output:
No resources found in busybox-sample namespace.
Be aware of known Metro-DR issues as documented in the Known Issues section of the Release Notes.
Chapter 13. Relocating an application between managed clusters
A relocation operation is very similar to a failover. Relocate is application based and uses the DRPlacementControl to trigger the relocation. The main difference for failback is that the application is first scaled down on the failoverCluster, so creating a NetworkFence is not required.
Procedure
Remove the NetworkFence resource and disable Fencing.

Before a failback or relocate action can succeed, the NetworkFence for the Primary managed cluster must be deleted.

Execute this command on the Secondary managed cluster, modifying <cluster1> to match the NetworkFence YAML filename created in the prior section.
$ oc delete -f network-fence-<cluster1>.yaml

Example output:
networkfence.csiaddons.openshift.io "network-fence-ocp4perf1" deleted

Reboot the OpenShift Container Platform nodes that were Fenced.

This step is required because some application Pods on the previously fenced cluster, in this case the Primary managed cluster, are in an unhealthy state (for example, CreateContainerError, CrashLoopBackOff). This is most easily fixed by rebooting all worker OpenShift nodes one at a time.
Note: The OpenShift Web Console dashboards and Overview page can also be used to assess the health of applications and the external storage. The detailed OpenShift Data Foundation dashboard is found by navigating to Storage → Data Foundation.
Verify that all Pods are in a healthy state by running this command on the Primary managed cluster after all OpenShift nodes have rebooted and are in a Ready status. The output for this query should be zero Pods.

$ oc get pods -A | egrep -v 'Running|Completed'

Example output:
NAMESPACE   NAME   READY   STATUS   RESTARTS   AGE

Important: If there are Pods still in an unhealthy status because of severed storage communication, troubleshoot and resolve them before continuing. Because the storage cluster is external to OpenShift, it also has to be properly recovered after a site outage for OpenShift applications to be healthy.
Modify the DRPolicy to Unfenced status.

In order for the ODR Hub operator to know that the NetworkFence has been removed for the Primary managed cluster, the DRPolicy must be modified for the newly Unfenced cluster.

Edit the DRPolicy on the Hub cluster and change <cluster1> (for example, ocp4perf1) from ManuallyFenced to Unfenced.

$ oc edit drpolicy odr-policy
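A sketch of the drClusterSet section after the edit, with illustrative values as before:

spec:
  drClusterSet:
  - clusterFence: Unfenced         # changed from ManuallyFenced
    name: ocp4perf1                # <cluster1>, the Primary managed cluster
    ...
  - clusterFence: Unfenced
    name: ocp4perf2                # the Secondary managed cluster
    ...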
Example output:
drpolicy.ramendr.openshift.io/odr-policy edited
Verify that the status of the DRPolicy in the Hub cluster has changed to Unfenced for the Primary managed cluster.
$ oc get drpolicies.ramendr.openshift.io odr-policy -o yaml | grep -A 6 drClusters
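Example output (an illustrative sketch; exact fields may vary with your ODR version, but both managed clusters should now report Unfenced):

drClusters:
    ocp4perf1:
      status: Unfenced
    ocp4perf2:
      status: Unfenced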
Modify the DRPlacementControl to failback.
- On the Hub cluster, navigate to Installed Operators and then click OpenShift DR Hub Operator.
- Click the DRPlacementControl tab.
- Click DRPC busybox-drpc and then the YAML view.
- Modify the action to Relocate, as shown in the example below.
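A minimal sketch of the change; the application fails back to the preferredCluster already defined in the DRPlacementControl spec:

spec:
  ...
  action: Relocate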
- Click Save.
Verify that the application busybox is now running in the Primary managed cluster. The failback is to the preferredCluster ocp4perf1 as specified in the YAML file, which is where the application was running before the failover operation.

$ oc get pods,pvc -n busybox-sample

Example output:
NAME          READY   STATUS    RESTARTS   AGE
pod/busybox   1/1     Running   0          60s

NAME                                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
persistentvolumeclaim/busybox-pvc   Bound    pvc-79f2a74d-6e2c-48fb-9ed9-666b74cfa1bb   5Gi        RWO            ocs-storagecluster-ceph-rbd   61s

Verify whether busybox is running in the Secondary managed cluster. The busybox application should no longer be running on this managed cluster.

$ oc get pods,pvc -n busybox-sample

Example output:
No resources found in busybox-sample namespace.
Be aware of known Metro-DR issues as documented in the Known Issues section of the Release Notes.