Chapter 6. Mirroring Ceph block devices
As a storage administrator, you can add another layer of redundancy to Ceph block devices by mirroring data images between Red Hat Ceph Storage clusters. Understanding and using Ceph block device mirroring can protect you against data loss, such as from a site failure. There are two configurations for mirroring Ceph block devices, one-way mirroring and two-way mirroring, and you can configure mirroring on pools and on individual images.
Prerequisites
- A minimum of two healthy running Red Hat Ceph Storage clusters.
- Network connectivity between the two storage clusters.
- Access to a Ceph client node for each Red Hat Ceph Storage cluster.
- A CephX user with administrator-level capabilities.
6.1. Ceph block device mirroring
RADOS Block Device (RBD) mirroring is a process of asynchronous replication of Ceph Block Device images between two or more Ceph storage clusters. By locating Ceph storage clusters in different geographic locations, RBD mirroring helps ensure high availability and disaster recovery by keeping a remote copy of RBD images.
The CRUSH hierarchies for the primary and secondary pools that are used for block device mirroring must have similar capacity and performance. You also need enough network bandwidth to avoid delays during mirroring. For example, for journal-based mirroring, if the average write speed to images in the primary cluster is X MB/s, the network must support at least N*X throughput (where N is the number of mirrored images), plus some extra bandwidth (Y%) for safety.
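To make the formula concrete with assumed numbers: with N = 10 mirrored images, an average write speed of X = 5 MB/s per image, and a Y = 20% safety margin, the link between the sites needs at least 10 * 5 * 1.2 = 60 MB/s of sustained throughput.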
The rbd-mirror daemon handles the synchronization by pulling changes from the primary image in one cluster and writing them to the mirrored image in another cluster.
6.1.1. Modes of RBD mirroring
There are two modes of RBD mirroring:
- Journal-based mirroring
- This mode uses the RBD journaling image feature to provide point-in-time, crash-consistent replication between clusters by first recording all write operations in a journal before applying them to the image. The remote cluster replays these journal entries to maintain a consistent, up-to-date replica.
Because every write operation involves writing to both the journal and the image, write latency may almost double when journaling is enabled.
- Snapshot-based mirroring
- This mode uses RBD mirror snapshots to replicate images by identifying and copying only the changed data and metadata between two snapshots. The remote cluster applies these differences to maintain a consistent replica. The RBD fast-diff image feature accelerates this process by quickly identifying modified data blocks without scanning the entire image. During a failover, the full set of snapshot deltas must be synchronized. Any partial updates are automatically rolled back to ensure image consistency.
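As an illustrative sketch, assuming a pool named data that is already enabled for mirroring in image mode, the replication mode is chosen per image when mirroring is enabled; journal is the default when no mode argument is given:

Example

[root@rbd-client ~]# rbd mirror image enable data/image1 journal
[root@rbd-client ~]# rbd mirror image enable data/image2 snapshot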
6.2. Configuring Ceph Block Device mirroring overview
The rbd-mirror daemon can run either on a single Ceph storage cluster for one-way mirroring, or on both Ceph storage clusters that participate in the mirroring relationship for two-way mirroring.
The rbd-mirror daemon is responsible for synchronizing images from one Ceph storage cluster to another by pulling changes from the remote, primary image and writing those changes to the local, non-primary image.
For RBD mirroring to work, either using one-way or two-way replication, a couple of assumptions are made:
- A pool with the same name exists on both storage clusters.
- For journal-based mirroring, a pool contains journal-enabled images that you want to mirror.
In one-way or two-way replication, each instance of rbd-mirror must be able to connect to the other Ceph storage cluster simultaneously. Also, the network must have sufficient bandwidth between the two data center sites to handle mirroring.
Mirroring is configured on a per-pool basis with mirror peering storage clusters. Ceph supports two mirroring modes, depending on the type of images in the pool.
- Pool Mode
- All images in a pool with the journaling feature enabled are mirrored.
- Image Mode
- Only a specific subset of images within a pool are mirrored. You must enable mirroring for each image separately.
Whether or not an image can be modified depends on its state:
- You can modify images in the primary state.
- You cannot modify images in the non-primary state.
Images are automatically promoted to primary when mirroring is enabled on an image. The promotion can happen:
- Implicitly by enabling mirroring in pool mode and only for journal-based images.
- Explicitly by enabling the mirroring of a specific image.
It is possible to demote primary images and promote non-primary images.
6.2.1. One-way Replication
One-way mirroring implies that a primary image or pool of images in one storage cluster gets replicated to a secondary storage cluster. One-way mirroring also supports replicating to multiple secondary storage clusters.
On the secondary storage cluster, the image is the non-primary replica; that is, Ceph clients cannot write to the image. When data is mirrored from a primary storage cluster to a secondary storage cluster, the rbd-mirror daemon runs only on the secondary storage cluster.
For one-way mirroring to work, a couple of assumptions are made:
- You have two Red Hat Ceph Storage clusters and you want to replicate images from a primary storage cluster to a secondary storage cluster.
- The secondary storage cluster contains the rbd-mirror daemon, which can run on one of the cluster nodes. The rbd-mirror daemon connects to the primary storage cluster to sync images to the secondary storage cluster.
Figure 6.1. One-way mirroring
6.2.2. Two-way Replication
Two-way replication adds an rbd-mirror daemon on the primary cluster, so images can be demoted on it and promoted on the secondary cluster. Changes can then be made to the images on the secondary cluster, and they are replicated in the reverse direction, from secondary to primary. Both clusters must have rbd-mirror running to allow promoting and demoting images on either cluster. Currently, two-way replication is only supported between two sites.
For two-way mirroring to work, a couple of assumptions are made:
- You have two storage clusters and you want to be able to replicate images between them in either direction.
- Both storage clusters have the rbd-mirror daemon running. The images that are primary on cluster1 are synced by the rbd-mirror daemon on the remote cluster, and the images that are primary on the remote cluster are synced by the rbd-mirror daemon on cluster1.
Figure 6.2. Two-way mirroring
6.2.3. Configuring one-way mirroring using the command-line interface
When using one-way replication, you can mirror to multiple secondary storage clusters.
Examples in this section distinguish between two storage clusters by referring to the primary storage cluster with the primary images as site-a
, and the secondary storage cluster you are replicating the images to, as site-b
. The pool name that is used in these examples is called data
.
Prerequisites
- A minimum of two running Red Hat Ceph Storage clusters.
- Root-level access to a Ceph client node for each storage cluster.
- A CephX user with administrator-level capabilities.
Procedure
Log into the cephadm shell on both sites:

Example

[root@site-a ~]# cephadm shell
[root@site-b ~]# cephadm shell

On site-b, schedule the deployment of the mirror daemon on the secondary cluster:

Syntax

ceph orch apply rbd-mirror --placement=NODENAME

Example

[ceph: root@site-b /]# ceph orch apply rbd-mirror --placement=host04

Note: The NODENAME is the host where you want to configure mirroring in the secondary cluster.

Enable journaling features on an image on site-a.

For new images, use the --image-feature option:

Syntax

rbd create IMAGE_NAME --size MEGABYTES --pool POOL_NAME --image-feature FEATURE,FEATURE

Example

[ceph: root@site-a /]# rbd create image1 --size 1024 --pool data --image-feature exclusive-lock,journaling

Note: If exclusive-lock is already enabled, use journaling as the only argument; otherwise, the command returns the following error:

one or more requested features are already enabled (22) Invalid argument

For existing images, use the rbd feature enable command:

Syntax

rbd feature enable POOL_NAME/IMAGE_NAME FEATURE FEATURE

Example

[ceph: root@site-a /]# rbd feature enable data/image1 exclusive-lock journaling

To enable journaling on all new images by default, set the configuration parameter using the ceph config set command:

Example

[ceph: root@site-a /]# ceph config set global rbd_default_features 125
[ceph: root@site-a /]# ceph config show mon.host01 rbd_default_features
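Note: The value 125 is the sum of the RBD feature bits layering (1), exclusive-lock (4), object-map (8), fast-diff (16), deep-flatten (32), and journaling (64): 1 + 4 + 8 + 16 + 32 + 64 = 125.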
Choose the mirroring mode, either pool or image mode, on both the storage clusters.

Enabling pool mode:

Syntax

rbd mirror pool enable POOL_NAME MODE

Example

[ceph: root@site-a /]# rbd mirror pool enable data pool
[ceph: root@site-b /]# rbd mirror pool enable data pool

This example enables mirroring of the whole pool named data.

Enabling image mode:

Syntax

rbd mirror pool enable POOL_NAME MODE

Example

[ceph: root@site-a /]# rbd mirror pool enable data image
[ceph: root@site-b /]# rbd mirror pool enable data image

This example enables image mode mirroring on the pool named data.

Note: To enable mirroring on specific images in a pool, see the Enabling image mirroring section in the Red Hat Ceph Storage Block Device Guide for more details.

Verify that mirroring has been successfully enabled at both the sites:

Syntax

rbd mirror pool info POOL_NAME
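As an illustrative sketch, output similar to the following indicates that mirroring is enabled on the pool but no peers are configured yet:

Example

[ceph: root@site-a /]# rbd mirror pool info data
Mode: pool
Site Name: site-a

Peer Sites: none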
On a Ceph client node, bootstrap the storage cluster peers.

Create Ceph user accounts, and register the storage cluster peer to the pool:

Syntax

rbd mirror pool peer bootstrap create --site-name PRIMARY_LOCAL_SITE_NAME POOL_NAME > PATH_TO_BOOTSTRAP_TOKEN

Example

[ceph: root@rbd-client-site-a /]# rbd mirror pool peer bootstrap create --site-name site-a data > /root/bootstrap_token_site-a

Note: This example bootstrap command creates the client.rbd-mirror.site-a and the client.rbd-mirror-peer Ceph users.

Copy the bootstrap token file to the site-b storage cluster.

Import the bootstrap token on the site-b storage cluster:

Syntax

rbd mirror pool peer bootstrap import --site-name SECONDARY_LOCAL_SITE_NAME --direction rx-only POOL_NAME PATH_TO_BOOTSTRAP_TOKEN

Example

[ceph: root@rbd-client-site-b /]# rbd mirror pool peer bootstrap import --site-name site-b --direction rx-only data /root/bootstrap_token_site-a

Note: For one-way RBD mirroring, you must use the --direction rx-only argument, as two-way mirroring is the default when bootstrapping peers.
Verification steps
To verify the mirroring status, run the following command from a Ceph Monitor node on the primary and secondary sites:
Syntax

rbd mirror image status POOL_NAME/IMAGE_NAME

Here, up means the rbd-mirror daemon is running, and stopped means this image is not the target for replication from another storage cluster. This is because the image is primary on this storage cluster.

Example

[ceph: root@mon-site-b /]# rbd mirror image status data/image1
image1:
  global_id: c13d8065-b33d-4cb5-b35f-127a02768e7f
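As an illustrative sketch, on the primary site the same command typically reports the up+stopped state for a primary image; the description and timestamp values here are assumptions:

[ceph: root@mon-site-a /]# rbd mirror image status data/image1
image1:
  global_id:   c13d8065-b33d-4cb5-b35f-127a02768e7f
  state:       up+stopped
  description: local image is primary
  last_update: 2021-09-16 10:37:58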
6.2.4. Configuring two-way mirroring using the command-line interface
When using two-way replication, you can only mirror between two storage clusters.

Examples in this section distinguish between two storage clusters by referring to the primary storage cluster with the primary images as site-a, and the secondary storage cluster you are replicating the images to as site-b. The pool name used in these examples is data.
Prerequisites
- A minimum of two running Red Hat Ceph Storage clusters.
- Root-level access to a Ceph client node for each storage cluster.
- A CephX user with administrator-level capabilities.
Procedure
Log into the cephadm shell on both sites:

Example

[root@site-a ~]# cephadm shell
[root@site-b ~]# cephadm shell

On the site-a primary cluster, run the following command:

Example

[ceph: root@site-a /]# ceph orch apply rbd-mirror --placement=host01

Note: The NODENAME is the host where you want to configure mirroring.

On site-b, schedule the deployment of the mirror daemon on the secondary cluster:

Syntax

ceph orch apply rbd-mirror --placement=NODENAME

Example

[ceph: root@site-b /]# ceph orch apply rbd-mirror --placement=host04

Note: The NODENAME is the host where you want to configure mirroring in the secondary cluster.

Enable journaling features on an image on site-a.

For new images, use the --image-feature option:

Syntax

rbd create IMAGE_NAME --size MEGABYTES --pool POOL_NAME --image-feature FEATURE,FEATURE

Example

[ceph: root@site-a /]# rbd create image1 --size 1024 --pool data --image-feature exclusive-lock,journaling

Note: If exclusive-lock is already enabled, use journaling as the only argument; otherwise, the command returns the following error:

one or more requested features are already enabled (22) Invalid argument

For existing images, use the rbd feature enable command:

Syntax

rbd feature enable POOL_NAME/IMAGE_NAME FEATURE FEATURE

Example

[ceph: root@site-a /]# rbd feature enable data/image1 exclusive-lock journaling

To enable journaling on all new images by default, set the configuration parameter using the ceph config set command:

Example

[ceph: root@site-a /]# ceph config set global rbd_default_features 125
[ceph: root@site-a /]# ceph config show mon.host01 rbd_default_features
Choose the mirroring mode, either pool or image mode, on both the storage clusters.

Enabling pool mode:

Syntax

rbd mirror pool enable POOL_NAME MODE

Example

[ceph: root@site-a /]# rbd mirror pool enable data pool
[ceph: root@site-b /]# rbd mirror pool enable data pool

This example enables mirroring of the whole pool named data.

Enabling image mode:

Syntax

rbd mirror pool enable POOL_NAME MODE

Example

[ceph: root@site-a /]# rbd mirror pool enable data image
[ceph: root@site-b /]# rbd mirror pool enable data image

This example enables image mode mirroring on the pool named data.

Note: To enable mirroring on specific images in a pool, see the Enabling image mirroring section in the Red Hat Ceph Storage Block Device Guide for more details.

Verify that mirroring has been successfully enabled at both the sites:

Syntax

rbd mirror pool info POOL_NAME
On a Ceph client node, bootstrap the storage cluster peers.

Create Ceph user accounts, and register the storage cluster peer to the pool:

Syntax

rbd mirror pool peer bootstrap create --site-name PRIMARY_LOCAL_SITE_NAME POOL_NAME > PATH_TO_BOOTSTRAP_TOKEN

Example

[ceph: root@rbd-client-site-a /]# rbd mirror pool peer bootstrap create --site-name site-a data > /root/bootstrap_token_site-a

Note: This example bootstrap command creates the client.rbd-mirror.site-a and the client.rbd-mirror-peer Ceph users.

Copy the bootstrap token file to the site-b storage cluster.

Import the bootstrap token on the site-b storage cluster:

Syntax

rbd mirror pool peer bootstrap import --site-name SECONDARY_LOCAL_SITE_NAME --direction rx-tx POOL_NAME PATH_TO_BOOTSTRAP_TOKEN

Example

[ceph: root@rbd-client-site-b /]# rbd mirror pool peer bootstrap import --site-name site-b --direction rx-tx data /root/bootstrap_token_site-a

Note: The --direction argument is optional, as two-way mirroring is the default when bootstrapping peers.
Verification steps
To verify the mirroring status, run the following command from a Ceph Monitor node on the primary and secondary sites:
Syntax

rbd mirror image status POOL_NAME/IMAGE_NAME

Here, up means the rbd-mirror daemon is running, and stopped means this image is not the target for replication from another storage cluster. This is because the image is primary on this storage cluster.

If images are in the state up+replaying, then mirroring is functioning properly. Here, up means the rbd-mirror daemon is running, and replaying means this image is the target for replication from another storage cluster.

Note: Depending on the connection between the sites, mirroring can take a long time to sync the images.
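As an illustrative sketch, a healthy non-primary image on the secondary site reports output similar to the following; the identifier and timestamp values are assumptions:

[ceph: root@mon-site-b /]# rbd mirror image status data/image1
image1:
  global_id:   c13d8065-b33d-4cb5-b35f-127a02768e7f
  state:       up+replaying
  description: replaying
  last_update: 2021-09-16 10:41:58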
6.3. Administering RBD mirroring
As a storage administrator, you can perform a range of tasks to manage and maintain the Ceph Block Device (RBD) mirroring environment.
These tasks include:
- Viewing information about storage cluster peers.
- Adding or removing a storage cluster peer.
- Getting mirroring status for a pool or image.
- Enabling mirroring on a pool or image.
- Disabling mirroring on a pool or image.
- Delaying block device replication.
- Promoting and demoting an image.
These tasks help ensure smooth replication operations and support failover and recovery scenarios in multi-site deployments.
6.3.1. Viewing peer information
View information about storage cluster peers.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to the node.
Procedure
To view information about the peers:
Syntax

rbd mirror pool info POOL_NAME
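As an illustrative sketch, for a pool with a configured peer the output looks similar to the following; the UUID and site values are assumptions:

Example

[root@rbd-client ~]# rbd mirror pool info data
Mode: pool
Site Name: site-a

Peer Sites:

UUID: 7e90b4ce-e36d-4f07-8cbc-42050896825d
Name: site-b
Direction: rx-tx
Client: client.rbd-mirror-peer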
6.3.2. Managing mirroring on a pool
You can enable, disable and get the status of mirroring on a Ceph Block Device pool by using the command-line interface.
Prerequisites
- Before you begin, make sure that you have root-level access to the node.
6.3.2.1. Enabling mirroring on a pool
Run the following commands on both peer clusters to enable mirroring on a pool.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to the node.
Procedure
To enable mirroring on a pool:
Syntax

rbd mirror pool enable POOL_NAME MODE

Example

[root@rbd-client ~]# rbd mirror pool enable data pool

This example enables mirroring of the whole pool named data.

Example

[root@rbd-client ~]# rbd mirror pool enable data image

This example enables image mode mirroring on the pool named data.
Additional Resources
- See the Mirroring Ceph block devices section in the Red Hat Ceph Storage Block Device Guide for details.
6.3.2.2. Disabling mirroring on a pool
Before disabling mirroring, remove the peer clusters.
When you disable mirroring on a pool, you also disable it on any images within the pool for which mirroring was enabled separately in image mode.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to the node.
Procedure
To disable mirroring on a pool:
Syntax

rbd mirror pool disable POOL_NAME

Example

[root@rbd-client ~]# rbd mirror pool disable data

This example disables mirroring of a pool named data.
6.3.2.3. Getting mirroring status for a pool
You can get the mirror status for a pool on the storage clusters.
Prerequisites
- A running Red Hat Ceph Storage cluster with snapshot-based mirroring configured.
- Root-level access to the node.
Procedure
To get the mirroring pool summary:
Syntax

rbd mirror pool status POOL_NAME

Tip: To output status details for every mirroring image in a pool, use the --verbose option.
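As an illustrative sketch, the pool summary looks similar to the following; the image counts are assumptions:

Example

[root@rbd-client ~]# rbd mirror pool status data
health: OK
daemon health: OK
image health: OK
images: 1 total
    1 replaying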
6.3.3. Managing mirroring on images
You can enable and disable mirroring on Ceph Block Device images by using the command-line interface. You can also get the mirroring status for an image, and promote, demote, and resynchronize images.
6.3.3.1. Enabling image mirroring
Before enabling mirroring on a specific image, enable mirroring on the whole pool in image mode on both peer storage clusters.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to the node.
Procedure
Enable mirroring for a specific image within the pool:
Syntax

rbd mirror image enable POOL_NAME/IMAGE_NAME

Example

[root@rbd-client ~]# rbd mirror image enable data/image2

This example enables mirroring for the image2 image in the data pool.
Additional Resources
- See the Enabling mirroring on a pool section in the Red Hat Ceph Storage Block Device Guide for details.
6.3.3.2. Disabling image mirroring
You can disable Ceph Block Device mirroring on images.
Prerequisites
- A running Red Hat Ceph Storage cluster with snapshot-based mirroring configured.
- Root-level access to the node.
Procedure
To disable mirroring for a specific image:
Syntax

rbd mirror image disable POOL_NAME/IMAGE_NAME

Example

[root@rbd-client ~]# rbd mirror image disable data/image2

This example disables mirroring of the image2 image in the data pool.
6.3.3.3. Getting mirroring status for a single image

You can get the mirror status for an image by running the mirror image status command.
Prerequisites
- A running Red Hat Ceph Storage cluster with snapshot-based mirroring configured.
- Root-level access to the node.
Procedure
To get the status of a mirrored image:
Syntax

rbd mirror image status POOL_NAME/IMAGE_NAME

Example

[root@rbd-client ~]# rbd mirror image status data/image2

This example gets the status of the image2 image in the data pool.
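As an illustrative sketch, the status of a healthy non-primary image looks similar to the following; GLOBAL_ID stands in for the image global ID and the remaining values are assumptions:

image2:
  global_id:   GLOBAL_ID
  state:       up+replaying
  description: replaying
  last_update: 2021-09-16 10:41:58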
6.3.3.4. Promoting an image
You can promote or demote an image in a pool.
Prerequisites
- A running Red Hat Ceph Storage cluster with snapshot-based mirroring configured.
- Root-level access to the node.
Procedure
To promote an image to primary:
Syntax

rbd mirror image promote POOL_NAME/IMAGE_NAME

Example

[root@rbd-client ~]# rbd mirror image promote data/image2

This example promotes image2 in the data pool.

Depending on which type of mirroring you are using, see either Recover from a disaster with one-way mirroring or Recover from a disaster with two-way mirroring for details.

Syntax

rbd mirror image promote --force POOL_NAME/IMAGE_NAME

Example

[root@rbd-client ~]# rbd mirror image promote --force data/image2

Use forced promotion when the demotion cannot be propagated to the peer Ceph storage cluster, for example, because of cluster failure or communication outage.
6.3.3.5. Demoting an image
You can demote an image in a pool.
Do not force promote non-primary images that are still syncing, because the images will not be valid after the promotion.
Prerequisites
- A running Red Hat Ceph Storage cluster with snapshot-based mirroring configured.
- Root-level access to the node.
Procedure
To demote an image to non-primary:
Syntax

rbd mirror image demote POOL_NAME/IMAGE_NAME

Example

[root@rbd-client ~]# rbd mirror image demote data/image2

This example demotes the image2 image in the data pool.
6.3.3.6. Resynchronizing images

You can re-synchronize an image if an inconsistency occurs between the two peer clusters. When an image enters an inconsistent state, the rbd-mirror daemon skips mirroring for that image until the issue is resolved.
Prerequisites
- A running Red Hat Ceph Storage cluster with snapshot-based mirroring configured.
- Root-level access to the node.
Procedure
To request a re-synchronization to the primary image:
Syntax

rbd mirror image resync POOL_NAME/IMAGE_NAME

Example

[root@rbd-client ~]# rbd mirror image resync data/image2

This example requests resynchronization of image2 in the data pool.
6.3.4. Managing mirroring on a consistency group

Learn to enable and disable mirroring on a Ceph Block Device consistency group by using the command-line interface. You can also get the mirroring status for the group, and learn to promote, demote, and resynchronize the group. The commands in the following sections support the optional namespace parameter, which you can use in the command as POOL_NAME/NAMESPACE/GROUP_NAME.

You can mirror a maximum of 100 images in total per cluster and a maximum of 50 images per group.
6.3.4.1. Enabling mirroring on a group

Enable mirroring for a group within the pool by using the mirror group enable command.

- Group mirroring works with image mode only.
- Consistency group mirroring supports only snapshot mirroring mode; journal mode is not supported.
Prerequisites

- Make sure that you have root-level access to the node.
- Pool mirroring is enabled in the image mode. For more information, see step 5 in Configuring two-way mirroring using the command-line interface.
Procedure
Run the following command to enable mirroring on a group.
Syntax
rbd mirror group enable POOL_NAME/GROUP_NAME

Example

The following example enables mirroring for the test_group group in the test_pool pool.

[root@rbd-client ~]# rbd mirror group enable test_pool/test_group
- After you enable mirroring for a group, you cannot add new images to that group. To add images, you must disable group mirroring, add the wanted images, and then re-enable mirroring.
- Individual images in the group cannot be separately enabled for mirroring.
6.3.4.2. Disabling mirroring on a group

Disable mirroring for a group within the pool by using the mirror group disable command.
Prerequisites
- Make sure that you have root-level access to the node.
Procedure
Run the following command to disable mirroring on a group.
Syntax
rbd mirror group disable POOL_NAME/GROUP_NAME

Example

The following example disables mirroring for the test_group group in the test_pool pool.

[root@rbd-client ~]# rbd mirror group disable test_pool/test_group
6.3.4.3. Getting mirroring status for a group

You can get the mirror status for the group by running the mirror group status command.
Prerequisites
- Make sure that you have root-level access to the node.
Procedure
Run the following command to get the group mirroring status.
Syntax
rbd mirror group status POOL_NAME/GROUP_NAME

Example

The following example gets the status of the test_group group in the test_pool pool.

[root@rbd-client ~]# rbd mirror group status test_pool/test_group
6.3.4.4. Promoting a group
You can promote a group by using the command in the procedure.
Prerequisites

- A running Red Hat Ceph Storage cluster with snapshot-based group mirroring configured.
- Root-level access to the node.
Procedure
- Promote a group to primary by using the mirror group promote command.

Syntax

rbd mirror group promote POOL_NAME/GROUP_NAME

Example

The following example promotes the group test_group in the test_pool pool to primary.

[root@rbd-client ~]# rbd mirror group promote test_pool/test_group

Use forced promotion when the demotion cannot be propagated to the peer Ceph storage cluster because of cluster failure or communication outage.

Before executing the mirror group promote command with the --force flag, ensure that the secondary mirror daemon is stopped. This step is a temporary workaround for known issues and may not be necessary once those issues are resolved.

Run the following commands to stop and kill the secondary mirror daemon.

ceph orch stop rbd-mirror
kill -SIGKILL PID_OF_SECONDARY_RBD_MIRROR

Execute the mirror group promote command with the --force flag, and restart the mirror daemon after the mirror group promote command completes execution.

rbd mirror group promote --force POOL_NAME/GROUP_NAME
ceph orch start rbd-mirror

Example

[root@rbd-client ~]# rbd mirror group promote --force test_pool/test_group
6.3.4.5. Demoting a group
Demote a group by using the command in the procedure.
Prerequisites

- A running Red Hat Ceph Storage cluster with snapshot-based group mirroring configured.
- Root-level access to the node.
Procedure
- Demote a group to non-primary by using the mirror group demote command.

Syntax

rbd mirror group demote POOL_NAME/GROUP_NAME

Example

The following example demotes the test_group group in the test_pool pool to non-primary.

[root@rbd-client ~]# rbd mirror group demote test_pool/test_group
6.3.4.6. Resynchronizing a group
You can resynchronize a group if an inconsistency occurs between the two peer clusters. When a group enters an inconsistent state, the rbd-mirror daemon skips mirroring for that group until the issue is resolved.
Resynchronize a group by using the command in the procedure.
Prerequisites
- Root-level access to the node.
Procedure
- Resynchronize to the primary group by using the mirror group resync command.

Syntax

rbd mirror group resync POOL_NAME/GROUP_NAME

Example

The following example requests resynchronization of test_group in the test_pool pool.

[root@rbd-client ~]# rbd mirror group resync test_pool/test_group
6.3.5. Managing mirroring on a namespace
You can enable and disable Ceph Block Device mirroring on namespaces by using the command-line interface.
Before configuring a namespace to mirror with a different remote namespace, mirroring must be disabled for both the local and remote namespaces on both clusters.
6.3.5.1. Enabling namespace mirroring
You can configure mirroring on a namespace in a pool. The pool must already be enabled for mirroring. Mirror the namespace to a namespace with the same or a different name in the remote pool of the remote (secondary) cluster.

You can mirror a namespace to a namespace with a different name in the remote pool by using the --remote-namespace option. The default behavior is to mirror to a namespace with the same name in the remote pool.
Prerequisites
- Root-level access to the node.
- Two running Red Hat Ceph Storage clusters.
- The rbd-mirror daemon service is enabled on both clusters.
- Mirroring is enabled for the pool in which the namespace is added.
Procedure
Run the following command on both clusters where you want to enable mirroring on a namespace.

Syntax

rbd mirror pool enable POOL_NAME/LOCAL_NAMESPACE_NAME MODE --remote-namespace REMOTE_NAMESPACE_NAME

Note: The --remote-namespace parameter is optional.

The mirroring mode can be either image or pool:

- Image mode: When configured in image mode, mirroring must be explicitly enabled on each image.
- Pool mode (default): When configured in pool mode, all images in the namespace with the journaling feature enabled are mirrored.

Example

[root@rbd-client ~]# rbd mirror pool enable image-pool/namespace-a image --remote-namespace namespace-b

On the remote cluster:

[root@rbd-client ~]# rbd mirror pool enable image-pool/namespace-b image --remote-namespace namespace-a

This example enables image mode mirroring between image-pool/namespace-a on the first cluster and image-pool/namespace-b on the second cluster. The namespace and remote namespace on the first cluster must match the remote namespace and namespace respectively on the remote cluster.

If the --remote-namespace option is not provided, the namespace is mirrored to a namespace with the same name in the remote pool.
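As a minimal sketch of that default behavior, running the same command on both clusters without the --remote-namespace option mirrors image-pool/namespace-a to a namespace of the same name in the remote pool:

Example

[root@rbd-client ~]# rbd mirror pool enable image-pool/namespace-a image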
6.3.5.2. Disabling namespace mirroring
You can disable Ceph Block Device mirroring on namespaces.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to the node.
- If configured in image mode, any mirror-enabled images in the namespace must be explicitly disabled before disabling mirroring on the namespace.
Procedure
Run the following command on both clusters where you want to disable mirroring on a namespace.
Syntax

rbd mirror pool disable POOL_NAME/NAMESPACE_NAME

Example

[root@rbd-client ~]# rbd mirror pool disable image-pool/namespace-a
[root@rbd-client ~]# rbd mirror pool disable image-pool/namespace-b
To enable a namespace with a different remote namespace, the namespace and the corresponding remote namespace on both clusters must be disabled for mirroring before they can be re-enabled.
6.3.6. Adding a storage cluster peer
Add a storage cluster peer to enable the rbd-mirror daemon to discover the remote cluster.

For example, to add the site-a storage cluster as a peer to site-b, perform the following steps from a client node in the site-b cluster.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to the node.
Procedure
Register the peer to the pool:
Syntax

rbd --cluster CLUSTER_NAME mirror pool peer add POOL_NAME PEER_CLIENT_NAME@PEER_CLUSTER_NAME -n CLIENT_NAME

Example

[root@rbd-client ~]# rbd --cluster site-b mirror pool peer add data client.site-a@site-a -n client.site-b
6.3.7. Removing a storage cluster peer
Specify the peer UUID to remove a storage cluster peer.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to the node.
Procedure
Specify the pool name and the peer Universally Unique Identifier (UUID).
Syntax

rbd mirror pool peer remove POOL_NAME PEER_UUID

Example

[root@rbd-client ~]# rbd mirror pool peer remove data 7e90b4ce-e36d-4f07-8cbc-42050896825d

Tip: To view the peer UUID, use the rbd mirror pool info command.
6.3.8. Delaying block device replication
Whether using one-way or two-way replication, you can configure delayed replication for RADOS Block Device (RBD) mirrored images. Delayed replication provides a buffer period, allowing you to revert unintended changes on the primary image before they are propagated to the secondary image.
Delaying block device replication is only applicable with journal-based mirroring.
To implement delayed replication, the rbd-mirror daemon within the destination storage cluster should set the rbd_mirroring_replay_delay = MINIMUM_DELAY_IN_SECONDS configuration option. This setting can either be applied globally within the ceph.conf file utilized by the rbd-mirror daemons, or on an individual image basis.
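As a minimal sketch of the global variant, the option is added to the ceph.conf file used by the rbd-mirror daemons on the destination storage cluster; the 600-second value and the [client] section placement are assumptions:

[client]
rbd_mirroring_replay_delay = 600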
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to the node.
Procedure
To utilize delayed replication for a specific image, on the primary image, run the following rbd CLI command:

Syntax

rbd image-meta set POOL_NAME/IMAGE_NAME conf_rbd_mirroring_replay_delay MINIMUM_DELAY_IN_SECONDS

Example

[root@rbd-client ~]# rbd image-meta set vms/vm-1 conf_rbd_mirroring_replay_delay 600

This example sets a 10 minute minimum replication delay on image vm-1 in the vms pool.
6.3.9. Converting journal-based mirroring to snapshot-based mirroring

You can convert journal-based mirroring to snapshot-based mirroring by disabling mirroring on the image and re-enabling it with the snapshot mode.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to the node.
Procedure
Log into the Cephadm shell:

Example

[root@rbd-client ~]# cephadm shell

Disable mirroring for a specific image within the pool:

Syntax

rbd mirror image disable POOL_NAME/IMAGE_NAME

Example

[ceph: root@rbd-client /]# rbd mirror image disable mirror_pool/mirror_image
Mirroring disabled

Enable snapshot-based mirroring for the image:

Syntax

rbd mirror image enable POOL_NAME/IMAGE_NAME snapshot

Example

[ceph: root@rbd-client /]# rbd mirror image enable mirror_pool/mirror_image snapshot
Mirroring enabled

This example enables snapshot-based mirroring for the mirror_image image in the mirror_pool pool.
6.3.10. Creating an image mirror-snapshot
Create a mirror snapshot to replicate changes in a Ceph Block Device image when using snapshot-based mirroring.
Prerequisites
- A minimum of two healthy running Red Hat Ceph Storage clusters.
- Root-level access to the Ceph client nodes for the Red Hat Ceph Storage clusters.
- A CephX user with administrator-level capabilities.
- Access to the Red Hat Ceph Storage cluster where a snapshot mirror will be created.
By default, a maximum of 5 image mirror-snapshots are retained. The most recent image mirror-snapshot is automatically removed if the limit is reached. If required, the limit can be overridden through the rbd_mirroring_max_mirroring_snapshots configuration. Image mirror-snapshots are automatically deleted when the image is removed or when mirroring is disabled.
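As a minimal sketch, assuming the option can be set with the ceph config set command like other cluster options, the retention limit could be raised as follows; the value 10 is an assumption:

[ceph: root@site-a /]# ceph config set global rbd_mirroring_max_mirroring_snapshots 10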
Procedure
To create an image mirror-snapshot:

Syntax

rbd --cluster CLUSTER_NAME mirror image snapshot POOL_NAME/IMAGE_NAME

Example

[root@site-a ~]# rbd mirror image snapshot data/image1
Additional Resources
- See the Mirroring Ceph block devices section in the Red Hat Ceph Storage Block Device Guide for details.
6.3.11. Scheduling mirror-snapshots
Mirror-snapshots can be created automatically when mirror-snapshot schedules are defined. A mirror-snapshot can be scheduled at the global, per-pool, or per-image level. Multiple mirror-snapshot schedules can be defined at any level, but only the most specific snapshot schedules that match an individual mirrored image will run.
6.3.11.1. Creating a mirror-snapshot schedule

You can create a mirror-snapshot schedule by using the snapshot schedule command.
Prerequisites
- A minimum of two healthy running Red Hat Ceph Storage clusters.
- Root-level access to the Ceph client nodes for the Red Hat Ceph Storage clusters.
- A CephX user with administrator-level capabilities.
- Access to the Red Hat Ceph Storage cluster where the mirror image needs to be scheduled.
Procedure
To create a mirror-snapshot schedule:

Syntax

rbd --cluster CLUSTER_NAME mirror snapshot schedule add --pool POOL_NAME --image IMAGE_NAME INTERVAL [START_TIME]

The CLUSTER_NAME should be used only when the cluster name is different from the default name ceph. The interval can be specified in days, hours, or minutes using the d, h, or m suffix respectively. The optional START_TIME can be specified using the ISO 8601 time format.

Example

[root@site-a ~]# rbd mirror snapshot schedule add --pool data --image image1 6h

Example

[root@site-a ~]# rbd mirror snapshot schedule add --pool data --image image1 24h 14:00:00-05:00
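Because schedules can also be defined at the pool or global level, omitting the --image option creates a per-pool schedule; a sketch under that assumption:

Example

[root@site-a ~]# rbd mirror snapshot schedule add --pool data 1d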
6.3.11.2. Listing snapshot schedules by level
You can list all snapshot schedules at a specific level.
Prerequisites
- A minimum of two running Red Hat Ceph Storage clusters.
- Root-level access to the Ceph client nodes for the Red Hat Ceph Storage clusters.
- A CephX user with administrator-level capabilities.
- Access to the Red Hat Ceph Storage cluster where the mirror image needs to be scheduled.
Procedure
To list all snapshot schedules for a specific global, pool, or image level, with an optional pool or image name:

Syntax

rbd --cluster CLUSTER_NAME mirror snapshot schedule ls --pool POOL_NAME --recursive

Additionally, the --recursive option can be specified to list all schedules at the specified level, as shown below:

Example

[root@rbd-client ~]# rbd mirror snapshot schedule ls --pool data --recursive
POOL NAMESPACE IMAGE  SCHEDULE
data -         -      every 1d starting at 14:00:00-05:00
data -         image1 every 6h
6.3.11.3. Removing a mirror-snapshot schedule

You can remove a mirror-snapshot schedule by using the snapshot schedule remove command.
Prerequisites
- A minimum of two healthy running Red Hat Ceph Storage clusters.
- Root-level access to the Ceph client nodes for the Red Hat Ceph Storage clusters.
- A CephX user with administrator-level capabilities.
- Access to the Red Hat Ceph Storage cluster where the mirror image needs to be scheduled.
Procedure
To remove a mirror-snapshot schedule:

Syntax

rbd --cluster CLUSTER_NAME mirror snapshot schedule remove --pool POOL_NAME --image IMAGE_NAME INTERVAL START_TIME

The interval can be specified in days, hours, or minutes using the d, h, or m suffix respectively. The optional START_TIME can be specified using the ISO 8601 time format.

Example

[root@site-a ~]# rbd mirror snapshot schedule remove --pool data --image image1 6h

Example

[root@site-a ~]# rbd mirror snapshot schedule remove --pool data --image image1 24h 14:00:00-05:00
6.3.11.4. Viewing upcoming snapshot schedules

You can view the status for the next snapshots to be created for RBD images that use snapshot-based mirroring.
Prerequisites
- A minimum of two healthy running Red Hat Ceph Storage clusters.
- Root-level access to the Ceph client nodes for the Red Hat Ceph Storage clusters.
- A CephX user with administrator-level capabilities.
- Access to the Red Hat Ceph Storage cluster where the mirror image needs to be scheduled.
Procedure
To view the status for the next snapshots to be created:

Syntax

rbd --cluster CLUSTER_NAME mirror snapshot schedule status [--pool POOL_NAME] [--image IMAGE_NAME]

Example

[root@rbd-client ~]# rbd mirror snapshot schedule status
SCHEDULE TIME        IMAGE
2021-09-21 18:00:00  data/image1
Additional Resources
- See the Mirroring Ceph block devices section in the Red Hat Ceph Storage Block Device Guide for details.
6.3.12. Creating a consistency group mirroring snapshot

You can create a mirror group snapshot to replicate changes in a Ceph Block Device consistency group when using snapshot-based mirroring.
Prerequisites
- A minimum of two running Red Hat Ceph Storage clusters.
- Root-level access to the Ceph client nodes for the Red Hat Ceph Storage clusters.
- A CephX user with administrator-level capabilities.
- Access to the Red Hat Ceph Storage cluster where a snapshot mirror will be created.
Procedure
- Create a mirror group snapshot.

Syntax

rbd --cluster CLUSTER_NAME mirror group snapshot POOL_NAME/GROUP_NAME

Example

[root@site-a ~]# rbd --cluster site-a mirror group snapshot test_pool/test_group
6.3.13. Scheduling consistency group mirror snapshots

You can create, remove, list, and check the status of group mirror-snapshot schedules by using the command-line interface.
Prerequisites
- A minimum of two running Red Hat Ceph Storage clusters.
- Root-level access to the Ceph client nodes for the Red Hat Ceph Storage clusters.
- A CephX user with administrator-level capabilities.
- Access to the Red Hat Ceph Storage cluster where the creation of group mirror-snapshots needs to be scheduled.
6.3.13.1. Creating a schedule

You can create a group snapshot schedule by using the mirror group snapshot schedule add command.
Procedure
- Run the following command to create a mirror snapshot schedule for a group.

Syntax

rbd --cluster CLUSTER_NAME mirror group snapshot schedule add --pool POOL_NAME --group GROUP_NAME INTERVAL [START_TIME]

Only use --cluster when the cluster name is different from the default name, ceph. Specify the interval in days, hours, or minutes using the d, h, or m suffix, respectively. The optional START_TIME must be specified using the ISO 8601 (hh:mm:ss±hh:mm) time format.

Example

The following example creates a mirror group snapshot schedule with the default ceph cluster, every 6 hours, and does not specify a start time.

[root@site-a ~]# rbd mirror group snapshot schedule add --pool test_pool --group test_group 6h

The following example creates a mirror group snapshot schedule with the default ceph cluster, every 24 hours, with a start time of 2:00 PM at -5 GMT.

[root@site-a ~]# rbd mirror group snapshot schedule add --pool test_pool --group test_group 24h 14:00:00-05:00
6.3.13.2. Listing snapshot schedules by level

You can list all snapshot schedules at the global, pool, namespace, or group level by using the mirror group snapshot schedule ls command.
Procedure
- List all snapshot schedules at the global, pool, namespace or group level by running the following command.
Syntax
rbd --cluster _CLUSTER_NAME_ mirror group snapshot schedule ls --pool _POOL_NAME_ --namespace _NAMESPACE_ --group _GROUP_NAME _--recursive
rbd --cluster _CLUSTER_NAME_ mirror group snapshot schedule ls --pool _POOL_NAME_ --namespace _NAMESPACE_ --group _GROUP_NAME _--recursive
Use the --recursive
option to list all schedules at the specified level.
Example
[root@rbd-client ~]# rbd --cluster site-a mirror group snapshot schedule ls --pool test_pool --recursive
POOL NAMESPACE GROUP SCHEDULE
test_pool test_group every 30m
- To list the mirror group snapshot schedule for a specific group, run the following command.
Syntax
rbd --cluster CLUSTER_NAME mirror group snapshot schedule ls --pool POOL_NAME --group GROUP_NAME --recursive
Example
[root@rbd-client ~]# rbd --cluster site-a mirror group snapshot schedule ls --pool test_pool --group test_group --recursive
POOL NAMESPACE GROUP SCHEDULE
test_pool test_group every 30m
6.3.13.3. Removing a schedule
You can remove a group snapshot schedule by using the mirror group snapshot schedule remove command.
Procedure
- Run the following command to remove the snapshot schedule for a group.
Syntax
rbd --cluster CLUSTER_NAME mirror group snapshot schedule remove --pool POOL_NAME --group GROUP_NAME INTERVAL START_TIME
Only use --cluster when the cluster name is different from the default name, ceph.
Specify the interval in days, hours, or minutes by using the d, h, or m suffix, respectively. The optional START_TIME must be specified in the ISO 8601 time format (hh:mm:ss±hh:mm). The INTERVAL and START_TIME values must match those of the schedule that you are removing.
Example
The following example removes the 6-hour snapshot schedule, which has no start time, from the default ceph cluster.
[root@site-a ~]# rbd mirror group snapshot schedule remove --pool test_pool --group test_group 6h
The following example removes the 24-hour snapshot schedule, which has a start time of 2:00 PM in the GMT-5 time zone, from the default ceph cluster.
[root@site-a ~]# rbd mirror group snapshot schedule remove --pool test_pool --group test_group 24h 14:00:00-05:00
6.3.13.4. Viewing the upcoming snapshot schedule
You can view the status of the next snapshots to be created for a snapshot-based mirroring Ceph Block Device consistency group.
Procedure
- View the status of the next snapshots to be created.
Syntax
rbd --cluster CLUSTER_NAME mirror group snapshot schedule status --pool POOL_NAME --namespace NAMESPACE_NAME --group GROUP_NAME
Example
[root@rbd-client ~]# rbd --cluster site-a mirror group snapshot schedule status --pool test_pool --group test_group
SCHEDULE TIME GROUP
2025-05-01 18:00:00 test_pool/test_group
6.4. Recover from a disaster
As a storage administrator, you can be prepared for eventual hardware failure by knowing how to recover the data from another storage cluster where mirroring was configured.
In the examples, the primary storage cluster is known as site-a, and the secondary storage cluster is known as site-b. Additionally, both storage clusters have a data pool with two images, image1 and image2.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- One-way or two-way mirroring was configured.
6.4.1. Disaster recovery
Asynchronous replication of block data between two or more Red Hat Ceph Storage clusters reduces downtime and prevents data loss in the event of a significant data center failure. These failures have a widespread impact, also referred to as a large blast radius, and can be caused by power grid outages and natural disasters.
Customer data needs to be protected during these scenarios. Volumes must be replicated with consistency and efficiency, and within Recovery Point Objective (RPO) and Recovery Time Objective (RTO) targets. This solution is called Wide Area Network Disaster Recovery (WAN-DR).
In such scenarios, it is hard to restore the primary system and the data center. The quickest way to recover is to fail over the applications to an alternate Red Hat Ceph Storage cluster (the disaster recovery site) and make the cluster operational with the latest copy of the data available. The solutions that are used to recover from these failure scenarios are guided by the application:
- Recovery Point Objective (RPO): The maximum amount of data loss that an application can tolerate in the worst case.
- Recovery Time Objective (RTO): The time taken to get the application back online with the latest copy of the data available.
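For snapshot-based mirroring, the mirror snapshot interval effectively bounds the RPO: in the worst case, all writes since the last replicated snapshot are lost. As an illustrative sketch only, assuming a hypothetical RPO target of one hour for an image data/image1, snapshots could be scheduled at half that interval to leave time for synchronization:
rbd mirror snapshot schedule add --pool data --image image1 30m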
Additional Resources
- See the Mirroring Ceph block devices chapter in the Red Hat Ceph Storage Block Device Guide for details.
- See the Encryption in transit section in the Red Hat Ceph Storage Data Security and Hardening Guide for more information about encrypting data in transit.
6.4.2. Recover from a disaster with one-way mirroring
To recover from a disaster when using one-way mirroring, use the following procedures. They show how to fail over to the secondary cluster after the primary cluster terminates, and how to fail back. The shutdown can be orderly or non-orderly.
One-way mirroring supports multiple secondary sites. If you are using additional secondary clusters, choose one of the secondary clusters to fail over to. Synchronize from the same cluster during fail back.
6.4.3. Recover from a disaster with two-way mirroring
To recover from a disaster when using two-way mirroring, use the following procedures. They show how to fail over to the mirrored data on the secondary cluster after the primary cluster terminates, and how to fail back. The shutdown can be orderly or non-orderly.
6.4.4. Failover after an orderly shutdown
Fail over to the secondary storage cluster after an orderly shutdown.
Prerequisites
- A minimum of two running Red Hat Ceph Storage clusters.
- Root-level access to the node.
- Pool mirroring or image mirroring configured with one-way mirroring.
Procedure
- Stop all clients that use the primary image. This step depends on which clients use the image. For example, detach volumes from any OpenStack instances that use the image.
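For instance, a minimal sketch with the OpenStack client, assuming a hypothetical instance named vm1 with an attached volume vol1 (both names are illustrative):
openstack server remove volume vm1 vol1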
- Demote the primary images located on the site-a cluster by running the following commands on a monitor node in the site-a cluster:
Syntax
rbd mirror image demote POOL_NAME/IMAGE_NAME
Example
[root@rbd-client ~]# rbd mirror image demote data/image1
[root@rbd-client ~]# rbd mirror image demote data/image2
- Promote the non-primary images located on the site-b cluster by running the following commands on a monitor node in the site-b cluster:
Syntax
rbd mirror image promote POOL_NAME/IMAGE_NAME
Example
[root@rbd-client ~]# rbd mirror image promote data/image1
[root@rbd-client ~]# rbd mirror image promote data/image2
- After some time, check the status of the images from a monitor node in the site-b cluster. They should show a state of up+stopped and be listed as primary:
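Example (illustrative output, assuming the data pool from these examples; the global ID and timestamps will differ):
[root@rbd-client ~]# rbd mirror image status data/image1
image1:
  global_id:   08027096-d267-47f8-b52e-59de1353a034
  state:       up+stopped
  description: local image is primary
  last_update: 2025-05-01 13:45:31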
- Resume the access to the images. This step depends on which clients use the image.
Additional Resources
- See the Block Storage and Volumes chapter in the Red Hat OpenStack Platform Storage Guide.
6.4.5. Failover after a non-orderly shutdown
Fail over to the secondary storage cluster after a non-orderly shutdown.
Prerequisites
- A minimum of two running Red Hat Ceph Storage clusters.
- Root-level access to the node.
- Pool mirroring or image mirroring configured with one-way mirroring.
Procedure
- Verify that the primary storage cluster is down.
- Stop all clients that use the primary image. This step depends on which clients use the image. For example, detach volumes from any OpenStack instances that use the image.
- Promote the non-primary images from a Ceph Monitor node in the site-b storage cluster. Use the --force option, because the demotion cannot be propagated to the site-a storage cluster:
Syntax
rbd mirror image promote --force POOL_NAME/IMAGE_NAME
Example
[root@rbd-client ~]# rbd mirror image promote --force data/image1
[root@rbd-client ~]# rbd mirror image promote --force data/image2
- Check the status of the images from a Ceph Monitor node in the site-b storage cluster. They should show a state of up+stopping_replay, and the description should say force promoted, meaning that the image is in an intermediate state. Wait until the state changes to up+stopped to validate that the site is successfully promoted:
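Example (illustrative output; the global ID and timestamps will differ):
[root@rbd-client ~]# rbd mirror image status data/image1
image1:
  global_id:   181fe852-32a1-47cd-a53d-e04e604d6994
  state:       up+stopping_replay
  description: force promoted
  last_update: 2025-05-01 13:21:03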
Additional Resources
- See the Block Storage and Volumes chapter in the Red Hat OpenStack Platform Storage Guide.
6.4.6. Prepare for fail back
If the two storage clusters were originally configured only for one-way mirroring, then to fail back, configure the primary storage cluster for mirroring so that the images are replicated in the opposite direction.
During a failback scenario, the existing peer that is inaccessible must be removed before adding a new peer to the existing cluster.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to the client node.
Procedure
- Log into the Cephadm shell:
Example
[root@rbd-client ~]# cephadm shell
- On the site-a storage cluster, run the following command:
Example
[ceph: root@rbd-client /]# ceph orch apply rbd-mirror --placement=host01
- Remove any inaccessible peers.
Important: This step must be run on the peer site that is up and running.
Note: Multiple peers are supported only for one-way mirroring.
Get the pool UUID:
Syntax
rbd mirror pool info POOL_NAME
Example
[ceph: root@host01 /]# rbd mirror pool info pool_failback
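An illustrative output, assuming the pool_failback pool above; the UUID in the Peers table is the value to pass to the remove command (UUIDs and peer names will differ):
Mode: image
Peers:
  UUID                                 NAME   CLIENT
  f055bb88-6253-4041-923d-08c4ecbe799a site-b client.site-b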
Remove the inaccessible peer:
Syntax
rbd mirror pool peer remove POOL_NAME PEER_UUID
Example
[ceph: root@host01 /]# rbd mirror pool peer remove pool_failback f055bb88-6253-4041-923d-08c4ecbe799a
- Create a block device pool with the same name as its peer mirror pool. To create an rbd pool, run the following commands:
Syntax
ceph osd pool create POOL_NAME PG_NUM
ceph osd pool application enable POOL_NAME rbd
rbd pool init -p POOL_NAME
Example
[root@rbd-client ~]# ceph osd pool create pool1
[root@rbd-client ~]# ceph osd pool application enable pool1 rbd
[root@rbd-client ~]# rbd pool init -p pool1
- On a Ceph client node, bootstrap the storage cluster peers. Create Ceph user accounts, and register the storage cluster peer to the pool:
Syntax
rbd mirror pool peer bootstrap create --site-name LOCAL_SITE_NAME POOL_NAME > PATH_TO_BOOTSTRAP_TOKEN
Example
[ceph: root@rbd-client-site-a /]# rbd mirror pool peer bootstrap create --site-name site-a data > /root/bootstrap_token_site-a
Note: This example bootstrap command creates the client.rbd-mirror.site-a and client.rbd-mirror-peer Ceph users.
- Copy the bootstrap token file to the site-b storage cluster.
- Import the bootstrap token on the site-b storage cluster:
Syntax
rbd mirror pool peer bootstrap import --site-name LOCAL_SITE_NAME --direction rx-only POOL_NAME PATH_TO_BOOTSTRAP_TOKEN
Example
[ceph: root@rbd-client-site-b /]# rbd mirror pool peer bootstrap import --site-name site-b --direction rx-only data /root/bootstrap_token_site-a
Note: For one-way RBD mirroring, you must use the --direction rx-only argument, as two-way mirroring is the default when bootstrapping peers.
- From a monitor node in the site-a storage cluster, verify that the site-b storage cluster was successfully added as a peer:
Example
[ceph: root@rbd-client /]# rbd mirror pool info -p data
Mode: image
Peers:
  UUID                                 NAME   CLIENT
  d2ae0594-a43b-4c67-a167-a36c646e8643 site-b client.site-b
Additional Resources
- For detailed information, see the User Management chapter in the Red Hat Ceph Storage Administration Guide.
6.4.6.1. Fail back to the primary storage cluster
When the formerly primary storage cluster recovers, fail back to the primary storage cluster.
If you have scheduled snapshots at the image level, then you need to re-add the schedule because an image resync operation changes the RBD image ID, which makes the previous schedule obsolete.
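A minimal sketch of re-adding an image-level schedule after a resync, assuming a hypothetical one-hour interval for the data pool images used in these examples:
rbd mirror snapshot schedule add --pool data --image image1 1h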
Prerequisites
- A minimum of two running Red Hat Ceph Storage clusters.
- Root-level access to the node.
- Pool mirroring or image mirroring configured with one-way mirroring.
Procedure
- Check the status of the images from a monitor node in the site-b cluster again. They should show a state of up+stopped and the description should say local image is primary:
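Example (illustrative output, assuming the data pool from these examples; the global ID and timestamps will differ):
[root@rbd-client ~]# rbd mirror image status data/image1
image1:
  global_id:   c7f8c2b1-31fa-4a25-a5b8-4c6f302f8a42
  state:       up+stopped
  description: local image is primary
  last_update: 2025-05-01 13:52:41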
- From a Ceph Monitor node on the site-a storage cluster, determine if the images are still primary:
Syntax
rbd info POOL_NAME/IMAGE_NAME
Example
[root@rbd-client ~]# rbd info data/image1
[root@rbd-client ~]# rbd info data/image2
In the output from the commands, look for mirroring primary: true or mirroring primary: false to determine the state.
- Demote any images that are listed as primary by running a command like the following from a Ceph Monitor node in the site-a storage cluster:
Syntax
rbd mirror image demote POOL_NAME/IMAGE_NAME
Example
[root@rbd-client ~]# rbd mirror image demote data/image1
- Resynchronize the images ONLY if there was a non-orderly shutdown. Run the following commands on a monitor node in the site-a storage cluster to resynchronize the images from site-b to site-a:
Syntax
rbd mirror image resync POOL_NAME/IMAGE_NAME
Example
[root@rbd-client ~]# rbd mirror image resync data/image1
Flagged image for resync from primary
[root@rbd-client ~]# rbd mirror image resync data/image2
Flagged image for resync from primary
- After some time, ensure resynchronization of the images is complete by verifying that they are in the up+replaying state. Check their state by running the following commands on a monitor node in the site-a storage cluster:
Syntax
rbd mirror image status POOL_NAME/IMAGE_NAME
Example
[root@rbd-client ~]# rbd mirror image status data/image1
[root@rbd-client ~]# rbd mirror image status data/image2
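An illustrative status output while the resync is replaying, assuming the examples above; the description and progress fields vary by release:
image1:
  global_id:   9a1b2c3d-4e5f-4a6b-8c7d-0e1f2a3b4c5d
  state:       up+replaying
  description: replaying
  last_update: 2025-05-01 14:05:10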
- Demote the images on the site-b storage cluster by running the following commands on a Ceph Monitor node in the site-b storage cluster:
Syntax
rbd mirror image demote POOL_NAME/IMAGE_NAME
Example
[root@rbd-client ~]# rbd mirror image demote data/image1
[root@rbd-client ~]# rbd mirror image demote data/image2
Note: If there are multiple secondary storage clusters, this only needs to be done from the secondary storage cluster where it was promoted.
- Promote the formerly primary images located on the site-a storage cluster by running the following commands on a Ceph Monitor node in the site-a storage cluster:
Syntax
rbd mirror image promote POOL_NAME/IMAGE_NAME
Example
[root@rbd-client ~]# rbd mirror image promote data/image1
[root@rbd-client ~]# rbd mirror image promote data/image2
- Check the status of the images from a Ceph Monitor node in the site-a storage cluster. They should show a status of up+stopped and the description should say local image is primary:
Syntax
rbd mirror image status POOL_NAME/IMAGE_NAME
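Example (illustrative output; the global ID and timestamps will differ):
[root@rbd-client ~]# rbd mirror image status data/image1
image1:
  global_id:   4f8d2e31-7b6a-4c0e-9d5f-2a1b3c4d5e6f
  state:       up+stopped
  description: local image is primary
  last_update: 2025-05-01 14:22:17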
6.4.7. Remove two-way mirroring
After fail back is complete, you can remove two-way mirroring and disable the Ceph block device mirroring service.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to the node.
Procedure
- Remove the site-b storage cluster as a peer from the site-a storage cluster:
Example
[root@rbd-client ~]# rbd mirror pool peer remove data client.remote@remote --cluster local
[root@rbd-client ~]# rbd --cluster site-a mirror pool peer remove data client.site-b@site-b -n client.site-a
- Stop and disable the rbd-mirror daemon on the site-a client:
Syntax
systemctl stop ceph-rbd-mirror@CLIENT_ID
systemctl disable ceph-rbd-mirror@CLIENT_ID
systemctl disable ceph-rbd-mirror.target
Example
[root@rbd-client ~]# systemctl stop ceph-rbd-mirror@site-a
[root@rbd-client ~]# systemctl disable ceph-rbd-mirror@site-a
[root@rbd-client ~]# systemctl disable ceph-rbd-mirror.target
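To confirm that the daemon is stopped, you can check its status; a minimal check, assuming the site-a client ID from the example:
[root@rbd-client ~]# systemctl status ceph-rbd-mirror@site-a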