Chapter 6. Mirroring Ceph block devices


As a storage administrator, you can add another layer of redundancy to Ceph block devices by mirroring data images between Red Hat Ceph Storage clusters. Understanding and using Ceph block device mirroring can provide protection against data loss, such as a site failure. There are two configurations for mirroring Ceph block devices, one-way mirroring and two-way mirroring, and you can configure mirroring on pools and on individual images.

Prerequisites

  • A minimum of two healthy running Red Hat Ceph Storage clusters.
  • Network connectivity between the two storage clusters.
  • Access to a Ceph client node for each Red Hat Ceph Storage cluster.
  • A CephX user with administrator-level capabilities.

6.1. Ceph block device mirroring

RADOS Block Device (RBD) mirroring is the asynchronous replication of Ceph block device images between two or more Ceph storage clusters. By locating Ceph storage clusters in different geographic locations, RBD mirroring can help you recover from a site disaster. Journal-based Ceph block device mirroring ensures point-in-time consistent replicas of all changes to an image, including reads and writes, block device resizing, snapshots, clones, and flattening.

RBD mirroring uses exclusive locks and the journaling feature to record all modifications to an image in the order in which they occur. This ensures that a crash-consistent mirror of an image is available.

Important

The CRUSH hierarchies supporting primary and secondary pools that mirror block device images must have the same capacity and performance characteristics, and must have adequate bandwidth to ensure mirroring without excess latency. For example, if you have X MB/s average write throughput to images in the primary storage cluster, the network must support N * X throughput in the network connection to the secondary site plus a safety factor of Y% to mirror N images.
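
For example, with hypothetical numbers: mirroring N = 10 images that each receive X = 10 MB/s of average writes, with a safety factor of Y = 20%, requires roughly 10 x 10 MB/s x 1.2 = 120 MB/s of sustained throughput on the link to the secondary site.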

The rbd-mirror daemon is responsible for synchronizing images from one Ceph storage cluster to another by pulling changes from the remote primary image and writing those changes to the local, non-primary image. The rbd-mirror daemon can run either on a single Ceph storage cluster, for one-way mirroring, or on both Ceph storage clusters that participate in the mirroring relationship, for two-way mirroring.

For RBD mirroring to work, either using one-way or two-way replication, a couple of assumptions are made:

  • A pool with the same name exists on both storage clusters.
  • A pool contains journal-enabled images you want to mirror.
Important

In one-way or two-way replication, each instance of rbd-mirror must be able to connect to the other Ceph storage cluster simultaneously. Additionally, the network must have sufficient bandwidth between the two data center sites to handle mirroring.

One-way Replication

One-way mirroring implies that a primary image or pool of images in one storage cluster gets replicated to a secondary storage cluster. One-way mirroring also supports replicating to multiple secondary storage clusters.

On the secondary storage cluster, the image is the non-primary replica; that is, Ceph clients cannot write to the image. When data is mirrored from a primary storage cluster to a secondary storage cluster, the rbd-mirror daemon runs ONLY on the secondary storage cluster.

For one-way mirroring to work, a couple of assumptions are made:

  • You have two Ceph storage clusters and you want to replicate images from a primary storage cluster to a secondary storage cluster.
  • The secondary storage cluster has a Ceph client node attached to it running the rbd-mirror daemon. The rbd-mirror daemon will connect to the primary storage cluster to sync images to the secondary storage cluster.

Figure 6.1. One-way mirroring


Two-way Replication

Two-way replication adds an rbd-mirror daemon on the primary cluster so images can be demoted on it and promoted on the secondary cluster. Changes can then be made to the images on the secondary cluster and they will be replicated in the reverse direction, from secondary to primary. Both clusters must have rbd-mirror running to allow promoting and demoting images on either cluster. Currently, two-way replication is only supported between two sites.

For two-way mirroring to work, a couple of assumptions are made:

  • You have two storage clusters and you want to be able to replicate images between them in either direction.
  • Both storage clusters have a client node attached to them running the rbd-mirror daemon. The rbd-mirror daemon running on the secondary storage cluster will connect to the primary storage cluster to synchronize images to secondary, and the rbd-mirror daemon running on the primary storage cluster will connect to the secondary storage cluster to synchronize images to primary.

Figure 6.2. Two-way mirroring


Mirroring Modes

Mirroring is configured on a per-pool basis between peer storage clusters. Ceph supports two mirroring modes, depending on the type of images in the pool.

Pool Mode
All images in a pool with the journaling feature enabled are mirrored.
Image Mode
Only a specific subset of images within a pool are mirrored. You must enable mirroring for each image separately.
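
As a brief illustration, using the example pool named data that appears in the procedures later in this chapter, both modes are selected with the rbd mirror pool enable command, and image mode additionally requires enabling mirroring per image:

    Pool mode:

    [root@rbd-client ~]# rbd mirror pool enable data pool

    Image mode, followed by enabling mirroring on a specific image:

    [root@rbd-client ~]# rbd mirror pool enable data image
    [root@rbd-client ~]# rbd mirror image enable data/image1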

Image States

Whether or not an image can be modified depends on its state:

  • Images in the primary state can be modified.
  • Images in the non-primary state cannot be modified.

Images are automatically promoted to primary when mirroring is first enabled on an image. The promotion can happen:

  • Implicitly by enabling mirroring in pool mode.
  • Explicitly by enabling mirroring of a specific image.

It is possible to demote primary images and promote non-primary images.
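
For example, a brief sketch of moving the primary role for a single image, using the pool and image names from the procedures later in this chapter; run the demotion on the cluster where the image is currently primary and the promotion on the peer cluster:

    [root@rbd-client ~]# rbd mirror image demote data/image1
    [root@rbd-client ~]# rbd mirror image promote data/image1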


6.1.1. An overview of journal-based and snapshot-based mirroring

RADOS Block Device (RBD) images can be asynchronously mirrored between two Red Hat Ceph Storage clusters through two modes:

Journal-based mirroring

This mode uses the RBD journaling image feature to ensure point-in-time and crash consistent replication between two Red Hat Ceph Storage clusters. The actual image is not modified until every write to the RBD image is first recorded to the associated journal. The remote cluster reads from this journal and replays the updates to its local copy of the image. Because each write to the RBD images results in two writes to the Ceph cluster, write latencies nearly double with the usage of the RBD journaling image feature.
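
To confirm that an image carries the features that journal-based mirroring relies on, you can inspect it with the rbd info command and check that the features line lists exclusive-lock and journaling. This is a quick check using the example pool and image names from later in this chapter:

    [root@rbd-client ~]# rbd info data/image1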

Snapshot-based mirroring

This mode uses periodically scheduled or manually created RBD image mirror-snapshots to replicate crash-consistent RBD images between two Red Hat Ceph Storage clusters. The remote cluster determines any data or metadata updates between two mirror-snapshots and copies the deltas to its local copy of the image. The RBD fast-diff image feature enables quick determination of updated data blocks without the need to scan the full RBD image. The complete delta between two snapshots must be synchronized before it can be used during a failover scenario. Any partially applied set of deltas is rolled back at the moment of failover.
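
As a brief sketch, snapshot-based mirroring is enabled per image and mirror-snapshots are then created manually or on a schedule; the full procedures appear later in this chapter:

    [root@rbd-client ~]# rbd mirror image enable data/image1 snapshot
    [root@rbd-client ~]# rbd mirror image snapshot data/image1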

6.2. Configuring one-way mirroring using the command-line interface

This procedure configures one-way replication of a pool from the primary storage cluster to a secondary storage cluster.

Note

When using one-way replication you can mirror to multiple secondary storage clusters.

Note

Examples in this section distinguish between the two storage clusters by referring to the primary storage cluster, which holds the primary images, as site-a, and to the secondary storage cluster, to which the images are replicated, as site-b. The pool name used in these examples is data.

Prerequisites

  • A minimum of two healthy and running Red Hat Ceph Storage clusters.
  • Root-level access to a Ceph client node for each storage cluster.
  • A CephX user with administrator-level capabilities.

Procedure

  1. Log into the cephadm shell on both sites:

    Example

    [root@site-a ~]# cephadm shell
    [root@site-b ~]# cephadm shell

  2. On site-b, schedule the deployment of the rbd-mirror daemon on the secondary cluster:

    Syntax

    ceph orch apply rbd-mirror --placement=NODENAME

    Example

    [ceph: root@site-b /]# ceph orch apply rbd-mirror --placement=host04

    Note

    NODENAME is the host where you want to configure mirroring in the secondary cluster.

  3. Enable journaling features on an image on site-a.

    1. For new images, use the --image-feature option:

      Syntax

      rbd create IMAGE_NAME --size MEGABYTES --pool POOL_NAME --image-feature FEATURE,FEATURE

      Example

      [ceph: root@site-a /]# rbd create image1 --size 1024 --pool data --image-feature exclusive-lock,journaling

      Note

      If exclusive-lock is already enabled, pass journaling as the only feature argument; otherwise, the command returns the following error:

      one or more requested features are already enabled
      (22) Invalid argument
    2. For existing images, use the rbd feature enable command:

      Syntax

      rbd feature enable POOL_NAME/IMAGE_NAME FEATURE FEATURE

      Example

      [ceph: root@site-a /]# rbd feature enable data/image1 exclusive-lock journaling

    3. To enable journaling on all new images by default, set the configuration parameter using the ceph config set command. The value 125 enables the layering (1), exclusive-lock (4), object-map (8), fast-diff (16), deep-flatten (32), and journaling (64) features:

      Example

      [ceph: root@site-a /]# ceph config set global rbd_default_features 125
      [ceph: root@site-a /]# ceph config show mon.host01 rbd_default_features

  4. Choose the mirroring mode, either pool or image mode, on both the storage clusters.

    1. Enabling pool mode:

      Syntax

      rbd mirror pool enable POOL_NAME MODE

      Example

      [ceph: root@site-a /]# rbd mirror pool enable data pool
      [ceph: root@site-b /]# rbd mirror pool enable data pool

      This example enables mirroring of the whole pool named data.

    2. Enabling image mode:

      Syntax

      rbd mirror pool enable POOL_NAME MODE

      Example

      [ceph: root@site-a /]# rbd mirror pool enable data image
      [ceph: root@site-b /]# rbd mirror pool enable data image

      This example enables image mode mirroring on the pool named data.

      Note

      To enable mirroring on specific images in a pool, see the Enabling image mirroring section in the Red Hat Ceph Storage Block Device Guide for more details.

    3. Verify that mirroring has been successfully enabled on both sites:

      Syntax

      rbd mirror pool info POOL_NAME

      Example

      [ceph: root@site-a /]# rbd mirror pool info data
      Mode: pool
      Site Name: c13d8065-b33d-4cb5-b35f-127a02768e7f
      
      Peer Sites: none
      
      [ceph: root@site-b /]# rbd mirror pool info data
      Mode: pool
      Site Name: a4c667e2-b635-47ad-b462-6faeeee78df7
      
      Peer Sites: none

  5. On a Ceph client node, bootstrap the storage cluster peers.

    1. Create Ceph user accounts, and register the storage cluster peer to the pool:

      Syntax

      rbd mirror pool peer bootstrap create --site-name PRIMARY_LOCAL_SITE_NAME POOL_NAME > PATH_TO_BOOTSTRAP_TOKEN

      Example

      [ceph: root@rbd-client-site-a /]# rbd mirror pool peer bootstrap create --site-name site-a data > /root/bootstrap_token_site-a

      Note

      This example bootstrap command creates the client.rbd-mirror.site-a and the client.rbd-mirror-peer Ceph users.

    2. Copy the bootstrap token file to the site-b storage cluster.
    3. Import the bootstrap token on the site-b storage cluster:

      Syntax

      rbd mirror pool peer bootstrap import --site-name SECONDARY_LOCAL_SITE_NAME --direction rx-only POOL_NAME PATH_TO_BOOTSTRAP_TOKEN

      Example

      [ceph: root@rbd-client-site-b /]# rbd mirror pool peer bootstrap import --site-name site-b --direction rx-only data /root/bootstrap_token_site-a

      Note

      For one-way RBD mirroring, you must use the --direction rx-only argument, as two-way mirroring is the default when bootstrapping peers.

  6. To verify the mirroring status, run the following command from a Ceph Monitor node on the primary and secondary sites:

    Syntax

    rbd mirror image status POOL_NAME/IMAGE_NAME

    Example

    [ceph: root@mon-site-a /]# rbd mirror image status data/image1
    image1:
      global_id:   c13d8065-b33d-4cb5-b35f-127a02768e7f
      state:       up+stopped
      description: remote image is non-primary
      service:     host03.yuoosv on host03
      last_update: 2021-10-06 09:13:58

    Here, up means the rbd-mirror daemon is running, and stopped means this image is not the target for replication from another storage cluster. This is because the image is primary on this storage cluster.

    Example

    [ceph: root@mon-site-b /]# rbd mirror image status data/image1
    image1:
      global_id:   c13d8065-b33d-4cb5-b35f-127a02768e7f

    On the secondary site, the image state should be up+replaying, where up means the rbd-mirror daemon is running and replaying means this image is the target for replication from the primary storage cluster.

Additional Resources

  • See the Ceph block device mirroring section in the Red Hat Ceph Storage Block Device Guide for more details.
  • See the User Management section in the Red Hat Ceph Storage Administration Guide for more details on Ceph users.

6.3. Configuring two-way mirroring using the command-line interface

This procedure configures two-way replication of a pool between the primary storage cluster, and a secondary storage cluster.

Note

When using two-way replication you can only mirror between two storage clusters.

Note

Examples in this section distinguish between the two storage clusters by referring to the primary storage cluster, which holds the primary images, as site-a, and to the secondary storage cluster, to which the images are replicated, as site-b. The pool name used in these examples is data.

Prerequisites

  • A minimum of two healthy and running Red Hat Ceph Storage clusters.
  • Root-level access to a Ceph client node for each storage cluster.
  • A CephX user with administrator-level capabilities.

Procedure

  1. Log into the cephadm shell on both sites:

    Example

    [root@site-a ~]# cephadm shell
    [root@site-b ~]# cephadm shell

  2. On site-a, schedule the deployment of the rbd-mirror daemon on the primary cluster:

    Example

    [ceph: root@site-a /]# ceph orch apply rbd-mirror --placement=host01

    Note

    NODENAME is the host where you want to configure mirroring in the primary cluster.

  3. On site-b, schedule the deployment of the rbd-mirror daemon on the secondary cluster:

    Syntax

    ceph orch apply rbd-mirror --placement=NODENAME

    Example

    [ceph: root@site-b /]# ceph orch apply rbd-mirror --placement=host04

    Note

    NODENAME is the host where you want to configure mirroring in the secondary cluster.

  4. Enable journaling features on an image on site-a.

    1. For new images, use the --image-feature option:

      Syntax

      rbd create IMAGE_NAME --size MEGABYTES --pool POOL_NAME --image-feature FEATURE,FEATURE

      Example

      [ceph: root@site-a /]# rbd create image1 --size 1024 --pool data --image-feature exclusive-lock,journaling

      Note

      If exclusive-lock is already enabled, pass journaling as the only feature argument; otherwise, the command returns the following error:

      one or more requested features are already enabled
      (22) Invalid argument
    2. For existing images, use the rbd feature enable command:

      Syntax

      rbd feature enable POOL_NAME/IMAGE_NAME FEATURE FEATURE

      Example

      [ceph: root@site-a /]# rbd feature enable data/image1 exclusive-lock journaling

    3. To enable journaling on all new images by default, set the configuration parameter using the ceph config set command. The value 125 enables the layering (1), exclusive-lock (4), object-map (8), fast-diff (16), deep-flatten (32), and journaling (64) features:

      Example

      [ceph: root@site-a /]# ceph config set global rbd_default_features 125
      [ceph: root@site-a /]# ceph config show mon.host01 rbd_default_features

  5. Choose the mirroring mode, either pool or image mode, on both the storage clusters.

    1. Enabling pool mode:

      Syntax

      rbd mirror pool enable POOL_NAME MODE

      Example

      [ceph: root@site-a /]# rbd mirror pool enable data pool
      [ceph: root@site-b /]# rbd mirror pool enable data pool

      This example enables mirroring of the whole pool named data.

    2. Enabling image mode:

      Syntax

      rbd mirror pool enable POOL_NAME MODE

      Example

      [ceph: root@site-a /]# rbd mirror pool enable data image
      [ceph: root@site-b /]# rbd mirror pool enable data image

      This example enables image mode mirroring on the pool named data.

      Note

      To enable mirroring on specific images in a pool, see the Enabling image mirroring section in the Red Hat Ceph Storage Block Device Guide for more details.

    3. Verify that mirroring has been successfully enabled on both sites:

      Syntax

      rbd mirror pool info POOL_NAME

      Example

      [ceph: root@site-a /]# rbd mirror pool info data
      Mode: pool
      Site Name: c13d8065-b33d-4cb5-b35f-127a02768e7f
      
      Peer Sites: none
      
      [ceph: root@site-b /]# rbd mirror pool info data
      Mode: pool
      Site Name: a4c667e2-b635-47ad-b462-6faeeee78df7
      
      Peer Sites: none

  6. On a Ceph client node, bootstrap the storage cluster peers.

    1. Create Ceph user accounts, and register the storage cluster peer to the pool:

      Syntax

      rbd mirror pool peer bootstrap create --site-name PRIMARY_LOCAL_SITE_NAME POOL_NAME > PATH_TO_BOOTSTRAP_TOKEN

      Example

      [ceph: root@rbd-client-site-a /]# rbd mirror pool peer bootstrap create --site-name site-a data > /root/bootstrap_token_site-a

      Note

      This example bootstrap command creates the client.rbd-mirror.site-a and the client.rbd-mirror-peer Ceph users.

    2. Copy the bootstrap token file to the site-b storage cluster.
    3. Import the bootstrap token on the site-b storage cluster:

      Syntax

      rbd mirror pool peer bootstrap import --site-name SECONDARY_LOCAL_SITE_NAME --direction rx-tx POOL_NAME PATH_TO_BOOTSTRAP_TOKEN

      Example

      [ceph: root@rbd-client-site-b /]# rbd mirror pool peer bootstrap import --site-name site-b --direction rx-tx data /root/bootstrap_token_site-a

      Note

      The --direction argument is optional, as two-way mirroring is the default when bootstrapping peers.

  7. To verify the mirroring status, run the following command from a Ceph Monitor node on the primary and secondary sites:

    Syntax

    rbd mirror image status POOL_NAME/IMAGE_NAME

    Example

    [ceph: root@mon-site-a /]# rbd mirror image status data/image1
    image1:
      global_id:   a4c667e2-b635-47ad-b462-6faeeee78df7
      state:       up+stopped
      description: local image is primary
      service:     host03.glsdbv on host03.ceph.redhat.com
      last_update: 2021-09-16 10:55:58
      peer_sites:
        name: a
        state: up+stopped
        description: replaying, {"bytes_per_second":0.0,"entries_behind_primary":0,"entries_per_second":0.0,"non_primary_position":{"entry_tid":3,"object_number":3,"tag_tid":1},"primary_position":{"entry_tid":3,"object_number":3,"tag_tid":1}}
        last_update: 2021-09-16 10:55:50

    Here, up means the rbd-mirror daemon is running, and stopped means this image is not the target for replication from another storage cluster. This is because the image is primary on this storage cluster.

    Example

    [ceph: root@mon-site-b /]# rbd mirror image status data/image1
    image1:
      global_id:   a4c667e2-b635-47ad-b462-6faeeee78df7
      state:       up+replaying
      description: replaying, {"bytes_per_second":0.0,"entries_behind_primary":0,"entries_per_second":0.0,"non_primary_position":{"entry_tid":3,"object_number":3,"tag_tid":1},"primary_position":{"entry_tid":3,"object_number":3,"tag_tid":1}}
      service:     host05.dtisty on host05
      last_update: 2021-09-16 10:57:20
      peer_sites:
        name: b
        state: up+stopped
        description: local image is primary
        last_update: 2021-09-16 10:57:28

    If images are in the state up+replaying, then mirroring is functioning properly. Here, up means the rbd-mirror daemon is running, and replaying means this image is the target for replication from another storage cluster.

    Note

    Depending on the connection between the sites, mirroring can take a long time to sync the images.

Additional Resources

  • See the Ceph block device mirroring section in the Red Hat Ceph Storage Block Device Guide for more details.
  • See the User Management section in the Red Hat Ceph Storage Administration Guide for more details on Ceph users.

6.4. Administration for mirroring Ceph block devices

As a storage administrator, you can do various tasks to help you manage the Ceph block device mirroring environment. You can do the following tasks:

  • Viewing information about storage cluster peers.
  • Add or remove a storage cluster peer.
  • Getting mirroring status for a pool or image.
  • Enabling mirroring on a pool or image.
  • Disabling mirroring on a pool or image.
  • Delaying block device replication.
  • Promoting and demoting an image.

Prerequisites

  • A minimum of two healthy running Red Hat Ceph Storage clusters.
  • Root-level access to the Ceph client nodes.
  • A one-way or two-way Ceph block device mirroring relationship.
  • A CephX user with administrator-level capabilities.

6.4.1. Viewing information about peers

View information about storage cluster peers.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the node.

Procedure

  1. To view information about the peers:

    Syntax

    rbd mirror pool info POOL_NAME

    Example

    [root@rbd-client ~]# rbd mirror pool info data
    Mode: pool
    Site Name: a
    
    Peer Sites:
    
    UUID: 950ddadf-f995-47b7-9416-b9bb233f66e3
    Name: b
    Mirror UUID: 4696cd9d-1466-4f98-a97a-3748b6b722b3
    Direction: rx-tx
    Client: client.rbd-mirror-peer

6.4.2. Enabling mirroring on a pool

Enable mirroring on a pool by running the following commands on both peer clusters.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the node.

Procedure

  1. To enable mirroring on a pool:

    Syntax

    rbd mirror pool enable POOL_NAME MODE

    Example

    [root@rbd-client ~]# rbd mirror pool enable data pool

    This example enables mirroring of the whole pool named data.

    Example

    [root@rbd-client ~]# rbd mirror pool enable data image

    This example enables image mode mirroring on the pool named data.


6.4.3. Disabling mirroring on a pool

Before disabling mirroring, remove the peer clusters.

Note

When you disable mirroring on a pool, you also disable it on any images within the pool for which mirroring was enabled separately in image mode.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the node.

Procedure

  1. To disable mirroring on a pool:

    Syntax

    rbd mirror pool disable POOL_NAME

    Example

    [root@rbd-client ~]# rbd mirror pool disable data

    This example disables mirroring of a pool named data.

6.4.4. Enabling namespace mirroring

You can configure mirroring on a namespace in a pool. The pool must already be enabled for mirroring. The namespace is mirrored to a namespace with the same name or a different name in the remote pool on the remote (secondary) cluster.

You can mirror a namespace to a namespace with a different name in the remote pool by using the --remote-namespace option. By default, a namespace is mirrored to a namespace with the same name in the remote pool.

Prerequisites

  • Root-level access to the node.
  • Two running Red Hat Ceph Storage clusters.
  • The rbd-mirror daemon service is enabled on both clusters.
  • Enable mirroring for the pool in which the namespace is added.

Procedure

  • Run the following command on both clusters where you want to enable mirroring on a namespace.

    Syntax

    rbd mirror pool enable POOL_NAME/LOCAL_NAMESPACE_NAME MODE --remote-namespace REMOTE_NAMESPACE_NAME

    Note

    The --remote-namespace parameter is optional.

    The mirroring mode can either be image or pool:

  • Image mode: When configured in image mode, mirroring must be explicitly enabled on each image.
  • Pool mode (default): When configured in pool mode, all images in the namespace with the journaling feature enabled are mirrored.

    Example

    Local cluster:
    [root@rbd-client ~]# rbd mirror pool enable image-pool/namespace-a image --remote-namespace namespace-b

    Remote cluster:
    [root@rbd-client ~]# rbd mirror pool enable image-pool/namespace-b image --remote-namespace namespace-a

    This example enables image mode mirroring between image-pool/namespace-a on the first cluster and image-pool/namespace-b on the second cluster.

The namespace and remote namespace on the first cluster must match the remote namespace and namespace respectively on the remote cluster.

Note

If the --remote-namespace option is not provided, the namespace is mirrored to a namespace with the same name in the remote pool.

6.4.5. Disabling namespace mirroring

You can disable Ceph Block Device mirroring on namespaces.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the node.
  • If the namespace is configured in image mode, any mirror-enabled images in the namespace must be explicitly disabled before you disable mirroring on the namespace.

Procedure

  • Run the following command on both clusters where you want to disable mirroring on a namespace.

    Syntax

    rbd mirror pool disable POOL_NAME/NAMESPACE_NAME

    Example

    [root@rbd-client ~]# rbd mirror pool disable image-pool/namespace-a
    [root@rbd-client ~]# rbd mirror pool disable image-pool/namespace-b

To enable a namespace with a different remote namespace, the namespace and the corresponding remote namespace on both clusters must be disabled for mirroring before they can be re-enabled.

6.4.6. Enabling image mirroring

Before enabling mirroring for specific images, enable mirroring in image mode on the whole pool on both peer storage clusters.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the node.

Procedure

  1. Enable mirroring for a specific image within the pool:

    Syntax

    rbd mirror image enable POOL_NAME/IMAGE_NAME

    Example

    [root@rbd-client ~]# rbd mirror image enable data/image2

    This example enables mirroring for the image2 image in the data pool.


6.4.7. Disabling image mirroring

You can disable Ceph Block Device mirroring on images.

Prerequisites

  • A running Red Hat Ceph Storage cluster with snapshot-based mirroring configured.
  • Root-level access to the node.

Procedure

  1. To disable mirroring for a specific image:

    Syntax

    rbd mirror image disable POOL_NAME/IMAGE_NAME

    Example

    [root@rbd-client ~]# rbd mirror image disable data/image2

    This example disables mirroring of the image2 image in the data pool.


6.4.8. Image promotion and demotion

You can promote or demote an image in a pool.

Note

Do not force promote non-primary images that are still syncing, because the images will not be valid after the promotion.

Prerequisites

  • A running Red Hat Ceph Storage cluster with snapshot-based mirroring configured.
  • Root-level access to the node.

Procedure

  1. To demote an image to non-primary:

    Syntax

    rbd mirror image demote POOL_NAME/IMAGE_NAME

    Example

    [root@rbd-client ~]# rbd mirror image demote data/image2

    This example demotes the image2 image in the data pool.

  2. To promote an image to primary:

    Syntax

    rbd mirror image promote POOL_NAME/IMAGE_NAME

    Example

    [root@rbd-client ~]# rbd mirror image promote data/image2

    This example promotes image2 in the data pool.

    Depending on which type of mirroring you are using, see either Recover from a disaster with one-way mirroring or Recover from a disaster with two-way mirroring for details.

    Syntax

    rbd mirror image promote --force POOL_NAME/IMAGE_NAME

    Example

    [root@rbd-client ~]# rbd mirror image promote --force data/image2

    Use forced promotion when the demotion cannot be propagated to the peer Ceph storage cluster, for example, because of a cluster failure or a communication outage.


6.4.9. Image resynchronization

You can re-synchronize an image. In case of an inconsistent state between the two peer clusters, the rbd-mirror daemon does not attempt to mirror the image that is causing the inconsistency.

Prerequisites

  • A running Red Hat Ceph Storage cluster with snapshot-based mirroring configured.
  • Root-level access to the node.

Procedure

  1. To request a re-synchronization to the primary image:

    Syntax

    rbd mirror image resync POOL_NAME/IMAGE_NAME

    Example

    [root@rbd-client ~]# rbd mirror image resync data/image2

    This example requests resynchronization of image2 in the data pool.


6.4.10. Adding a storage cluster peer

Add a storage cluster peer for the rbd-mirror daemon to discover its peer storage cluster. For example, to add the site-a storage cluster as a peer to the site-b storage cluster, follow this procedure from the client node in the site-b storage cluster.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the node.

Procedure

  1. Register the peer to the pool:

    Syntax

    rbd --cluster CLUSTER_NAME mirror pool peer add POOL_NAME PEER_CLIENT_NAME@PEER_CLUSTER_NAME -n CLIENT_NAME

    Example

    [root@rbd-client ~]# rbd --cluster site-b mirror pool peer add data client.site-a@site-a -n client.site-b

6.4.11. Removing a storage cluster peer

Remove a storage cluster peer by specifying the peer UUID.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the node.

Procedure

  1. Specify the pool name and the peer Universally Unique Identifier (UUID).

    Syntax

    rbd mirror pool peer remove POOL_NAME PEER_UUID

    Example

    [root@rbd-client ~]# rbd mirror pool peer remove data 7e90b4ce-e36d-4f07-8cbc-42050896825d

    Tip

    To view the peer UUID, use the rbd mirror pool info command.

6.4.12. Getting mirroring status for a pool

You can get the mirror status for a pool on the storage clusters.

Prerequisites

  • A running Red Hat Ceph Storage cluster with snapshot-based mirroring configured.
  • Root-level access to the node.

Procedure

  1. To get the mirroring pool summary:

    Syntax

    rbd mirror pool status POOL_NAME

    Example

    [root@site-a ~]# rbd mirror pool status data
    health: OK
    daemon health: OK
    image health: OK
    images: 1 total
        1 replaying

    Tip

    To output status details for every mirroring image in a pool, use the --verbose option.
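
    For example, to include per-image status details in the output:

    [root@site-a ~]# rbd mirror pool status data --verbose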

6.4.13. Getting mirroring status for a single image

You can get the mirror status for an image by running the mirror image status command.

Prerequisites

  • A running Red Hat Ceph Storage cluster with snapshot-based mirroring configured.
  • Root-level access to the node.

Procedure

  1. To get the status of a mirrored image:

    Syntax

    rbd mirror image status POOL_NAME/IMAGE_NAME

    Example

    [root@site-a ~]# rbd mirror image status data/image2
    image2:
      global_id:   1e3422a2-433e-4316-9e43-1827f8dbe0ef
      state:       up+unknown
      description: remote image is non-primary
      service:     pluto008.yuoosv on pluto008
      last_update: 2021-10-06 09:37:58

    This example gets the status of the image2 image in the data pool.

6.4.14. Delaying block device replication

Whether you are using one- or two-way replication, you can delay replication between RADOS Block Device (RBD) mirroring images. You might want to implement delayed replication if you want a window of cushion time in case an unwanted change to the primary image needs to be reverted before being replicated to the secondary image.

Note

Delaying block device replication is only applicable with journal-based mirroring.

To implement delayed replication, the rbd-mirror daemon within the destination storage cluster should set the rbd_mirroring_replay_delay = MINIMUM_DELAY_IN_SECONDS configuration option. This setting can either be applied globally within the ceph.conf file utilized by the rbd-mirror daemons, or on an individual image basis.
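
For example, a minimal sketch of setting the delay globally in the ceph.conf file that the rbd-mirror daemons on the destination storage cluster read; the value is in seconds, and the exact section heading depends on the client ID that your rbd-mirror daemon runs as:

    [client]
    rbd_mirroring_replay_delay = 600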

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the node.

Procedure

  1. To utilize delayed replication for a specific image, on the primary image, run the following rbd CLI command:

    Syntax

    rbd image-meta set POOL_NAME/IMAGE_NAME conf_rbd_mirroring_replay_delay MINIMUM_DELAY_IN_SECONDS

    Example

    [root@rbd-client ~]# rbd image-meta set vms/vm-1 conf_rbd_mirroring_replay_delay 600

    This example sets a 10 minute minimum replication delay on image vm-1 in the vms pool.

6.4.15. Converting journal-based mirroring to snapshot-based mirroring

You can convert journal-based mirroring to snapshot-based mirroring by disabling mirroring on an image and then re-enabling it in snapshot mode.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the node.

Procedure

  1. Log into the Cephadm shell:

    Example

    [root@rbd-client ~]# cephadm shell

  2. Disable mirroring for a specific image within the pool:

    Syntax

    rbd mirror image disable POOL_NAME/IMAGE_NAME

    Example

    [ceph: root@rbd-client /]# rbd mirror image disable mirror_pool/mirror_image
    Mirroring disabled

  3. Enable snapshot-based mirroring for the image:

    Syntax

    rbd mirror image enable POOL_NAME/IMAGE_NAME snapshot

    Example

    [ceph: root@rbd-client /]# rbd mirror image enable mirror_pool/mirror_image snapshot
    Mirroring enabled

    This example enables snapshot-based mirroring for the mirror_image image in the mirror_pool pool.

6.4.16. Creating an image mirror-snapshot

When using snapshot-based mirroring, create an image mirror-snapshot whenever the changed contents of an RBD image need to be mirrored.

Prerequisites

  • A minimum of two healthy running Red Hat Ceph Storage clusters.
  • Root-level access to the Ceph client nodes for the Red Hat Ceph Storage clusters.
  • A CephX user with administrator-level capabilities.
  • Access to the Red Hat Ceph Storage cluster where a snapshot mirror will be created.
Important

By default, a maximum of 5 image mirror-snapshots are retained. If the limit is reached, the oldest image mirror-snapshot is automatically removed. If required, the limit can be overridden through the rbd_mirroring_max_mirroring_snapshots configuration option. Image mirror-snapshots are automatically deleted when the image is removed or when mirroring is disabled.
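
For example, a sketch of raising the retention limit through the centralized configuration database, assuming the option is set on the client side where librbd and the rbd-mirror daemon read it:

    [ceph: root@site-a /]# ceph config set client rbd_mirroring_max_mirroring_snapshots 10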

Procedure

  • To create an image mirror-snapshot:

    Syntax

    rbd --cluster CLUSTER_NAME mirror image snapshot POOL_NAME/IMAGE_NAME

    Example

    [root@site-a ~]# rbd mirror image snapshot data/image1


6.4.17. Scheduling mirror-snapshots

Mirror-snapshots can be created automatically when mirror-snapshot schedules are defined. A mirror-snapshot can be scheduled at the global, per-pool, or per-image level. Multiple mirror-snapshot schedules can be defined at any level, but only the most specific snapshot schedules that match an individual mirrored image run.
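
For example, a brief sketch of schedules at the global and pool levels, using the example pool named data; per-image schedules are covered in the following section:

    [root@site-a ~]# rbd mirror snapshot schedule add 24h
    [root@site-a ~]# rbd mirror snapshot schedule add --pool data 12h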

6.4.17.1. Creating a mirror-snapshot schedule

You can create a mirror-snapshot schedule using the snapshot schedule command.

Prerequisites

  • A minimum of two healthy running Red Hat Ceph Storage clusters.
  • Root-level access to the Ceph client nodes for the Red Hat Ceph Storage clusters.
  • A CephX user with administrator-level capabilities.
  • Access to the Red Hat Ceph Storage cluster where the mirror image needs to be scheduled.

Procedure

  1. To create a mirror-snapshot schedule:

    Syntax

    rbd --cluster CLUSTER_NAME mirror snapshot schedule add --pool POOL_NAME --image IMAGE_NAME INTERVAL [START_TIME]

    Specify CLUSTER_NAME only when the cluster name is different from the default name ceph. The interval can be specified in days, hours, or minutes using the d, h, or m suffix respectively. The optional START_TIME can be specified using the ISO 8601 time format.

    Example

    [root@site-a ~]# rbd mirror snapshot schedule add --pool data --image image1 6h

    Example

    [root@site-a ~]# rbd mirror snapshot schedule add --pool data --image image1 24h 14:00:00-05:00


6.4.17.2. Listing all snapshot schedules at a specific level

You can list all snapshot schedules at a specific level.

Prerequisites

  • A minimum of two healthy running Red Hat Ceph Storage clusters.
  • Root-level access to the Ceph client nodes for the Red Hat Ceph Storage clusters.
  • A CephX user with administrator-level capabilities.
  • Access to the Red Hat Ceph Storage cluster where the mirror image needs to be scheduled.

Procedure

  1. To list all snapshot schedules at a specific level (global, pool, or image), optionally specifying a pool or image name:

    Syntax

    rbd --cluster CLUSTER_NAME mirror snapshot schedule ls --pool POOL_NAME --recursive

    Specify the --recursive option to list all schedules at the specified level and below, as shown in the following example:

    Example

    [root@rbd-client ~]# rbd mirror snapshot schedule ls --pool data --recursive
    POOL  NAMESPACE  IMAGE   SCHEDULE
    data  -          -       every 1d starting at 14:00:00-05:00
    data  -          image1  every 6h


6.4.17.3. Removing a mirror-snapshot schedule

You can remove a mirror-snapshot schedule using the snapshot schedule remove command.

Prerequisites

  • A minimum of two healthy running Red Hat Ceph Storage clusters.
  • Root-level access to the Ceph client nodes for the Red Hat Ceph Storage clusters.
  • A CephX user with administrator-level capabilities.
  • Access to the Red Hat Ceph Storage cluster where the mirror image needs to be scheduled.

Procedure

  1. To remove a mirror-snapshot schedule:

    Syntax

    rbd --cluster CLUSTER_NAME mirror snapshot schedule remove --pool POOL_NAME --image IMAGE_NAME INTERVAL START_TIME

    The interval can be specified in days, hours, or minutes using the d, h, or m suffix respectively. The optional START_TIME can be specified using the ISO 8601 time format.

    Example

    [root@site-a ~]# rbd mirror snapshot schedule remove --pool data --image image1 6h

    Example

    [root@site-a ~]# rbd mirror snapshot schedule remove --pool data --image image1 24h 14:00:00-05:00


6.4.17.4. Viewing the status for the next snapshots to be created

You can view the status for the next snapshots to be created for snapshot-based mirroring RBD images.

Prerequisites

  • A minimum of two healthy running Red Hat Ceph Storage clusters.
  • Root-level access to the Ceph client nodes for the Red Hat Ceph Storage clusters.
  • A CephX user with administrator-level capabilities.
  • Access to the Red Hat Ceph Storage cluster where the mirror image needs to be scheduled.

Procedure

  1. To view the status for the next snapshots to be created:

    Syntax

    rbd --cluster CLUSTER_NAME mirror snapshot schedule status [--pool POOL_NAME] [--image IMAGE_NAME]

    Example

    [root@rbd-client ~]# rbd mirror snapshot schedule status
    SCHEDULE TIME        IMAGE
    2021-09-21 18:00:00  data/image1


6.5. Recover from a disaster

As a storage administrator, you can be prepared for eventual hardware failure by knowing how to recover the data from another storage cluster where mirroring was configured.

In the examples, the primary storage cluster is referred to as site-a, and the secondary storage cluster is referred to as site-b. Additionally, both storage clusters have a data pool with two images, image1 and image2.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • One-way or two-way mirroring was configured.

6.5.1. Disaster recovery

Asynchronous replication of block data between two or more Red Hat Ceph Storage clusters reduces downtime and prevents data loss in the event of a significant data center failure. These failures have a widespread impact, also referred to as a large blast radius, and can be caused by impacts to the power grid and natural disasters.

Customer data needs to be protected during these scenarios. Volumes must be replicated with consistency and efficiency and also within Recovery Point Objective (RPO) and Recovery Time Objective (RTO) targets. This solution is called Wide Area Network Disaster Recovery (WAN-DR).

In such scenarios it is hard to restore the primary system and the data center. The quickest way to recover is to fail over the applications to an alternate Red Hat Ceph Storage cluster (disaster recovery site) and make the cluster operational with the latest copy of the data available. The solutions that are used to recover from these failure scenarios are guided by the application:

  • Recovery Point Objective (RPO): The amount of data loss an application can tolerate in the worst case.
  • Recovery Time Objective (RTO): The time taken to get the application back online with the latest copy of the data available.

Additional Resources

  • See the Mirroring Ceph block devices Chapter in the Red Hat Ceph Storage Block Device Guide for details.
  • See the Encryption in transit section in the Red Hat Ceph Storage Data Security and Hardening Guide to know more about data transmission over the wire in an encrypted state.

6.5.2. Recover from a disaster with one-way mirroring

To recover from a disaster when using one-way mirroring use the following procedures. They show how to fail over to the secondary cluster after the primary cluster terminates, and how to fail back. The shutdown can be orderly or non-orderly.

Important

One-way mirroring supports multiple secondary sites. If you are using additional secondary clusters, choose one of the secondary clusters to fail over to. Synchronize from the same cluster during fail back.

6.5.3. Recover from a disaster with two-way mirroring

To recover from a disaster when using two-way mirroring use the following procedures. They show how to fail over to the mirrored data on the secondary cluster after the primary cluster terminates, and how to fail back. The shutdown can be orderly or non-orderly.

6.5.4. Failover after an orderly shutdown

Fail over to the secondary storage cluster after an orderly shutdown.

Prerequisites

  • Minimum of two running Red Hat Ceph Storage clusters.
  • Root-level access to the node.
  • Pool mirroring or image mirroring configured with one-way mirroring.

Procedure

  1. Stop all clients that use the primary image. This step depends on which clients use the image. For example, detach volumes from any OpenStack instances that use the image.
  2. Demote the primary images located on the site-a cluster by running the following commands on a monitor node in the site-a cluster:

    Syntax

    rbd mirror image demote POOL_NAME/IMAGE_NAME

    Example

    [root@rbd-client ~]# rbd mirror image demote data/image1
    [root@rbd-client ~]# rbd mirror image demote data/image2

  3. Promote the non-primary images located on the site-b cluster by running the following commands on a monitor node in the site-b cluster:

    Syntax

    rbd mirror image promote POOL_NAME/IMAGE_NAME

    Example

    [root@rbd-client ~]# rbd mirror image promote data/image1
    [root@rbd-client ~]# rbd mirror image promote data/image2

  4. After some time, check the status of the images from a monitor node in the site-b cluster. They should show a state of up+stopped and be listed as primary:

    [root@rbd-client ~]# rbd mirror image status data/image1
    image1:
      global_id:   08027096-d267-47f8-b52e-59de1353a034
      state:       up+stopped
      description: local image is primary
      last_update: 2019-04-17 16:04:37
    [root@rbd-client ~]# rbd mirror image status data/image2
    image2:
      global_id:   596f41bc-874b-4cd4-aefe-4929578cc834
      state:       up+stopped
      description: local image is primary
      last_update: 2019-04-17 16:04:37
  5. Resume access to the images. This step depends on which clients use the image.


6.5.5. Failover after a non-orderly shutdown

Fail over to the secondary storage cluster after a non-orderly shutdown.

Prerequisites

  • Minimum of two running Red Hat Ceph Storage clusters.
  • Root-level access to the node.
  • Pool mirroring or image mirroring configured with one-way mirroring.

Procedure

  1. Verify that the primary storage cluster is down.
  2. Stop all clients that use the primary image. This step depends on which clients use the image. For example, detach volumes from any OpenStack instances that use the image.
  3. Promote the non-primary images from a Ceph Monitor node in the site-b storage cluster. Use the --force option, because the demotion cannot be propagated to the site-a storage cluster:

    Syntax

    rbd mirror image promote --force POOL_NAME/IMAGE_NAME

    Example

    [root@rbd-client ~]# rbd mirror image promote --force data/image1
    [root@rbd-client ~]# rbd mirror image promote --force data/image2

  4. Check the status of the images from a Ceph Monitor node in the site-b storage cluster. They should show a state of up+stopping_replay. The description should say force promoted, meaning the promotion is in an intermediate state. Wait until the state changes to up+stopped to validate that the site is successfully promoted.

    Example

    [root@rbd-client ~]# rbd mirror image status data/image1
    image1:
      global_id:   08027096-d267-47f8-b52e-59de1353a034
      state:       up+stopping_replay
      description: force promoted
      last_update: 2023-04-17 13:25:06
    
    [root@rbd-client ~]# rbd mirror image status data/image1
    image1:
      global_id:   08027096-d267-47f8-b52e-59de1353a034
      state:       up+stopped
      description: force promoted
      last_update: 2023-04-17 13:25:06


6.5.6. Prepare for fail back

If the two storage clusters were originally configured only for one-way mirroring, then in order to fail back, configure the primary storage cluster for mirroring so that the images are replicated in the opposite direction.

During the failback scenario, the existing peer that is inaccessible must be removed before adding a new peer to the existing cluster.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the client node.

Procedure

  1. Log into the Cephadm shell:

    Example

    [root@rbd-client ~]# cephadm shell

  2. On the site-a storage cluster, run the following command:

    Example

    [ceph: root@rbd-client /]# ceph orch apply rbd-mirror --placement=host01

  3. Remove any inaccessible peers.

    Important

    This step must be run on the peer site which is up and running.

    Note

    Multiple peers are supported only for one-way mirroring.

    1. Get the peer UUID:

      Syntax

      rbd mirror pool info POOL_NAME

      Example

      [ceph: root@host01 /]# rbd mirror pool info pool_failback

    2. Remove the inaccessible peer:

      Syntax

      rbd mirror pool peer remove POOL_NAME PEER_UUID

      Example

      [ceph: root@host01 /]# rbd mirror pool peer remove pool_failback f055bb88-6253-4041-923d-08c4ecbe799a

  4. Create a block device pool with the same name as its peer mirror pool.

    1. To create an rbd pool, execute the following:

      Syntax

      ceph osd pool create POOL_NAME [PG_NUM]
      ceph osd pool application enable POOL_NAME rbd
      rbd pool init -p POOL_NAME

      Example

      [root@rbd-client ~]# ceph osd pool create pool1
      [root@rbd-client ~]# ceph osd pool application enable pool1 rbd
      [root@rbd-client ~]# rbd pool init -p pool1

  5. On a Ceph client node, bootstrap the storage cluster peers.

    1. Create Ceph user accounts, and register the storage cluster peer to the pool:

      Syntax

      rbd mirror pool peer bootstrap create --site-name LOCAL_SITE_NAME POOL_NAME > PATH_TO_BOOTSTRAP_TOKEN

      Example

      [ceph: root@rbd-client-site-a /]# rbd mirror pool peer bootstrap create --site-name site-a data > /root/bootstrap_token_site-a

      Note

      This example bootstrap command creates the client.rbd-mirror.site-a and the client.rbd-mirror-peer Ceph users.

    2. Copy the bootstrap token file to the site-b storage cluster.
    3. Import the bootstrap token on the site-b storage cluster:

      Syntax

      rbd mirror pool peer bootstrap import --site-name LOCAL_SITE_NAME --direction rx-only POOL_NAME PATH_TO_BOOTSTRAP_TOKEN

      Example

      [ceph: root@rbd-client-site-b /]# rbd mirror pool peer bootstrap import --site-name site-b --direction rx-only data /root/bootstrap_token_site-a

      Note

      For one-way RBD mirroring, you must use the --direction rx-only argument, as two-way mirroring is the default when bootstrapping peers.

  6. From a monitor node in the site-a storage cluster, verify the site-b storage cluster was successfully added as a peer:

    Example

    [ceph: root@rbd-client /]# rbd mirror pool info -p data
    Mode: image
    Peers:
      UUID                                 NAME   CLIENT
      d2ae0594-a43b-4c67-a167-a36c646e8643 site-b client.site-b

Additional Resources

  • For detailed information, see the User Management chapter in the Red Hat Ceph Storage Administration Guide.

6.5.6.1. Fail back to the primary storage cluster

When the formerly primary storage cluster recovers, fail back to the primary storage cluster.

Note

If you have scheduled snapshots at the image level, then you need to re-add the schedule because the image resync operation changes the RBD image ID and the previous schedule becomes obsolete.

Prerequisites

  • Minimum of two running Red Hat Ceph Storage clusters.
  • Root-level access to the node.
  • Pool mirroring or image mirroring configured with one-way mirroring.

Procedure

  1. Check the status of the images from a monitor node in the site-b cluster again. They should show a state of up+stopped and the description should say local image is primary:

    Example

    [root@rbd-client ~]# rbd mirror image status data/image1
    image1:
      global_id:   08027096-d267-47f8-b52e-59de1353a034
      state:       up+stopped
      description: local image is primary
      last_update: 2019-04-22 17:37:48
    [root@rbd-client ~]# rbd mirror image status data/image2
    image2:
      global_id:   596f41bc-874b-4cd4-aefe-4929578cc834
      state:       up+stopped
      description: local image is primary
      last_update: 2019-04-22 17:38:18

  2. From a Ceph Monitor node on the site-a storage cluster, determine whether the images are still primary:

    Syntax

    rbd info POOL_NAME/IMAGE_NAME

    Example

    [root@rbd-client ~]# rbd info data/image1
    [root@rbd-client ~]# rbd info data/image2

    In the output from the commands, look for mirroring primary: true or mirroring primary: false, to determine the state.

  3. Demote any images that are listed as primary by running a command like the following from a Ceph Monitor node in the site-a storage cluster:

    Syntax

    rbd mirror image demote POOL_NAME/IMAGE_NAME

    Example

    [root@rbd-client ~]# rbd mirror image demote data/image1

  4. Resynchronize the images ONLY if there was a non-orderly shutdown. Run the following commands on a monitor node in the site-a storage cluster to resynchronize the images from site-b to site-a:

    Syntax

    rbd mirror image resync POOL_NAME/IMAGE_NAME

    Example

    [root@rbd-client ~]# rbd mirror image resync data/image1
    Flagged image for resync from primary
    [root@rbd-client ~]# rbd mirror image resync data/image2
    Flagged image for resync from primary

  5. After some time, ensure resynchronization of the images is complete by verifying they are in the up+replaying state. Check their state by running the following commands on a monitor node in the site-a storage cluster:

    Syntax

    rbd mirror image status POOL_NAME/IMAGE_NAME

    Example

    [root@rbd-client ~]# rbd mirror image status data/image1
    [root@rbd-client ~]# rbd mirror image status data/image2

  6. Demote the images on the site-b storage cluster by running the following commands on a Ceph Monitor node in the site-b storage cluster:

    Syntax

    rbd mirror image demote POOL_NAME/IMAGE_NAME

    Example

    [root@rbd-client ~]# rbd mirror image demote data/image1
    [root@rbd-client ~]# rbd mirror image demote data/image2

    Note

    If there are multiple secondary storage clusters, this only needs to be done from the secondary storage cluster where it was promoted.

  7. Promote the formerly primary images located on the site-a storage cluster by running the following commands on a Ceph Monitor node in the site-a storage cluster:

    Syntax

    rbd mirror image promote POOL_NAME/IMAGE_NAME

    Example

    [root@rbd-client ~]# rbd mirror image promote data/image1
    [root@rbd-client ~]# rbd mirror image promote data/image2

  8. Check the status of the images from a Ceph Monitor node in the site-a storage cluster. They should show a status of up+stopped and the description should say local image is primary:

    Syntax

    rbd mirror image status POOL_NAME/IMAGE_NAME

    Example

    [root@rbd-client ~]# rbd mirror image status data/image1
    image1:
      global_id:   08027096-d267-47f8-b52e-59de1353a034
      state:       up+stopped
      description: local image is primary
      last_update: 2019-04-22 11:14:51
    [root@rbd-client ~]# rbd mirror image status data/image2
    image2:
      global_id:   596f41bc-874b-4cd4-aefe-4929578cc834
      state:       up+stopped
      description: local image is primary
      last_update: 2019-04-22 11:14:51

6.5.7. Remove two-way mirroring

After fail back is complete, you can remove two-way mirroring and disable the Ceph block device mirroring service.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the node.

Procedure

  1. Remove the site-b storage cluster as a peer from the site-a storage cluster:

    Syntax

    rbd mirror pool peer remove POOL_NAME CLIENT_NAME@PEER_CLUSTER_NAME --cluster LOCAL_CLUSTER_NAME

    Example

    [root@rbd-client ~]# rbd --cluster site-a mirror pool peer remove data client.site-b@site-b -n client.site-a

  2. Stop and disable the rbd-mirror daemon on the site-a client:

    Syntax

    systemctl stop ceph-rbd-mirror@CLIENT_ID
    systemctl disable ceph-rbd-mirror@CLIENT_ID
    systemctl disable ceph-rbd-mirror.target

    Example

    [root@rbd-client ~]# systemctl stop ceph-rbd-mirror@site-a
    [root@rbd-client ~]# systemctl disable ceph-rbd-mirror@site-a
    [root@rbd-client ~]# systemctl disable ceph-rbd-mirror.target
