Chapter 11. Ceph File System mirrors
As a storage administrator, you can replicate a Ceph File System (CephFS) to a remote Ceph File System on another Red Hat Ceph Storage cluster. A Ceph File System supports asynchronous replication of directory snapshots.
11.1. Prerequisites
- The source and the target storage clusters must be running Red Hat Ceph Storage 5.0 or later.
11.2. Ceph File System mirroring
The Ceph File System (CephFS) supports asynchronous replication of snapshots to a remote Ceph File System on another Red Hat Ceph Storage cluster. Snapshot synchronization copies snapshot data to a remote Ceph File System, and creates a new snapshot on the remote target with the same name. You can configure specific directories for snapshot synchronization.
Management of CephFS mirrors is done by the CephFS mirroring daemon (cephfs-mirror). Snapshot data is synchronized by doing a bulk copy to the remote CephFS. Snapshots are synchronized in order of creation, based on the snap-id.
Hard-linked files are synchronized as separate files.
Red Hat supports running only one cephfs-mirror daemon per storage cluster.
Ceph Manager Module
The Ceph Manager mirroring module is disabled by default. It provides interfaces for managing directory snapshot mirroring, and is responsible for assigning directories to the cephfs-mirror daemon for synchronization. The Ceph Manager mirroring module also provides a family of commands to control mirroring of directory snapshots. The mirroring module does not manage the cephfs-mirror daemon. Stopping, starting, restarting, and enabling the cephfs-mirror daemon is controlled by systemctl, but managed by cephadm.
11.3. Configuring a snapshot mirror for a Ceph File System
You can configure a Ceph File System (CephFS) for mirroring to replicate snapshots to another CephFS on a remote Red Hat Ceph Storage cluster.
The time taken to synchronize to the remote storage cluster depends on the file size and the total number of files in the mirroring path.
Prerequisites
- The source and the target storage clusters must be healthy and running Red Hat Ceph Storage 5.0 or later.
- Root-level access to a Ceph Monitor node in the source and the target storage clusters.
- At least one deployment of a Ceph File System.
Procedure
On the source storage cluster, deploy the CephFS mirroring daemon:
Syntax

ceph orch apply cephfs-mirror ["NODE_NAME"]

Example

[root@mon ~]# ceph orch apply cephfs-mirror "node1.example.com"
Scheduled cephfs-mirror update...

This command creates a Ceph user called cephfs-mirror, and deploys the cephfs-mirror daemon on the given node.

On the target storage cluster, create a user for each CephFS peer:
Syntax

ceph fs authorize FILE_SYSTEM_NAME CLIENT_NAME / rwps

Example

[root@mon ~]# ceph fs authorize cephfs client.mirror_remote / rwps
[client.mirror_remote]
    key = AQCjZ5Jg739AAxAAxduIKoTZbiFJ0lgose8luQ==

On the source storage cluster, enable the CephFS mirroring module:
Example

[root@mon ~]# ceph mgr module enable mirroring

On the source storage cluster, enable mirroring on a Ceph File System:
Syntax

ceph fs snapshot mirror enable FILE_SYSTEM_NAME

Example

[root@mon ~]# ceph fs snapshot mirror enable cephfs

Optional. To disable snapshot mirroring, use the following command:
Syntax

ceph fs snapshot mirror disable FILE_SYSTEM_NAME

Example

[root@mon ~]# ceph fs snapshot mirror disable cephfs

Warning: Disabling snapshot mirroring on a file system removes the configured peers. You have to import the peers again by bootstrapping them.
Prepare the target peer storage cluster.
On a target node, enable the mirroring Ceph Manager module:

Example

[root@mon ~]# ceph mgr module enable mirroring

On the same target node, create the peer bootstrap:
Syntax

ceph fs snapshot mirror peer_bootstrap create FILE_SYSTEM_NAME CLIENT_NAME SITE_NAME

The SITE_NAME is a user-defined string to identify the target storage cluster.
Example

[root@mon ~]# ceph fs snapshot mirror peer_bootstrap create cephfs client.mirror_remote remote-site
{"token": "eyJmc2lkIjogIjBkZjE3MjE3LWRmY2QtNDAzMC05MDc5LTM2Nzk4NTVkNDJlZiIsICJmaWxlc3lzdGVtIjogImJhY2t1cF9mcyIsICJ1c2VyIjogImNsaWVudC5taXJyb3JfcGVlcl9ib290c3RyYXAiLCAic2l0ZV9uYW1lIjogInNpdGUtcmVtb3RlIiwgImtleSI6ICJBUUFhcDBCZ0xtRmpOeEFBVnNyZXozai9YYUV0T2UrbUJEZlJDZz09IiwgIm1vbl9ob3N0IjogIlt2MjoxOTIuMTY4LjAuNTo0MDkxOCx2MToxOTIuMTY4LjAuNTo0MDkxOV0ifQ=="}

Copy the token string between the double quotes for use in the next step.
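The token produced above is a base64-encoded JSON object, so you can inspect it before importing. The sketch below builds a hypothetical stand-in token (with only two of the fields a real token carries) and decodes it; it is an illustration, not output from a real cluster:

```shell
# Build a hypothetical bootstrap token: base64-encoded JSON. A real token
# also carries the filesystem name, peer user, key, and mon_host addresses.
TOKEN=$(python3 -c 'import base64, json; print(base64.b64encode(json.dumps({"fsid": "0df17217-dfcd-4030-9079-3679855d42ef", "site_name": "remote-site"}).encode()).decode())')

# Decode the token to check which cluster and site it points at
# before importing it on the source storage cluster.
echo "$TOKEN" | python3 -c 'import base64, json, sys; d = json.loads(base64.b64decode(sys.stdin.read())); print(d["fsid"], d["site_name"])'
# prints: 0df17217-dfcd-4030-9079-3679855d42ef remote-site
```

Decoding the token this way is a quick sanity check that you copied the full string and that it targets the intended site before you run the import.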
On the source storage cluster, import the bootstrap token from the target storage cluster:
Syntax

ceph fs snapshot mirror peer_bootstrap import FILE_SYSTEM_NAME TOKEN

Example

[root@mon ~]# ceph fs snapshot mirror peer_bootstrap import cephfs eyJmc2lkIjogIjBkZjE3MjE3LWRmY2QtNDAzMC05MDc5LTM2Nzk4NTVkNDJlZiIsICJmaWxlc3lzdGVtIjogImJhY2t1cF9mcyIsICJ1c2VyIjogImNsaWVudC5taXJyb3JfcGVlcl9ib290c3RyYXAiLCAic2l0ZV9uYW1lIjogInNpdGUtcmVtb3RlIiwgImtleSI6ICJBUUFhcDBCZ0xtRmpOeEFBVnNyZXozai9YYUV0T2UrbUJEZlJDZz09IiwgIm1vbl9ob3N0IjogIlt2MjoxOTIuMTY4LjAuNTo0MDkxOCx2MToxOTIuMTY4LjAuNTo0MDkxOV0ifQ==

On the source storage cluster, list the CephFS mirror peers:
Syntax

ceph fs snapshot mirror peer_list FILE_SYSTEM_NAME

Example

[root@mon ~]# ceph fs snapshot mirror peer_list cephfs

Optional. To remove a snapshot peer, use the following command:
Syntax

ceph fs snapshot mirror peer_remove FILE_SYSTEM_NAME PEER_UUID

Example

[root@mon ~]# ceph fs snapshot mirror peer_remove cephfs a2dc7784-e7a1-4723-b103-03ee8d8768f8

Note: See Section 11.4, Viewing the mirror status for a Ceph File System, for how to find the peer UUID value.
On the source storage cluster, configure a directory for snapshot mirroring:
Syntax

ceph fs snapshot mirror add FILE_SYSTEM_NAME PATH

Example

[root@mon ~]# ceph fs snapshot mirror add cephfs /volumes/_nogroup/subvol_1

Important: Only absolute paths inside the Ceph File System are valid.
Note: The Ceph Manager mirroring module normalizes the path. For example, /d1/d2/../d2 is equivalent to /d1/d2. Once a directory has been added for mirroring, its ancestor directories and subdirectories are prevented from being added for mirroring.

Optional. To stop snapshot mirroring for a directory, use the following command:
Syntax

ceph fs snapshot mirror remove FILE_SYSTEM_NAME PATH

Example

[root@mon ~]# ceph fs snapshot mirror remove cephfs /home/user1
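The path normalization described in the note above can be sketched with ordinary POSIX path rules; here Python's posixpath.normpath stands in for the mirroring module's own normalization logic:

```shell
# Normalize a mirror path the way the note above describes:
# "/d1/d2/../d2" and "/d1/d2" refer to the same directory,
# so the module treats them as the same mirroring target.
python3 -c 'import posixpath; print(posixpath.normpath("/d1/d2/../d2"))'
# prints: /d1/d2
```

Normalizing paths before comparing them is what lets the module detect that an ancestor or subdirectory of an already-mirrored directory is being added, and reject it.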
11.4. Viewing the mirror status for a Ceph File System
The Ceph File System (CephFS) mirror daemon (cephfs-mirror) gets asynchronous notifications about changes in the CephFS mirroring status, along with peer updates. You can query the cephfs-mirror admin socket with commands to retrieve the mirror status and peer status.
Prerequisites
- At least one deployment of a Ceph File System with mirroring enabled.
- Root-level access to the node running the CephFS mirroring daemon.
Procedure
Log into the Cephadm shell:
Example

[root@host01 ~]# cephadm shell

Find the Ceph File System ID on the node running the CephFS mirroring daemon:
Syntax

ceph --admin-daemon PATH_TO_THE_ASOK_FILE help

The help output lists the commands available on the admin socket, which include the file system name and ID. In this example, the Ceph File System ID is cephfs@11.

To view the mirror status:
Syntax

ceph --admin-daemon PATH_TO_THE_ASOK_FILE fs mirror status FILE_SYSTEM_NAME@FILE_SYSTEM_ID

The output includes the unique peer UUID for each configured peer.
To view the peer status:
Syntax

ceph --admin-daemon PATH_TO_ADMIN_SOCKET fs mirror peer status FILE_SYSTEM_NAME@FILE_SYSTEM_ID PEER_UUID

The state can be one of these three values:

- idle: The directory is currently not being synchronized.
- syncing: The directory is currently being synchronized.
- failed: The directory has hit the upper limit of consecutive synchronization failures.
The default number of consecutive failures is 10, and the default retry interval is 60 seconds.
The synchronization stats snaps_synced, snaps_deleted, and snaps_renamed are reset when the cephfs-mirror daemon restarts.
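As a sketch of reading these counters, the snippet below parses a peer-status-style JSON document. The document here is a hypothetical illustration (the exact key layout of real cephfs-mirror admin socket output may differ), so treat the field names as assumptions:

```shell
# Hypothetical peer status document; in a real cluster this JSON
# would come from the admin socket peer status query above.
STATUS='{"/volumes/_nogroup/subvol_1": {"state": "idle", "snaps_synced": 3, "snaps_deleted": 0, "snaps_renamed": 0}}'

# Print the state and synced-snapshot count per mirrored directory.
echo "$STATUS" | python3 -c '
import json, sys
for path, info in json.load(sys.stdin).items():
    print(path, info["state"], "snaps_synced=" + str(info["snaps_synced"]))
'
# prints: /volumes/_nogroup/subvol_1 idle snaps_synced=3
```

Because these counters reset on daemon restart, a monitoring script that tracks them should treat a sudden drop to zero as a restart rather than as lost snapshots.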