Chapter 11. Ceph File System snapshot mirroring
As a storage administrator, you can replicate a Ceph File System (CephFS) to a remote Ceph File System on another Red Hat Ceph Storage cluster.
Prerequisites
- The source and the target storage clusters must be running Red Hat Ceph Storage 6.0 or later.
The Ceph File System (CephFS) supports asynchronous replication of snapshots to a remote CephFS on another Red Hat Ceph Storage cluster. Snapshot synchronization copies snapshot data to a remote Ceph File System, and creates a new snapshot on the remote target with the same name. You can configure specific directories for snapshot synchronization.
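The snapshots that the mirror daemon replicates are ordinary CephFS snapshots created on the source file system. As an illustrative sketch only, assuming the file system is mounted at /mnt/cephfs on a client host and the client keyring includes the s capability on the path, a snapshot of a mirrored directory can be created through the .snap directory:
# Assumption: CephFS is mounted at /mnt/cephfs and the client capabilities include 's' on this path.
[root@client ~]# mkdir /mnt/cephfs/volumes/_nogroup/subvol_1/.snap/snap1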
Management of CephFS mirrors is done by the CephFS mirroring daemon (cephfs-mirror). Snapshot data is synchronized by doing a bulk copy to the remote CephFS. The order in which snapshot pairs are synchronized is based on their creation order, using the snap-id.
Synchronizing hard links is not supported. Hard linked files get synchronized as regular files.
CephFS snapshot mirroring includes features such as snapshot incarnation and high availability. These are managed through the Ceph Manager mirroring module, which is the recommended control interface.
Ceph Manager Module and interfaces
The Ceph Manager mirroring module is disabled by default. It provides interfaces for managing mirroring of directory snapshots. Ceph Manager interfaces are mostly wrappers around monitor commands for managing CephFS mirroring. They are the recommended control interface.
The Ceph Manager mirroring module is implemented as a Ceph Manager plugin. It is responsible for assigning directories to the cephfs-mirror daemons for synchronization.
The Ceph Manager mirroring module also provides a family of commands to control mirroring of directory snapshots. The mirroring module does not manage the cephfs-mirror daemons. The stopping, starting, restarting, and enabling of the cephfs-mirror daemons is controlled by systemctl, but managed by cephadm.
Mirroring module commands use the fs snapshot mirror prefix, as compared to the monitor commands, which use the fs mirror prefix. Ensure that you are using the module command prefix to control the mirroring of directory snapshots.
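For example, to enable mirroring on a file system named cephfs, use the module command rather than the lower-level monitor command. This is an illustrative sketch; the monitor command is shown commented out only for contrast:
# Recommended: Ceph Manager mirroring module command (fs snapshot mirror prefix)
[ceph: root@host01 /]# ceph fs snapshot mirror enable cephfs
# Lower-level monitor command (fs mirror prefix); not the recommended interface
# ceph fs mirror enable cephfs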
Snapshot incarnation
A snapshot might be deleted and recreated with the same name but different content. The user could have synchronized the "old" snapshot earlier and recreated the snapshot while mirroring was disabled. Using snapshot names to infer the point of continuation would result in the "new" snapshot, an incarnation, never getting picked up for synchronization.
Snapshots on the secondary file system store the snap-id of the snapshot they were synchronized from. This metadata is stored in the SnapInfo structure on the Ceph Metadata Server.
High availability
You can deploy multiple cephfs-mirror daemons on two or more nodes to achieve concurrency in synchronization of directory snapshots. When cephfs-mirror daemons are deployed or terminated, the Ceph Manager mirroring module discovers the modified set of cephfs-mirror daemons and rebalances the directory assignment amongst the new set, thus providing high availability.
cephfs-mirror daemons share the synchronization load using a simple M/N policy, where M is the number of directories and N is the number of cephfs-mirror daemons.
Re-addition of Ceph File System mirror peers
When re-adding or reassigning a peer to a CephFS in another cluster, ensure that all mirror daemons have stopped synchronization to the peer. You can verify this with the fs mirror status command. The peer UUID should not show up in the command output.
Purge synchronized directories from the peer before re-adding it to another CephFS, especially those directories which might exist in the new primary file system. This is not required if you are re-adding a peer to the same primary file system it was earlier synchronized from.
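A hedged sketch of such a check, using a placeholder admin socket path and a placeholder file system ID (see Viewing the mirror status for a Ceph File System for how to find both). The peer UUID you intend to re-add must be absent from the peers section of the output:
# Placeholder asok path and file system ID; adjust both to your deployment.
[ceph: root@host01 /]# ceph --admin-daemon /var/run/ceph/FSID/ceph-client.cephfs-mirror.NODE_NAME.xxxxxx.asok fs mirror status cephfs@11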
Additional Resources
- See Viewing the mirror status for a Ceph File System for more details on the fs mirror status command.
11.1. Configuring a snapshot mirror for a Ceph File System
You can configure a Ceph File System (CephFS) for mirroring to replicate snapshots to another CephFS on a remote Red Hat Ceph Storage cluster.
The time taken for synchronizing to a remote storage cluster depends on the file size and the total number of files in the mirroring path.
Prerequisites
- The source and the target storage clusters must be healthy and running Red Hat Ceph Storage 8.0 or later.
- Root-level access to a Ceph Monitor node in the source and the target storage clusters.
- At least one Ceph File System deployed on your storage cluster.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
On the source storage cluster, deploy the CephFS mirroring daemon:
Syntax
ceph orch apply cephfs-mirror ["NODE_NAME"]
Example
[ceph: root@host01 /]# ceph orch apply cephfs-mirror "node1.example.com"
Scheduled cephfs-mirror update...
This command creates a Ceph user called cephfs-mirror, and deploys the cephfs-mirror daemon on the given node.
Optional: Deploy multiple CephFS mirroring daemons to achieve high availability:
Syntax
ceph orch apply cephfs-mirror --placement="PLACEMENT_SPECIFICATION"
Example
[ceph: root@host01 /]# ceph orch apply cephfs-mirror --placement="3 host1 host2 host3"
Scheduled cephfs-mirror update...
This example deploys three cephfs-mirror daemons on different hosts.
Warning: Do not separate the hosts with commas, as it results in the following error:
Error EINVAL: name component must include only a-z, 0-9, and -
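As an alternative to listing host names, a cephadm placement specification can also use a daemon count or a host label. This is a hedged sketch; the label name mirror is a hypothetical example and the hosts must be labeled before they can be targeted:
# Deploy three cephfs-mirror daemons on hosts chosen by the orchestrator:
[ceph: root@host01 /]# ceph orch apply cephfs-mirror --placement="3"
# Or label the intended hosts and target the (hypothetical) label "mirror":
[ceph: root@host01 /]# ceph orch host label add host1 mirror
[ceph: root@host01 /]# ceph orch apply cephfs-mirror --placement="label:mirror"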
On the target storage cluster, create a user for each CephFS peer:
Syntax
ceph fs authorize FILE_SYSTEM_NAME CLIENT_NAME / rwps
Example
[ceph: root@host01 /]# ceph fs authorize cephfs client.mirror_remote / rwps
[client.mirror_remote]
    key = AQCjZ5Jg739AAxAAxduIKoTZbiFJ0lgose8luQ==
On the source storage cluster, enable the CephFS mirroring module:
Example
[ceph: root@host01 /]# ceph mgr module enable mirroring
On the source storage cluster, enable mirroring on a Ceph File System:
Syntax
ceph fs snapshot mirror enable FILE_SYSTEM_NAME
Example
[ceph: root@host01 /]# ceph fs snapshot mirror enable cephfs
Optional: Disable snapshot mirroring:
Syntax
ceph fs snapshot mirror disable FILE_SYSTEM_NAME
Example
[ceph: root@host01 /]# ceph fs snapshot mirror disable cephfs
Warning: Disabling snapshot mirroring on a file system removes the configured peers. You have to import the peers again by bootstrapping them.
Prepare the target peer storage cluster.
On a target node, enable the mirroring Ceph Manager module:
Example
[ceph: root@host01 /]# ceph mgr module enable mirroring
On the same target node, create the peer bootstrap:
Syntax
ceph fs snapshot mirror peer_bootstrap create FILE_SYSTEM_NAME CLIENT_NAME SITE_NAME
The SITE_NAME is a user-defined string to identify the target storage cluster.
Example
[ceph: root@host01 /]# ceph fs snapshot mirror peer_bootstrap create cephfs client.mirror_remote remote-site
{"token": "eyJmc2lkIjogIjBkZjE3MjE3LWRmY2QtNDAzMC05MDc5LTM2Nzk4NTVkNDJlZiIsICJmaWxlc3lzdGVtIjogImJhY2t1cF9mcyIsICJ1c2VyIjogImNsaWVudC5taXJyb3JfcGVlcl9ib290c3RyYXAiLCAic2l0ZV9uYW1lIjogInNpdGUtcmVtb3RlIiwgImtleSI6ICJBUUFhcDBCZ0xtRmpOeEFBVnNyZXozai9YYUV0T2UrbUJEZlJDZz09IiwgIm1vbl9ob3N0IjogIlt2MjoxOTIuMTY4LjAuNTo0MDkxOCx2MToxOTIuMTY4LjAuNTo0MDkxOV0ifQ=="}
Copy the token string between the double quotes for use in the next step.
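Alternatively, instead of copying the token by hand, you can capture it at creation time. This sketch assumes the jq utility is available in the cephadm shell container and that the command prints only the JSON object shown above:
# Assumes jq is installed in the cephadm shell container.
[ceph: root@host01 /]# TOKEN=$(ceph fs snapshot mirror peer_bootstrap create cephfs client.mirror_remote remote-site | jq -r '.token')
[ceph: root@host01 /]# echo "$TOKEN"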
On the source storage cluster, import the bootstrap token from the target storage cluster:
Syntax
ceph fs snapshot mirror peer_bootstrap import FILE_SYSTEM_NAME TOKEN
Example
[ceph: root@host01 /]# ceph fs snapshot mirror peer_bootstrap import cephfs eyJmc2lkIjogIjBkZjE3MjE3LWRmY2QtNDAzMC05MDc5LTM2Nzk4NTVkNDJlZiIsICJmaWxlc3lzdGVtIjogImJhY2t1cF9mcyIsICJ1c2VyIjogImNsaWVudC5taXJyb3JfcGVlcl9ib290c3RyYXAiLCAic2l0ZV9uYW1lIjogInNpdGUtcmVtb3RlIiwgImtleSI6ICJBUUFhcDBCZ0xtRmpOeEFBVnNyZXozai9YYUV0T2UrbUJEZlJDZz09IiwgIm1vbl9ob3N0IjogIlt2MjoxOTIuMTY4LjAuNTo0MDkxOCx2MToxOTIuMTY4LjAuNTo0MDkxOV0ifQ==
On the source storage cluster, list the CephFS mirror peers:
Syntax
ceph fs snapshot mirror peer_list FILE_SYSTEM_NAME
Example
[ceph: root@host01 /]# ceph fs snapshot mirror peer_list cephfs
{"e5ecb883-097d-492d-b026-a585d1d7da79": {"client_name": "client.mirror_remote", "site_name": "remote-site", "fs_name": "cephfs", "mon_host": "[v2:10.0.211.54:3300/0,v1:10.0.211.54:6789/0] [v2:10.0.210.56:3300/0,v1:10.0.210.56:6789/0] [v2:10.0.210.65:3300/0,v1:10.0.210.65:6789/0]"}}
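The peer UUID is the top-level key in this output. If jq is available in the container (an assumption), you can extract it directly, which is convenient for the optional removal step that follows:
# Assumes jq is installed in the cephadm shell container.
[ceph: root@host01 /]# ceph fs snapshot mirror peer_list cephfs | jq -r 'keys[]'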
Optional: Remove a snapshot peer:
Syntax
ceph fs snapshot mirror peer_remove FILE_SYSTEM_NAME PEER_UUID
Example
[ceph: root@host01 /]# ceph fs snapshot mirror peer_remove cephfs e5ecb883-097d-492d-b026-a585d1d7da79
Note: See Viewing the mirror status for a Ceph File System on how to find the peer UUID value.
On the source storage cluster, configure a directory for snapshot mirroring:
Syntax
ceph fs snapshot mirror add FILE_SYSTEM_NAME PATH
Example
[ceph: root@host01 /]# ceph fs snapshot mirror add cephfs /volumes/_nogroup/subvol_1
Important: Only absolute paths inside the Ceph File System are valid.
Note: The Ceph Manager mirroring module normalizes the path. For example, the /d1/d2/../dN directories are equivalent to /d1/d2. Once a directory has been added for mirroring, its ancestor directories and subdirectories are prevented from being added for mirroring, as illustrated in the sketch after this procedure.
Optional: Stop snapshot mirroring for a directory:
Syntax
ceph fs snapshot mirror remove FILE_SYSTEM_NAME PATH
Example
[ceph: root@host01 /]# ceph fs snapshot mirror remove cephfs /home/user1
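The following sketch illustrates the ancestor and subdirectory restriction mentioned in the note above. The subdirectory name subdir is hypothetical, and the exact error text varies by release:
# /volumes/_nogroup/subvol_1 is already configured for mirroring, so adding a path
# beneath it is expected to be rejected (hypothetical subdirectory; exact error text may vary).
[ceph: root@host01 /]# ceph fs snapshot mirror add cephfs /volumes/_nogroup/subvol_1/subdir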
Additional Resources
- See the Viewing the mirror status for a Ceph File System section in the Red Hat Ceph Storage File System Guide for more information.
- See the Ceph File System mirroring section in the Red Hat Ceph Storage File System Guide for more information.
11.2. Viewing the mirror status for a Ceph File System
The Ceph File System (CephFS) mirror daemon (cephfs-mirror) gets asynchronous notifications about changes in the CephFS mirroring status, along with peer updates. The CephFS mirroring module provides a mirror daemon status interface to check mirror daemon status. For more detailed information, you can query the cephfs-mirror admin socket with commands to retrieve the mirror status and peer status.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- At least one deployment of a Ceph File System with mirroring enabled.
- Root-level access to the node running the CephFS mirroring daemon.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
Check the cephfs-mirror daemon status:
Syntax
ceph fs snapshot mirror daemon status
Example
[ceph: root@host01 /]# ceph fs snapshot mirror daemon status
[
  {
    "daemon_id": 15594,
    "filesystems": [
      {
        "filesystem_id": 1,
        "name": "cephfs",
        "directory_count": 1,
        "peers": [
          {
            "uuid": "e5ecb883-097d-492d-b026-a585d1d7da79",
            "remote": {
              "client_name": "client.mirror_remote",
              "cluster_name": "remote-site",
              "fs_name": "cephfs"
            },
            "stats": {
              "failure_count": 1,
              "recovery_count": 0
            }
          }
        ]
      }
    ]
  }
]
For more detailed information, use the admin socket interface as detailed below.
Find the Ceph File System ID on the node running the CephFS mirroring daemon:
Syntax
ceph --admin-daemon PATH_TO_THE_ASOK_FILE help
Example
[ceph: root@host01 /]# ceph --admin-daemon /var/run/ceph/1011435c-9e30-4db6-b720-5bf482006e0e/ceph-client.cephfs-mirror.node1.bndvox.asok help
{
    ...
    "fs mirror peer status cephfs@11 1011435c-9e30-4db6-b720-5bf482006e0e": "get peer mirror status",
    "fs mirror status cephfs@11": "get filesystem mirror status",
    ...
}
The Ceph File System ID in this example is cephfs@11.
Note: When mirroring is disabled, the respective fs mirror status command for the file system does not show up in the help command.
View the mirror status:
Syntax
ceph --admin-daemon PATH_TO_THE_ASOK_FILE fs mirror status FILE_SYSTEM_NAME@FILE_SYSTEM_ID
Example
[ceph: root@host01 /]# ceph --admin-daemon /var/run/ceph/1011435c-9e30-4db6-b720-5bf482006e0e/ceph-client.cephfs-mirror.node1.bndvox.asok fs mirror status cephfs@11
{
  "rados_inst": "192.168.0.5:0/1476644347",
  "peers": {
      "1011435c-9e30-4db6-b720-5bf482006e0e": { 1
          "remote": {
              "client_name": "client.mirror_remote",
              "cluster_name": "remote-site",
              "fs_name": "cephfs"
          }
      }
  },
  "snap_dirs": {
      "dir_count": 1
  }
}
- 1
- This is the unique peer UUID.
View the peer status:
Syntax
ceph --admin-daemon PATH_TO_ADMIN_SOCKET fs mirror peer status FILE_SYSTEM_NAME@FILE_SYSTEM_ID PEER_UUID
Example
[ceph: root@host01 /]# ceph --admin-daemon /var/run/ceph/cephfs-mirror.asok fs mirror peer status cephfs@11 1011435c-9e30-4db6-b720-5bf482006e0e
{
  "/home/user1": {
      "state": "idle", 1
      "last_synced_snap": {
          "id": 120,
          "name": "snap1",
          "sync_duration": 0.079997898999999997,
          "sync_time_stamp": "274900.558797s"
      },
      "snaps_synced": 2, 2
      "snaps_deleted": 0, 3
      "snaps_renamed": 0
  }
}
The state can be one of these three values:
- idle means the directory is currently not being synchronized.
- syncing means the directory is currently being synchronized.
- failed means the directory has hit the upper limit of consecutive failures.
The default number of consecutive failures is 10, and the default retry interval is 60 seconds.
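These thresholds are tunable through client-side configuration options for the mirror daemon. The following is a hedged sketch, assuming the upstream option names cephfs_mirror_max_consecutive_failures_per_directory and cephfs_mirror_retry_failed_directories_interval apply to your release:
# Assumption: upstream option names; applied here at the generic client level, which the mirror daemon inherits.
[ceph: root@host01 /]# ceph config set client cephfs_mirror_max_consecutive_failures_per_directory 10
[ceph: root@host01 /]# ceph config set client cephfs_mirror_retry_failed_directories_interval 60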
Display the directory to which the cephfs-mirror daemon is mapped:
Syntax
ceph fs snapshot mirror dirmap FILE_SYSTEM_NAME PATH
Example
[ceph: root@host01 /]# ceph fs snapshot mirror dirmap cephfs /volumes/_nogroup/subvol_1
{
  "instance_id": "25184", 1
  "last_shuffled": 1661162007.012663,
  "state": "mapped"
}
- 1
- The instance_id is the RADOS instance ID associated with a cephfs-mirror daemon.
Example
[ceph: root@host01 /]# ceph fs snapshot mirror dirmap cephfs /volumes/_nogroup/subvol_1
{
  "reason": "no mirror daemons running",
  "state": "stalled" 1
}
- 1
- The stalled state means the CephFS mirroring is stalled.
The second example shows the command output when no mirror daemons are running.
Additional Resources
- See the Ceph File System mirrors section in the Red Hat Ceph Storage File System Guide for more information.
11.3. Viewing metrics for Ceph File System snapshot mirroring
Viewing these metrics helps you monitor performance and synchronization progress. Check the Ceph File System snapshot mirror health and volume metrics by using the counter dump command.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- At least one deployment of a Ceph File System with snapshot mirroring enabled.
- Root-level access to the node running the Ceph File System mirroring daemon.
Procedure
- Get the name of the asok file. The asok file is available where the mirroring daemon is running and is located at /var/run/ceph/ within the cephadm shell. Check the mirroring metrics and synchronization status by running the following command on the node running the CephFS mirroring daemon.
Syntax
ceph --admin-daemon ASOK_FILE_NAME counter dump
Example
[ceph: root@mirror-host01 /]# ceph --admin-daemon ceph-client.cephfs-mirror.ceph1-hk-n-0mfqao-node7.pnbrlu.2.93909288073464.asok counter dump
[
  {
    "key": "cephfs_mirror",
    "value": [
      {
        "labels": {},
        "counters": {
          "mirrored_filesystems": 1,
          "mirror_enable_failures": 0
        }
      }
    ]
  },
  {
    "key": "cephfs_mirror_mirrored_filesystems",
    "value": [
      {
        "labels": {
          "filesystem": "cephfs"
        },
        "counters": {
          "mirroring_peers": 1,
          "directory_count": 1
        }
      }
    ]
  },
  {
    "key": "cephfs_mirror_peers",
    "value": [
      {
        "labels": {
          "peer_cluster_filesystem": "cephfs",
          "peer_cluster_name": "remote_site",
          "source_filesystem": "cephfs",
          "source_fscid": "1"
        },
        "counters": {
          "snaps_synced": 1,
          "snaps_deleted": 0,
          "snaps_renamed": 0,
          "sync_failures": 0,
          "avg_sync_time": {
            "avgcount": 1,
            "sum": 4.216959457,
            "avgtime": 4.216959457
          },
          "sync_bytes": 132
        }
      }
    ]
  }
]
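The asok file name in the example above is specific to that deployment. A hedged way to locate yours from inside the cephadm shell on the mirror daemon node is to list the admin socket directory; depending on how the shell was entered, the sockets may sit directly under /var/run/ceph/ or under a cluster FSID subdirectory:
# Look for the ceph-client.cephfs-mirror.*.asok entry; the exact file name differs per host and daemon.
[ceph: root@mirror-host01 /]# ls /var/run/ceph/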
Metrics description:
Labeled perf counters generate metrics that can be consumed by the OCP/ODF dashboard to provide monitoring of geo-replication in the OCP and ACM dashboards and elsewhere. These counters report the progress of cephfs-mirror synchronization and provide monitoring capability. The exported metrics enable monitoring based on the following alerts.
- mirroring_peers
- The number of peers involved in mirroring.
- directory_count
- The total number of directories being synchronized.
- mirrored_filesystems
- The total number of file systems which are mirrored.
- mirror_enable_failures
- The number of failures that occurred while enabling mirroring.
- snaps_synced
- The total number of snapshots successfully synchronized.
- sync_bytes
- The total bytes being synchronized.
- sync_failures
- The total number of failed snapshot synchronizations.
- snaps_deleted
- The total number of snapshots deleted.
- snaps_renamed
- The total number of snapshots renamed.
- avg_sync_time
- The average time taken by all snapshot synchronizations.
- last_synced_start
- The sync start time of the last synced snapshot.
- last_synced_end
- The sync end time of the last synced snapshot.
- last_synced_duration
- The time duration of the last synchronization.
- last_synced_bytes
- The total bytes being synchronized for the last synced snapshot.
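For ad-hoc checks outside a dashboard, you can pull a few of the counters listed above directly from the counter dump output. This sketch assumes jq is available in the container and that the output uses the array-of-{key, value} layout shown in the earlier example:
# Extract the per-peer synchronization counters (assumes jq and the layout shown above).
[ceph: root@mirror-host01 /]# ceph --admin-daemon ASOK_FILE_NAME counter dump | jq '.[] | select(.key == "cephfs_mirror_peers") | .value[].counters | {snaps_synced, sync_failures, sync_bytes}'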
Additional Resources
- For details, see the Deployment of the Ceph File System section in the Red Hat Ceph Storage File System Guide.
- For details, see the Red Hat Ceph Storage Installation Guide.
- For details, see The Ceph File System Metadata Server section in the Red Hat Ceph Storage File System Guide.
- For details, see the Ceph File System mirrors section in the Red Hat Ceph Storage File System Guide.