Chapter 10. Ceph File System snapshot scheduling


As a storage administrator, you can take a point-in-time snapshot of a Ceph File System (CephFS) directory. CephFS snapshots are asynchronous, and you can choose the directories in which snapshots are created.

10.1. Prerequisites

  • A running and healthy Red Hat Ceph Storage cluster.
  • Deployment of a Ceph File System.

10.2. Ceph File System snapshot schedules

A Ceph File System (CephFS) can schedule snapshots of a file system directory. The scheduling of snapshots is managed by the Ceph Manager, and relies on Python Timers. The snapshot schedule data is stored as an object in the CephFS metadata pool, and at runtime, all the schedule data lives in a serialized SQLite database.

Important

When the storage cluster is under normal load, the scheduler keeps snapshots precisely the specified time apart. When the Ceph Manager is under heavy load, a snapshot might not be scheduled right away, resulting in a slightly delayed snapshot. If this happens, the next scheduled snapshot acts as if there was no delay. Scheduled snapshots that are delayed do not cause drift in the overall schedule.

Usage

Scheduling snapshots for a Ceph File System (CephFS) is managed by the snap_schedule Ceph Manager module. This module provides an interface to add, query, and delete snapshot schedules, and to manage the retention policies. It also implements the ceph fs snap-schedule command, with several subcommands to manage schedules and retention policies. All of the subcommands take the CephFS volume path and subvolume path arguments to specify the file system path when using multiple Ceph File Systems. If the CephFS volume path is not specified, the argument defaults to the first file system listed in the fs_map; if the subvolume path is not specified, the argument defaults to nothing.
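
As a hedged sketch of this default behavior, the following commands query the same path with and without an explicit file system. The file system name mycephfs is hypothetical, and it is assumed that the --fs argument shown later in this chapter for the add subcommand also applies to list:

Example

    [ceph: root@host01 /]# ceph fs snap-schedule list /
    [ceph: root@host01 /]# ceph fs snap-schedule list / --fs mycephfs

The first command operates on the first file system listed in the fs_map; the second targets mycephfs explicitly.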

Snapshot schedules are identified by the file system path, the repeat interval, and the start time. The repeat interval defines the time between two subsequent snapshots. The interval format is a number plus a time designator: h(our), d(ay), or w(eek). For example, an interval of 4h means one snapshot every four hours. The start time is a string value in the ISO format, %Y-%m-%dT%H:%M:%S, and if not specified, the start time uses a default value of last midnight. For example, if you schedule a snapshot at 14:45 using the default start time and a repeat interval of 1h, the first snapshot is taken at 15:00.
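
The following hedged sketch illustrates this timing behavior; the path /mnt/data is hypothetical:

Example

    [ceph: root@host02 /]# ceph fs snap-schedule add /mnt/data 1h
    # Issued at 14:45 with no START_TIME, the start defaults to last
    # midnight, so snapshots align to the whole hour and the first
    # snapshot is taken at 15:00.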

Retention policies are identified by the file system path and the retention policy specifications. A retention policy specification is either a number plus a time period designator, or concatenated pairs in the format COUNT TIME_PERIOD. The policy ensures that a number of snapshots are kept, and that the snapshots are at least the specified time period apart. The time period designators are: h(our), d(ay), w(eek), m(onth), y(ear), and n. The n time period designator is a special modifier, which means keep the last number of snapshots regardless of timing. For example, 4d means keeping four snapshots that are at least one day, or longer, apart from each other.
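
Both retention forms can be expressed with the retention add subcommand, which is covered in detail later in this chapter; the path /mnt/data is hypothetical:

Example

    [ceph: root@host02 /]# ceph fs snap-schedule retention add /mnt/data 4d
    # Keeps 4 snapshots that are at least one day apart.
    [ceph: root@host02 /]# ceph fs snap-schedule retention add /mnt/data 14h4w
    # Concatenated pairs: keeps 14 hourly and 4 weekly snapshots.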

10.3. Adding a snapshot schedule for a Ceph File System

You can add a snapshot schedule even for a CephFS path that does not exist yet. You can create one or more schedules for a single path; schedules are considered different if their repeat interval and start time differ.

A CephFS path can only have one retention policy, but a retention policy can have multiple count-time period pairs.

Note

Once the scheduler module is enabled, running the ceph fs snap-schedule command displays the available subcommands and their usage format.

Prerequisites

  • A running and healthy Red Hat Ceph Storage cluster.
  • Deployment of a Ceph File System.
  • Root-level access to the Ceph Manager and Metadata Server (MDS) nodes.
  • Enable CephFS snapshots on the file system, as shown in the sketch after this list.
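
If snapshots are not yet enabled on the file system, they can typically be turned on by setting the allow_new_snaps flag. This is a hedged sketch; FILE_SYSTEM_NAME is a placeholder:

    [ceph: root@host01 /]# ceph fs set FILE_SYSTEM_NAME allow_new_snaps true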

Procedure

  1. Log into the Cephadm shell on a Ceph Manager node:

    Example

    [root@host01 ~]# cephadm shell

  2. Enable the snap_schedule module:

    Example

    [ceph: root@host01 /]# ceph mgr module enable snap_schedule

  3. Log into the client node:

    Example

    [root@host02 ~]# cephadm shell

  4. Add a new schedule for a Ceph File System:

    Syntax

    ceph fs snap-schedule add FILE_SYSTEM_VOLUME_PATH REPEAT_INTERVAL [START_TIME]

    Example

    [ceph: root@host02 /]# ceph fs snap-schedule add /cephfs_kernelf739cwtus2/pmo9axbwsi 1h 2022-06-27T21:50:00

    Note

    START_TIME is represented in ISO 8601 format.

    This example creates a snapshot schedule for the CephFS path /cephfs_kernelf739cwtus2/pmo9axbwsi, taking a snapshot every hour, starting at 9:50 PM on 27 June 2022.

  5. Add a new retention policy for snapshots of a CephFS volume path:

    Syntax

    ceph fs snap-schedule retention add FILE_SYSTEM_VOLUME_PATH [COUNT_TIME_PERIOD_PAIR] TIME_PERIOD COUNT

    Example

    [ceph: root@host02 /]# ceph fs snap-schedule retention add /cephfs h 14
    [ceph: root@host02 /]# ceph fs snap-schedule retention add /cephfs d 4
    [ceph: root@host02 /]# ceph fs snap-schedule retention add /cephfs 14h4w

    The first command keeps 14 snapshots at least one hour apart, the second keeps 4 snapshots at least one day apart, and the third keeps 14 hourly and 4 weekly snapshots.

  6. List the snapshot schedules to verify the new schedule is created.

    Syntax

    ceph fs snap-schedule list FILE_SYSTEM_VOLUME_PATH [--format=plain|json] [--recursive=true]

    Example

    [ceph: root@host02 /]# ceph fs snap-schedule list /cephfs --recursive=true

    This example lists all schedules in the directory tree.

  7. Check the status of a snapshot schedule:

    Syntax

    ceph fs snap-schedule status FILE_SYSTEM_VOLUME_PATH [--format=plain|json]

    Example

    [ceph: root@host02 /]# ceph fs snap-schedule status /cephfs --format=json

    This example displays the status of the snapshot schedule for the CephFS /cephfs path in JSON format. If no format is specified, the default is plain text.
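
The JSON output has the same shape as the subvolume status example later in this chapter. The following is a hedged illustration only; the field values are hypothetical and derived from the examples above:

    {"fs": "cephfs", "path": "/cephfs", "rel_path": "/cephfs", "schedule": "1h", "retention": {"h": 14}, "start": "2022-06-27T21:50:00", "created": "2022-06-27T21:50:00", "first": null, "last": null, "last_pruned": null, "created_count": 0, "pruned_count": 0, "active": true}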

10.4. Adding a snapshot schedule for Ceph File System subvolume

You can create one or more snapshot schedules for a single Ceph File System (CephFS) subvolume path and manage their retention policies. Schedules are considered different if their repeat interval and start time differ.

You can add a snapshot schedule even for a CephFS path that does not exist yet. A CephFS path can only have one retention policy, but a retention policy can have multiple count-time period pairs.

Note

Once the scheduler module is enabled, running the ceph fs snap-schedule command displays the available subcommands and their usage format.

Important

Currently, only subvolumes that belong to the default subvolume group can be scheduled for snapshotting.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System (CephFS) deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • A CephFS subvolume and subvolume group created.

Procedure

  1. Get the subvolume path:

    Syntax

    ceph fs subvolume getpath VOLUME_NAME SUBVOLUME_NAME SUBVOLUME_GROUP_NAME

    Example

    [ceph: root@host02 /]# ceph fs subvolume getpath cephfs subvol_1 subvolgroup_1

  2. Add a new schedule for a Ceph File System subvolume path:

    Syntax

    ceph fs snap-schedule add /.. SNAP_SCHEDULE [START_TIME] --fs CEPH_FILE_SYSTEM_NAME --subvol SUBVOLUME_NAME

    Example

    [ceph: root@host02 /]# ceph fs snap-schedule add /cephfs_kernelf739cwtus2/pmo9axbwsi 1h 2022-06-27T21:50:00 --fs cephfs --subvol subvol_1
    Schedule set for path /..

    Note

    START_TIME is represented in ISO 8601 format.

    This example creates a snapshot schedule for the subvolume path, taking a snapshot every hour, starting at 9:50 PM on 27 June 2022.

  3. Add a new retention policy for snapshot schedules of a CephFS subvolume:

    Syntax

    ceph fs snap-schedule retention add SUBVOLUME_VOLUME_PATH [COUNT_TIME_PERIOD_PAIR] TIME_PERIOD COUNT

    Example

    [ceph: root@host02 /]# ceph fs snap-schedule retention add /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. h 14
    [ceph: root@host02 /]# ceph fs snap-schedule retention add /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. d 4
    [ceph: root@host02 /]# ceph fs snap-schedule retention add /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. 14h4w

    Retention added to path /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/..

    The first command keeps 14 snapshots at least one hour apart, the second keeps 4 snapshots at least one day apart, and the third keeps 14 hourly and 4 weekly snapshots.

  4. List the snapshot schedules:

    Syntax

    ceph fs snap-schedule list SUBVOLUME_VOLUME_PATH [--format=plain|json] [--recursive=true]

    Example

    [ceph: root@host02 /]# ceph fs snap-schedule list / --recursive=true
    
    /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. 4h

    This example lists all schedules in the directory tree.

  5. Check the status of a snapshot schedule:

    Syntax

    ceph fs snap-schedule status SUBVOLUME_VOLUME_PATH [--format=plain|json]

    Example

    [ceph: root@host02 /]# ceph fs snap-schedule status /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. --format=json
    
    {"fs": "cephfs", "subvol": "subvol_1", "path": "/volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/..", "rel_path": "/..", "schedule": "4h", "retention": {"h": 14}, "start": "2022-05-16T14:00:00", "created": "2023-03-20T08:47:18", "first": null, "last": null, "last_pruned": null, "created_count": 0, "pruned_count": 0, "active": true}

    This example displays the status of the snapshot schedule for the /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. path in JSON format.
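
Note that the long paths used in steps 3 through 5 come from the getpath command in step 1. As a hedged sketch, omitting the subvolume group name returns the path in the default group, which is currently the only group supported for scheduling; the output below reuses the UUID from the examples above and is illustrative:

    [ceph: root@host02 /]# ceph fs subvolume getpath cephfs subvol_1
    /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954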

10.5. Activating snapshot schedule for a Ceph File System

This section provides the steps to manually set the snapshot schedule to active for a Ceph File System (CephFS).

Prerequisites

  • A working Red Hat Ceph Storage cluster with a Ceph File System (CephFS) deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.

Procedure

  • Activate the snapshot schedule:

    Syntax

    ceph fs snap-schedule activate FILE_SYSTEM_VOLUME_PATH [REPEAT_INTERVAL]

    Example

    [ceph: root@host01 /]# ceph fs snap-schedule activate /cephfs

    This example activates all schedules for the CephFS /cephfs path.
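
To confirm the activation, you can query the schedule status and check the active field; a hedged sketch:

    [ceph: root@host01 /]# ceph fs snap-schedule status /cephfs --format=json
    # An activated schedule reports "active": true in the JSON output.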

10.6. Activating snapshot schedule for a Ceph File System subvolume

This section provides the steps to manually set the snapshot schedule to active for a Ceph File System (CephFS) subvolume.

Prerequisites

  • A working Red Hat Ceph Storage cluster with a Ceph File System (CephFS) deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.

Procedure

  • Activate the snapshot schedule:

    Syntax

    ceph fs snap-schedule activate SUB_VOLUME_PATH [REPEAT_INTERVAL]

    Example

    [ceph: root@host01 /]# ceph fs snap-schedule activate /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/..

    This example activates all schedules for the CephFS /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. path.

10.7. Deactivating snapshot schedule for a Ceph File System

This section provides the steps to manually set the snapshot schedule to inactive for a Ceph File System (CephFS). This action stops snapshots from being created for the path until the schedule is activated again.

Prerequisites

  • A working Red Hat Ceph Storage cluster with a Ceph File System (CephFS) deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • Snapshot schedule is created and is in active state.

Procedure

  • Deactivate a snapshot schedule for a CephFS path:

    Syntax

    ceph fs snap-schedule deactivate FILE_SYSTEM_VOLUME_PATH [REPEAT_INTERVAL]

    Example

    [ceph: root@host02 /]# ceph fs snap-schedule deactivate /cephfs 1d

    This example deactivates the daily snapshots for the /cephfs path, thereby pausing any further snapshot creation.

10.8. Deactivating snapshot schedule for a Ceph File System subvolume

This section provides the steps to manually set the snapshot schedule to inactive for a Ceph File System (CephFS) subvolume. This action stops snapshots from being created for the subvolume path until the schedule is activated again.

Prerequisites

  • A working Red Hat Ceph Storage cluster with a Ceph File System (CephFS) deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • Snapshot schedule is created and is in active state.

Procedure

  • Deactivate a snapshot schedule for a CephFS subvolume path:

    Syntax

    ceph fs snap-schedule deactivate SUB_VOLUME_PATH [REPEAT_INTERVAL]

    Example

    [ceph: root@host02 /]# ceph fs snap-schedule deactivate /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. 1d

    This example deactivates the daily snapshots for the /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. path, thereby pausing any further snapshot creation.

10.9. Removing a snapshot schedule for a Ceph File System

This section provides the steps to remove a snapshot schedule for a Ceph File System (CephFS).

Prerequisites

  • A working Red Hat Ceph Storage cluster with a Ceph File System (CephFS) deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • Snapshot schedule is created.

Procedure

  1. Remove a specific snapshot schedule:

    Syntax

    ceph fs snap-schedule remove FILE_SYSTEM_VOLUME_PATH [REPEAT_INTERVAL] [START_TIME]

    Example

    [ceph: root@host02 /]# ceph fs snap-schedule remove /cephfs 4h 2022-05-16T14:00:00

    This example removes the snapshot schedule for the /cephfs volume path that takes a snapshot every four hours and started at 2:00 PM on 16 May 2022.

  2. Remove all snapshot schedules for a specific CephFS volume path:

    Syntax

    ceph fs snap-schedule remove FILE_SYSTEM_VOLUME_PATH

    Example

    [ceph: root@host02 /]# ceph fs snap-schedule remove /cephfs

    This example removes all the snapshot schedules for the /cephfs volume path.
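
To verify that no schedules remain, you can list the path again; a hedged sketch, noting that the exact output for a path without schedules, an empty listing or an error, can vary by release:

    [ceph: root@host02 /]# ceph fs snap-schedule list /cephfs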

10.10. Removing a snapshot schedule for a Ceph File System subvolume

This section provides the steps to remove a snapshot schedule for a Ceph File System (CephFS) subvolume.

Prerequisites

  • A working Red Hat Ceph Storage cluster with a Ceph File System (CephFS) deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • Snapshot schedule is created.

Procedure

  • Remove a specific snapshot schedule:

    Syntax

    ceph fs snap-schedule remove SUB_VOLUME_PATH [REPEAT_INTERVAL] [START_TIME]

    Example

    [ceph: root@host02 /]# ceph fs snap-schedule remove /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. 4h 2022-05-16T14:00:00

    This example removes the snapshot schedule for the /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. subvolume path that takes a snapshot every four hours and started at 2:00 PM on 16 May 2022.

10.11. Removing snapshot schedule retention policy for a Ceph File System

This section provides the steps to remove the retention policy of the scheduled snapshots for a Ceph File System (CephFS).

Prerequisites

  • A working Red Hat Ceph Storage cluster with a Ceph File System (CephFS) deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • Snapshot schedule created for a CephFS volume path.

Procedure

  • Remove a retention policy on a CephFS path:

    Syntax

    ceph fs snap-schedule retention remove FILE_SYSTEM_VOLUME_PATH [COUNT_TIME_PERIOD_PAIR] TIME_PERIOD COUNT

    Example

    [ceph: root@host02 /]# ceph fs snap-schedule retention remove /cephfs h 4
    [ceph: root@host02 /]# ceph fs snap-schedule retention remove /cephfs 14d4w

    The first command removes the retention specification that keeps 4 hourly snapshots; the second removes the specification that keeps 14 daily and 4 weekly snapshots.
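
To confirm the change, you can check the retention field of the schedule status; a hedged sketch:

    [ceph: root@host02 /]# ceph fs snap-schedule status /cephfs --format=json
    # The "retention" field in the JSON output no longer includes the
    # removed COUNT_TIME_PERIOD pairs.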

10.12. Removing snapshot schedule retention policy for a Ceph File System subvolume

This section provides the steps to remove the retention policy of the scheduled snapshots for a Ceph File System (CephFS) subvolume.

Prerequisites

  • A working Red Hat Ceph Storage cluster with a Ceph File System (CephFS) deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • Snapshot schedule created for a CephFS subvolume path.

Procedure

  • Remove a retention policy on a CephFS subvolume path:

    Syntax

    ceph fs snap-schedule retention remove SUB_VOLUME_PATH [COUNT_TIME_PERIOD_PAIR] TIME_PERIOD COUNT

    Example

    [ceph: root@host02 /]# ceph fs snap-schedule retention remove /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. h 4
    [ceph: root@host02 /]# ceph fs snap-schedule retention remove /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. 14d4w

    The first command removes the retention specification that keeps 4 hourly snapshots; the second removes the specification that keeps 14 daily and 4 weekly snapshots.
