
Chapter 4. Management of Ceph File System volumes, sub-volume groups, and sub-volumes


As a storage administrator, you can use Red Hat’s Ceph Container Storage Interface (CSI) to manage Ceph File System (CephFS) exports. This also allows you to use other services, such as OpenStack’s file system service (Manila) by having a common command-line interface to interact with. The volumes module for the Ceph Manager daemon (ceph-mgr) implements the ability to export Ceph File Systems (CephFS).

The Ceph Manager volumes module implements the following file system export abstractions:

  • CephFS volumes
  • CephFS subvolume groups
  • CephFS subvolumes

4.1. Ceph File System volumes

As a storage administrator, you can create, list, and remove Ceph File System (CephFS) volumes. CephFS volumes are an abstraction for Ceph File Systems.

This section describes how to:

  • Create a Ceph File System volume.
  • List Ceph File System volumes.
  • View information about a Ceph File System volume.
  • Remove a Ceph File System volume.

4.1.1. Creating a Ceph file system volume

Ceph Orchestrator is a module for Ceph Manager that creates a Metadata Server (MDS) for the Ceph File System (CephFS). This section describes how to create a CephFS volume.

Note

This creates the Ceph File System, along with the data and metadata pools.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.

Procedure

  • Create a CephFS volume on the monitor node:

    Syntax

    ceph fs volume create VOLUME_NAME

    Example

    [ceph: root@host01 /]# ceph fs volume create cephfs

4.1.2. Listing Ceph file system volumes

This section describes the step to list the Ceph File System (CephFS) volumes.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • A CephFS volume.

Procedure

  • List the CephFS volume:

    Example

    [ceph: root@host01 /]# ceph fs volume ls

4.1.3. Viewing information about a Ceph file system volume

You can list basic details about a Ceph File System (CephFS) volume, such as attributes of the data and metadata pools of the CephFS volume, the number of pending subvolume deletions, and so on.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • A CephFS volume created.

Procedure

  • View information about a CephFS volume:

    Syntax

    ceph fs volume info VOLUME_NAME

    Example

    [ceph: root@host01 /]# ceph fs volume info cephfs
    {
        "mon_addrs": [
            "192.168.1.7:40977",
        ],
        "pending_subvolume_deletions": 0,
        "pools": {
            "data": [
                {
                    "avail": 106288709632,
                    "name": "cephfs.cephfs.data",
                    "used": 4096
                }
            ],
            "metadata": [
                {
                    "avail": 106288709632,
                    "name": "cephfs.cephfs.meta",
                    "used": 155648
                }
            ]
        },
        "used_size": 0
    }

The output of the ceph fs volume info command includes:

  • mon_addrs: List of monitor addresses.
  • pending_subvolume_deletions: Number of subvolumes pending deletion.
  • pools: Attributes of data and metadata pools.

    • avail: The amount of free space available in bytes.
    • name: Name of the pool.
    • used: The amount of storage consumed in bytes.
  • used_size: Current used size of the CephFS volume in bytes.

4.1.4. Removing a Ceph file system volume

Ceph Orchestrator is a module for Ceph Manager that removes the Metadata Server (MDS) for the Ceph File System (CephFS). This section shows how to remove the Ceph File System (CephFS) volume.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • A CephFS volume.

Procedure

  1. If the mon_allow_pool_delete option is not set to true, then set it to true before removing the CephFS volume:

    Example

    [ceph: root@host01 /]# ceph config set mon mon_allow_pool_delete true
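
    Optionally, check the current value before changing it. This is a minimal sketch; the ceph config get command reads the current value of the option from the cluster configuration database:

    Example

    [ceph: root@host01 /]# ceph config get mon mon_allow_pool_delete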

  2. Remove the CephFS volume:

    Syntax

    ceph fs volume rm VOLUME_NAME [--yes-i-really-mean-it]

    Example

    [ceph: root@host01 /]# ceph fs volume rm cephfs --yes-i-really-mean-it

4.2. Ceph File System subvolume groups

As a storage administrator, you can create, list, fetch the absolute path of, and remove Ceph File System (CephFS) subvolume groups. CephFS subvolume groups are abstractions at a directory level that effect policies, for example, file layouts, across a set of subvolumes.

Starting with Red Hat Ceph Storage 5.0, the subvolume group snapshot feature is not supported. You can only list and remove the existing snapshots of these subvolume groups.

This section describes how to:

  • Create a file system subvolume group.
  • Set and manage quotas on a file system subvolume group.
  • List file system subvolume groups.
  • Fetch the absolute path of a file system subvolume group.
  • List snapshots of a file system subvolume group.
  • Remove a snapshot of a file system subvolume group.
  • Remove a file system subvolume group.

4.2.1. Creating a file system subvolume group

This section describes how to create a Ceph File System (CephFS) subvolume group.

Note

When creating a subvolume group, you can specify its data pool layout, uid, gid, and file mode in octal numerals. By default, the subvolume group is created with an octal file mode ‘755’, uid ‘0’, gid ‘0’, and data pool layout of its parent directory.

Note

See Setting and managing quotas on a file system subvolume group to set quotas while creating a subvolume group.

Prerequisites

  • A working Red Hat Ceph Storage cluster with a Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.

Procedure

  • Create a CephFS subvolume group:

    Syntax

    ceph fs subvolumegroup create VOLUME_NAME GROUP_NAME [--pool_layout DATA_POOL_NAME --uid UID --gid GID --mode OCTAL_MODE]

    Example

    [ceph: root@host01 /]# ceph fs subvolumegroup create cephfs subgroup0

    The command succeeds even if the subvolume group already exists.
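
    Optionally, you can override the defaults described in the note above by passing any of the optional flags from the syntax. The following is a sketch only; the UID, GID, mode, and group name values are illustrative:

    Example

    [ceph: root@host01 /]# ceph fs subvolumegroup create cephfs subgroup1 --uid 1000 --gid 1000 --mode 770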

4.2.2. Setting and managing quotas on a file system subvolume group

This section describes how to set and manage quotas on a Ceph File System (CephFS) subvolume group.

Prerequisites

  • A working Red Hat Ceph Storage cluster with a Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.

Procedure

  1. Set quotas while creating a subvolume group by providing size in bytes:

    Syntax

    ceph fs subvolumegroup create VOLUME_NAME GROUP_NAME [--size SIZE_IN_BYTES] [--pool_layout DATA_POOL_NAME] [--uid UID] [--gid GID] [--mode OCTAL_MODE]

    Example

    [ceph: root@host01 /]# ceph fs subvolumegroup create cephfs subvolgroup_2 --size 10737418240

  2. Resize a subvolume group:

    Syntax

    ceph fs subvolumegroup resize VOLUME_NAME GROUP_NAME NEW_SIZE [--no_shrink]

    Example

    [ceph: root@host01 /]# ceph fs subvolumegroup resize cephfs subvolgroup_2 20737418240
    [
        {
            "bytes_used": 10768679044
        },
        {
            "bytes_quota": 20737418240
        },
        {
            "bytes_pcent": "51.93"
        }
    ]

  3. Fetch the metadata of a subvolume group:

    Syntax

    ceph fs subvolumegroup info VOLUME_NAME GROUP_NAME

    Example

    [ceph: root@host01 /]# ceph fs subvolumegroup info cephfs subvolgroup_2
    {
        "atime": "2022-10-05 18:00:39",
        "bytes_pcent": "51.85",
        "bytes_quota": 20768679043,
        "bytes_used": 10768679044,
        "created_at": "2022-10-05 18:00:39",
        "ctime": "2022-10-05 18:21:26",
        "data_pool": "cephfs.cephfs.data",
        "gid": 0,
        "mode": 16877,
        "mon_addrs": [
            "60.221.178.236:1221",
            "205.64.75.112:1221",
            "20.209.241.242:1221"
        ],
        "mtime": "2022-10-05 18:01:25",
        "uid": 0
    }

4.2.3. Listing file system subvolume groups

This section describes the step to list the Ceph File System (CephFS) subvolume groups.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • A CephFS subvolume group.

Procedure

  • List the CephFS subvolume groups:

    Syntax

    ceph fs subvolumegroup ls VOLUME_NAME

    Example

    [ceph: root@host01 /]# ceph fs subvolumegroup ls cephfs

4.2.4. Fetching absolute path of a file system subvolume group

This section shows how to fetch the absolute path of a Ceph File System (CephFS) subvolume group.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • A CephFS subvolume group.

Procedure

  • Fetch the absolute path of the CephFS subvolume group:

    Syntax

    ceph fs subvolumegroup getpath VOLUME_NAME GROUP_NAME

    Example

    [ceph: root@host01 /]# ceph fs subvolumegroup getpath cephfs subgroup0

4.2.5. Listing snapshots of a file system subvolume group

This section provides the steps to list the snapshots of a Ceph File System (CephFS) subvolume group.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • A CephFS subvolume group.
  • Snapshots of the subvolume group.

Procedure

  • List the snapshots of a CephFS subvolume group:

    Syntax

    ceph fs subvolumegroup snapshot ls VOLUME_NAME GROUP_NAME

    Example

    [ceph: root@host01 /]# ceph fs subvolumegroup snapshot ls cephfs subgroup0

4.2.6. Removing snapshot of a file system subvolume group

This section provides the step to remove snapshots of a Ceph File System (CephFS) subvolume group.

Note

Using the --force flag allows the command to succeed even if the snapshot does not exist; without the flag, the command fails when the snapshot is missing.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • A Ceph File System volume.
  • A snapshot of the subvolume group.

Procedure

  • Remove the snapshot of the CephFS subvolume group:

    Syntax

    ceph fs subvolumegroup snapshot rm VOLUME_NAME GROUP_NAME SNAP_NAME [--force]

    Example

    [ceph: root@host01 /]# ceph fs subvolumegroup snapshot rm cephfs subgroup0 snap0 --force

4.2.7. Removing a file system subvolume group

This section shows how to remove the Ceph File System (CephFS) subvolume group.

Note

Removing a subvolume group fails if the group is not empty or does not exist. The --force flag allows the removal of a non-existent subvolume group to succeed.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • A CephFS subvolume group.

Procedure

  • Remove the CephFS subvolume group:

    Syntax

    ceph fs subvolumegroup rm VOLUME_NAME GROUP_NAME [--force]

    Example

    [ceph: root@host01 /]# ceph fs subvolumegroup rm cephfs subgroup0 --force

4.3. Ceph File System subvolumes

As a storage administrator, you can create, list, resize, fetch the absolute path of, fetch the metadata of, and remove Ceph File System (CephFS) subvolumes. Additionally, you can also create, list, and remove snapshots of these subvolumes. CephFS subvolumes are an abstraction for independent Ceph File System directory trees.

This section describes how to:

  • Create a file system subvolume.
  • List file system subvolumes.
  • Resize a file system subvolume.
  • Fetch the absolute path of a file system subvolume.
  • Fetch the metadata of a file system subvolume.
  • Create a snapshot of a file system subvolume.
  • Clone subvolumes from snapshots.
  • List snapshots of a file system subvolume.
  • Fetch the metadata of the snapshots of a file system subvolume.
  • Remove a file system subvolume.
  • Remove a snapshot of a file system subvolume.

4.3.1. Creating a file system subvolume

This section describes how to create a Ceph File System (CephFS) subvolume.

Note

When creating a subvolume, you can specify its subvolume group, data pool layout, uid, gid, file mode in octal numerals, and size in bytes. The subvolume can be created in a separate RADOS namespace by specifying the --namespace-isolated option. By default, a subvolume is created within the default subvolume group, and with an octal file mode ‘755’, uid of its subvolume group, gid of its subvolume group, data pool layout of its parent directory, and no size limit.

Prerequisites

  • A working Red Hat Ceph Storage cluster with a Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.

Procedure

  • Create a CephFS subvolume:

    Syntax

    ceph fs subvolume create VOLUME_NAME SUBVOLUME_NAME [--size SIZE_IN_BYTES --group_name SUBVOLUME_GROUP_NAME --pool_layout DATA_POOL_NAME --uid UID --gid GID --mode OCTAL_MODE] [--namespace-isolated]

    Example

    [root@mon ~]# ceph fs subvolume create cephfs sub0 --group_name subgroup0 --namespace-isolated

    The command succeeds even if the subvolume already exists.
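
    Optionally, you can apply the settings described in the note above by passing the optional flags from the syntax. The following is a sketch only; the 5 GiB quota, mode, and subvolume name are illustrative:

    Example

    [root@mon ~]# ceph fs subvolume create cephfs sub1 --group_name subgroup0 --size 5368709120 --mode 700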

4.3.2. Listing file system subvolumes

This section describes the step to list Ceph File System (CephFS) subvolumes.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • A CephFS subvolume.

Procedure

  • List the CephFS subvolume:

    Syntax

    ceph fs subvolume ls VOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME]

    Example

    [root@mon ~]# ceph fs subvolume ls cephfs --group_name subgroup0

4.3.3. Resizing a file system subvolume

This section describes the step to resize the Ceph File System (CephFS) subvolume.

Note

The ceph fs subvolume resize command resizes the subvolume quota using the size specified by NEW_SIZE. The --no_shrink flag prevents the subvolume from shrinking below the currently used size of the subvolume. The subvolume can be resized to an infinite size by passing inf or infinite as the NEW_SIZE.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • A CephFS subvolume.

Procedure

  • Resize a CephFS subvolume:

    Syntax

    ceph fs subvolume resize VOLUME_NAME SUBVOLUME_NAME NEW_SIZE [--group_name SUBVOLUME_GROUP_NAME] [--no_shrink]

    Example

    [root@mon ~]# ceph fs subvolume resize cephfs sub0 1024000000 --group_name subgroup0 --no_shrink
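
    As described in the note above, you can also remove the size limit by passing inf or infinite as the new size. The following sketch reuses the names from the previous example:

    Example

    [root@mon ~]# ceph fs subvolume resize cephfs sub0 inf --group_name subgroup0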

4.3.4. Fetching absolute path of a file system subvolume

This section shows how to fetch the absolute path of a Ceph File System (CephFS) subvolume.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • A CephFS subvolume.

Procedure

  • Fetch the absolute path of the CephFS subvolume:

    Syntax

    ceph fs subvolume getpath VOLUME_NAME SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME]

    Example

    [root@mon ~]# ceph fs subvolume getpath cephfs sub0 --group_name subgroup0

4.3.5. Fetching metadata of a file system subvolume

This section shows how to fetch metadata of a Ceph File System (CephFS) subvolume.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • A CephFS subvolume.

Procedure

  • Fetch the metadata of a CephFS subvolume:

    Syntax

    ceph fs subvolume info VOLUME_NAME SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME]

    Example

    [root@mon ~]# ceph fs subvolume info cephfs sub0 --group_name subgroup0

    Example output

    # ceph fs subvolume info cephfs sub0
    {
        "atime": "2023-07-14 08:52:46",
        "bytes_pcent": "0.00",
        "bytes_quota": 1024000000,
        "bytes_used": 0,
        "created_at": "2023-07-14 08:52:46",
        "ctime": "2023-07-14 08:53:54",
        "data_pool": "cephfs.cephfs.data",
        "features": [
            "snapshot-clone",
            "snapshot-autoprotect",
            "snapshot-retention"
        ],
        "flavor": "2",
        "gid": 0,
        "mode": 16877,
        "mon_addrs": [
            "10.0.208.172:6789",
            "10.0.211.197:6789",
            "10.0.209.212:6789"
        ],
        "mtime": "2023-07-14 08:52:46",
        "path": "/volumes/_nogroup/sub0/834c5cbc-f5db-4481-80a3-aca92ff0e7f3",
        "pool_namespace": "",
        "state": "complete",
        "type": "subvolume",
        "uid": 0
    }

The output format is JSON and contains the following fields:

  • atime: access time of subvolume path in the format "YYYY-MM-DD HH:MM:SS".
  • bytes_pcent: quota used in percentage if quota is set, else displays "undefined".
  • bytes_quota: quota size in bytes if quota is set, else displays "infinite".
  • bytes_used: current used size of the subvolume in bytes.
  • created_at: time of creation of subvolume in the format "YYYY-MM-DD HH:MM:SS".
  • ctime: change time of subvolume path in the format "YYYY-MM-DD HH:MM:SS".
  • data_pool: data pool the subvolume belongs to.
  • features: features supported by the subvolume, such as "snapshot-clone", "snapshot-autoprotect", or "snapshot-retention".
  • flavor: subvolume version, either 1 for version one or 2 for version two.
  • gid: group ID of subvolume path.
  • mode: mode of subvolume path.
  • mon_addrs: list of monitor addresses.
  • mtime: modification time of subvolume path in the format "YYYY-MM-DD HH:MM:SS".
  • path: absolute path of a subvolume.
  • pool_namespace: RADOS namespace of the subvolume.
  • state: current state of the subvolume, such as "complete" or "snapshot-retained".
  • type: subvolume type indicating whether it is a clone or subvolume.
  • uid: user ID of subvolume path.

4.3.6. Creating snapshot of a file system subvolume

This section shows how to create snapshots of a Ceph File System (CephFS) subvolume.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • A CephFS subvolume.
  • In addition to read (r) and write (w) capabilities, clients also require the s flag on a directory path within the file system.

Procedure

  1. Verify that the s flag is set on the directory:

    Syntax

    ceph auth get CLIENT_NAME

    Example

    [root@mon ~]# ceph auth get client.0
    [client.0]
        key = AQAz7EVWygILFRAAdIcuJ12opU/JKyfFmxhuaw==
        caps mds = "allow rw, allow rws path=/bar" 
    1
    
        caps mon = "allow r"
        caps osd = "allow rw tag cephfs data=cephfs_a" 
    2
    Copy to Clipboard Toggle word wrap

    1 2
    In the example, client.0 can create or delete snapshots in the bar directory of file system cephfs_a.
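
    If the client does not yet have the s flag, one way to grant it is with the ceph fs authorize command. The following is only a sketch; it assumes the cephfs_a file system and /bar directory from the example above, so adjust the file system, client name, and paths for your environment:

    Example

    [root@mon ~]# ceph fs authorize cephfs_a client.0 / rw /bar rws
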
  2. Create a snapshot of the Ceph File System subvolume:

    Syntax

    ceph fs subvolume snapshot create VOLUME_NAME SUBVOLUME_NAME SNAP_NAME [--group_name GROUP_NAME]

    Example

    [root@mon ~]# ceph fs subvolume snapshot create cephfs sub0 snap0 --group_name subgroup0

4.3.7. Cloning subvolumes from snapshots

Subvolumes can be created by cloning subvolume snapshots. It is an asynchronous operation involving copying data from a snapshot to a subvolume.

Note

Cloning is inefficient for very large data sets.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • To create or delete snapshots, in addition to read and write capability, clients require the s flag on a directory path within the file system.

    Syntax

    CLIENT_NAME
        key = AQAz7EVWygILFRAAdIcuJ12opU/JKyfFmxhuaw==
        caps mds = allow rw, allow rws path=DIRECTORY_PATH
        caps mon = allow r
        caps osd = allow rw tag cephfs data=DIRECTORY_NAME

    In the following example, client.0 can create or delete snapshots in the bar directory of filesystem cephfs_a.

    Example

    [client.0]
        key = AQAz7EVWygILFRAAdIcuJ12opU/JKyfFmxhuaw==
        caps mds = "allow rw, allow rws path=/bar"
        caps mon = "allow r"
        caps osd = "allow rw tag cephfs data=cephfs_a"

Procedure

  1. Create a Ceph File System (CephFS) volume:

    Syntax

    ceph fs volume create VOLUME_NAME

    Example

    [root@mon ~]# ceph fs volume create cephfs

    This creates the CephFS file system, along with its data and metadata pools.

  2. Create a subvolume group. By default, the subvolume group is created with an octal file mode '755', and data pool layout of its parent directory.

    Syntax

    ceph fs subvolumegroup create VOLUME_NAME GROUP_NAME [--pool_layout DATA_POOL_NAME --uid UID --gid GID --mode OCTAL_MODE]

    Example

    [root@mon ~]# ceph fs subvolumegroup create cephfs subgroup0

  3. Create a subvolume. By default, a subvolume is created within the default subvolume group, and with an octal file mode ‘755’, uid of its subvolume group, gid of its subvolume group, data pool layout of its parent directory, and no size limit.

    Syntax

    ceph fs subvolume create VOLUME_NAME SUBVOLUME_NAME [--size SIZE_IN_BYTES --group_name SUBVOLUME_GROUP_NAME --pool_layout DATA_POOL_NAME --uid UID --gid GID --mode OCTAL_MODE]

    Example

    [root@mon ~]# ceph fs subvolume create cephfs sub0 --group_name subgroup0

  4. Create a snapshot of a subvolume:

    Syntax

    ceph fs subvolume snapshot create VOLUME_NAME SUBVOLUME_NAME SNAP_NAME [--group_name SUBVOLUME_GROUP_NAME]

    Example

    [root@mon ~]# ceph fs subvolume snapshot create cephfs sub0 snap0  --group_name subgroup0

  5. Initiate a clone operation:

    Note

    By default, cloned subvolumes are created in the default group.

    1. If the source subvolume and the target clone are in the default group, run the following command:

      Syntax

      ceph fs subvolume snapshot clone VOLUME_NAME SUBVOLUME_NAME SNAP_NAME TARGET_CLONE_NAME

      Example

      [root@mon ~]# ceph fs subvolume snapshot clone cephfs sub0 snap0 clone0

    2. If the source subvolume is in a non-default group, specify the source subvolume group in the following command:

      Syntax

      ceph fs subvolume snapshot clone VOLUME_NAME SUBVOLUME_NAME SNAP_NAME TARGET_CLONE_NAME --group_name SUBVOLUME_GROUP_NAME

      Example

      [root@mon ~]# ceph fs subvolume snapshot clone cephfs sub0 snap0 clone0 --group_name subgroup0

    3. If the target clone is in a non-default group, specify the target group in the following command:

      Syntax

      ceph fs subvolume snapshot clone VOLUME_NAME SUBVOLUME_NAME SNAP_NAME TARGET_CLONE_NAME --target_group_name SUBVOLUME_GROUP_NAME

      Example

      [root@mon ~]# ceph fs subvolume snapshot clone cephfs sub0 snap0 clone0 --target_group_name subgroup1

  6. Check the status of the clone operation:

    Syntax

    ceph fs clone status VOLUME_NAME CLONE_NAME [--group_name TARGET_GROUP_NAME]

    Example

    [root@mon ~]# ceph fs clone status cephfs clone0 --group_name subgroup1
    
    {
      "status": {
        "state": "complete"
      }
    }


4.3.8. Listing snapshots of a file system subvolume

This section provides the step to list the snapshots of a Ceph File System (CephFS) subvolume.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • A CephFS subvolume.
  • Snapshots of the subvolume.

Procedure

  • List the snapshots of a CephFS subvolume:

    Syntax

    ceph fs subvolume snapshot ls VOLUME_NAME SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME]

    Example

    [root@mon ~]# ceph fs subvolume snapshot ls cephfs sub0 --group_name subgroup0

4.3.9. Fetching metadata of the snapshots of a file system subvolume

This section provides the step to fetch the metadata of the snapshots of a Ceph File System (CephFS) subvolume.

Prerequisites

  • A working Red Hat Ceph Storage cluster with CephFS deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • A CephFS subvolume.
  • Snapshots of the subvolume.

Procedure

  1. Fetch the metadata of the snapshots of a CephFS subvolume:

    Syntax

    ceph fs subvolume snapshot info VOLUME_NAME SUBVOLUME_NAME SNAP_NAME [--group_name SUBVOLUME_GROUP_NAME]

    Example

    [root@mon ~]# ceph fs subvolume snapshot info cephfs sub0 snap0 --group_name subgroup0

    Example output

    {
        "created_at": "2022-05-09 06:18:47.330682",
        "data_pool": "cephfs_data",
        "has_pending_clones": "no",
        "size": 0
    }

The output format is JSON and contains the following fields:

  • created_at: time of creation of snapshot in the format "YYYY-MM-DD HH:MM:SS.ffffff".
  • data_pool: data pool the snapshot belongs to.
  • has_pending_clones: "yes" if snapshot clone is in progress otherwise "no".
  • size: snapshot size in bytes.

4.3.10. Removing a file system subvolume

This section describes the step to remove the Ceph File System (CephFS) subvolume.

Note

The ceph fs subvolume rm command removes the subvolume and its contents in two steps. First, it moves the subvolume to a trash folder, and then asynchronously purges its contents.

A subvolume can be removed while retaining existing snapshots of the subvolume by using the --retain-snapshots option. If snapshots are retained, the subvolume is considered empty for all operations that do not involve the retained snapshots. Retained snapshots can be used as a clone source to recreate the subvolume, or cloned to a new subvolume.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • A CephFS subvolume.

Procedure

  1. Remove a CephFS subvolume:

    Syntax

    ceph fs subvolume rm VOLUME_NAME SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME] [--force] [--retain-snapshots]

    Example

    [root@mon ~]# ceph fs subvolume rm cephfs sub0 --group_name subgroup0 --retain-snapshots

  2. To recreate a subvolume from a retained snapshot:

    Syntax

    ceph fs subvolume snapshot clone VOLUME_NAME DELETED_SUBVOLUME RETAINED_SNAPSHOT NEW_SUBVOLUME --group_name SUBVOLUME_GROUP_NAME --target_group_name SUBVOLUME_TARGET_GROUP_NAME

    • NEW_SUBVOLUME can either be the same subvolume that was deleted earlier, or the snapshot can be cloned to a new subvolume.

    Example

    [root@mon ~]# ceph fs subvolume snapshot clone cephfs sub0 snap0 sub1 --group_name subgroup0 --target_group_name subgroup0
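
    Because recreating the subvolume is a clone operation, you can verify its progress with the ceph fs clone status command described in Section 4.3.7. The following sketch uses the names from this example:

    Example

    [root@mon ~]# ceph fs clone status cephfs sub1 --group_name subgroup0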

4.3.11. Removing snapshot of a file system subvolume

This section provides the step to remove snapshots of a Ceph File System (CephFS) subvolume.

Note

Using the --force flag allows the command to succeed even if the snapshot does not exist; without the flag, the command fails when the snapshot is missing.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • A Ceph File System volume.
  • A snapshot of the subvolume.

Procedure

  • Remove the snapshot of the CephFS subvolume:

    Syntax

    ceph fs subvolume snapshot rm VOLUME_NAME SUBVOLUME_NAME SNAP_NAME [--group_name GROUP_NAME --force]

    Example

    [root@mon ~]# ceph fs subvolume snapshot rm cephfs sub0 snap0 --group_name subgroup0 --force

4.4. Metadata information on Ceph File System subvolumes

As a storage administrator, you can set, get, list, and remove metadata information of Ceph File System (CephFS) subvolumes.

Custom metadata allows users to store their own metadata in subvolumes. Users can store key-value pairs, similar to extended attributes (xattr), in a Ceph File System.

4.4.1. Viewing subvolume metrics for CephFS metadata server clients

To access subvolume-level metrics, a client can mount a Ceph File System (CephFS) or a specific subvolume.

To access client-specific metrics, a user mounts the complete CephFS. This exposes client-specific metrics of all the operations at the CephFS volume level.

Note

When a user mounts an entire CephFS, it can degrade the performance of that file system. To avoid this, users should perform a subvolume-level mount, as sketched after the following list.

Using a subvolume level mount has the following benefits:

  • Improving the file system performance.
  • Enabling clients to identify and isolate their operations to specific subvolume paths.
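
For example, a client can resolve the subvolume path and then mount only that path instead of the whole file system. The following is a minimal sketch, not a complete mount procedure: it assumes a kernel CephFS client, a single file system, and an existing admin keyring on the client host, and the monitor address, mount point, and subvolume UUID are illustrative:

    [ceph: root@host01 /]# ceph fs subvolume getpath cephfs sub0 --group_name subgroup0
    /volumes/subgroup0/sub0/SUBVOLUME_UUID

    [root@client ~]# mount -t ceph 192.168.1.7:6789:/volumes/subgroup0/sub0/SUBVOLUME_UUID /mnt/sub0 -o name=admin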

Three sets of metrics, IOPS, throughput, and latency (each for read and write), are implemented at the subvolume level. These subvolume metrics are collected from all Metadata Servers (MDSs), shared with the metrics aggregator, and aggregated within a sliding window.

4.4.1.1. Sliding window

A sliding window is a fixed-size time interval that moves forward incrementally. In CephFS, the sliding window monitors continuous streams of file system activity and performance data. A subvolume is considered active if operations occur within the sliding window. The default window size is 30 seconds.

Note

If no activity is detected during this interval, the subvolume is considered inactive.

The subvolume states are active or inactive, and each state affects metrics generation.

  • Active: The subvolume generates metrics immediately.
  • Inactive: The subvolume does not generate any metrics and no value is reported.

4.4.1.1.1. Sliding window interval for subvolume metrics

The sliding window algorithm includes the subv_metrics_window_interval output value. Both the default and minimum values are 30 seconds.

Note
  • The sliding window is only available for subvolume metrics, not for client metrics.
  • An administrator can configure different default sliding window values for different file systems.
name: subv_metrics_window_interval
type: secs
level: dev
desc: subvolume metrics sliding window interval, seconds
long_desc: interval in seconds to hold values in sliding window for subvolume metrics
default: 30
min: 30
services:
- mds
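
As a sketch of adjusting this interval, and assuming the option can be changed through the standard configuration commands for the mds service, you could raise the window to 60 seconds cluster-wide as shown below; per-file-system scoping mentioned in the note is not shown here:

    [ceph: root@host01 /]# ceph config set mds subv_metrics_window_interval 60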

4.4.1.2. Getting subvolume metrics

CephFS exports aggregated metrics per subvolume using the client asok dump_subvolume_metrics command. To view all metrics, including the mds_subvolume_metrics section, run the following command:

ceph tell mds.cephfs.ceph-gene-amk-diyngp-node4.paipgc counter dump

CephFS exports gauge subvolume metrics, as defined in the following table.

Table 4.1. Subvolume performance metrics exported

  • avg_read_iops: Average read IOPS (input/output operations per second) over the sliding window.
  • avg_read_tp_Bps: Average read throughput in bytes per second.
  • avg_read_lat_msec: Average read latency in milliseconds.
  • avg_write_iops: Average write IOPS over the sliding window.
  • avg_write_tp_Bps: Average write throughput in bytes per second.
  • avg_write_lat_msec: Average write latency in milliseconds.

The subvolume metrics are dumped as part of the same command. The mds_subvolume_metrics section in the output of the counter dump command displays the metrics for each subvolume.

Example

{
  "mds_subvolume_metrics": [
    {
      "labels": {
        "fs_name": "a",
        "subvolume_path": "/volumes/_nogroup/test_subvolume"
      },
      "counters": {
        "avg_read_iops": 0,
        "avg_read_tp_Bps": 11,
        "avg_read_lat_msec": 0,
        "avg_write_iops": 1564,
        "avg_write_tp_Bps": 6408316,
        "avg_write_lat_msec": 338,
        "last_window_end_sec": 0,
        "last_window_dur_sec": 10
      }
    }
  ]
}

Table 4.2. Subvolume performance metrics description

  • mds_subvolume_metrics (Array): List of subvolume metric objects.
  • labels (Object): Metadata identifying the file system and subvolume path.
  • fs_name (String, for example a): Name of the file system.
  • subvolume_path (String, for example /volumes/_nogroup/test_subvolume): Path of the subvolume within the file system.
  • counters (Object): Performance metrics for the subvolume.
  • avg_read_iops (Gauge, for example 0): Average read IOPS over the sliding window.
  • avg_read_tp_Bps (Gauge, for example 11): Average read throughput in bytes per second.
  • avg_read_lat_msec (Gauge, for example 0): Average read latency in milliseconds.
  • avg_write_iops (Gauge, for example 1564): Average write IOPS over the sliding window.
  • avg_write_tp_Bps (Gauge, for example 6408316): Average write throughput in bytes per second.
  • avg_write_lat_msec (Gauge, for example 338): Average write latency in milliseconds.
  • last_window_end_sec (Gauge, for example 0): End time of the last measurement window in seconds since the epoch.
  • last_window_dur_sec (Gauge, for example 10): Duration of the last measurement window in seconds.

4.4.2. Setting custom metadata on the file system subvolume

You can set custom metadata on the file system subvolume as a key-value pair.

Note

If the KEY_NAME already exists, the old value is replaced by the new value.

Note

The KEY_NAME and VALUE should be a string of ASCII characters as specified in Python's string.printable. The KEY_NAME is case-insensitive and is always stored in lower case.

Important

Custom metadata on a subvolume is not preserved when snapshotting the subvolume, and hence, is also not preserved when cloning the subvolume snapshot.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • A Ceph File System (CephFS), CephFS volume, subvolume group, and subvolume created.

Procedure

  1. Set the metadata on the CephFS subvolume:

    Syntax

    ceph fs subvolume metadata set VOLUME_NAME SUBVOLUME_NAME KEY_NAME VALUE [--group_name SUBVOLUME_GROUP_NAME]

    Example

    [ceph: root@host01 /]# ceph fs subvolume metadata set cephfs sub0 test_meta cluster --group_name subgroup0

  2. Optional: Set the custom metadata with a space in the KEY_NAME:

    Example

    [ceph: root@host01 /]# ceph fs subvolume metadata set cephfs sub0 "test meta" cluster --group_name subgroup0

    This creates another metadata entry with the KEY_NAME test meta and the VALUE cluster.

  3. Optional: You can also set the same metadata with a different value:

    Example

    [ceph: root@host01 /]# ceph fs subvolume metadata set cephfs sub0 "test_meta" cluster2 --group_name subgroup0

4.4.3. Getting custom metadata on the file system subvolume

You can get the custom metadata, the key-value pairs, of a Ceph File System (CephFS) subvolume in a volume, and optionally, in a specific subvolume group.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • A CephFS volume, subvolume group, and subvolume created.
  • A custom metadata created on the CephFS subvolume.

Procedure

  • Get the metadata on the CephFS subvolume:

    Syntax

    ceph fs subvolume metadata get VOLUME_NAME SUBVOLUME_NAME KEY_NAME [--group_name SUBVOLUME_GROUP_NAME]

    Example

    [ceph: root@host01 /]# ceph fs subvolume metadata get cephfs sub0 test_meta --group_name subgroup0
    
    cluster

4.4.4. Listing custom metadata on the file system subvolume

You can list the custom metadata associated with the key of a Ceph File System (CephFS) subvolume in a volume, and optionally, in a specific subvolume group.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • A CephFS volume, subvolume group, and subvolume created.
  • A custom metadata created on the CephFS subvolume.

Procedure

  • List the metadata on the CephFS subvolume:

    Syntax

    ceph fs subvolume metadata ls VOLUME_NAME SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME]

    Example

    [ceph: root@host01 /]# ceph fs subvolume metadata ls cephfs sub0
    {
        "test_meta": "cluster"
    }

4.4.5. Removing custom metadata from the file system subvolume

You can remove the custom metadata, the key-value pairs, of a Ceph File System (CephFS) subvolume in a volume, and optionally, in a specific subvolume group.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • A CephFS volume, subvolume group, and subvolume created.
  • A custom metadata created on the CephFS subvolume.

Procedure

  1. Remove the custom metadata on the CephFS subvolume:

    Syntax

    ceph fs subvolume metadata rm VOLUME_NAME SUBVOLUME_NAME KEY_NAME [--group_name SUBVOLUME_GROUP_NAME]

    Example

    [ceph: root@host01 /]# ceph fs subvolume metadata rm cephfs sub0 test_meta --group_name subgroup0

  2. List the metadata:

    Example

    [ceph: root@host01 /]# ceph fs subvolume metadata ls cephfs sub0
    
    {}
