Chapter 5. Management of Ceph File System volumes, sub-volumes, and sub-volume groups


As a storage administrator, you can use Red Hat’s Ceph Container Storage Interface (CSI) to manage Ceph File System (CephFS) exports. This also allows you to use other services, such as OpenStack’s file system service (Manila) by having a common command-line interface to interact with. The volumes module for the Ceph Manager daemon (ceph-mgr) implements the ability to export Ceph File Systems (CephFS).

The Ceph Manager volumes module implements the following file system export abstractions:

  • CephFS volumes
  • CephFS subvolume groups
  • CephFS subvolumes

This chapter describes how to work with CephFS volumes, subvolumes, and subvolume groups.

5.1. Ceph File System volumes

As a storage administrator, you can create, list, and remove Ceph File System (CephFS) volumes. CephFS volumes are an abstraction for Ceph File Systems.

This section describes how to create, list, and remove file system volumes.

5.1.1. Creating a file system volume

Ceph Manager’s orchestrator module creates a Metadata Server (MDS) for the Ceph File System (CephFS). This section describes how to create a CephFS volume.

Note

This creates the Ceph File System, along with the data and metadata pools.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.

Procedure

  1. Create a CephFS volume:

    Syntax

    ceph fs volume create VOLUME_NAME

    Example

    [root@mon ~]# ceph fs volume create cephfs
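
    As an optional verification, you can confirm that the new file system and its pools exist with the ceph fs ls command. The pool names below are illustrative only and vary by release and configuration:

    Example

    [root@mon ~]# ceph fs ls
    name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]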

5.1.2. Listing file system volumes

This section describes the step to list Ceph File System (CephFS) volumes.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • A CephFS volume.

Procedure

  1. List the CephFS volume:

    Example

    [root@mon ~]# ceph fs volume ls
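
    The output is a JSON list of volume names. A sample of the expected output, assuming a single volume named cephfs exists:

    [
        {
            "name": "cephfs"
        }
    ]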

5.1.3. Removing a file system volume

Ceph Manager’s orchestrator module removes the Metadata Server (MDS) for the Ceph File System (CephFS). This section shows how to remove a Ceph File System (CephFS) volume. Because this operation permanently deletes the file system along with its data and metadata pools, the command requires the --yes-i-really-mean-it flag.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • A CephFS volume.

Procedure

  1. Remove the CephFS volume:

    Syntax

    ceph fs volume rm VOLUME_NAME [--yes-i-really-mean-it]

    Example

    [root@mon ~]# ceph fs volume rm cephfs --yes-i-really-mean-it

5.2. Ceph File System subvolumes

As a storage administrator, you can create, list, fetch absolute path, fetch metadata, and remove Ceph File System (CephFS) subvolumes.

You can also authorize Ceph client users for CephFS subvolumes. Additionally, you can create, list, and remove snapshots of these subvolumes. CephFS subvolumes are an abstraction for independent Ceph File System directory trees.

This section describes how to create, manage, and remove these subvolumes and their snapshots.

5.2.1. Creating a file system subvolume

This section describes how to create a Ceph File System (CephFS) subvolume.

Note

When creating a subvolume, you can specify its subvolume group, data pool layout, uid, gid, file mode in octal numerals, and size in bytes. The subvolume can be created in a separate RADOS namespace by specifying the --namespace-isolated option. By default, a subvolume is created within the default subvolume group, with an octal file mode of ‘755’, the uid and gid of its subvolume group, the data pool layout of its parent directory, and no size limit.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.

Procedure

  1. Create a CephFS subvolume:

    Syntax

    ceph fs subvolume create VOLUME_NAME SUBVOLUME_NAME [--size SIZE_IN_BYTES --group_name SUBVOLUME_GROUP_NAME --pool_layout DATA_POOL_NAME --uid UID --gid GID --mode OCTAL_MODE] [--namespace-isolated]

    Example

    [root@mon ~]# ceph fs subvolume create cephfs sub0 --group_name subgroup0 --namespace-isolated

    The command succeeds even if the subvolume already exists.
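
    To illustrate the options described in the note above, the following hypothetical example creates a subvolume with a 10 GiB size limit and an explicit octal file mode, using flags from the syntax shown:

    Example

    [root@mon ~]# ceph fs subvolume create cephfs sub1 --size 10737418240 --group_name subgroup0 --mode 755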

5.2.2. Listing file system subvolumes

This section describes the step to list Ceph File System (CephFS) subvolumes.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • A CephFS subvolume.

Procedure

  1. List the CephFS subvolume:

    Syntax

    ceph fs subvolume ls VOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME]

    Example

    [root@mon ~]# ceph fs subvolume ls cephfs --group_name subgroup0

5.2.3. Authorizing Ceph client users for File System subvolumes

A Red Hat Ceph Storage cluster uses cephx for authentication, which is enabled by default. To use cephx with Ceph File System (CephFS) subvolumes, create a user with the correct authorization capabilities on a Ceph Monitor node and make its key available on the node where the Ceph File System is mounted. You can authorize the user to access CephFS subvolumes using the authorize command.

Prerequisites

  • A working Red Hat Ceph Storage cluster with CephFS deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • A CephFS volume created.

Procedure

  1. Create a CephFS subvolume:

    Syntax

    ceph fs subvolume create VOLUME_NAME SUBVOLUME_NAME [--size SIZE_IN_BYTES --group_name SUBVOLUME_GROUP_NAME --pool_layout DATA_POOL_NAME --uid UID --gid GID --mode OCTAL_MODE] [--namespace-isolated]

    Example

    [root@mon ~]# ceph fs subvolume create cephfs sub0 --group_name subgroup0 --namespace-isolated

    The command succeeds even if the subvolume already exists.

  2. Authorize the Ceph client user, with either read or write access, to CephFS subvolumes:

    Syntax

    ceph fs subvolume authorize VOLUME_NAME SUBVOLUME_NAME AUTH_ID [--group_name=GROUP_NAME] [--access_level=ACCESS_LEVEL]

    The ACCESS_LEVEL can be either r or rw, and AUTH_ID is the Ceph client user, which is a string.

    Example

    [root@mon ~]# ceph fs subvolume authorize cephfs sub0 guest --group_name=subgroup0 --access_level=rw

    In this example, client.guest is authorized to access the subvolume sub0 in the subvolume group subgroup0.
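
    To mount the subvolume as this user, you typically need the user's keyring on the client node. Assuming the standard Ceph CLI, one way to retrieve it is:

    Example

    [root@mon ~]# ceph auth get client.guest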

5.2.4. Deauthorizing Ceph client users for File System subvolumes

You can deauthorize the user to access the Ceph File System (CephFS) subvolumes using the deauthorize command.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • A CephFS volume and subvolume created.
  • Ceph client users authorized to access CephFS subvolumes.

Procedure

  • Deauthorize the Ceph client user’s access to CephFS subvolumes:

    Syntax

    ceph fs subvolume deauthorize VOLUME_NAME SUBVOLUME_NAME AUTH_ID [--group_name=GROUP_NAME]

    The AUTH_ID is the Ceph client user, which is a string.

    Example

    [root@mon ~]# ceph fs subvolume deauthorize cephfs sub0 guest --group_name=subgroup0

    In this example, client.guest is deauthorized from accessing the subvolume sub0 in the subvolume group subgroup0.

5.2.5. Listing Ceph client users for File System subvolumes

You can list the Ceph client users with access to the Ceph File System (CephFS) subvolumes using the authorized_list command.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • A CephFS volume and subvolume created.
  • Ceph client users authorized to access CephFS subvolumes.

Procedure

  • List the Ceph client user’s access to CephFS subvolumes:

    Syntax

    ceph fs subvolume authorized_list VOLUME_NAME SUBVOLUME_NAME [--group_name=GROUP_NAME]

    Example

    [root@mon ~]# ceph fs subvolume authorized_list cephfs sub0 --group_name=subgroup0
    [
        {
            "guest": "rw"
        }
    ]

5.2.6. Evicting Ceph client users from File System subvolumes

You can evict the Ceph client user from the Ceph File System (CephFS) subvolumes using the evict command, based on the AUTH_ID and the subvolume mounted.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • A CephFS volume and subvolume created.
  • Ceph client users authorized to access CephFS subvolumes.

Procedure

  • Evict the Ceph client user from the CephFS subvolumes:

    Syntax

    ceph fs subvolume evict VOLUME_NAME SUBVOLUME_NAME AUTH_ID [--group_name=GROUP_NAME]

    The AUTH_ID is the Ceph client user, which is a string.

    Example

    [root@mon ~]# ceph fs subvolume evict cephfs sub0 guest --group_name=subgroup0

    In this example, client.guest is evicted from the subvolume sub0 in the subvolume group subgroup0.

5.2.7. Resizing a file system subvolume

This section describes the step to resize a Ceph File System (CephFS) subvolume.

Note

The ceph fs subvolume resize command resizes the subvolume quota using the size specified by new_size. The --no_shrink flag prevents the subvolume from shrinking below its currently used size. The subvolume can be resized to an infinite size by passing inf or infinite as the new_size.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • A CephFS subvolume.

Procedure

  1. Resize a CephFS subvolume:

    Syntax

    ceph fs subvolume resize VOLUME_NAME SUBVOLUME_NAME NEW_SIZE [--group_name SUBVOLUME_GROUP_NAME] [--no_shrink]

    Example

    [root@mon ~]# ceph fs subvolume resize cephfs sub0 1024000000 --group_name subgroup0 --no_shrink
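
    As described in the note above, you can remove the size limit entirely by passing inf or infinite as the new size:

    Example

    [root@mon ~]# ceph fs subvolume resize cephfs sub0 inf --group_name subgroup0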

5.2.8. Fetching absolute path of a file system subvolume

This section shows how to fetch the absolute path of a Ceph File System (CephFS) subvolume.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • A CephFS subvolume.

Procedure

  1. Fetch the absolute path of the CephFS subvolume:

    Syntax

    ceph fs subvolume getpath VOLUME_NAME SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME]

    Example

    [root@mon ~]# ceph fs subvolume getpath cephfs sub0 --group_name subgroup0
    
    /volumes/subgroup0/sub0/c10cc8b8-851d-477f-99f2-1139d944f691

5.2.9. Fetching metadata of a file system subvolume

This section shows how to fetch metadata of a Ceph File System (CephFS) subvolume.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • A CephFS subvolume.

Procedure

  1. Fetch the metadata of a CephFS subvolume:

    Syntax

    ceph fs subvolume info VOLUME_NAME SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME]

    Example

    [root@mon ~]# ceph fs subvolume info cephfs sub0 --group_name subgroup0

    Example output

    {
        "atime": "2020-09-08 09:27:15",
        "bytes_pcent": "undefined",
        "bytes_quota": "infinite",
        "bytes_used": 0,
        "created_at": "2020-09-08 09:27:15",
        "ctime": "2020-09-08 09:27:15",
        "data_pool": "cephfs_data",
        "features": [
            "snapshot-clone",
            "snapshot-autoprotect",
            "snapshot-retention"
        ],
        "gid": 0,
        "mode": 16877,
        "mon_addrs": [
            "10.8.128.22:6789",
            "10.8.128.23:6789",
            "10.8.128.24:6789"
        ],
        "mtime": "2020-09-08 09:27:15",
        "path": "/volumes/subgroup0/sub0/6d01a68a-e981-4ebe-84ca-96b660879173",
        "pool_namespace": "",
        "state": "complete",
        "type": "subvolume",
        "uid": 0
    }

The output format is JSON and contains the following fields:

  • atime: access time of subvolume path in the format "YYYY-MM-DD HH:MM:SS".
  • mtime: modification time of subvolume path in the format "YYYY-MM-DD HH:MM:SS".
  • ctime: change time of subvolume path in the format "YYYY-MM-DD HH:MM:SS".
  • uid: uid of subvolume path.
  • gid: gid of subvolume path.
  • mode: mode of subvolume path.
  • mon_addrs: list of monitor addresses.
  • bytes_pcent: quota used in percentage if quota is set, else displays "undefined".
  • bytes_quota: quota size in bytes if quota is set, else displays "infinite".
  • bytes_used: current used size of the subvolume in bytes.
  • created_at: time of creation of subvolume in the format "YYYY-MM-DD HH:MM:SS".
  • data_pool: data pool the subvolume belongs to.
  • path: absolute path of a subvolume.
  • type: subvolume type, indicating whether it is a clone or a subvolume.
  • pool_namespace: RADOS namespace of the subvolume.
  • features: features supported by the subvolume, such as "snapshot-clone", "snapshot-autoprotect", or "snapshot-retention".
  • state: current state of the subvolume, such as "complete" or "snapshot-retained".

5.2.10. Creating snapshot of a file system subvolume

This section shows how to create a snapshot of a Ceph File System (CephFS) subvolume.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • A CephFS subvolume.
  • In addition to read (r) and write (w) capabilities, clients also require the s flag on a directory path within the file system.

Procedure

  1. Verify that the s flag is set on the directory:

    Syntax

    ceph auth get CLIENT_NAME

    Example

    client.0
        key: AQAz7EVWygILFRAAdIcuJ12opU/JKyfFmxhuaw==
        caps: [mds] allow rw, allow rws path=/bar
        caps: [mon] allow r
        caps: [osd] allow rw tag cephfs data=cephfs_a

    In the example, client.0 can create or delete snapshots in the bar directory of the file system cephfs_a.
  2. Create a snapshot of the Ceph File System subvolume:

    Syntax

    ceph fs subvolume snapshot create VOLUME_NAME SUBVOLUME_NAME SNAP_NAME [--group_name GROUP_NAME]

    Example

    [root@mon ~]# ceph fs subvolume snapshot create cephfs sub0 snap0 --group_name subgroup0

5.2.11. Cloning subvolumes from snapshots

Subvolumes can be created by cloning subvolume snapshots. Cloning is an asynchronous operation that copies data from a snapshot to a subvolume.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • To create or delete snapshots, in addition to read and write capability, clients require the s flag on a directory path within the file system.

    Syntax

    CLIENT_NAME
        key: KEY
        caps: [mds] allow rw, allow rws path=DIRECTORY_PATH
        caps: [mon] allow r
        caps: [osd] allow rw tag cephfs data=DIRECTORY_NAME

    In the following example, client.0 can create or delete snapshots in the bar directory of the file system cephfs_a.

    Example

    client.0
        key: AQAz7EVWygILFRAAdIcuJ12opU/JKyfFmxhuaw==
        caps: [mds] allow rw, allow rws path=/bar
        caps: [mon] allow r
        caps: [osd] allow rw tag cephfs data=cephfs_a

Procedure

  1. Create a Ceph File System (CephFS) volume:

    Syntax

    ceph fs volume create VOLUME_NAME

    Example

    [root@mon ~]# ceph fs volume create cephfs

    This creates the CephFS file system, along with its data and metadata pools.

  2. Create a subvolume group. By default, the subvolume group is created with an octal file mode of '755' and the data pool layout of its parent directory.

    Syntax

    ceph fs subvolumegroup create VOLUME_NAME GROUP_NAME [--pool_layout DATA_POOL_NAME --uid UID --gid GID --mode OCTAL_MODE]

    Example

    [root@mon ~]# ceph fs subvolumegroup create cephfs subgroup0

  3. Create a subvolume. By default, a subvolume is created within the default subvolume group, with an octal file mode of ‘755’, the uid and gid of its subvolume group, the data pool layout of its parent directory, and no size limit.

    Syntax

    ceph fs subvolume create VOLUME_NAME SUBVOLUME_NAME [--size SIZE_IN_BYTES --group_name SUBVOLUME_GROUP_NAME --pool_layout DATA_POOL_NAME --uid UID --gid GID --mode OCTAL_MODE]

    Example

    [root@mon ~]# ceph fs subvolume create cephfs sub0 --group_name subgroup0

  4. Create a snapshot of a subvolume:

    Syntax

    ceph fs subvolume snapshot create VOLUME_NAME SUBVOLUME_NAME SNAP_NAME [--group_name SUBVOLUME_GROUP_NAME]

    Example

    [root@mon ~]# ceph fs subvolume snapshot create cephfs sub0 snap0 --group_name subgroup0

  5. Initiate a clone operation:

    Note

    By default, cloned subvolumes are created in the default group.

    1. If the source subvolume and the target clone are in the default group, run the following command:

      Syntax

      ceph fs subvolume snapshot clone VOLUME_NAME SUBVOLUME_NAME SNAP_NAME TARGET_SUBVOLUME_NAME

      Example

      [root@mon ~]# ceph fs subvolume snapshot clone cephfs sub0 snap0 clone0

    2. If the source subvolume is in a non-default group, specify the source subvolume group in the following command:

      Syntax

      ceph fs subvolume snapshot clone VOLUME_NAME SUBVOLUME_NAME SNAP_NAME TARGET_SUBVOLUME_NAME --group_name SUBVOLUME_GROUP_NAME

      Example

      [root@mon ~]# ceph fs subvolume snapshot clone cephfs sub0 snap0 clone0 --group_name subgroup0

    3. If the target clone is in a non-default group, specify the target group in the following command:

      Syntax

      ceph fs subvolume snapshot clone VOLUME_NAME SUBVOLUME_NAME SNAP_NAME TARGET_SUBVOLUME_NAME --target_group_name SUBVOLUME_GROUP_NAME

      Example

      [root@mon ~]# ceph fs subvolume snapshot clone cephfs sub0 snap0 clone0 --target_group_name subgroup1

  6. Check the status of the clone operation:

    Syntax

    ceph fs clone status VOLUME_NAME CLONE_NAME [--group_name TARGET_GROUP_NAME]

    Example

    [root@mon ~]# ceph fs clone status cephfs clone0 --group_name subgroup1
    
    {
      "status": {
        "state": "complete"
      }
    }
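
    While the clone is still being copied, the state is reported as in-progress. If you need to abandon an in-progress clone, the Ceph CLI provides a cancel subcommand; a usage sketch with the same names as above:

    Example

    [root@mon ~]# ceph fs clone cancel cephfs clone0 --group_name subgroup1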

5.2.12. Listing snapshots of a file system subvolume

This section provides the step to list the snapshots of a Ceph File System (CephFS) subvolume.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • A CephFS subvolume.
  • Snapshots of the subvolume.

Procedure

  1. List the snapshots of a CephFS subvolume:

    Syntax

    ceph fs subvolume snapshot ls VOLUME_NAME SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME]

    Example

    [root@mon ~]# ceph fs subvolume snapshot ls cephfs sub0 --group_name subgroup0
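
    The output is a JSON list of snapshot names. A sample of the expected output, assuming a single snapshot named snap0 exists:

    [
        {
            "name": "snap0"
        }
    ]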

5.2.13. Fetching metadata of the snapshots of a file system subvolume

This section provides the step to fetch the metadata of the snapshots of a Ceph File System (CephFS) subvolume.

Prerequisites

  • A working Red Hat Ceph Storage cluster with CephFS deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • A CephFS subvolume.
  • Snapshots of the subvolume.

Procedure

  1. Fetch the metadata of the snapshots of a CephFS subvolume:

    Syntax

    ceph fs subvolume snapshot info VOLUME_NAME SUBVOLUME_NAME SNAP_NAME [--group_name SUBVOLUME_GROUP_NAME]

    Example

    [root@mon ~]# ceph fs subvolume snapshot info cephfs sub0 snap0 --group_name subgroup0

    Example output

    {
        "created_at": "2021-09-08 06:18:47.330682",
        "data_pool": "cephfs_data",
        "has_pending_clones": "no",
        "size": 0
    }

The output format is JSON and contains the following fields:

  • created_at: time of creation of snapshot in the format "YYYY-MM-DD HH:MM:SS:ffffff".
  • data_pool: data pool the snapshot belongs to.
  • has_pending_clones: "yes" if a snapshot clone operation is in progress, otherwise "no".
  • size: snapshot size in bytes.

5.2.14. Removing a file system subvolume

This section describes the step to remove a Ceph File System (CephFS) subvolume.

Note

The ceph fs subvolume rm command removes the subvolume and its contents in two steps. First, it moves the subvolume to a trash folder, and then asynchronously purges its contents.

A subvolume can be removed while retaining its existing snapshots by using the --retain-snapshots option. If snapshots are retained, the subvolume is considered empty for all operations not involving the retained snapshots. Retained snapshots can be used as a clone source to recreate the subvolume, or cloned to a newer subvolume.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • A CephFS subvolume.

Procedure

  1. Remove a CephFS subvolume:

    Syntax

    ceph fs subvolume rm VOLUME_NAME SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME] [--force] [--retain-snapshots]

    Example

    [root@mon ~]# ceph fs subvolume rm cephfs sub0 --group_name subgroup0 --retain-snapshots
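
    If you do not need to keep any snapshots, omit the --retain-snapshots flag to remove the subvolume and its contents outright:

    Example

    [root@mon ~]# ceph fs subvolume rm cephfs sub0 --group_name subgroup0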

  2. To recreate a subvolume from a retained snapshot:

    Syntax

    ceph fs subvolume snapshot clone VOLUME_NAME DELETED_SUBVOLUME RETAINED_SNAPSHOT NEW_SUBVOLUME --group_name SUBVOLUME_GROUP_NAME --target_group_name SUBVOLUME_TARGET_GROUP_NAME

    NEW_SUBVOLUME can either be the same subvolume that was deleted earlier or a new subvolume to clone to.

    Example

    [root@mon ~]# ceph fs subvolume snapshot clone cephfs sub0 snap0 sub1 --group_name subgroup0 --target_group_name subgroup0

5.2.15. Removing snapshot of a file system subvolume

This section provides the step to remove snapshots of a Ceph File System (CephFS) subvolume.

Note

Using the --force flag allows the command to succeed when it would otherwise fail if the snapshot does not exist.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • A Ceph File System volume.
  • A snapshot of the subvolume.

Procedure

  1. Remove the snapshot of the CephFS subvolume:

    Syntax

    ceph fs subvolume snapshot rm VOLUME_NAME SUBVOLUME_NAME SNAP_NAME [--group_name GROUP_NAME --force]

    Example

    [root@mon ~]# ceph fs subvolume snapshot rm cephfs sub0 snap0 --group_name subgroup0 --force

5.3. Ceph File System subvolume groups

As a storage administrator, you can create, list, fetch the absolute path of, and remove Ceph File System (CephFS) subvolume groups. Additionally, you can also create, list, and remove snapshots of these subvolume groups. CephFS subvolume groups are abstractions at a directory level which affect policies, for example, file layouts, across a set of subvolumes.

This section describes how to create, list, and remove subvolume groups and their snapshots.

5.3.1. Creating a file system subvolume group

This section describes how to create a Ceph File System (CephFS) subvolume group.

Note

When creating a subvolume group, you can specify its data pool layout, uid, gid, and file mode in octal numerals. By default, the subvolume group is created with an octal file mode of ‘755’, uid ‘0’, gid ‘0’, and the data pool layout of its parent directory.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.

Procedure

  1. Create a CephFS subvolume group:

    Syntax

    ceph fs subvolumegroup create VOLUME_NAME GROUP_NAME [--pool_layout DATA_POOL_NAME --uid UID --gid GID --mode OCTAL_MODE]

    Example

    [root@mon ~]# ceph fs subvolumegroup create cephfs subgroup0

    The command succeeds even if the subvolume group already exists.
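
    To illustrate the options described in the note above, the following hypothetical example overrides the default mode and ownership, using flags from the syntax shown:

    Example

    [root@mon ~]# ceph fs subvolumegroup create cephfs subgroup1 --mode 755 --uid 1000 --gid 1000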

5.3.2. Listing file system subvolume groups

This section describes the step to list Ceph File System (CephFS) subvolume groups.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • A CephFS subvolume group.

Procedure

  1. List the CephFS subvolume groups:

    Syntax

    ceph fs subvolumegroup ls VOLUME_NAME

    Example

    [root@mon ~]# ceph fs subvolumegroup ls cephfs

5.3.3. Fetching absolute path of a file system subvolume group

This section shows how to fetch the absolute path of a Ceph File System (CephFS) subvolume group.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • A CephFS subvolume group.

Procedure

  1. Fetch the absolute path of the CephFS subvolume group:

    Syntax

    ceph fs subvolumegroup getpath VOLUME_NAME GROUP_NAME

    Example

    [root@mon ~]# ceph fs subvolumegroup getpath cephfs subgroup0
    
    /volumes/subgroup0

5.3.4. Creating snapshot of a file system subvolume group

This section shows how to create snapshots of a Ceph File System (CephFS) subvolume group.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • A CephFS subvolume group.
  • In addition to read (r) and write (w) capabilities, clients also require the s flag on a directory path within the file system.

Procedure

  1. Verify that the s flag is set on the directory:

    Syntax

    ceph auth get CLIENT_NAME

    Example

    client.0
        key: AQAz7EVWygILFRAAdIcuJ12opU/JKyfFmxhuaw==
        caps: [mds] allow rw, allow rws path=/bar
        caps: [mon] allow r
        caps: [osd] allow rw tag cephfs data=cephfs_a

    In the example, client.0 can create or delete snapshots in the bar directory of the file system cephfs_a.
  2. Create a snapshot of the CephFS subvolume group:

    Syntax

    ceph fs subvolumegroup snapshot create VOLUME_NAME GROUP_NAME SNAP_NAME

    Example

    [root@mon ~]# ceph fs subvolumegroup snapshot create cephfs subgroup0 snap0

    The command implicitly snapshots all the subvolumes under the subvolume group.

5.3.5. Listing snapshots of a file system subvolume group

This section provides the steps to list the snapshots of a Ceph File System (CephFS) subvolume group.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • A CephFS subvolume group.
  • Snapshots of the subvolume group.

Procedure

  1. List the snapshots of a CephFS subvolume group:

    Syntax

    ceph fs subvolumegroup snapshot ls VOLUME_NAME GROUP_NAME

    Example

    [root@mon ~]# ceph fs subvolumegroup snapshot ls cephfs subgroup0

5.3.6. Removing snapshot of a file system subvolume group

This section provides the step to remove snapshots of a Ceph File System (CephFS) subvolume group.

Note

Using the --force flag allows the command to succeed when it would otherwise fail if the snapshot does not exist.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • A Ceph File System volume.
  • A snapshot of the subvolume group.

Procedure

  1. Remove the snapshot of the CephFS subvolume group:

    Syntax

    ceph fs subvolumegroup snapshot rm VOLUME_NAME GROUP_NAME SNAP_NAME [--force]

    Example

    [root@mon ~]# ceph fs subvolumegroup snapshot rm cephfs subgroup0 snap0 --force

5.3.7. Removing a file system subvolume group

This section shows how to remove a Ceph File System (CephFS) subvolume group.

Note

The removal of a subvolume group fails if the group is not empty or does not exist. The --force flag allows a non-existent subvolume group to be removed.

Prerequisites

  • A working Red Hat Ceph Storage cluster with Ceph File System deployed.
  • At least read access on the Ceph Monitor.
  • Read and write capability on the Ceph Manager nodes.
  • A CephFS subvolume group.

Procedure

  1. Remove the CephFS subvolume group:

    Syntax

    ceph fs subvolumegroup rm VOLUME_NAME GROUP_NAME [--force]

    Example

    [root@mon ~]# ceph fs subvolumegroup rm cephfs subgroup0 --force
