Chapter 4. Management of Ceph File System volumes, sub-volume groups, and sub-volumes
As a storage administrator, you can use Red Hat’s Ceph Container Storage Interface (CSI) to manage Ceph File System (CephFS) exports. This also allows other services, such as OpenStack’s file system service (Manila), to interact with CephFS through a common command-line interface. The volumes module for the Ceph Manager daemon (ceph-mgr) implements the ability to export Ceph File Systems (CephFS).
The Ceph Manager volumes module implements the following file system export abstractions:
- CephFS volumes
- CephFS subvolume groups
- CephFS subvolumes
This chapter describes how to work with:
- Ceph File System volumes
- Ceph File System subvolume groups
- Ceph File System subvolumes
4.1. Ceph File System volumes
As a storage administrator, you can create, list, and remove Ceph File System (CephFS) volumes. CephFS volumes are an abstraction for Ceph File Systems.
This section describes how to:
- Create a file system volume.
- List file system volumes.
- Remove a file system volume.
4.1.1. Creating a file system volume
Ceph Manager’s orchestrator module creates a Metadata Server (MDS) for the Ceph File System (CephFS). This section describes how to create a CephFS volume.
This creates the Ceph File System, along with the data and metadata pools.
Prerequisites
- A working Red Hat Ceph Storage cluster with Ceph File System deployed.
- At least read access on the Ceph Monitor.
- Read and write capability on the Ceph Manager nodes.
Procedure
Create a CephFS volume:
Syntax
ceph fs volume create VOLUME_NAME
Example
[root@mon ~]# ceph fs volume create cephfs
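Optionally, verify that the new volume and its pools exist by listing the file systems. The output below is an illustrative sketch and assumes the default pool naming for a volume called cephfs:
[root@mon ~]# ceph fs ls
name: cephfs, metadata pool: cephfs.cephfs.meta, data pools: [cephfs.cephfs.data ]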
4.1.2. Listing file system volumes
This section describes the step to list the Ceph File System (CephFS) volumes.
Prerequisites
- A working Red Hat Ceph Storage cluster with Ceph File System deployed.
- At least read access on the Ceph Monitor.
- Read and write capability on the Ceph Manager nodes.
- A CephFS volume.
Procedure
List the CephFS volume:
Example
[root@mon ~]# ceph fs volume ls
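The command returns the volume names in JSON format. The output below is illustrative for a cluster with a single volume named cephfs:
[root@mon ~]# ceph fs volume ls
[
    {
        "name": "cephfs"
    }
]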
4.1.3. Removing a file system volume
Ceph Manager’s orchestrator module removes the Metadata Server (MDS) for the Ceph File System (CephFS). This section shows how to remove the Ceph File System (CephFS) volume.
Prerequisites
- A working Red Hat Ceph Storage cluster with Ceph File System deployed.
- At least read access on the Ceph Monitor.
- Read and write capability on the Ceph Manager nodes.
- A CephFS volume.
Procedure
If the mon_allow_pool_delete option is not set to true, set it to true before removing the CephFS volume:
Example
[root@mon ~]# ceph config set mon mon_allow_pool_delete true
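Optionally, confirm the setting before proceeding. This is a minimal verification and the command simply echoes the current value:
[root@mon ~]# ceph config get mon mon_allow_pool_delete
true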
Remove the CephFS volume:
Syntax
ceph fs volume rm VOLUME_NAME [--yes-i-really-mean-it]
Example
[root@mon ~]# ceph fs volume rm cephfs --yes-i-really-mean-it
4.2. Ceph File System subvolume groups
As a storage administrator, you can create, list, fetch the absolute path of, and remove Ceph File System (CephFS) subvolume groups. CephFS subvolume groups are abstractions at a directory level that apply policies, for example, file layouts, across a set of subvolumes.
Starting with Red Hat Ceph Storage 5.0, the subvolume group snapshot feature is not supported. You can only list and remove the existing snapshots of these subvolume groups.
This section describes how to:
- Create a file system subvolume group.
- List file system subvolume groups.
- Fetch the absolute path of a file system subvolume group.
- Create a snapshot of a file system subvolume group.
- List snapshots of a file system subvolume group.
- Remove a snapshot of a file system subvolume group.
- Remove a file system subvolume group.
4.2.1. Creating a file system subvolume group
This section describes how to create a Ceph File System (CephFS) subvolume group.
When creating a subvolume group, you can specify its data pool layout, uid, gid, and file mode in octal numerals. By default, the subvolume group is created with an octal file mode ‘755’, uid ‘0’, gid ‘0’, and data pool layout of its parent directory.
Prerequisites
- A working Red Hat Ceph Storage cluster with a Ceph File System deployed.
- At least read access on the Ceph Monitor.
- Read and write capability on the Ceph Manager nodes.
Procedure
Create a CephFS subvolume group:
Syntax
ceph fs subvolumegroup create VOLUME_NAME GROUP_NAME [--pool_layout DATA_POOL_NAME --uid UID --gid GID --mode OCTAL_MODE]
Example
[root@mon ~]# ceph fs subvolumegroup create cephfs subgroup0
The command succeeds even if the subvolume group already exists.
4.2.2. Listing file system subvolume groups
This section describes the step to list the Ceph File System (CephFS) subvolume groups.
Prerequisites
- A working Red Hat Ceph Storage cluster with Ceph File System deployed.
- At least read access on the Ceph Monitor.
- Read and write capability on the Ceph Manager nodes.
- A CephFS subvolume group.
Procedure
List the CephFS subvolume groups:
Syntax
ceph fs subvolumegroup ls VOLUME_NAME
Example
[root@mon ~]# ceph fs subvolumegroup ls cephfs
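The command returns the subvolume group names in JSON format. For a volume with a single group named subgroup0, the output looks similar to the following illustrative example:
[root@mon ~]# ceph fs subvolumegroup ls cephfs
[
    {
        "name": "subgroup0"
    }
]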
4.2.3. Fetching absolute path of a file system subvolume group
This section shows how to fetch the absolute path of a Ceph File System (CephFS) subvolume group.
Prerequisites
- A working Red Hat Ceph Storage cluster with Ceph File System deployed.
- At least read access on the Ceph Monitor.
- Read and write capability on the Ceph Manager nodes.
- A CephFS subvolume group.
Procedure
Fetch the absolute path of the CephFS subvolume group:
Syntax
ceph fs subvolumegroup getpath VOLUME_NAME GROUP_NAME
Example
[root@mon ~]# ceph fs subvolumegroup getpath cephfs subgroup0
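The command prints the path of the subvolume group relative to the root of the volume. For a group named subgroup0, the output looks similar to:
[root@mon ~]# ceph fs subvolumegroup getpath cephfs subgroup0
/volumes/subgroup0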
4.2.4. Creating snapshot of a file system subvolume group
This section shows how to create snapshots of a Ceph File System (CephFS) subvolume group.
Prerequisites
- A working Red Hat Ceph Storage cluster with Ceph File System deployed.
- At least read access on the Ceph Monitor.
- Read and write capability on the Ceph Manager nodes.
- CephFS subvolume group.
- In addition to read (r) and write (w) capabilities, clients also require the s flag on a directory path within the file system.
Procedure
Verify that the s flag is set on the directory:
Syntax
ceph auth get CLIENT_NAME
Example
client.0
    key: AQAz7EVWygILFRAAdIcuJ12opU/JKyfFmxhuaw==
    caps: [mds] allow rw, allow rws path=/bar
    caps: [mon] allow r
    caps: [osd] allow rw tag cephfs data=cephfs_a
Create a snapshot of the CephFS subvolume group:
Syntax
ceph fs subvolumegroup snapshot create VOLUME_NAME GROUP_NAME SNAP_NAME
Example
[root@mon ~]# ceph fs subvolumegroup snapshot create cephfs subgroup0 snap0
The command implicitly snapshots all the subvolumes under the subvolume group.
4.2.5. Listing snapshots of a file system subvolume group
This section provides the steps to list the snapshots of a Ceph File System (CephFS) subvolume group.
Prerequisites
- A working Red Hat Ceph Storage cluster with Ceph File System deployed.
- At least read access on the Ceph Monitor.
- Read and write capability on the Ceph Manager nodes.
- A CephFS subvolume group.
- Snapshots of the subvolume group.
Procedure
List the snapshots of a CephFS subvolume group:
Syntax
ceph fs subvolumegroup snapshot ls VOLUME_NAME GROUP_NAME
Example
[root@mon ~]# ceph fs subvolumegroup snapshot ls cephfs subgroup0
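As with the other ls commands, the snapshot names are returned in JSON format. For a single snapshot named snap0, the output looks similar to this illustrative example:
[root@mon ~]# ceph fs subvolumegroup snapshot ls cephfs subgroup0
[
    {
        "name": "snap0"
    }
]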
4.2.6. Removing snapshot of a file system subvolume group
This section provides the step to remove snapshots of a Ceph File System (CephFS) subvolume group.
Using the --force flag allows the command to succeed when it would otherwise fail because the snapshot does not exist.
Prerequisites
- A working Red Hat Ceph Storage cluster with Ceph File System deployed.
- At least read access on the Ceph Monitor.
- Read and write capability on the Ceph Manager nodes.
- A Ceph File System volume.
- A snapshot of the subvolume group.
Procedure
Remove the snapshot of the CephFS subvolume group:
Syntax
ceph fs subvolumegroup snapshot rm VOLUME_NAME GROUP_NAME SNAP_NAME [--force]
Example
[root@mon ~]# ceph fs subvolumegroup snapshot rm cephfs subgroup0 snap0 --force
4.2.7. Removing a file system subvolume group
This section shows how to remove the Ceph File System (CephFS) subvolume group.
The removal of a subvolume group fails if the group is not empty or does not exist. The --force flag allows the removal command to succeed even if the subvolume group does not exist.
Prerequisites
- A working Red Hat Ceph Storage cluster with Ceph File System deployed.
- At least read access on the Ceph Monitor.
- Read and write capability on the Ceph Manager nodes.
- A CephFS subvolume group.
Procedure
Remove the CephFS subvolume group:
Syntax
ceph fs subvolumegroup rm VOLUME_NAME GROUP_NAME [--force]
Example
[root@mon ~]# ceph fs subvolumegroup rm cephfs subgroup0 --force
4.3. Ceph File System subvolumes
As a storage administrator, you can create, list, fetch absolute path, fetch metadata, and remove Ceph File System (CephFS) subvolumes. Additionally, you can also create, list, and remove snapshots of these subvolumes. CephFS subvolumes are an abstraction for independent Ceph File Systems directory trees.
This section describes how to:
- Create a file system subvolume.
- List file system subvolumes.
- Resize a file system subvolume.
- Fetch the absolute path of a file system subvolume.
- Fetch the metadata of a file system subvolume.
- Create a snapshot of a file system subvolume.
- Clone subvolumes from snapshots.
- List snapshots of a file system subvolume.
- Fetch the metadata of the snapshots of a file system subvolume.
- Remove a file system subvolume.
- Remove a snapshot of a file system subvolume.
4.3.1. Creating a file system subvolume
This section describes how to create a Ceph File System (CephFS) subvolume.
When creating a subvolume, you can specify its subvolume group, data pool layout, uid, gid, file mode in octal numerals, and size in bytes. The subvolume can be created in a separate RADOS namespace by specifying the --namespace-isolated
option. By default, a subvolume is created within the default subvolume group, and with an octal file mode ‘755’, uid of its subvolume group, gid of its subvolume group, data pool layout of its parent directory, and no size limit.
Prerequisites
- A working Red Hat Ceph Storage cluster with a Ceph File System deployed.
- At least read access on the Ceph Monitor.
- Read and write capability on the Ceph Manager nodes.
Procedure
Create a CephFS subvolume:
Syntax
ceph fs subvolume create VOLUME_NAME SUBVOLUME_NAME [--size SIZE_IN_BYTES --group_name SUBVOLUME_GROUP_NAME --pool_layout DATA_POOL_NAME --uid UID --gid GID --mode OCTAL_MODE] [--namespace-isolated]
Example
[root@mon ~]# ceph fs subvolume create cephfs sub0 --group_name subgroup0 --namespace-isolated
The command succeeds even if the subvolume already exists.
4.3.2. Listing file system subvolumes
This section describes the step to list the Ceph File System (CephFS) subvolumes.
Prerequisites
- A working Red Hat Ceph Storage cluster with Ceph File System deployed.
- At least read access on the Ceph Monitor.
- Read and write capability on the Ceph Manager nodes.
- A CephFS subvolume.
Procedure
List the CephFS subvolume:
Syntax
ceph fs subvolume ls VOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME]
Example
[root@mon ~]# ceph fs subvolume ls cephfs --group_name subgroup0
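The subvolume names are returned in JSON format. For a group containing only sub0, the output looks similar to this illustrative example:
[root@mon ~]# ceph fs subvolume ls cephfs --group_name subgroup0
[
    {
        "name": "sub0"
    }
]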
4.3.3. Resizing a file system subvolume
This section describes the step to resize the Ceph File System (CephFS) subvolume.
The ceph fs subvolume resize command resizes the subvolume quota using the size specified by new_size. The --no_shrink flag prevents the subvolume from shrinking below the currently used size of the subvolume. The subvolume can be resized to an infinite size by passing inf or infinite as the new_size.
Prerequisites
- A working Red Hat Ceph Storage cluster with Ceph File System deployed.
- At least read access on the Ceph Monitor.
- Read and write capability on the Ceph Manager nodes.
- A CephFS subvolume.
Procedure
Resize a CephFS subvolume:
Syntax
ceph fs subvolume resize VOLUME_NAME SUBVOLUME_NAME NEW_SIZE [--group_name SUBVOLUME_GROUP_NAME] [--no_shrink]
Example
[root@mon ~]# ceph fs subvolume resize cephfs sub0 1024000000 --group_name subgroup0 --no_shrink
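As noted above, you can also remove the size limit entirely by passing inf or infinite as the new size. The example below reuses the same hypothetical subvolume and group names:
[root@mon ~]# ceph fs subvolume resize cephfs sub0 inf --group_name subgroup0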
4.3.4. Fetching absolute path of a file system subvolume
This section shows how to fetch the absolute path of a Ceph File System (CephFS) subvolume.
Prerequisites
- A working Red Hat Ceph Storage cluster with Ceph File System deployed.
- At least read access on the Ceph Monitor.
- Read and write capability on the Ceph Manager nodes.
- A CephFS subvolume.
Procedure
Fetch the absolute path of the CephFS subvolume:
Syntax
ceph fs subvolume getpath VOLUME_NAME SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME]
Example
[root@mon ~]# ceph fs subvolume getpath cephfs sub0 --group_name subgroup0
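The command prints the full path of the subvolume, which includes an internal UUID component. The output below is illustrative only; the UUID differs on every system:
[root@mon ~]# ceph fs subvolume getpath cephfs sub0 --group_name subgroup0
/volumes/subgroup0/sub0/834c5cbc-f5db-4481-80a3-aca92ff0e7f3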
4.3.5. Fetching metadata of a file system subvolume
This section shows how to fetch metadata of a Ceph File System (CephFS) subvolume.
Prerequisites
- A working Red Hat Ceph Storage cluster with Ceph File System deployed.
- At least read access on the Ceph Monitor.
- Read and write capability on the Ceph Manager nodes.
- A CephFS subvolume.
Procedure
Fetch the metadata of a CephFS subvolume:
Syntax
ceph fs subvolume info VOLUME_NAME SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME]
Example
[root@mon ~]# ceph fs subvolume info cephfs sub0 --group_name subgroup0
Example output
# ceph fs subvolume info cephfs sub0
{
    "atime": "2023-07-14 08:52:46",
    "bytes_pcent": "0.00",
    "bytes_quota": 1024000000,
    "bytes_used": 0,
    "created_at": "2023-07-14 08:52:46",
    "ctime": "2023-07-14 08:53:54",
    "data_pool": "cephfs.cephfs.data",
    "features": [
        "snapshot-clone",
        "snapshot-autoprotect",
        "snapshot-retention"
    ],
    "flavor": "2",
    "gid": 0,
    "mode": 16877,
    "mon_addrs": [
        "10.0.208.172:6789",
        "10.0.211.197:6789",
        "10.0.209.212:6789"
    ],
    "mtime": "2023-07-14 08:52:46",
    "path": "/volumes/_nogroup/sub0/834c5cbc-f5db-4481-80a3-aca92ff0e7f3",
    "pool_namespace": "",
    "state": "complete",
    "type": "subvolume",
    "uid": 0
}
The output format is JSON and contains the following fields:
- atime: access time of subvolume path in the format "YYYY-MM-DD HH:MM:SS".
- bytes_pcent: quota used in percentage if quota is set, else displays "undefined".
- bytes_quota: quota size in bytes if quota is set, else displays "infinite".
- bytes_used: current used size of the subvolume in bytes.
- created_at: time of creation of subvolume in the format "YYYY-MM-DD HH:MM:SS".
- ctime: change time of subvolume path in the format "YYYY-MM-DD HH:MM:SS".
- data_pool: data pool the subvolume belongs to.
- features: features supported by the subvolume, such as "snapshot-clone", "snapshot-autoprotect", or "snapshot-retention".
- flavor: subvolume version, either 1 for version one or 2 for version two.
- gid: group ID of subvolume path.
- mode: mode of subvolume path.
- mon_addrs: list of monitor addresses.
- mtime: modification time of subvolume path in the format "YYYY-MM-DD HH:MM:SS".
- path: absolute path of a subvolume.
- pool_namespace: RADOS namespace of the subvolume.
- state: current state of the subvolume, such as "complete" or "snapshot-retained".
- type: subvolume type indicating whether it is a clone or subvolume.
- uid: user ID of subvolume path.
4.3.6. Creating snapshot of a file system subvolume
This section shows how to create snapshots of a Ceph File System (CephFS) subvolume.
Prerequisites
- A working Red Hat Ceph Storage cluster with Ceph File System deployed.
- At least read access on the Ceph Monitor.
- Read and write capability on the Ceph Manager nodes.
- A CephFS subvolume.
- In addition to read (r) and write (w) capabilities, clients also require the s flag on a directory path within the file system.
Procedure
Verify that the s flag is set on the directory:
Syntax
ceph auth get CLIENT_NAME
Example
[root@mon ~]# ceph auth get client.0
[client.0]
    key = AQAz7EVWygILFRAAdIcuJ12opU/JKyfFmxhuaw==
    caps mds = "allow rw, allow rws path=/bar"
    caps mon = "allow r"
    caps osd = "allow rw tag cephfs data=cephfs_a"
Create a snapshot of the Ceph File System subvolume:
Syntax
ceph fs subvolume snapshot create VOLUME_NAME SUBVOLUME_NAME SNAP_NAME [--group_name GROUP_NAME]
Example
[root@mon ~]# ceph fs subvolume snapshot create cephfs sub0 snap0 --group_name subgroup0
4.3.7. Cloning subvolumes from snapshots
Subvolumes can be created by cloning subvolume snapshots. It is an asynchronous operation involving copying data from a snapshot to a subvolume.
Cloning is inefficient for very large data sets.
Prerequisites
- A working Red Hat Ceph Storage cluster with Ceph File System deployed.
- At least read access on the Ceph Monitor.
- Read and write capability on the Ceph Manager nodes.
To create or delete snapshots, in addition to read and write capabilities, clients require the s flag on a directory path within the file system.
Syntax
CLIENT_NAME
    key = AQAz7EVWygILFRAAdIcuJ12opU/JKyfFmxhuaw==
    caps mds = allow rw, allow rws path=DIRECTORY_PATH
    caps mon = allow r
    caps osd = allow rw tag cephfs data=FILE_SYSTEM_NAME
Example
[client.0]
    key = AQAz7EVWygILFRAAdIcuJ12opU/JKyfFmxhuaw==
    caps mds = "allow rw, allow rws path=/bar"
    caps mon = "allow r"
    caps osd = "allow rw tag cephfs data=cephfs_a"
In the above example, client.0 can create or delete snapshots in the bar directory of file system cephfs_a.
Procedure
Create a Ceph File System (CephFS) volume:
Syntax
ceph fs volume create VOLUME_NAME
Example
[root@mon ~]# ceph fs volume create cephfs
This creates the CephFS file system, along with its data and metadata pools.
Create a subvolume group. By default, the subvolume group is created with an octal file mode '755', and data pool layout of its parent directory.
Syntax
ceph fs subvolumegroup create VOLUME_NAME GROUP_NAME [--pool_layout DATA_POOL_NAME --uid UID --gid GID --mode OCTAL_MODE]
Example
[root@mon ~]# ceph fs subvolumegroup create cephfs subgroup0
Create a subvolume. By default, a subvolume is created within the default subvolume group, and with an octal file mode ‘755’, uid of its subvolume group, gid of its subvolume group, data pool layout of its parent directory, and no size limit.
Syntax
ceph fs subvolume create VOLUME_NAME SUBVOLUME_NAME [--size SIZE_IN_BYTES --group_name SUBVOLUME_GROUP_NAME --pool_layout DATA_POOL_NAME --uid UID --gid GID --mode OCTAL_MODE]
Example
[root@mon ~]# ceph fs subvolume create cephfs sub0 --group_name subgroup0
Create a snapshot of a subvolume:
Syntax
ceph fs subvolume snapshot create VOLUME_NAME SUBVOLUME_NAME SNAP_NAME [--group_name SUBVOLUME_GROUP_NAME]
Example
[root@mon ~]# ceph fs subvolume snapshot create cephfs sub0 snap0 --group_name subgroup0
Initiate a clone operation:
Note: By default, cloned subvolumes are created in the default group.
If the source subvolume and the target clone are in the default group, run the following command:
Syntax
ceph fs subvolume snapshot clone VOLUME_NAME SUBVOLUME_NAME SNAP_NAME TARGET_CLONE_NAME
Example
[root@mon ~]# ceph fs subvolume snapshot clone cephfs sub0 snap0 clone0
If the source subvolume is in a non-default group, specify the source subvolume group in the following command:
Syntax
ceph fs subvolume snapshot clone VOLUME_NAME SUBVOLUME_NAME SNAP_NAME TARGET_CLONE_NAME --group_name SUBVOLUME_GROUP_NAME
Example
[root@mon ~]# ceph fs subvolume snapshot clone cephfs sub0 snap0 clone0 --group_name subgroup0
If the target clone is in a non-default group, specify the target group in the following command:
Syntax
ceph fs subvolume snapshot clone VOLUME_NAME SUBVOLUME_NAME SNAP_NAME TARGET_CLONE_NAME --target_group_name SUBVOLUME_GROUP_NAME
Example
[root@mon ~]# ceph fs subvolume snapshot clone cephfs sub0 snap0 clone0 --target_group_name subgroup1
Check the status of the clone operation:
Syntax
ceph fs clone status VOLUME_NAME CLONE_NAME [--group_name TARGET_GROUP_NAME]
Example
[root@mon ~]# ceph fs clone status cephfs clone0 --group_name subgroup1
{
    "status": {
        "state": "complete"
    }
}
Additional Resources
- See the Managing Ceph users section in the Red Hat Ceph Storage Administration Guide.
4.3.8. Listing snapshots of a file system subvolume
This section provides the steps to list the snapshots of a Ceph File System (CephFS) subvolume.
Prerequisites
- A working Red Hat Ceph Storage cluster with Ceph File System deployed.
- At least read access on the Ceph Monitor.
- Read and write capability on the Ceph Manager nodes.
- A CephFS subvolume.
- Snapshots of the subvolume.
Procedure
List the snapshots of a CephFS subvolume:
Syntax
ceph fs subvolume snapshot ls VOLUME_NAME SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME]
Example
[root@mon ~]# ceph fs subvolume snapshot ls cephfs sub0 --group_name subgroup0
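The snapshot names are returned in JSON format. For a single snapshot named snap0, the output looks similar to this illustrative example:
[root@mon ~]# ceph fs subvolume snapshot ls cephfs sub0 --group_name subgroup0
[
    {
        "name": "snap0"
    }
]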
4.3.9. Fetching metadata of the snapshots of a file system subvolume
This section provides the step to fetch the metadata of the snapshots of a Ceph File System (CephFS) subvolume.
Prerequisites
- A working Red Hat Ceph Storage cluster with CephFS deployed.
- At least read access on the Ceph Monitor.
- Read and write capability on the Ceph Manager nodes.
- A CephFS subvolume.
- Snapshots of the subvolume.
Procedure
Fetch the metadata of the snapshots of a CephFS subvolume:
Syntax
ceph fs subvolume snapshot info VOLUME_NAME SUBVOLUME_NAME SNAP_NAME [--group_name SUBVOLUME_GROUP_NAME]
Example
[root@mon ~]# ceph fs subvolume snapshot info cephfs sub0 snap0 --group_name subgroup0
Example output
{ "created_at": "2022-05-09 06:18:47.330682", "data_pool": "cephfs_data", "has_pending_clones": "no", "size": 0 }
The output format is JSON and contains the following fields:
- created_at: time of creation of snapshot in the format "YYYY-MM-DD HH:MM:SS.ffffff".
- data_pool: data pool the snapshot belongs to.
- has_pending_clones: "yes" if a snapshot clone is in progress, otherwise "no".
- size: snapshot size in bytes.
4.3.10. Removing a file system subvolume
This section describes the step to remove the Ceph File System (CephFS) subvolume.
The ceph fs subvolume rm command removes the subvolume and its contents in two steps. First, it moves the subvolume to a trash folder, and then asynchronously purges its contents.
A subvolume can be removed retaining existing snapshots of the subvolume using the --retain-snapshots option. If snapshots are retained, the subvolume is considered empty for all operations not involving the retained snapshots. Retained snapshots can be used as a clone source to recreate the subvolume, or cloned to a newer subvolume.
Prerequisites
- A working Red Hat Ceph Storage cluster with Ceph File System deployed.
- At least read access on the Ceph Monitor.
- Read and write capability on the Ceph Manager nodes.
- A CephFS subvolume.
Procedure
Remove a CephFS subvolume:
Syntax
ceph fs subvolume rm VOLUME_NAME SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME] [--force] [--retain-snapshots]
Example
[root@mon ~]# ceph fs subvolume rm cephfs sub0 --group_name subgroup0 --retain-snapshots
To recreate a subvolume from a retained snapshot:
Syntax
ceph fs subvolume snapshot clone VOLUME_NAME DELETED_SUBVOLUME RETAINED_SNAPSHOT NEW_SUBVOLUME --group_name SUBVOLUME_GROUP_NAME --target_group_name SUBVOLUME_TARGET_GROUP_NAME
- NEW_SUBVOLUME can either be the same subvolume that was deleted earlier, or the retained snapshot can be cloned to a new subvolume.
Example
[root@mon ~]# ceph fs subvolume snapshot clone cephfs sub0 snap0 sub1 --group_name subgroup0 --target_group_name subgroup0
4.3.11. Removing snapshot of a file system subvolume
This section provides the step to remove snapshots of a Ceph File System (CephFS) subvolume.
Using the --force flag allows the command to succeed when it would otherwise fail because the snapshot does not exist.
Prerequisites
- A working Red Hat Ceph Storage cluster with Ceph File System deployed.
- At least read access on the Ceph Monitor.
- Read and write capability on the Ceph Manager nodes.
- A Ceph File System volume.
- A snapshot of the subvolume.
Procedure
Remove the snapshot of the CephFS subvolume:
Syntax
ceph fs subvolume snapshot rm VOLUME_NAME SUBVOLUME_NAME SNAP_NAME [--group_name GROUP_NAME --force]
Example
[root@mon ~]# ceph fs subvolume snapshot rm cephfs sub0 snap0 --group_name subgroup0 --force
4.4. Metadata information on Ceph File System subvolumes
As a storage administrator, you can set, get, list, and remove metadata information of Ceph File System (CephFS) subvolumes.
Custom metadata allows users to store their own metadata in subvolumes. Users can store key-value pairs, similar to xattr in a Ceph File System.
This section describes how to:
- Set custom metadata on the file system subvolume.
- Get custom metadata on the file system subvolume.
- List custom metadata on the file system subvolume.
- Remove custom metadata from the file system subvolume.
4.4.1. Setting custom metadata on the file system subvolume
You can set custom metadata on the file system subvolume as a key-value pair.
If the key_name already exists, then the old value is replaced by the new value.
The KEY_NAME and VALUE should be a string of ASCII characters as specified in Python’s string.printable. The KEY_NAME is case-insensitive and is always stored in lower case.
Custom metadata on a subvolume is not preserved when snapshotting the subvolume, and hence, is also not preserved when cloning the subvolume snapshot.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- A Ceph File System (CephFS), CephFS volume, subvolume group, and subvolume created.
Procedure
Set the metadata on the CephFS subvolume:
Syntax
ceph fs subvolume metadata set VOLUME_NAME SUBVOLUME_NAME KEY_NAME VALUE [--group_name SUBVOLUME_GROUP_NAME]
Example
[ceph: root@host01 /]# ceph fs subvolume metadata set cephfs sub0 test_meta cluster --group_name subgroup0
Optional: Set the custom metadata with a space in the KEY_NAME:
Example
[ceph: root@host01 /]# ceph fs subvolume metadata set cephfs sub0 "test meta" cluster --group_name subgroup0
This creates another metadata with KEY_NAME as test meta for the VALUE cluster.
Optional: You can also set the same metadata with a different value:
Example
[ceph: root@host01 /]# ceph fs subvolume metadata set cephfs sub0 "test_meta" cluster2 --group_name subgroup0
4.4.2. Getting custom metadata on the file system subvolume
You can get the custom metadata, the key-value pairs, of a Ceph File System (CephFS) subvolume in a volume, and optionally, in a specific subvolume group.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- A CephFS volume, subvolume group, and subvolume created.
- A custom metadata created on the CephFS subvolume.
Procedure
Get the metadata on the CephFS subvolume:
Syntax
ceph fs subvolume metadata get VOLUME_NAME SUBVOLUME_NAME KEY_NAME [--group_name SUBVOLUME_GROUP_NAME]
Example
[ceph: root@host01 /]# ceph fs subvolume metadata get cephfs sub0 test_meta --group_name subgroup0
cluster
4.4.3. Listing custom metadata on the file system subvolume
You can list the custom metadata associated with the key of a Ceph File System (CephFS) subvolume in a volume, and optionally, in a specific subvolume group.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- A CephFS volume, subvolume group, and subvolume created.
- A custom metadata created on the CephFS subvolume.
Procedure
List the metadata on the CephFS subvolume:
Syntax
ceph fs subvolume metadata ls VOLUME_NAME SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME]
Example
[ceph: root@host01 /]# ceph fs subvolume metadata ls cephfs sub0
{
    "test_meta": "cluster"
}
4.4.4. Removing custom metadata from the file system subvolume
You can remove the custom metadata, the key-value pairs, of a Ceph File System (CephFS) subvolume in a volume, and optionally, in a specific subvolume group.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- A CephFS volume, subvolume group, and subvolume created.
- A custom metadata created on the CephFS subvolume.
Procedure
Remove the custom metadata on the CephFS subvolume:
Syntax
ceph fs subvolume metadata rm VOLUME_NAME SUBVOLUME_NAME KEY_NAME [--group_name SUBVOLUME_GROUP_NAME]
Example
[ceph: root@host01 /]# ceph fs subvolume metadata rm cephfs sub0 test_meta --group_name subgroup0
List the metadata:
Example
[ceph: root@host01 /]# ceph fs subvolume metadata ls cephfs sub0
{}
4.5. Additional Resources
- See the Managing Ceph users section in the Red Hat Ceph Storage Administration Guide.