Chapter 4. Administering Ceph File Systems
This chapter describes common Ceph File System administrative tasks.
- To map a directory to a particular MDS rank, see Section 4.2, “Mapping Directory Trees to MDS Ranks”.
- To disassociate a directory from an MDS rank, see Section 4.3, “Disassociating Directory Trees from MDS Ranks”.
- To work with files and directory layouts, see Section 4.4, “Working with File and Directory Layouts”.
- To add a new data pool, see Section 4.5, “Adding Data Pools”.
- To work with quotas, see Section 4.6, “Working with Ceph File System quotas”.
- To remove a Ceph File System, see Section 4.7, “Removing Ceph File Systems”.
4.1. Prerequisites
- Deploy a Ceph Storage Cluster if you do not have one. For details, see the Installation Guide for Red Hat Enterprise Linux or Ubuntu.
- Install and configure Ceph Metadata Server daemons (ceph-mds). For details, see the Installation Guide for Red Hat Enterprise Linux or Ubuntu and Chapter 2, Configuring Metadata Server Daemons.
- Create and mount the Ceph File System. For details, see Chapter 3, Deploying Ceph File Systems.
4.2. Mapping Directory Trees to MDS Ranks
This section describes how to map a directory and its subdirectories to a particular active Metadata Server (MDS) rank so that its metadata is managed only by the MDS daemon holding that rank. This approach enables you to evenly spread application load or to limit the impact of users' metadata requests on the entire cluster.
Note that an internal balancer already dynamically spreads the application load. Therefore, map directory trees to ranks only for certain carefully chosen applications. In addition, when a directory is mapped to a rank, the balancer cannot split it. Consequently, a large number of operations within the mapped directory can overload the rank and the MDS daemon that manages it.
Prerequisites
- Configure multiple active MDS daemons. See Section 2.6, “Configuring Multiple Active Metadata Server Daemons” for details.
- Ensure that the attr package is installed on the client node with the mounted Ceph File System (see the installation example after this list).
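If the attr package is not present, it can typically be installed from the standard repositories; a minimal example for a Red Hat Enterprise Linux client (package availability in your configured repositories is assumed):
[root@client ~]# yum install attr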
Procedure
Set the ceph.dir.pin extended attribute on a directory:
setfattr -n ceph.dir.pin -v <rank> <directory>
For example, to assign the /home/ceph-user/ directory and all of its subdirectories to rank 2:
[user@client ~]$ setfattr -n ceph.dir.pin -v 2 /home/ceph-user
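To confirm the pin, the attribute can typically be read back with getfattr; a minimal check using the same directory as above:
[user@client ~]$ getfattr -n ceph.dir.pin /home/ceph-user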
Additional Resources
4.3. Disassociating Directory Trees from MDS Ranks
This section describes how to disassociate a directory from a particular active Metadata Server (MDS) rank.
Prerequisites
- Ensure that the attr package is installed on the client node with the mounted Ceph File System.
Procedure
Set the ceph.dir.pin extended attribute to -1 on a directory:
setfattr -n ceph.dir.pin -v -1 <directory>
For example, to disassociate the /home/ceph-user/ directory from an MDS rank:
[user@client ~]$ setfattr -n ceph.dir.pin -v -1 /home/ceph-user
Note that any separately mapped subdirectories of /home/ceph-user/ are not affected.
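As an illustration, assuming a hypothetical subdirectory /home/ceph-user/projects that was pinned separately to rank 1, unpinning the parent leaves that pin in place:
[user@client ~]$ setfattr -n ceph.dir.pin -v 1 /home/ceph-user/projects
[user@client ~]$ setfattr -n ceph.dir.pin -v -1 /home/ceph-user
The projects subdirectory remains mapped to rank 1.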
Additional Resources
4.4. Working with File and Directory Layouts
This section describes how to:
- Understand file and directory layouts
- Set file and directory layouts
- View file and directory layouts
- Remove directory layouts
4.4.1. Prerequisites
- Make sure that the attr package is installed.
4.4.2. Understanding File and Directory Layouts
This section explains what file and directory layouts are in the context of the Ceph File System.
The layout of a file or directory controls how its content is mapped to Ceph RADOS objects. A directory layout serves primarily to set an inherited layout for new files in that directory. See Layouts Inheritance for more details.
To view and set a file or directory layout, use virtual extended attributes or extended file attributes (xattrs). The name of the layout attribute depends on whether the file is a regular file or a directory:
- Layout attributes on regular files are called ceph.file.layout
- Layout attributes on directories are called ceph.dir.layout
The File and Directory Layout Fields table lists available layout fields that you can set on files and directories.
| Field | Description | Type |
| --- | --- | --- |
| pool | ID or name of the pool to store the file’s data objects. Note that the pool must be part of the set of data pools of the Ceph file system. See Section 4.5, “Adding Data Pools” for details. | string |
| pool_namespace | Namespace to write objects to. Empty by default, which means the default namespace is used. | string |
| stripe_unit | The size in bytes of a block of data used in the RAID 0 distribution of a file. All stripe units for a file have equal size. The last stripe unit is typically incomplete. That means it represents the data at the end of the file as well as unused space beyond it, up to the end of the fixed stripe unit size. | integer |
| stripe_count | The number of consecutive stripe units that constitute a RAID 0 “stripe” of file data. | integer |
| object_size | Size of RADOS objects in bytes in which file data is chunked. | integer |
Layouts Inheritance
Files inherit the layout of their parent directory when you create them. However, subsequent changes to the parent directory layout do not affect files that already exist. If a directory does not have a layout set, files inherit the layout from the closest ancestor directory with a layout set.
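A minimal sketch of this behavior, assuming the file system is mounted at /mnt/cephfs and using a hypothetical directory and file (both names are illustrative):
$ setfattr -n ceph.dir.layout.stripe_count -v 2 /mnt/cephfs/dir
$ touch /mnt/cephfs/dir/file1
$ setfattr -n ceph.dir.layout.stripe_count -v 4 /mnt/cephfs/dir
$ getfattr -n ceph.file.layout /mnt/cephfs/dir/file1
The file file1 keeps stripe_count=2, the value inherited when it was created; only files created after the change pick up stripe_count=4.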
4.4.3. Setting File and Directory Layouts
Use the setfattr command to set layout fields on a file or directory.
When you modify the layout fields of a file, the file must be empty, otherwise an error occurs.
Procedure
To modify layout fields on a file or directory:
setfattr -n ceph.<type>.layout.<field> -v <value> <path>
Replace:
- <type> with file or dir
- <field> with the name of the field. See the File and Directory Layout Fields table for details.
- <value> with the new value of the field
- <path> with the path to the file or directory
For example, to set the stripe_unit field to 1048576 on the test file:
$ setfattr -n ceph.file.layout.stripe_unit -v 1048576 test
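The same syntax applies to directories; for example, a hedged sketch setting the object_size field on a hypothetical directory /mnt/cephfs/dir so that new files created in it inherit the value:
$ setfattr -n ceph.dir.layout.object_size -v 8388608 /mnt/cephfs/dir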
Additional Resources
- The setfattr(1) manual page
4.4.4. Viewing File and Directory Layouts
This section describes how to use the getfattr command to view layout fields on a file or directory.
Procedure
To view layout fields on a file or directory as a single string:
getfattr -n ceph.<type>.layout <path>
Replace:
- <path> with the path to the file or directory
- <type> with file or dir
For example, to view the layout of the /home/test/ directory:
$ getfattr -n ceph.dir.layout /home/test
ceph.dir.layout="stripe_unit=4194304 stripe_count=2 object_size=4194304 pool=cephfs_data"
Note: Directories do not have an explicit layout until you set one (see Section 4.4.3, “Setting File and Directory Layouts”). Consequently, an attempt to view the layout fails if you have never modified it.
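As a hedged illustration using a hypothetical, newly created directory, the query fails until a layout field is set:
$ mkdir /mnt/cephfs/newdir
$ getfattr -n ceph.dir.layout /mnt/cephfs/newdir
$ setfattr -n ceph.dir.layout.stripe_count -v 2 /mnt/cephfs/newdir
$ getfattr -n ceph.dir.layout /mnt/cephfs/newdir
The first getfattr call returns an error because no layout is set; the second succeeds after stripe_count has been set.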
To view individual layout fields on a file or directory:
getfattr -n ceph.<type>.layout.<field> <path>
Replace:
- <type> with file or dir
- <field> with the name of the field. See the File and Directory Layout Fields table for details.
- <path> with the path to the file or directory
For example, to view the pool field of the test file:
$ getfattr -n ceph.file.layout.pool test
ceph.file.layout.pool="cephfs_data"
Note: When viewing the pool field, the pool is usually indicated by its name. However, if you have just created the pool, it can be indicated by its ID.
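If only an ID is shown, you can list the pools from a Monitor node to match the ID to a name; a minimal sketch:
[root@monitor]# ceph osd lspools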
Additional Resources
- The getfattr(1) manual page
4.4.5. Removing Directory Layouts
This section describes how to use the setfattr command to remove layouts from a directory.
When you set a file layout, you cannot change or remove it.
Procedure
To remove a layout from a directory:
setfattr -x ceph.dir.layout <path>
Replace:
- <path> with the path to the directory
For example:
$ setfattr -x ceph.dir.layout /home/cephfs
To remove the pool_namespace field:
setfattr -x ceph.dir.layout.pool_namespace <directory>
Replace:
- <directory> with the path to the directory
For example:
$ setfattr -x ceph.dir.layout.pool_namespace /home/cephfs
Note: The pool_namespace field is the only field you can remove separately.
Additional Resources
- The setfattr(1) manual page
4.5. Adding Data Pools
The Ceph File System (CephFS) supports adding more than one pool to be used for storing data. This can be useful for:
- Storing log data on reduced redundancy pools
- Storing user home directories on an SSD or NVMe pool
- Basic data segregation.
Before using another data pool in the Ceph File System, you must add it as described in this section.
By default, for storing file data, CephFS uses the initial data pool that was specified during its creation. To use a secondary data pool, you must also configure a part of the file system hierarchy to store file data in that pool (and optionally, within a namespace of that pool) using file and directory layouts. See Section 4.4, “Working with File and Directory Layouts” for details.
Procedure
Use the following commands from a Monitor host and as the root user.
Create a new data pool.
ceph osd pool create <name> <pg_num>
Replace:
- <name> with the name of the pool
- <pg_num> with the number of placement groups (PGs)
For example:
[root@monitor]# ceph osd pool create cephfs_data_ssd 64
pool 'cephfs_data_ssd' created
Add the newly created pool under the control of the Metadata Servers.
ceph mds add_data_pool <name>
Replace:
- <name> with the name of the pool
For example:
[root@monitor]# ceph mds add_data_pool cephfs_data_ssd
added data pool 6 to fsmap
Verify that the pool was successfully added:
[root@monitor]# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data cephfs_data_ssd]
If you use cephx authentication, make sure that clients can access the new pool. See Section 3.3, “Creating Ceph File System Client Users” for details.
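After the pool is added, you can direct part of the file system hierarchy to it with a directory layout (see Section 4.4, “Working with File and Directory Layouts”). A minimal sketch, assuming the file system is mounted at /mnt/cephfs and using a hypothetical directory named ssd:
$ setfattr -n ceph.dir.layout.pool -v cephfs_data_ssd /mnt/cephfs/ssd
New files created under /mnt/cephfs/ssd then store their data objects in the cephfs_data_ssd pool.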
4.6. Working with Ceph File System quotas
As a storage administrator, you can view, set, and remove quotas on any directory in the file system. You can place quota restrictions on the number of bytes or the number of files within the directory.
4.6.1. Prerequisites
- Make sure that the attr package is installed.
4.6.2. Ceph File System quotas
This section describes the properties of quotas and their limitations in CephFS.
Understanding quota limitations
- CephFS quotas rely on the cooperation of the client mounting the file system to stop writing data when it reaches the configured limit. However, quotas alone cannot prevent an adversarial, untrusted client from filling the file system.
- Once processes that write data to the file system reach the configured limit, a short period of time elapses between when the amount of data reaches the quota limit and when the processes stop writing data. This period is generally measured in tenths of a second. However, processes continue to write data during that time, and the amount of additional data written depends on how much time elapses before they stop.
- CephFS quotas are supported by the userspace clients (libcephfs, ceph-fuse) and by Linux kernel clients version 4.17 and higher. However, kernel clients only support quotas on mimic+ clusters. Kernel clients, even recent versions, cannot manage quotas on older storage clusters, even though they can set the quotas’ extended attributes.
- When using path-based access restrictions, be sure to configure the quota on the directory to which the client is restricted, or on a directory nested beneath it. If the client has restricted access to a specific path based on the MDS capability, and the quota is configured on an ancestor directory that the client cannot access, the client will not enforce the quota. For example, if the client cannot access the /home/ directory and the quota is configured on /home/, the client cannot enforce that quota on the directory /home/user/.
- Snapshot file data that has been deleted or changed does not count towards the quota. See also: http://tracker.ceph.com/issues/24284
4.6.3. Viewing quotas
This section describes how to use the getfattr command and the ceph.quota extended attributes to view the quota settings for a directory.
If the attributes appear on a directory inode, then that directory has a configured quota. If the attributes do not appear on the inode, then the directory does not have a quota set, although its parent directory might have a quota configured. If the value of the extended attribute is 0, the quota is not set.
Prerequisites
- Make sure that the attr package is installed.
Procedure
To view CephFS quotas.
Using a byte-limit quota:
Syntax
getfattr -n ceph.quota.max_bytes DIRECTORY
Example
[root@fs ~]# getfattr -n ceph.quota.max_bytes /cephfs/
Using a file-limit quota:
Syntax
getfattr -n ceph.quota.max_files DIRECTORY
Example
[root@fs ~]# getfattr -n ceph.quota.max_files /cephfs/
Additional Resources
- See the getfattr(1) manual page for more information.
4.6.4. Setting quotas
This section describes how to use the setfattr command and the ceph.quota extended attributes to set the quota for a directory.
Prerequisites
- Make sure that the attr package is installed.
Procedure
To set CephFS quotas.
Using a byte-limit quota:
Syntax
setfattr -n ceph.quota.max_bytes -v 100000000 /some/dir
Example
[root@fs ~]# setfattr -n ceph.quota.max_bytes -v 100000000 /cephfs/
In this example, 100000000 bytes equals 100 MB.
Using a file-limit quota:
Syntax
setfattr -n ceph.quota.max_files -v 10000 /some/dir
Example
[root@fs ~]# setfattr -n ceph.quota.max_files -v 10000 /cephfs/
In this example, 10000 equals 10,000 files.
Additional Resources
- See the setfattr(1) manual page for more information.
4.6.5. Removing quotas
This section describes how to use the setfattr command and the ceph.quota extended attributes to remove a quota from a directory.
Prerequisites
- Make sure that the attr package is installed.
Procedure
To remove CephFS quotas.
Using a byte-limit quota:
Syntax
setfattr -n ceph.quota.max_bytes -v 0 DIRECTORY
Example
[root@fs ~]# setfattr -n ceph.quota.max_bytes -v 0 /cephfs/
Using a file-limit quota:
Syntax
setfattr -n ceph.quota.max_files -v 0 DIRECTORY
Example
[root@fs ~]# setfattr -n ceph.quota.max_files -v 0 /cephfs/
Additional Resources
- See the setfattr(1) manual page for more information.
4.6.6. Additional Resources
- See the getfattr(1) manual page for more information.
- See the setfattr(1) manual page for more information.
4.7. Removing Ceph File Systems
As a storage administrator, you can remove a Ceph File System (CephFS). Before doing so, consider backing up all the data and verifying that all clients have unmounted the file system locally.
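For example, to unmount the file system on a client before removal, assuming a hypothetical mount point of /mnt/cephfs:
[root@client ~]# umount /mnt/cephfs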
This operation is destructive and will make the data stored on the Ceph File System permanently inaccessible.
Prerequisites
- Back up your data.
- Access to a Ceph Monitor node as the root user.
Procedure
Mark the Ceph File System down.
ceph fs set name cluster_down true
Replace:
- name with the name of the Ceph File System you want to remove
For example:
[root@monitor]# ceph fs set cephfs cluster_down true
marked down
Display the status of the Ceph File System.
ceph fs status
For example:
[root@monitor]# ceph fs status
cephfs - 0 clients
======
+------+--------+-------+---------------+-------+-------+
| Rank | State  |  MDS  |    Activity   |  dns  |  inos |
+------+--------+-------+---------------+-------+-------+
|  0   | active | ceph4 | Reqs:    0 /s |   10  |   12  |
+------+--------+-------+---------------+-------+-------+
+-----------------+----------+-------+-------+
|       Pool      |   type   |  used | avail |
+-----------------+----------+-------+-------+
| cephfs_metadata | metadata |  2246 |  975G |
|   cephfs_data   |   data   |    0  |  975G |
+-----------------+----------+-------+-------+
Fail all MDS ranks shown in the status.
# ceph mds fail rank
Replace:
- rank with the rank of the MDS daemon to fail
For example:
[root@monitor]# ceph mds fail 0
Remove the Ceph File System.
ceph fs rm name --yes-i-really-mean-it
Replace:
- name with the name of the Ceph File System you want to remove
For example:
[root@monitor]# ceph fs rm cephfs --yes-i-really-mean-it
Verify that the file system has been successfully removed.
[root@monitor]# ceph fs ls
- Optional. Remove data and metadata pools associated with the removed file system. See the Delete a pool section in the Red Hat Ceph Storage 3 Storage Strategies Guide.
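As a hedged sketch only (the authoritative procedure is in the Storage Strategies Guide), deleting the pools from a Monitor node typically requires the mon_allow_pool_delete option to be enabled and looks similar to:
[root@monitor]# ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it
[root@monitor]# ceph osd pool delete cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it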