
Chapter 4. Administering Ceph File Systems

This chapter describes common Ceph File System administrative tasks.

4.1. Prerequisites

4.2. Mapping Directory Trees to MDS Ranks

This section describes how to map a directory and its subdirectories to a particular active Metadata Server (MDS) rank so that its metadata is managed only by the MDS daemon holding that rank. This approach enables you to evenly spread application load or to limit the impact of users' metadata requests on the entire cluster.

Important

Note that an internal balancer already dynamically spreads the application load. Therefore, map directory trees to ranks only for certain carefully chosen applications. In addition, when a directory is mapped to a rank, the balancer cannot split it. Consequently, a large number of operations within the mapped directory can overload the rank and the MDS daemon that manages it.

Prerequisites

Procedure

  • Set the ceph.dir.pin extended attribute on a directory.

    setfattr -n ceph.dir.pin -v <rank> <directory>

    For example, to assign the /home/ceph-user/ directory and all of its subdirectories to rank 2:

    [user@client ~]$ setfattr -n ceph.dir.pin -v 2 /home/ceph-user
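    As an additional sketch, you can spread the load of two applications by pinning their top-level directories to different ranks; the directory names here are illustrative only:

    [user@client ~]$ setfattr -n ceph.dir.pin -v 0 /home/app1
    [user@client ~]$ setfattr -n ceph.dir.pin -v 1 /home/app2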

Additional Resources

4.3. Disassociating Directory Trees from MDS Ranks

This section describes how to disassociate a directory from a particular active Metadata Server (MDS) rank.

Prerequisites

  • Ensure that the attr package is installed on the client node with a mounted Ceph File System.

Procedure

  • Set the ceph.dir.pin extended attribute to -1 on a directory.

    setfattr -n ceph.dir.pin -v -1 <directory>

    For example, to disassociate the /home/ceph-user/ directory from an MDS rank:

    [user@client ~]$ setfattr -n ceph.dir.pin -v -1 /home/ceph-user

    Note that any separately mapped subdirectories of /home/ceph-user/ are not affected.

Additional Resources

4.4. Working with File and Directory Layouts

This section describes how to:

  • Understand file and directory layouts
  • Set file and directory layouts
  • View file and directory layouts
  • Remove directory layouts

4.4.1. Prerequisites

  • Make sure that the attr package is installed.

4.4.2. Understanding File and Directory Layouts

This section explains what file and directory layouts are in the context of the Ceph File System.

The layout of a file or directory controls how its content is mapped to Ceph RADOS objects. Directory layouts serve primarily to set an inherited layout for new files created in that directory. See Layouts Inheritance for more details.

To view and set the layout of a file or directory, use virtual extended attributes (xattrs). The name of the layout attribute depends on whether the object is a regular file or a directory:

  • The layout attribute of regular files is called ceph.file.layout
  • The layout attribute of directories is called ceph.dir.layout

The File and Directory Layout Fields table lists available layout fields that you can set on files and directories.

Table 4.1. File and Directory Layout Fields

  • pool (string)

    ID or name of the pool in which to store the file’s data objects. Note that the pool must be part of the set of data pools of the Ceph File System. See Section 4.5, “Adding Data Pools” for details.

  • pool_namespace (string)

    Namespace to write objects to. Empty by default, which means that the default namespace is used.

  • stripe_unit (integer)

    The size in bytes of a block of data used in the RAID 0 distribution of a file. All stripe units for a file have equal size. The last stripe unit is typically incomplete: it represents the data at the end of the file as well as the unused space up to the end of the fixed stripe unit size.

  • stripe_count (integer)

    The number of consecutive stripe units that constitute a RAID 0 “stripe” of file data.

  • object_size (integer)

    The size in bytes of the RADOS objects into which file data is chunked.
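
As a hedged illustration of how these fields interact, the following commands (the directory path and values are examples only) set a complete layout on a directory so that new files created in it are striped in 1 MiB stripe units across four RADOS objects of 4 MiB each. With these values, the first 16 MiB of a file fills one set of four objects before the next object set begins:

    $ setfattr -n ceph.dir.layout.stripe_unit -v 1048576 /mnt/cephfs/striped-dir
    $ setfattr -n ceph.dir.layout.stripe_count -v 4 /mnt/cephfs/striped-dir
    $ setfattr -n ceph.dir.layout.object_size -v 4194304 /mnt/cephfs/striped-dir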

Layouts Inheritance

Files inherit the layout of their parent directory when you create them. However, subsequent changes to the parent directory layout do not affect existing files. If a directory does not have a layout set, files inherit the layout from the closest ancestor directory in the hierarchy that has one.
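
Continuing the previous sketch, a file created in that directory inherits its layout at creation time, which you can confirm with getfattr; the paths are illustrative:

    $ touch /mnt/cephfs/striped-dir/newfile
    $ getfattr -n ceph.file.layout /mnt/cephfs/striped-dir/newfile

The output lists the inherited stripe_unit, stripe_count, object_size, and pool values.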

4.4.3. Setting File and Directory Layouts

Use the setfattr command to set layout fields on a file or directory.

Important

When you modify the layout fields of a file, the file must be empty; otherwise, an error occurs.

Procedure
  • To modify layout fields on a file or directory:

    setfattr -n ceph.<type>.layout.<field> -v <value> <path>

    Replace:

    • <type> with file or dir
    • <field> with the name of the field. See the File and Directory Layout Fields table for details.
    • <value> with the new value of the field
    • <path> with the path to the file or directory

    For example, to set the stripe_unit field to 1048576 on the test file:

    $ setfattr -n ceph.file.layout.stripe_unit -v 1048576 test
Additional Resources
  • The setfattr(1) manual page

4.4.4. Viewing File and Directory Layouts

This section describes how to use the getfattr command to view layout fields on a file or directory.

Procedure
  • To view layout fields on a file or directory as a single string:

    getfattr -n ceph.<type>.layout <path>

    Replace:

    • <type> with file or dir
    • <path> with the path to the file or directory

    For example, to view the layout of the /home/test/ directory:

    $ getfattr -n ceph.dir.layout /home/test
    ceph.dir.layout="stripe_unit=4194304 stripe_count=2 object_size=4194304 pool=cephfs_data"
    Note

    Directories do not have an explicit layout until you set it (see Section 4.4.3, “Setting File and Directory Layouts”). Consequently, an attempt to view the layout fails if you never modified the layout.

  • To view individual layout fields on a file or directory:

    getfattr -n ceph.<type>.layout.<field> <path>

    Replace:

    • <type> with file or dir
    • <field> with the name of the field. See the File and Directory Layout Fields table for details.
    • <path> with the path to the file or directory

    For example, to view the pool field of the test file:

    $ getfattr -n ceph.file.layout.pool test
    ceph.file.layout.pool="cephfs_data"
    Note

    When viewing the pool field, the pool is usually indicated by its name. However, if the pool was created only recently, the output can show the pool ID instead.

Additional Resources
  • The getfattr(1) manual page

4.4.5. Removing Directory Layouts

This section describes how to use the setfattr command to remove layouts from a directory.

Note

When you set a file layout, you cannot change or remove it.

Procedure
  • To remove a layout from a directory:

    setfattr -x ceph.dir.layout <path>

    Replace:

    • <path> with the path to the directory

    For example:

    $ setfattr -x ceph.dir.layout /home/cephfs
  • To remove the pool_namespace field:

    setfattr -x ceph.dir.layout.pool_namespace <path>

    Replace:

    • <path> with the path to the directory

    For example:

    $ setfattr -x ceph.dir.layout.pool_namespace /home/cephfs
    Note

    The pool_namespace field is the only field you can remove separately.

Additional Resources
  • The setfattr(1) manual page

4.5. Adding Data Pools

The Ceph File System (CephFS) supports adding more than one pool to be used for storing data. This can be useful for:

  • Storing log data on reduced redundancy pools
  • Storing user home directories on an SSD or NVMe pool
  • Basic data segregation

Before using another data pool in the Ceph File System, you must add it as described in this section.

By default, for storing file data, CephFS uses the initial data pool that was specified during its creation. To use a secondary data pool, you must also configure a part of the file system hierarchy to store file data in that pool (and optionally, within a namespace of that pool) using file and directory layouts. See Section 4.4, “Working with File and Directory Layouts” for details.
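
For example, as a minimal sketch that assumes the secondary pool created in the procedure below (cephfs_data_ssd) and an illustrative client mount point of /mnt/cephfs, the following command stores the data of new files created under a chosen directory in that pool:

    [user@client ~]$ setfattr -n ceph.dir.layout.pool -v cephfs_data_ssd /mnt/cephfs/ssd-data

Files already present in the directory keep their existing layout; only files created after the change use the new pool.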

Procedure

Use the following commands from a Monitor host and as the root user.

  1. Create a new data pool.

    ceph osd pool create <name> <pg_num>

    Replace:

    • <name> with the name of the pool
    • <pg_num> with the number of placement groups (PGs)

    For example:

    [root@monitor]# ceph osd pool create cephfs_data_ssd 64
    pool 'cephfs_data_ssd' created
  2. Add the newly created pool under the control of the Metadata Servers.

    ceph mds add_data_pool <name>

    Replace:

    • <name> with the name of the pool

    For example:

    [root@monitor]# ceph mds add_data_pool cephfs_data_ssd
    added data pool 6 to fsmap
  3. Verify that the pool was successfully added:

    [root@monitor]# ceph fs ls
    name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data cephfs_data_ssd]
  4. If you use cephx authentication, make sure that clients can access the new pool, for example by extending their capabilities as sketched after this procedure. See Section 3.3, “Creating Ceph File System Client Users” for details.
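
One possible way to do this, sketched here with an example client named client.1 (adjust the name and capabilities to your environment), is to update the client's capabilities so that its OSD capability covers the new pool as well. Note that ceph auth caps replaces the client's existing capabilities, so list every capability the client still needs:

    [root@monitor]# ceph auth caps client.1 mon 'allow r' mds 'allow rw' osd 'allow rw pool=cephfs_data, allow rw pool=cephfs_data_ssd'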

4.6. Working with Ceph File System quotas

As a storage administrator, you can view, set, and remove quotas on any directory in the file system. You can place quota restrictions on the number of bytes or the number of files within the directory.

4.6.1. Prerequisites

  • Make sure that the attr package is installed.

4.6.2. Ceph File System quotas

This section describes the properties of quotas and their limitations in CephFS.

Understanding quota limitations

  • CephFS quotas rely on the cooperation of the client mounting the file system to stop writing data when it reaches the configured limit. However, quotas alone cannot prevent an adversarial, untrusted client from filling the file system.
  • When processes that write data to the file system reach the configured limit, a short period of time elapses between the moment the amount of data reaches the quota limit and the moment the processes stop writing. The period is generally measured in tenths of a second. However, processes continue to write data during that time, and the amount of additional data written depends on how much time elapses before they stop.
  • CephFS quotas are supported by the userspace clients (libcephfs, ceph-fuse) and by Linux kernel clients version 4.17 and higher. However, kernel clients support quotas only on mimic+ clusters. Kernel clients, even recent versions, cannot manage quotas on older storage clusters, even though they can set the quotas’ extended attributes.
  • When using path-based access restrictions, be sure to configure the quota on the directory to which the client is restricted, or on a directory nested beneath it. If the client has restricted access to a specific path based on the MDS capability, and the quota is configured on an ancestor directory that the client cannot access, the client will not enforce the quota. For example, if the client cannot access the /home/ directory and the quota is configured on /home/, the client cannot enforce that quota on the directory /home/user/. A brief illustration follows this list.
  • Snapshot file data that has been deleted or changed does not count towards the quota. See also: http://tracker.ceph.com/issues/24284
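
As a minimal sketch of the path-based restriction point, assuming a client whose MDS capability restricts it to /home/user/ and an administrative mount point of /mnt/cephfs (both illustrative), set the quota on the restricted directory itself so that the client can enforce it:

    [root@fs ~]# setfattr -n ceph.quota.max_bytes -v 100000000 /mnt/cephfs/home/user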

4.6.3. Viewing quotas

This section describes how to use the getfattr command and the ceph.quota extended attributes to view the quota settings for a directory.

Note

If the attributes appear on a directory inode, then that directory has a configured quota. If the attributes do not appear on the inode, then the directory does not have a quota set, although its parent directory might have a quota configured. If the value of the extended attribute is 0, the quota is not set.

Prerequisites

  • Make sure that the attr package is installed.

Procedure

  1. To view CephFS quotas:

    1. Using a byte-limit quota:

      Syntax

      getfattr -n ceph.quota.max_bytes DIRECTORY

      Example

      [root@fs ~]# getfattr -n ceph.quota.max_bytes /cephfs/

    2. Using a file-limit quota:

      Syntax

      getfattr -n ceph.quota.max_files DIRECTORY

      Example

      [root@fs ~]# getfattr -n ceph.quota.max_files /cephfs/
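
If a byte-limit quota is configured on the directory, the output resembles the following; the quota value shown here is illustrative:

    [root@fs ~]# getfattr -n ceph.quota.max_bytes /cephfs/
    getfattr: Removing leading '/' from absolute path names
    # file: cephfs/
    ceph.quota.max_bytes="100000000"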

Additional Resources

  • See the getfattr(1) manual page for more information.

4.6.4. Setting quotas

This section describes how to use the setfattr command and the ceph.quota extended attributes to set the quota for a directory.

Prerequisites

  • Make sure that the attr package is installed.

Procedure

  1. To set CephFS quotas:

    1. Using a byte-limit quota:

      Syntax

      setfattr -n ceph.quota.max_bytes -v 100000000 DIRECTORY

      Example

      [root@fs ~]# setfattr -n ceph.quota.max_bytes -v 100000000 /cephfs/

      In this example, 100000000 bytes equals 100 MB.

    2. Using a file-limit quota:

      Syntax

      setfattr -n ceph.quota.max_files -v 10000 DIRECTORY

      Example

      [root@fs ~]# setfattr -n ceph.quota.max_files -v 10000 /cephfs/

      In this example, the limit is set to 10,000 files.

Additional Resources

  • See the setfattr(1) manual page for more information.

4.6.5. Removing quotas

This section describes how to use the setfattr command and the ceph.quota extended attributes to remove a quota from a directory.

Prerequisites

  • Make sure that the attr package is installed.

Procedure

  1. To remove CephFS quotas:

    1. Using a byte-limit quota:

      Syntax

      setfattr -n ceph.quota.max_bytes -v 0 DIRECTORY

      Example

      [root@fs ~]# setfattr -n ceph.quota.max_bytes -v 0 /cephfs/

    2. Using a file-limit quota:

      Syntax

      setfattr -n ceph.quota.max_files -v 0 DIRECTORY

      Example

      [root@fs ~]# setfattr -n ceph.quota.max_files -v 0 /cephfs/

Additional Resources

  • See the setfattr(1) manual page for more information.

4.6.6. Additional Resources

  • See the getfattr(1) manual page for more information.
  • See the setfattr(1) manual page for more information.

4.7. Removing Ceph File Systems

As a storage administrator, you can remove a Ceph File System (CephFS). Before doing so, consider backing up all the data and verifying that all clients have unmounted the file system locally.

Warning

This operation is destructive and will make the data stored on the Ceph File System permanently inaccessible.

Prerequisites

  • Back up your data.
  • Root-level access to a Ceph Monitor node.

Procedure

  1. Mark the Ceph File System cluster as down.

    ceph fs set <name> cluster_down true

    Replace:

    • <name> with the name of the Ceph File System you want to remove

    For example:

    [root@monitor]# ceph fs set cephfs cluster_down true
    marked down
  2. Display the status of the Ceph File System.

    ceph fs status

    For example:

    [root@monitor]# ceph fs status
    cephfs - 0 clients
    ======
    +------+--------+-------+---------------+-------+-------+
    | Rank | State  |  MDS  |    Activity   |  dns  |  inos |
    +------+--------+-------+---------------+-------+-------+
    |  0   | active | ceph4 | Reqs:    0 /s |   10  |   12  |
    +------+--------+-------+---------------+-------+-------+
    +-----------------+----------+-------+-------+
    |       Pool      |   type   |  used | avail |
    +-----------------+----------+-------+-------+
    | cephfs_metadata | metadata | 2246  |  975G |
    |   cephfs_data   |   data   |    0  |  975G |
    +-----------------+----------+-------+-------+
  3. Fail all MDS ranks shown in the status.

    ceph mds fail <rank>

    Replace:

    • <rank> with the rank of the MDS daemon to fail

    For example:

    [root@monitor]# ceph mds fail 0
  4. Remove the Ceph File System.

    ceph fs rm <name> --yes-i-really-mean-it

    Replace:

    • <name> with the name of the Ceph File System you want to remove

    For example:

    [root@monitor]# ceph fs rm cephfs --yes-i-really-mean-it
  5. Verify that the file system has been successfully removed.

    [root@monitor]# ceph fs ls
  6. Optional: Remove the data and metadata pools associated with the removed file system. See the Delete a pool section in the Red Hat Ceph Storage 3 Storage Strategies Guide.