Chapter 3. Deploying Ceph File Systems

This chapter describes how to create and mount Ceph File Systems.

To deploy a Ceph File System:

  1. Create a Ceph file system on a Monitor node. See Section 3.2, “Creating the Ceph File Systems” for details.
  2. Create a client user with the correct access rights and permissions and make its key available on the node where the Ceph File System will be mounted. See Section 3.3, “Creating Ceph File System Client Users” for details.
  3. Mount CephFS on a dedicated node. Choose one of the following methods:

    • Mounting the Ceph File System as a kernel client. See Section 3.4 for details.
    • Mounting the Ceph File System as a FUSE client. See Section 3.5 for details.

3.1. Prerequisites

3.2. Creating the Ceph File Systems

This section describes how to create a Ceph File System on a Monitor node.

Important

By default, you can create only one Ceph File System in the Ceph Storage Cluster. See Section 1.3, “CephFS Limitations” for details.

Prerequisites

Note

To enable the repo and install ceph-common package on the defined client nodes, see Installing the Ceph Client Role in the Installation Guide for Red Hat Enterprise Linux or the Installation Guide for Ubuntu.
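
As a rough sketch only (the Installation Guide remains the authoritative procedure), enabling the Tools repository and installing the package on a Red Hat Enterprise Linux client typically looks like this:

    [root@client ~]# subscription-manager repos --enable=rhel-7-server-rhceph-3-tools-rpms
    [root@client ~]# yum install ceph-common

On Ubuntu, install the ceph-common package with apt-get after enabling the Tools repository as shown in Section 3.4.1.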

Procedure

Use the following commands from a Monitor host and as the root user.

  1. Create two pools, one for storing data and one for storing metadata:

    ceph osd pool create <name> <pg_num>

    Specify the pool name and the number of placement groups (PGs), for example:

    [root@monitor ~]# ceph osd pool create cephfs-data 64
    [root@monitor ~]# ceph osd pool create cephfs-metadata 64

    Typically, the metadata pool can start with a conservative number of PGs because it generally holds far fewer objects than the data pool. You can increase the number of PGs later if needed; a sketch is shown after this procedure. Recommended metadata pool sizes range from 64 PGs to 512 PGs. Size the data pool in proportion to the number and sizes of the files you expect in the file system.

    Important

    For the metadata pool, consider using

    • A higher replication level because any data loss to this pool can make the whole file system inaccessible
    • Storage with lower latency such as Solid-state Drive (SSD) disks because this directly affects the observed latency of file system operations on clients
  2. Create the Ceph File System:

    ceph fs new <name> <metadata-pool> <data-pool>

    Specify the name of the Ceph File System and the names of the metadata and data pools, for example:

    [root@monitor ~]# ceph fs new cephfs cephfs-metadata cephfs-data
  3. Verify that one or more MDSs enter the active state, based on your configuration:

    ceph fs status <name>

    Specify the name of the Ceph File System, for example:

    [root@monitor ~]# ceph fs status cephfs
    cephfs - 0 clients
    ======
    +------+--------+-------+---------------+-------+-------+
    | Rank | State  |  MDS  |    Activity   |  dns  |  inos |
    +------+--------+-------+---------------+-------+-------+
    |  0   | active | node1 | Reqs:    0 /s |   10  |   12  |
    +------+--------+-------+---------------+-------+-------+
    +-----------------+----------+-------+-------+
    |       Pool      |   type   |  used | avail |
    +-----------------+----------+-------+-------+
     | cephfs-metadata | metadata | 4638  | 26.7G |
     |   cephfs-data   |   data   |    0  | 26.7G |
    +-----------------+----------+-------+-------+
    
    +-------------+
    | Standby MDS |
    +-------------+
    |    node3    |
    |    node2    |
     +-------------+
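
If the metadata pool later needs more placement groups, as noted in step 1, you can raise the PG count on the existing pool. A minimal sketch, assuming the cephfs-metadata pool created above and a new target of 128 PGs (on Red Hat Ceph Storage 3, pgp_num must be raised as well):

    [root@monitor ~]# ceph osd pool set cephfs-metadata pg_num 128
    [root@monitor ~]# ceph osd pool set cephfs-metadata pgp_num 128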

Additional Resources

3.3. Creating Ceph File System Client Users

Red Hat Ceph Storage 3 uses cephx for authentication, which is enabled by default. To use cephx with Ceph File System, create a user with the correct authorization capabilities on a Monitor node and make its key available on the node where the Ceph File System will be mounted.

To make the key available for use with the kernel client, create a secret file on the client node with the key inside it. To make the key available for the File System in User Space (FUSE) client, copy the keyring to the client node.

Procedure

  1. On a Monitor host, create a client user.

    ceph auth get-or-create client.<id> <capabilities>

    Specify the client ID and desired capabilities.

    • To restrict the client to only write to and read from a particular pool in the cluster:

      ceph auth get-or-create client.1 mon 'allow r' mds 'allow rw' osd 'allow rw pool=<pool>'

      For example, to restrict the client to only write to and read from the data pool:

      [root@monitor ~]# ceph auth get-or-create client.1 mon 'allow r' mds 'allow rw' osd 'allow rw pool=data'
    • To prevent the client from modifying the pool that is used for files and directories:

      ceph auth get-or-create client.1 mon 'allow r' mds 'allow r' osd 'allow r pool=<pool>'

      For example, to prevent the client from modifying the data pool:

      [root@monitor ~]# ceph auth get-or-create client.1 mon 'allow r' mds 'allow r' osd 'allow r pool=data'
      Note

      Do not create capabilities for the metadata pool, as Ceph File System clients do not have access to it.

  2. Verify the created key:

    ceph auth get client.<id>

    For example:

    [root@monitor ~]# ceph auth get client.1
  3. If you plan to use the kernel client, create a secret file using the key retrieved from the previous step.

    On the client node, copy the string after key = into /etc/ceph/ceph.client.<id>.secret:

    For example, if the client ID is 1, add a single line to /etc/ceph/ceph.client.1.secret that contains only the key (one way to create this file is sketched after this procedure):

    [root@client ~]# cat /etc/ceph/ceph.client.1.secret
    AQBSdFhcGZFUDRAAcKhG9Cl2HPiDMMRv4DC43A==
    Important

    Do not include the space between key = and the key string, otherwise mounting will not work.

  4. If you plan to use the File System in User Space (FUSE) client, copy the keyring to the client.

    1. On the Monitor node, export the keyring to a file:

      ceph auth get client.<id> -o ceph.client.<id>.keyring

      For example, if the client ID is 1:

      [root@monitor ~]# ceph auth get client.1 -o ceph.client.1.keyring
      exported keyring for client.1
    2. Copy the client keyring from the Monitor node to the /etc/ceph/ directory on the client node:

      scp root@<monitor>:/root/ceph.client.1.keyring /etc/ceph/ceph.client.1.keyring

      Replace <monitor> with the Monitor host name or IP, for example:

      [root@client ~]# scp root@192.168.0.1:/root/ceph.client.1.keyring /etc/ceph/ceph.client.1.keyring
  5. Set the appropriate permissions for the keyring file.

    chmod 644 <keyring>

    Specify the path to the keyring, for example:

    [root@client ~]# chmod 644 /etc/ceph/ceph.client.1.keyring
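
One way to produce the secret file described in step 3 is to export only the key on the Monitor node with ceph auth get-key and copy the resulting file to the client. This is a sketch, assuming client ID 1 and the paths used above; replace <client> with the client host name or IP address:

    [root@monitor ~]# ceph auth get-key client.1 > ceph.client.1.secret
    [root@monitor ~]# scp ceph.client.1.secret root@<client>:/etc/ceph/ceph.client.1.secret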

Additional Resources

  • The User Management chapter in the Administration Guide for Red Hat Ceph Storage 3

3.4. Mounting the Ceph File System as a kernel client

You can mount the Ceph File System as a kernel client, either manually or automatically when the system boots.

Important

Clients on Linux distributions aside from Red Hat Enterprise Linux are permitted but not supported. If issues are found in the MDS or other parts of the cluster when using these clients, Red Hat will address them, but if the cause is found to be on the client side, the issue will have to be addressed by the kernel vendor.

3.4.1. Prerequisites

  • On the client node, enable the Red Hat Ceph Storage 3 Tools repository:

    • On Red Hat Enterprise Linux, use:

      [root@client ~]# subscription-manager repos --enable=rhel-7-server-rhceph-3-tools-rpms
    • On Ubuntu, use:

      [user@client ~]$ sudo bash -c 'umask 0077; echo deb https://customername:customerpasswd@rhcs.download.redhat.com/3-updates/Tools $(lsb_release -sc) main | tee /etc/apt/sources.list.d/Tools.list'
      [user@client ~]$ sudo bash -c 'wget -O - https://www.redhat.com/security/fd431d51.txt | apt-key add -'
      [user@client ~]$ sudo apt-get update
  • On the destination client node, create a new /etc/ceph directory:

    [root@client ~]# mkdir /etc/ceph
  • Copy the Ceph configuration file from a Monitor node to the destination client node.

    scp root@<monitor>:/etc/ceph/ceph.conf /etc/ceph/ceph.conf

    Replace <monitor> with the Monitor host name or IP address, for example:

    [root@client ~]# scp root@192.168.0.1:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
  • Set the correct owner and group on the ceph.conf file:

    [root@client ~]# chown ceph:ceph /etc/ceph/ceph.conf
  • Set the appropriate permissions for the configuration file:

    [root@client ~]# chmod 644 /etc/ceph/ceph.conf

3.4.2. Manually Mounting the Ceph File System as a kernel Client

To manually mount the Ceph File System as a kernel client, use the mount utility.

Prerequisites

  • A Ceph File System is created.
  • The ceph-common package is installed.

Procedure

  1. Create a mount directory:

    mkdir -p <mount-point>

    For example:

    [root@client]# mkdir -p /mnt/cephfs
  2. Mount the Ceph File System. To specify multiple Monitor addresses, either separate them with commas in the mount command, or configure a DNS server so that a single host name resolves to multiple IP addresses and pass that host name to the mount command (a DNS-based variant is sketched after this procedure). Set the user name and the path to the secret file.

    mount -t ceph <monitor1-host-name>:6789,<monitor2-host-name>:6789,<monitor3-host-name>:6789:/ <mount-point> -o name=<user-name>,secretfile=<path>

    For example:

    [root@client ~]# mount -t ceph mon1:6789,mon2:6789,mon3:6789:/ /mnt/cephfs -o name=1,secretfile=/etc/ceph/ceph.client.1.secret
  3. Verify that the file system is successfully mounted:

    stat -f <mount-point>

    For example:

    [root@client ~]# stat -f /mnt/cephfs
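
If you configured a DNS name that resolves to all Monitor addresses, as mentioned in step 2, you can pass that single name instead of listing every Monitor. A sketch, assuming a hypothetical round-robin host name ceph-mons and the client user and secret file from the previous sections:

    [root@client ~]# mount -t ceph ceph-mons:6789:/ /mnt/cephfs -o name=1,secretfile=/etc/ceph/ceph.client.1.secret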
Additional Resources
  • The mount(8) manual page
  • The DNS Servers chapter in the Networking Guide for Red Hat Enterprise Linux 7
  • The User Management chapter in the Administration Guide for Red Hat Ceph Storage 3

3.4.3. Automatically Mounting the Ceph File System as a kernel Client

To automatically mount a Ceph File System on start, edit the /etc/fstab file.

Prerequisites
Procedure
  1. On the client host, create a new directory for mounting the Ceph File System.

    mkdir -p <mount-point>

    For example:

    [root@client ~]# mkdir -p /mnt/cephfs
  2. Edit the /etc/fstab file as follows:

    #DEVICE                                                         PATH            TYPE    OPTIONS
    <host-name>:<port>:/,<host-name>:<port>:/,<host-name>:<port>:/  <mount-point>   ceph    name=<user-name>,secret=<key>|secretfile=<file>,_netdev[,<mount-options>]

    In the first column, set the Monitor host names and their ports. Another way to specify multiple Monitor addresses is to configure a DNS server so that a single host name resolves to multiple IP addresses.

    Set the mount point in the second column and the type to ceph in the third column.

    Set the user name and secret file in the fourth column using the name and secretfile options, respectively.

    Set the _netdev option to ensure that the file system is mounted after the networking subsystem, to prevent networking issues. If you do not need access time information, set the noatime option to increase performance.

    For example:

    #DEVICE                              PATH           TYPE    OPTIONS
    mon1:6789:/,mon2:6789:/,mon3:6789:/  /mnt/cephfs    ceph    _netdev,name=admin,secretfile=/home/secret.key,noatime  0 0

    The file system will be mounted on the next boot.
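
To verify the entry without rebooting, you can have the mount utility read it from /etc/fstab directly. A short sketch, assuming the example entry above:

    [root@client ~]# mount /mnt/cephfs
    [root@client ~]# stat -f /mnt/cephfs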

3.5. Mounting the Ceph File System as a FUSE Client

You can mount the Ceph File System as a File System in User Space (FUSE) client, either manually or automatically when the system boots.

3.5.1. Prerequisites

  • On the client node, enable the Red Hat Ceph Storage 3 Tools repository:

    • On Red Hat Enterprise Linux, use:

      [root@client ~]# subscription-manager repos --enable=rhel-7-server-rhceph-3-tools-rpms
    • On Ubuntu, use:

      [user@client ~]$ sudo bash -c 'umask 0077; echo deb https://customername:customerpasswd@rhcs.download.redhat.com/3-updates/Tools $(lsb_release -sc) main | tee /etc/apt/sources.list.d/Tools.list'
      [user@client ~]$ sudo bash -c 'wget -O - https://www.redhat.com/security/fd431d51.txt | apt-key add -'
      [user@client ~]$ sudo apt-get update
  • Copy the client keyring to the client node. See Section 3.3, “Creating Ceph File System Client Users” for details.
  • Copy the Ceph configuration file from a Monitor node to the client node.

    scp root@<monitor>:/etc/ceph/ceph.conf /etc/ceph/ceph.conf

    Replace <monitor> with the Monitor host name or IP, for example:

    [root@client ~]# scp root@192.168.0.1:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
  • Set the appropriate permissions for the configuration file.

    [root@client ~]# chmod 644 /etc/ceph/ceph.conf

3.5.2. Manually Mounting the Ceph File System as a FUSE Client

To mount a Ceph File System as a File System in User Space (FUSE) client, use the ceph-fuse utility.

Prerequisites
  • On the node where the Ceph File System will be mounted, install the ceph-fuse package.

    • On Red Hat Enterprise Linux, use:

      [root@client ~]# yum install ceph-fuse
    • On Ubuntu, use:

      [user@client ~]$ sudo apt-get install ceph-fuse
Procedure
  1. Create a directory to serve as a mount point. Note that if you used the path option with MDS capabilities, the mount point must be within what is specified by path.

    mkdir <mount-point>

    For example:

    [root@client ~]# mkdir /mnt/mycephfs
  2. Use the ceph-fuse utility to mount the Ceph File System.

    ceph-fuse -n client.<client-name> <mount-point>

    For example:

    [root@client ~]# ceph-fuse -n client.1 /mnt/mycephfs
    • If you do not use the default name and location of the user keyring, that is /etc/ceph/ceph.client.<client-name/id>.keyring, use the --keyring option to specify the path to the user keyring, for example:

      [root@client ~]# ceph-fuse -n client.1 --keyring=/etc/ceph/client.1.keyring /mnt/mycephfs
    • If you restricted the client to only mount and work within a certain directory, use the -r option to instruct the client to treat that path as its root:

      ceph-fuse -n client.<client-name/id> <mount-point> -r <path>

      For example, to instruct the client with ID 1 to treat the /home/cephfs/ directory as its root:

      [root@client ~]# ceph-fuse -n client.1 /mnt/cephfs -r /home/cephfs
  3. Verify that the file system is successfully mounted:

    stat -f <mount-point>

    For example:

    [user@client ~]$ stat -f /mnt/cephfs
Additional Resources
  • The ceph-fuse(8) manual page
  • The User Management chapter in the Administration Guide for Red Hat Ceph Storage 3

3.5.3. Automatically Mounting the Ceph File System as a FUSE Client

To automatically mount a Ceph File System on start, edit the /etc/fstab file.

Prerequisites
Procedure
  1. On the client host, create a new directory for mounting the Ceph File System.

    mkdir -p <mount-point>

    For example:

    [root@client ~]# mkdir -p /mnt/cephfs
  2. Edit the /etc/fstab file as follows:

    #DEVICE     PATH             TYPE        OPTIONS
    none        <mount-point>    fuse.ceph   ceph.id=<user-id>[,ceph.conf=<path>],_netdev,defaults  0 0

    Specify the user ID (for example, admin, not client.admin) and the mount point. Use the ceph.conf option if you store the Ceph configuration file in a non-default location. In addition, specify any required mount options. Consider using the _netdev option, which ensures that the file system is mounted after the networking subsystem, to prevent networking issues. For example:

    #DEVICE     PATH         TYPE        OPTIONS
    none        /mnt/ceph    fuse.ceph   ceph.id=admin,ceph.conf=/etc/ceph/cluster.conf,_netdev,defaults  0 0

    The file system will be mounted on the next boot.
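
As with the kernel client entry, you can test the fstab entry without a reboot by mounting the configured path directly; this assumes the mount.fuse.ceph helper shipped with the Ceph packages is installed on the client:

    [root@client ~]# mount /mnt/ceph
    [root@client ~]# stat -f /mnt/ceph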

3.6. Creating Ceph File Systems with erasure coding

By default, Ceph uses replicated pools for data pools. You can also add an additional erasure-coded data pool, if needed. Ceph File Systems (CephFS) backed by erasure-coded pools use less overall storage compared to Ceph File Systems backed by replicated pools. While erasure-coded pools use less overall storage, they also use more memory and processor resources than replicated pools.

Important

Ceph File Systems on erasure-coded pools are a Technology Preview. For more information see Erasure Coding with Overwrites (Technology Preview).

Important

Ceph File Systems on erasure-coded pools require pools using the BlueStore object store. For more information see Erasure Coding with Overwrites (Technology Preview).

Important

Red Hat recommends using a replicated pool as the default data pool.

Prerequisites

  • A running Red Hat Ceph Storage Cluster.
  • Pools using BlueStore OSDs.

Procedure

  1. Create an erasure-coded data pool for Ceph File System:

    ceph osd pool create $DATA_POOL $PG_NUM erasure

    For example, to create an erasure-coded pool named cephfs-data-ec with 64 placement groups, using the default erasure-code profile (an alternative with a custom profile is sketched after this procedure):

    [root@monitor ~]# ceph osd pool create cephfs-data-ec 64 erasure
  2. Create a replicated metadata pool for Ceph File System:

    ceph osd pool create $METADATA_POOL $PG_NUM

    For example, to create a pool named cephfs-metadata with 64 placement groups:

    [root@monitor ~]# ceph osd pool create cephfs-metadata 64
  3. Enable overwrites on the erasure-coded pool:

    ceph osd pool set $DATA_POOL allow_ec_overwrites true

    For example, to enable overwrites on an erasure-coded pool named cephfs-data-ec:

    [root@monitor ~]# ceph osd pool set cephfs-data-ec allow_ec_overwrites true
  4. Create the Ceph File System:

    ceph fs new $FS_EC $METADATA_POOL $DATA_POOL
    Note

    Using an erasure-coded pool for the default data pool is discouraged, but you can use --force to override this default. Specify the name of the Ceph File System, and the metadata and data pools, for example:

    [root@monitor ~]# ceph fs new cephfs-ec cephfs-metadata cephfs-data-ec --force
  5. Verify that one or more MDSs enter the active state based on your configuration:

    ceph fs status $FS_EC

    Specify the name of the Ceph File System, for example:

    [root@monitor ~]# ceph fs status cephfs-ec
    cephfs-ec - 0 clients
    ======
    +------+--------+-------+---------------+-------+-------+
    | Rank | State  |  MDS  |    Activity   |  dns  |  inos |
    +------+--------+-------+---------------+-------+-------+
    |  0   | active | node1 | Reqs:    0 /s |   10  |   12  |
    +------+--------+-------+---------------+-------+-------+
    +-----------------+----------+-------+-------+
    |       Pool      |   type   |  used | avail |
    +-----------------+----------+-------+-------+
    | cephfs-metadata | metadata | 4638  | 26.7G |
    |  cephfs-data-ec |   data   |    0  | 26.7G |
    +-----------------+----------+-------+-------+
    
    +-------------+
    | Standby MDS |
    +-------------+
    |    node3    |
    |    node2    |
    +-------------+
  6. If you want to add an additional erasure-coded pool as a data pool to the existing file system:

    1. Create an erasure-coded data pool for Ceph File System:

      ceph osd pool create $DATA_POOL $PG_NUM erasure

      For example, to create an erasure-coded pool named cephfs-data-ec1 with 64 placement groups:

      [root@monitor ~]# ceph osd pool create cephfs-data-ec1 64 erasure
    2. Enable overwrites on the erasure-coded pool:

      ceph osd pool set $DATA_POOL allow_ec_overwrites true

      For example, to enable overwrites on an erasure-coded pool named cephfs-data-ec1:

      [root@monitor ~]# ceph osd pool set cephfs-data-ec1 allow_ec_overwrites true
    3. Add the newly created pool to an existing Ceph File System:

      ceph fs add_data_pool $FS_EC $DATA_POOL

      For example, to add an erasure-coded pool named cephfs-data-ec1:

      [root@monitor ~]# ceph fs add_data_pool cephfs-ec cephfs-data-ec1
    4. Verify that one or more MDSs enter the active state based on your configuration.

      ceph fs status $FS_EC

      Specify the name of the Ceph File System, for example:

      [root@monitor ~]# ceph fs status cephfs-ec
      cephfs-ec - 0 clients
      ======
      +------+--------+-------+---------------+-------+-------+
      | Rank | State  |  MDS  |    Activity   |  dns  |  inos |
      +------+--------+-------+---------------+-------+-------+
      |  0   | active | node1 | Reqs:    0 /s |   10  |   12  |
      +------+--------+-------+---------------+-------+-------+
      +-----------------+----------+-------+-------+
      |       Pool      |   type   |  used | avail |
      +-----------------+----------+-------+-------+
      | cephfs-metadata | metadata | 4638  | 26.7G |
      |  cephfs-data-ec |   data   |    0  | 26.7G |
      | cephfs-data-ec1 |   data   |    0  | 26.7G |
      +-----------------+----------+-------+-------+
      
      +-------------+
      | Standby MDS |
      +-------------+
      |    node3    |
      |    node2    |
      +-------------+
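
The erasure-coded pools in steps 1 and 6.1 use the default erasure-code profile. If you need different k and m values, you can define a profile first and pass its name to ceph osd pool create, as referenced in step 1. A sketch with hypothetical names ec-profile-k4m2 and cephfs-data-ec2:

    [root@monitor ~]# ceph osd erasure-code-profile set ec-profile-k4m2 k=4 m=2
    [root@monitor ~]# ceph osd erasure-code-profile get ec-profile-k4m2
    [root@monitor ~]# ceph osd pool create cephfs-data-ec2 64 erasure ec-profile-k4m2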

Additional Resources
