Chapter 3. Deploying Ceph File Systems
This chapter describes how to create and mount Ceph File Systems.
To deploy a Ceph File System:
- Create a Ceph file system on a Monitor node. See Section 3.2, “Creating the Ceph File Systems” for details.
- Create a client user with the correct access rights and permissions and make its key available on the node where the Ceph File System will be mounted. See Section 3.3, “Creating Ceph File System Client Users” for details.
- Mount CephFS on a dedicated node. Choose one of the following methods:
- Mounting CephFS as a kernel client. See Section 3.4, “Mounting the Ceph File System as a kernel client”
- Mounting CephFS as a FUSE client. See Section 3.5, “Mounting the Ceph File System as a FUSE Client”
3.1. Prerequisites
- Deploy a Ceph Storage Cluster if you do not have one. For details, see the Installation Guide for Red Hat Enterprise Linux or Ubuntu.
- Install and configure Ceph Metadata Server daemons (ceph-mds). For details, see the Installation Guide for Red Hat Enterprise Linux or Ubuntu and Chapter 2, Configuring Metadata Server Daemons.
3.2. Creating the Ceph File Systems
This section describes how to create a Ceph File System on a Monitor node.
By default, you can create only one Ceph File System in the Ceph Storage Cluster. See Section 1.3, “CephFS Limitations” for details.
Prerequisites
- Deploy a Ceph Storage Cluster if you do not have one. For details, see the Installation Guide for Red Hat Enterprise Linux or the Installation Guide for Ubuntu.
- Install and configure Ceph Metadata Server daemons (ceph-mds). For details, see Installing Metadata Servers in the Installation Guide for Red Hat Enterprise Linux or the Installation Guide for Ubuntu.
- Install the ceph-common package.
On Red Hat Enterprise Linux:
# yum install ceph-common
On Ubuntu:
$ sudo apt-get install ceph-common
To enable the repository and install the ceph-common package on the defined client nodes, see Installing the Ceph Client Role in the Installation Guide for Red Hat Enterprise Linux or the Installation Guide for Ubuntu.
Procedure
Run the following commands from a Monitor host as the root user.
Create two pools, one for storing data and one for storing metadata:
ceph osd pool create <name> <pg_num>
Specify the pool name and the number of placement groups (PGs), for example:
[root@monitor ~]# ceph osd pool create cephfs-data 64
[root@monitor ~]# ceph osd pool create cephfs-metadata 64
Typically, the metadata pool can start with a conservative number of PGs because it generally has far fewer objects than the data pool. You can increase the number of PGs later if needed. Recommended metadata pool sizes range from 64 PGs to 512 PGs. Size the data pool proportionally to the number and sizes of files you expect in the file system.
Important: For the metadata pool, consider using:
- A higher replication level because any data loss to this pool can make the whole file system inaccessible
- Storage with lower latency such as Solid-state Drive (SSD) disks because this directly affects the observed latency of file system operations on clients
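After creating both pools, you can confirm that they exist and check their placement group counts before moving on. A quick check, assuming the pool names used in the example above:
[root@monitor ~]# ceph osd lspools
[root@monitor ~]# ceph osd pool get cephfs-data pg_num
The first command lists all pools with their IDs, and the second prints the pg_num value of the given pool.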
Create the Ceph File System:
ceph fs new <name> <metadata-pool> <data-pool>
Specify the name of the Ceph File System and the metadata and data pools, for example:
[root@monitor ~]# ceph fs new cephfs cephfs-metadata cephfs-data
Verify that one or more MDSs enter the active state based on your configuration:
ceph fs status <name>
Specify the name of the Ceph File System, for example:
[root@monitor ~]# ceph fs status cephfs
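In addition to ceph fs status, you can list the file system together with its pools or view a one-line MDS summary. A brief sketch, assuming the file system name cephfs from the example above:
[root@monitor ~]# ceph fs ls
[root@monitor ~]# ceph mds stat
ceph fs ls shows the file system with its metadata and data pools, and ceph mds stat reports how many MDS daemons are up and whether one of them is active.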
Additional Resources
- The Enabling the Red Hat Ceph Storage Repositories section in the Red Hat Ceph Storage 3 Installation Guide for Red Hat Enterprise Linux
- The Enabling the Red Hat Ceph Storage Repositories section in the Red Hat Ceph Storage 3 Installation Guide for Ubuntu
- The Pools chapter in the Storage Strategies guide for Red Hat Ceph Storage 3
3.3. Creating Ceph File System Client Users
Red Hat Ceph Storage 3 uses cephx for authentication, which is enabled by default. To use cephx with Ceph File System, create a user with the correct authorization capabilities on a Monitor node and make its key available on the node where the Ceph File System will be mounted.
To make the key available for use with the kernel client, create a secret file on the client node with the key inside it. To make the key available for the File System in User Space (FUSE) client, copy the keyring to the client node.
Procedure
On a Monitor host, create a client user.
ceph auth get-or-create client.<id> <capabilities>
Specify the client ID and desired capabilities.
To restrict the client to only write to and read from a particular pool in the cluster:
ceph auth get-or-create client.1 mon 'allow r' mds 'allow rw' osd 'allow rw pool=<pool>'
For example, to restrict the client to only write to and read from the data pool:
[root@monitor ~]# ceph auth get-or-create client.1 mon 'allow r' mds 'allow rw' osd 'allow rw pool=data'
To prevent the client from modifying the pool that is used for files and directories:
ceph auth get-or-create client.1 mon 'allow r' mds 'allow r' osd 'allow r pool=<pool>'
For example, to prevent the client from modifying the data pool:
[root@monitor ~]# ceph auth get-or-create client.1 mon 'allow r' mds 'allow r' osd 'allow r pool=data'
Note: Do not create capabilities for the metadata pool, as Ceph File System clients do not have access to it.
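You can also limit a client to a particular directory tree by adding a path restriction to the MDS capability. The following is a sketch only; the client ID 2 and the /home/cephfs directory are hypothetical examples, so adjust them and the pool name to your environment:
[root@monitor ~]# ceph auth get-or-create client.2 mon 'allow r' mds 'allow rw path=/home/cephfs' osd 'allow rw pool=data'
A client created this way can only mount and work within /home/cephfs; see the -r option in Section 3.5.2 for the matching mount command.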
Verify the created key:
ceph auth get client.<id>
For example:
[root@monitor ~]# ceph auth get client.1
If you plan to use the kernel client, create a secret file using the key retrieved from the previous step.
On the client node, copy the string after key = into /etc/ceph/ceph.client.<id>.secret. For example, if the client ID is 1, add a single line to /etc/ceph/ceph.client.1.secret with the key:
[root@client ~]# cat /etc/ceph/ceph.client.1.secret
AQBSdFhcGZFUDRAAcKhG9Cl2HPiDMMRv4DC43A==
Important: Do not include the space between key = and the string, or else mounting will not work.
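As an alternative to copying the string by hand, you can generate the secret file directly from the Monitor node. A sketch, assuming the client ID is 1 and that root SSH access from the Monitor to the client node (shown here as root@client) is available:
[root@monitor ~]# ceph auth get-key client.1 | ssh root@client 'tee /etc/ceph/ceph.client.1.secret'
The ceph auth get-key command prints only the key, so the resulting file contains no key = prefix or extra whitespace.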
If you plan to use the File System in User Space (FUSE) client, copy the keyring to the client.
On the Monitor node, export the keyring to a file:
ceph auth get client.<id> -o ceph.client.<id>.keyring
For example, if the client ID is 1:
[root@monitor ~]# ceph auth get client.1 -o ceph.client.1.keyring
exported keyring for client.1
Copy the client keyring from the Monitor node to the /etc/ceph/ directory on the client node:
scp root@<monitor>:/root/ceph.client.1.keyring /etc/ceph/ceph.client.1.keyring
Replace <monitor> with the Monitor host name or IP address, for example:
[root@client ~]# scp root@192.168.0.1:/root/ceph.client.1.keyring /etc/ceph/ceph.client.1.keyring
Set the appropriate permissions for the keyring file.
chmod 644 <keyring>
Specify the path to the keyring, for example:
[root@client ~]# chmod 644 /etc/ceph/ceph.client.1.keyring
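Once the Ceph configuration file is also in place on the client node (see the prerequisites in Section 3.4.1 or Section 3.5.1), you can confirm that the client credentials work before attempting a mount. A sketch, assuming the client ID 1 and the default keyring location used above:
[root@client ~]# ceph --id 1 -s
If the keyring and capabilities are correct, the command prints the cluster status; an authentication error at this point usually points to a wrong key or keyring path.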
Additional Resources
- The User Management chapter in the Administration Guide for Red Hat Ceph Storage 3
3.4. Mounting the Ceph File System as a kernel client
You can mount the Ceph File System as a kernel client:
Clients on Linux distributions aside from Red Hat Enterprise Linux are permitted but not supported. If issues are found in the MDS or other parts of the cluster when using these clients, Red Hat will address them, but if the cause is found to be on the client side, the issue will have to be addressed by the kernel vendor.
3.4.1. Prerequisites
On the client node, enable the Red Hat Ceph Storage 3 Tools repository:
On Red Hat Enterprise Linux, use:
[root@client ~]# subscription-manager repos --enable=rhel-7-server-rhceph-3-tools-rpms
On Ubuntu, use:
[user@client ~]$ sudo bash -c 'umask 0077; echo deb https://customername:customerpasswd@rhcs.download.redhat.com/3-updates/Tools $(lsb_release -sc) main | tee /etc/apt/sources.list.d/Tools.list'
[user@client ~]$ sudo bash -c 'wget -O - https://www.redhat.com/security/fd431d51.txt | apt-key add -'
[user@client ~]$ sudo apt-get update
On the destination client node, create a new /etc/ceph directory:
[root@client ~]# mkdir /etc/ceph
Copy the Ceph configuration file from a Monitor node to the destination client node.
scp root@<monitor>:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
Replace <monitor> with the Monitor host name or IP address, for example:
[root@client ~]# scp root@192.168.0.1:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
Set the correct owner and group on the ceph.conf file:
[root@client ~]# chown ceph:ceph /etc/ceph/ceph.conf
Set the appropriate permissions for the configuration file:
[root@client ~]# chmod 644 /etc/ceph/ceph.conf
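The kernel client requires the ceph kernel module, which the mount command loads automatically on Red Hat Enterprise Linux. If you want to verify up front that the module is available, a quick check:
[root@client ~]# modprobe ceph
[root@client ~]# lsmod | grep ^ceph
If modprobe fails, the running kernel does not provide CephFS support and the kernel client cannot be used on that node.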
3.4.2. Manually Mounting the Ceph File System as a kernel Client
To manually mount the Ceph File System as a kernel client, use the mount utility.
Prerequisites
- A Ceph File System is created.
- The ceph-common package is installed.
Procedure
Create a mount directory:
mkdir -p <mount-point>
For example:
[root@client]# mkdir -p /mnt/cephfs
Mount the Ceph File System. To specify multiple Monitor addresses, either separate them with commas in the mount command, or configure a DNS server so that a single host name resolves to multiple IP addresses and pass that host name to the mount command. Set the user name and path to the secret file.
mount -t ceph <monitor1-host-name>:6789,<monitor2-host-name>:6789,<monitor3-host-name>:6789:/ <mount-point> -o name=<user-name>,secretfile=<path>
For example:
[root@client ~]# mount -t ceph mon1:6789,mon2:6789,mon3:6789:/ /mnt/cephfs -o name=1,secretfile=/etc/ceph/ceph.client.1.secret
Verify that the file system is successfully mounted:
stat -f <mount-point>
For example:
[root@client ~]# stat -f /mnt/cephfs
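Beyond stat -f, you can inspect the mount with df and detach it when it is no longer needed. For example, assuming the mount point used above:
[root@client ~]# df -h /mnt/cephfs
[root@client ~]# umount /mnt/cephfs
df -h shows the capacity and usage reported by the cluster, and umount unmounts the kernel client from the file system.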
Additional Resources
- The mount(8) manual page
- The DNS Servers chapter in the Networking Guide for Red Hat Enterprise Linux 7
- The User Management chapter in the Administration Guide for Red Hat Ceph Storage 3
3.4.3. Automatically Mounting the Ceph File System as a kernel Client
To automatically mount a Ceph File System on start, edit the /etc/fstab file.
Prerequisites
- Consider mounting the file system manually first. See Section 3.4.2, “Manually Mounting the Ceph File System as a kernel Client” for details.
- If you want to use the secretfile= mount option, install the ceph-common package.
Procedure
On the client host, create a new directory for mounting the Ceph File System.
mkdir -p <mount-point>
For example:
[root@client ~]# mkdir -p /mnt/cephfs
Edit the /etc/fstab file as follows:
In the first column, set the Monitor host names and their ports. Another way to specify multiple Monitor addresses is to configure a DNS server so that a single host name resolves to multiple IP addresses.
Set the mount point in the second column and the type to ceph in the third column.
Set the user name and secret file in the fourth column using the name and secretfile options, respectively.
Set the _netdev option to ensure that the file system is mounted after the networking subsystem to prevent networking issues. If you do not need access time information, set noatime to increase performance.
For example:
#DEVICE                              PATH          TYPE   OPTIONS                                                  DUMP  FSCK
mon1:6789:/,mon2:6789:/,mon3:6789:/  /mnt/cephfs   ceph   _netdev,name=admin,secretfile=/home/secret.key,noatime  0     0
The file system will be mounted on the next boot.
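To test the new /etc/fstab entry without rebooting, you can ask the mount utility to read it directly. A sketch, assuming the mount point used in the example above:
[root@client ~]# mount /mnt/cephfs
Because the device, type, and options are taken from /etc/fstab, a successful mount here confirms that the entry is well formed before the next boot.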
3.5. Mounting the Ceph File System as a FUSE Client
You can mount the Ceph File System as a File System in User Space (FUSE) client:
3.5.1. Prerequisites
On the client node, enable the Red Hat Ceph Storage 3 Tools repository:
On Red Hat Enterprise Linux, use:
[root@client ~]# subscription-manager repos --enable=rhel-7-server-rhceph-3-tools-rpms
On Ubuntu, use:
[user@client ~]$ sudo bash -c 'umask 0077; echo deb https://customername:customerpasswd@rhcs.download.redhat.com/3-updates/Tools $(lsb_release -sc) main | tee /etc/apt/sources.list.d/Tools.list'
[user@client ~]$ sudo bash -c 'wget -O - https://www.redhat.com/security/fd431d51.txt | apt-key add -'
[user@client ~]$ sudo apt-get update
- Copy the client keyring to the client node. See Section 3.3, “Creating Ceph File System Client Users” for details.
Copy the Ceph configuration file from a Monitor node to the client node.
scp root@<monitor>:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
Replace <monitor> with the Monitor host name or IP address, for example:
[root@client ~]# scp root@192.168.0.1:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
Set the appropriate permissions for the configuration file:
[root@client ~]# chmod 644 /etc/ceph/ceph.conf
3.5.2. Manually Mounting the Ceph File System as a FUSE Client
To mount a Ceph File System as a File System in User Space (FUSE) client, use the ceph-fuse utility.
Prerequisites
On the node where the Ceph File System will be mounted, install the ceph-fuse package.
On Red Hat Enterprise Linux, use:
[root@client ~]# yum install ceph-fuse
On Ubuntu, use:
[user@client ~]$ sudo apt-get install ceph-fuse
Procedure
Create a directory to serve as a mount point. Note that if you used the path option with MDS capabilities, the mount point must be within what is specified by path.
mkdir <mount-point>
For example:
[root@client ~]# mkdir /mnt/mycephfs
Use the ceph-fuse utility to mount the Ceph File System.
ceph-fuse -n client.<client-name> <mount-point>
For example:
[root@client ~]# ceph-fuse -n client.1 /mnt/mycephfs
If you do not use the default name and location of the user keyring, that is /etc/ceph/ceph.client.<client-name/id>.keyring, use the --keyring option to specify the path to the user keyring, for example:
[root@client ~]# ceph-fuse -n client.1 --keyring=/etc/ceph/client.1.keyring /mnt/mycephfs
If you restricted the client to only mount and work within a certain directory, use the -r option to instruct the client to treat that path as its root:
ceph-fuse -n client.<client-name/id> <mount-point> -r <path>
For example, to instruct the client with ID 1 to treat the /home/cephfs/ directory as its root:
[root@client ~]# ceph-fuse -n client.1 /mnt/cephfs -r /home/cephfs
Verify that the file system is successfully mounted:
stat -f <mount-point>
For example:
[user@client ~]$ stat -f /mnt/cephfs
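To unmount a FUSE-mounted Ceph File System, use the fusermount utility. For example, assuming the /mnt/mycephfs mount point from the first example:
[root@client ~]# fusermount -u /mnt/mycephfs
Alternatively, running umount on the mount point as root has the same effect.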
Additional Resources
- The ceph-fuse(8) manual page
- The User Management chapter in the Administration Guide for Red Hat Ceph Storage 3
3.5.3. Automatically Mounting the Ceph File System as a FUSE Client
To automatically mount a Ceph File System on start, edit the /etc/fstab file.
Prerequisites
- Consider mounting the file system manually first. See Section 3.5.2, “Manually Mounting the Ceph File System as a FUSE Client” for details.
Procedure
On the client host, create a new directory for mounting the Ceph File System.
mkdir -p <mount-point>
For example:
[root@client ~]# mkdir -p /mnt/cephfs
Edit the /etc/fstab file as follows:
#DEVICE   PATH            TYPE        OPTIONS                                                 DUMP  FSCK
none      <mount-point>   fuse.ceph   _netdev,ceph.id=<user-id>[,ceph.conf=<path>],defaults   0     0
Specify the user ID, for example admin, not client.admin, and the mount point. Use the ceph.conf option if you store the Ceph configuration file in a location other than the default. In addition, specify required mount options. Consider using the _netdev option, which ensures that the file system is mounted after the networking subsystem, to prevent networking issues. For example:
#DEVICE   PATH        TYPE        OPTIONS                                                           DUMP  FSCK
none      /mnt/ceph   fuse.ceph   _netdev,ceph.id=admin,ceph.conf=/etc/ceph/cluster.conf,defaults   0     0
The file system will be mounted on the next boot.
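If you restricted the client to a certain directory and mount it manually with the -r option (see Section 3.5.2), the equivalent fstab setting is expected to be the ceph.client_mountpoint option. This is a sketch only; verify the option against the ceph-fuse(8) manual page for your release:
#DEVICE   PATH        TYPE        OPTIONS                                                              DUMP  FSCK
none      /mnt/ceph   fuse.ceph   _netdev,ceph.id=admin,ceph.client_mountpoint=/home/cephfs,defaults   0     0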
3.6. Creating Ceph File Systems with erasure coding
By default, Ceph uses replicated pools for data pools. You can also add an additional erasure-coded data pool, if needed. Ceph File Systems (CephFS) backed by erasure-coded pools use less overall storage compared to Ceph File Systems backed by replicated pools. While erasure-coded pools use less overall storage, they also use more memory and processor resources than replicated pools.
Ceph File Systems on erasure-coded pools are a Technology Preview. For more information see Erasure Coding with Overwrites (Technology Preview).
Ceph File Systems on erasure-coded pools require pools using the BlueStore object store. For more information see Erasure Coding with Overwrites (Technology Preview).
Red Hat recommends using the replicated pool as the default data pool.
Prerequisites
- A running Red Hat Ceph Storage Cluster.
- Pools using BlueStore OSDs.
Procedure
Create an erasure-coded data pool for Ceph File System:
ceph osd pool create $DATA_POOL $PG_NUM erasure
For example, to create an erasure-coded pool named cephfs-data-ec with 64 placement groups:
[root@monitor ~]# ceph osd pool create cephfs-data-ec 64 erasure
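The pool created above uses the default erasure code profile. If you need different k and m values or a different failure domain, create a profile first and pass its name when creating the pool. A sketch, assuming a hypothetical profile named cephfs-ec-profile and enough hosts to hold k+m chunks:
[root@monitor ~]# ceph osd erasure-code-profile set cephfs-ec-profile k=4 m=2 crush-failure-domain=host
[root@monitor ~]# ceph osd pool create cephfs-data-ec 64 64 erasure cephfs-ec-profile
Note that the profile of an existing erasure-coded pool cannot be changed, so choose it before creating the pool.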
Create a replicated metadata pool for the Ceph File System:
ceph osd pool create $METADATA_POOL $PG_NUM
For example, to create a pool named cephfs-metadata with 64 placement groups:
[root@monitor ~]# ceph osd pool create cephfs-metadata 64
Enable overwrites on the erasure-coded pool:
ceph osd pool set $DATA_POOL allow_ec_overwrites true
For example, to enable overwrites on an erasure-coded pool named cephfs-data-ec:
[root@monitor ~]# ceph osd pool set cephfs-data-ec allow_ec_overwrites true
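To confirm that the flag took effect, you can inspect the pool in the OSD map; the pool's flags are expected to include ec_overwrites. A quick check, assuming the pool name used above:
[root@monitor ~]# ceph osd dump | grep cephfs-data-ec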
Create the Ceph File System:
ceph fs new $FS_EC $METADATA_POOL $DATA_POOL
Note: Using an erasure-coded pool for the default data pool is discouraged, but you can use --force to override this default. Specify the name of the Ceph File System, and the metadata and data pools, for example:
[root@monitor ~]# ceph fs new cephfs-ec cephfs-metadata cephfs-data-ec --force
Verify that one or more MDSs enter the active state based on your configuration:
ceph fs status $FS_EC
Specify the name of the Ceph File System, for example:
[root@monitor ~]# ceph fs status cephfs-ec
If you want to add an additional erasure-coded data pool to the existing file system:
Create an erasure-coded data pool for Ceph File System:
ceph osd pool create $DATA_POOL $PG_NUM erasure
For example, to create an erasure-coded pool named cephfs-data-ec1 with 64 placement groups:
[root@monitor ~]# ceph osd pool create cephfs-data-ec1 64 erasure
Enable overwrites on the erasure-coded pool:
ceph osd pool set $DATA_POOL allow_ec_overwrites true
For example, to enable overwrites on an erasure-coded pool named cephfs-data-ec1:
[root@monitor ~]# ceph osd pool set cephfs-data-ec1 allow_ec_overwrites true
Add the newly created pool to an existing Ceph File System:
ceph fs add_data_pool $FS_EC $DATA_POOL
For example, to add an erasure-coded pool named cephfs-data-ec1:
[root@monitor ~]# ceph fs add_data_pool cephfs-ec cephfs-data-ec1
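Adding a data pool does not move any data by itself; new files continue to go to the default data pool unless a directory is pointed at the new pool through a file layout. A sketch, assuming the file system is mounted at /mnt/cephfs, the attr package (setfattr) is installed on the client, the directory name ecdir is a hypothetical example, and the client's MDS capability includes the p flag (for example allow rwp), which is required to change layouts:
[root@client ~]# mkdir /mnt/cephfs/ecdir
[root@client ~]# setfattr -n ceph.dir.layout.pool -v cephfs-data-ec1 /mnt/cephfs/ecdir
New files created under /mnt/cephfs/ecdir are then stored in the cephfs-data-ec1 pool.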
Verify that one or more MDSs enter the active state based on your configuration.
ceph fs status $FS_EC
Specify the name of the Ceph File System, for example:
[root@monitor ~]# ceph fs status cephfs-ec
Additional Resources
- See the Erasure-Coded Pools section in the Red Hat Ceph Storage Storage Strategies Guide for more information.
- See the Erasure Coding with Overwrites section in the Red Hat Ceph Storage Storage Strategies Guide for more information.