Chapter 3. Deployment of the Ceph File System
As a storage administrator, you can deploy Ceph File Systems (CephFS) in a storage environment and have clients mount those Ceph File Systems to meet their storage needs.
At a high level, the deployment workflow consists of three steps:
- Create Ceph File Systems on a Ceph Monitor node.
- Create a Ceph client user with the appropriate capabilities, and make the client key available on the node where the Ceph File System will be mounted.
- Mount CephFS on a dedicated node, using either a kernel client or a File System in User Space (FUSE) client.
3.1. Prerequisites
- A running and healthy Red Hat Ceph Storage cluster.
- Installation and configuration of the Ceph Metadata Server daemon (ceph-mds).
3.2. Layout, quota, snapshot, and network restrictions
These user capabilities can help you restrict access to a Ceph File System (CephFS) based on your requirements.
All user capability flags, except rw, must be specified in alphabetical order.
Layouts and Quotas
When using layouts or quotas, clients require the p flag in addition to rw capabilities. The p flag restricts setting all the attributes that are set by special extended attributes, those with a ceph. prefix, as well as other means of setting these fields, such as openc operations with layouts.
Example
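For instance, capabilities along the following lines give client.0 the p flag while client.1 has only rw; the keys shown are placeholders:
client.0
key: AQAz7EVWygILFRAAdIcuJ10opU/JKyfFmxhuaw==
caps: [mds] allow rwp
caps: [mon] allow r
caps: [osd] allow rw tag cephfs data=cephfs_a
client.1
key: AQAz7EVWygILFRAAdIcuJ10opU/JKyfFmxhuaw==
caps: [mds] allow rw
caps: [mon] allow r
caps: [osd] allow rw tag cephfs data=cephfs_a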
In this example, client.0 can modify layouts and quotas on the file system cephfs_a, but client.1 cannot.
Snapshots
When creating or deleting snapshots, clients require the s flag, in addition to rw capabilities. When the capability string also contains the p flag, the s flag must appear after it.
Example
client.0
key: AQAz7EVWygILFRAAdIcuJ10opU/JKyfFmxhuaw==
caps: [mds] allow rw, allow rws path=/temp
caps: [mon] allow r
caps: [osd] allow rw tag cephfs data=cephfs_a
In this example, client.0 can create or delete snapshots in the temp directory of file system cephfs_a.
Network
You can restrict clients to connecting only from a particular network.
Example
client.0
key: AQAz7EVWygILFRAAdIcuJ10opU/JKyfFmxhuaw==
caps: [mds] allow r network 10.0.0.0/8, allow rw path=/bar network 10.0.0.0/8
caps: [mon] allow r network 10.0.0.0/8
caps: [osd] allow rw tag cephfs data=cephfs_a network 10.0.0.0/8
The optional network and prefix length are specified in CIDR notation, for example, 10.3.0.0/16.
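Such capabilities can also be granted when the client is first authorized; a sketch using ceph fs authorize with an appended network clause (the file system, client, and network values are illustrative):
[root@mon ~]# ceph fs authorize cephfs_a client.0 / rw network 10.0.0.0/8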
Additional Resources
- See the Creating client users for a Ceph File System section in the Red Hat Ceph Storage File System Guide for details on setting the Ceph user capabilities.
3.3. Creating Ceph File Systems
You can create multiple Ceph File Systems (CephFS) on a Ceph Monitor node.
Prerequisites
- A running and healthy Red Hat Ceph Storage cluster.
- Installation and configuration of the Ceph Metadata Server daemon (ceph-mds).
- Root-level access to a Ceph Monitor node.
- Root-level access to a Ceph client node.
Procedure
Configure the client node to use the Ceph storage cluster.
Enable the Red Hat Ceph Storage Tools repository:
Red Hat Enterprise Linux 8
[root@client ~]# subscription-manager repos --enable=rhceph-5-tools-for-rhel-8-x86_64-rpms
Red Hat Enterprise Linux 9
[root@client ~]# subscription-manager repos --enable=rhceph-5-tools-for-rhel-9-x86_64-rpms
Install the ceph-fuse package:
[root@client ~]# dnf install ceph-fuse
Copy the Ceph client keyring from the Ceph Monitor node to the client node:
Syntax
scp root@MONITOR_NODE_NAME:/etc/ceph/KEYRING_FILE /etc/ceph/
Replace MONITOR_NODE_NAME with the Ceph Monitor host name or IP address.
Example
[root@client ~]# scp root@192.168.0.1:/etc/ceph/ceph.client.1.keyring /etc/ceph/
Copy the Ceph configuration file from a Ceph Monitor node to the client node:
Syntax
scp root@MONITOR_NODE_NAME:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
Replace MONITOR_NODE_NAME with the Ceph Monitor host name or IP address.
Example
[root@client ~]# scp root@192.168.0.1:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
Set the appropriate permissions for the configuration file:
[root@client ~]# chmod 644 /etc/ceph/ceph.conf
Create a Ceph File System:
Syntax
ceph fs volume create FILE_SYSTEM_NAME
Example
[root@mon ~]# ceph fs volume create cephfs01
Repeat this step to create additional file systems.
Note: By running this command, Ceph automatically creates the new pools and deploys a new Ceph Metadata Server (MDS) daemon to support the new file system. This also configures the MDS affinity accordingly.
Verify access to the new Ceph File System from a Ceph client.
Authorize a Ceph client to access the new file system:
Syntax
ceph fs authorize FILE_SYSTEM_NAME CLIENT_NAME DIRECTORY PERMISSIONS
Important: The supported values for PERMISSIONS are r (read) and rw (read/write).
Example
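A minimal authorization, assuming the file system cephfs01 and a client named client.1, looks like this:
[root@mon ~]# ceph fs authorize cephfs01 client.1 / rw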
Note: Optionally, you can add a safety measure by specifying the root_squash option. This prevents accidental deletion scenarios by disallowing clients with a uid=0 or gid=0 to do write operations, but still allows read operations.
Example
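One possible form, assuming the same file system and client as above, applies root_squash at the root while leaving the /volumes tree unaffected:
[root@mon ~]# ceph fs authorize cephfs01 client.1 / rw root_squash /volumes rw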
In this example, root_squash is enabled for the file system cephfs01, except within the /volumes directory tree.
Important: The Ceph client can only see the CephFS it is authorized for.
Copy the Ceph user’s keyring to the Ceph client node:
Syntax
ceph auth get CLIENT_NAME > OUTPUT_FILE_NAME
scp OUTPUT_FILE_NAME TARGET_NODE_NAME:/etc/ceph
Example
[root@mon ~]# ceph auth get client.1 > ceph.client.1.keyring
exported keyring for client.1
[root@mon ~]# scp ceph.client.1.keyring client:/etc/ceph
root@client's password:
ceph.client.1.keyring                              100%  178   333.0KB/s   00:00
On the Ceph client node, create a new directory:
Syntax
mkdir PATH_TO_NEW_DIRECTORY_NAME
Example
[root@client ~]# mkdir /mnt/mycephfs
On the Ceph client node, mount the new Ceph File System:
Syntax
ceph-fuse PATH_TO_NEW_DIRECTORY_NAME -n CEPH_USER_NAME --client-fs=FILE_SYSTEM_NAME
Example
[root@client ~]# ceph-fuse /mnt/mycephfs/ -n client.1 --client-fs=cephfs01
ceph-fuse[555001]: starting ceph client
2022-05-09T07:33:27.158+0000 7f11feb81200 -1 init, newargv = 0x55fc4269d5d0 newargc=15
ceph-fuse[555001]: starting fuse
- On the Ceph client node, list the directory contents of the new mount point, or create a file on the new mount point, as in the check below.
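A quick check that the mount point is browsable and writable (the file name is arbitrary):
[root@client ~]# ls /mnt/mycephfs
[root@client ~]# touch /mnt/mycephfs/testfile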
3.4. Adding an erasure-coded pool to a Ceph File System
By default, Ceph uses replicated pools for data pools. You can also add an additional erasure-coded data pool to the Ceph File System, if needed. Ceph File Systems (CephFS) backed by erasure-coded pools use less overall storage compared to Ceph File Systems backed by replicated pools. While erasure-coded pools use less overall storage, they also use more memory and processor resources than replicated pools.
CephFS EC pools are for archival purposes only.
For production environments, Red Hat recommends using the default replicated data pool for CephFS. The creation of inodes in CephFS creates at least one object in the default data pool. It is better to use a replicated pool for the default data to improve small-object write performance, and to improve read performance for updating backtraces.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- An existing Ceph File System.
- Pools using BlueStore OSDs.
- Root-level access to a Ceph Monitor node.
- Installation of the attr package.
Procedure
Create an erasure-coded data pool for CephFS:
Syntax
ceph osd pool create DATA_POOL_NAME erasure
Example
[root@mon ~]# ceph osd pool create cephfs-data-ec01 erasure
pool 'cephfs-data-ec01' created
Verify the pool was added:
Example
[root@mon ~]# ceph osd lspools
Enable overwrites on the erasure-coded pool:
Syntax
ceph osd pool set DATA_POOL_NAME allow_ec_overwrites true
Example
[root@mon ~]# ceph osd pool set cephfs-data-ec01 allow_ec_overwrites true
set pool 15 allow_ec_overwrites to true
Verify the status of the Ceph File System:
Syntax
ceph fs status FILE_SYSTEM_NAME
Example
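For instance, with the file system used in this procedure (the name is illustrative), the command prints the MDS ranks and the metadata and data pools attached to the file system:
[root@mon ~]# ceph fs status cephfs-ec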
Add the erasure-coded data pool to the existing CephFS:
Syntax
ceph fs add_data_pool FILE_SYSTEM_NAME DATA_POOL_NAME
Example
[root@mon ~]# ceph fs add_data_pool cephfs-ec cephfs-data-ec01
This example adds the new data pool, cephfs-data-ec01, to the existing erasure-coded file system, cephfs-ec.
Verify that the erasure-coded pool was added to the Ceph File System:
Syntax
ceph fs status FILE_SYSTEM_NAME
Example
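Running the status command again should now list cephfs-data-ec01 among the file system's data pools, for example:
[root@mon ~]# ceph fs status cephfs-ec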
Set the file layout on a new directory:
Syntax
mkdir PATH_TO_DIRECTORY
setfattr -n ceph.dir.layout.pool -v DATA_POOL_NAME PATH_TO_DIRECTORY
Example
[root@mon ~]# mkdir /mnt/cephfs/newdir
[root@mon ~]# setfattr -n ceph.dir.layout.pool -v cephfs-data-ec01 /mnt/cephfs/newdir
In this example, all new files created in the /mnt/cephfs/newdir directory inherit the directory layout and place their data in the newly added erasure-coded pool.
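With the attr package installed, you can confirm the layout setting with getfattr; output along these lines is expected for the paths used above:
[root@mon ~]# getfattr -n ceph.dir.layout.pool /mnt/cephfs/newdir
getfattr: Removing leading '/' from absolute path names
# file: mnt/cephfs/newdir
ceph.dir.layout.pool="cephfs-data-ec01"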
Additional Resources
- See The Ceph File System Metadata Server chapter in the Red Hat Ceph Storage File System Guide for more information about CephFS MDS.
- See the Creating Ceph File Systems section in the Red Hat Ceph Storage File System Guide for more information.
- See the Erasure Code Pools chapter in the Red Hat Ceph Storage Storage Strategies Guide for more information.
- See the Erasure Coding with Overwrites section in the Red Hat Ceph Storage Storage Strategies Guide for more information.
3.5. Creating client users for a Ceph File System
Red Hat Ceph Storage uses cephx for authentication, which is enabled by default. To use cephx with the Ceph File System, create a user with the correct authorization capabilities on a Ceph Monitor node and make its key available on the node where the Ceph File System will be mounted.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Installation and configuration of the Ceph Metadata Server daemon (ceph-mds).
- Root-level access to a Ceph Monitor node.
- Root-level access to a Ceph client node.
Procedure
Log into the Cephadm shell on the monitor node:
Example
[root@host01 ~]# cephadm shell
On a Ceph Monitor node, create a client user:
Syntax
ceph fs authorize FILE_SYSTEM_NAME client.CLIENT_NAME /DIRECTORY CAPABILITY [/DIRECTORY CAPABILITY] ...
To restrict the client to only writing in the temp directory of file system cephfs_a:
Example
[ceph: root@host01 /]# ceph fs authorize cephfs_a client.1 / r /temp rw
client.1
    key = AQBSdFhcGZFUDRAAcKhG9Cl2HPiDMMRv4DC43A==
To completely restrict the client to the temp directory, remove the root (/) directory:
Example
[ceph: root@host01 /]# ceph fs authorize cephfs_a client.1 /temp rw
Note: Supplying all or asterisk as the file system name grants access to every file system. Typically, it is necessary to quote the asterisk to protect it from the shell.
Verify the created key:
Syntax
ceph auth get client.ID
Example
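The output is a keyring-style listing; with the client created earlier in this procedure it resembles the following (the exact capability strings depend on the authorization that was granted):
[ceph: root@host01 /]# ceph auth get client.1
[client.1]
    key = AQBSdFhcGZFUDRAAcKhG9Cl2HPiDMMRv4DC43A==
    caps mds = "allow r fsname=cephfs_a, allow rw fsname=cephfs_a path=/temp"
    caps mon = "allow r fsname=cephfs_a"
    caps osd = "allow rw tag cephfs data=cephfs_a"
exported keyring for client.1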
Copy the keyring to the client.
On the Ceph Monitor node, export the keyring to a file:
Syntax
ceph auth get client.ID -o ceph.client.ID.keyring
Example
[ceph: root@host01 /]# ceph auth get client.1 -o ceph.client.1.keyring
exported keyring for client.1
Copy the client keyring from the Ceph Monitor node to the /etc/ceph/ directory on the client node:
Syntax
scp /ceph.client.ID.keyring root@CLIENT_NODE_NAME:/etc/ceph/ceph.client.ID.keyring
Replace CLIENT_NODE_NAME with the Ceph client node name or IP.
Example
[ceph: root@host01 /]# scp /ceph.client.1.keyring root@client01:/etc/ceph/ceph.client.1.keyring
From the client node, set the appropriate permissions for the keyring file:
Syntax
chmod 644 ceph.client.ID.keyring
Example
[root@client01 ~]# chmod 644 /etc/ceph/ceph.client.1.keyring
Additional Resources
- See the Ceph user management chapter in the Red Hat Ceph Storage Administration Guide for more details.
3.6. Mounting the Ceph File System as a kernel client
You can mount the Ceph File System (CephFS) as a kernel client, either manually or automatically on system boot.
Clients running on other Linux distributions, aside from Red Hat Enterprise Linux, are permitted but not supported. If issues are found in the CephFS Metadata Server or other parts of the storage cluster when using these clients, Red Hat will address them. If the cause is found to be on the client side, then the issue will have to be addressed by the kernel vendor of the Linux distribution.
Prerequisites
- Root-level access to a Linux-based client node.
- Root-level access to a Ceph Monitor node.
- An existing Ceph File System.
Procedure
Configure the client node to use the Ceph storage cluster.
Enable the Red Hat Ceph Storage Tools repository:
Red Hat Enterprise Linux 8
[root@client ~]# subscription-manager repos --enable=rhceph-5-tools-for-rhel-8-x86_64-rpms
Red Hat Enterprise Linux 9
[root@client ~]# subscription-manager repos --enable=rhceph-5-tools-for-rhel-9-x86_64-rpms
Install the ceph-common package:
[root@client01 ~]# dnf install ceph-common
Log into the Cephadm shell on the monitor node:
Example
[root@host01 ~]# cephadm shell
Copy the Ceph client keyring from the Ceph Monitor node to the client node:
Syntax
scp /ceph.client.ID.keyring root@CLIENT_NODE_NAME:/etc/ceph/ceph.client.ID.keyring
Replace CLIENT_NODE_NAME with the Ceph client host name or IP address.
Example
[ceph: root@host01 /]# scp /ceph.client.1.keyring root@client01:/etc/ceph/ceph.client.1.keyring
Copy the Ceph configuration file from a Ceph Monitor node to the client node:
Syntax
scp /etc/ceph/ceph.conf root@CLIENT_NODE_NAME:/etc/ceph/ceph.conf
Replace CLIENT_NODE_NAME with the Ceph client host name or IP address.
Example
[ceph: root@host01 /]# scp /etc/ceph/ceph.conf root@client01:/etc/ceph/ceph.conf
From the client node, set the appropriate permissions for the configuration file:
[root@client01 ~]# chmod 644 /etc/ceph/ceph.conf
- Choose either automatic or manual mounting.
Manually Mounting
Create a mount directory on the client node:
Syntax
mkdir -p MOUNT_POINT
Example
[root@client01 ~]# mkdir -p /mnt/cephfs
Mount the Ceph File System. To specify multiple Ceph Monitor addresses, separate them with commas in the mount command, specify the mount point, and set the client name:
Note: As of Red Hat Ceph Storage 4.1, mount.ceph can read keyring files directly. As such, a secret file is no longer necessary. Just specify the client ID with name=CLIENT_ID, and mount.ceph will find the right keyring file.
Syntax
mount -t ceph MONITOR-1_NAME:6789,MONITOR-2_NAME:6789,MONITOR-3_NAME:6789:/ MOUNT_POINT -o name=CLIENT_ID,fs=FILE_SYSTEM_NAME
Example
[root@client01 ~]# mount -t ceph mon1:6789,mon2:6789,mon3:6789:/ /mnt/cephfs -o name=1,fs=cephfs01
Note: You can configure a DNS server so that a single host name resolves to multiple IP addresses. Then you can use that single host name with the mount command, instead of supplying a comma-separated list.
Note: You can also replace the Monitor host names with the string :/ and mount.ceph will read the Ceph configuration file to determine which Monitors to connect to.
Note: You can set the nowsync option to asynchronously execute file creation and removal on the Red Hat Ceph Storage clusters. This improves the performance of some workloads by avoiding round-trip latency for these system calls without impacting consistency. The nowsync option requires kernel clients with Red Hat Enterprise Linux 8.4 or later.
Example
[root@client01 ~]# mount -t ceph mon1:6789,mon2:6789,mon3:6789:/ /mnt/cephfs -o nowsync,name=1,fs=cephfs01
Verify that the file system is successfully mounted:
Syntax
stat -f MOUNT_POINT
Example
[root@client01 ~]# stat -f /mnt/cephfs
Automatically Mounting
On the client host, create a new directory for mounting the Ceph File System.
Syntax
mkdir -p MOUNT_POINT
Example
[root@client01 ~]# mkdir -p /mnt/cephfs
Edit the /etc/fstab file as follows:
Syntax
#DEVICE                                         PATH          TYPE  OPTIONS
MON_0_HOST:PORT,                                MOUNT_POINT   ceph  name=CLIENT_ID,
MON_1_HOST:PORT,                                                    ceph.client_mountpoint=/VOL/SUB_VOL_GROUP/SUB_VOL/UID_SUB_VOL,
MON_2_HOST:PORT:/q[_VOL_]/SUB_VOL/UID_SUB_VOL,                      fs=FILE_SYSTEM_NAME,
                                                                    [ADDITIONAL_OPTIONS]
The first column sets the Ceph Monitor host names and the port number.
The second column sets the mount point.
The third column sets the file system type, in this case, ceph, for CephFS.
The fourth column sets the various options, such as the user name and the secret file using the name and secretfile options. You can also set specific volumes, sub-volume groups, and sub-volumes using the ceph.client_mountpoint option.
Set the _netdev option to ensure that the file system is mounted after the networking subsystem starts, to prevent hanging and networking issues. If you do not need access time information, then setting the noatime option can increase performance.
Set the fifth and sixth columns to zero.
Example
#DEVICE       PATH          TYPE  OPTIONS                                                        DUMP  FSCK
mon1:6789,    /mnt/cephfs   ceph  name=1,                                                        0     0
mon2:6789,                        ceph.client_mountpoint=/my_vol/my_sub_vol_group/my_sub_vol/0,
mon3:6789:/                       fs=cephfs01,
                                  _netdev,noatime
The Ceph File System will be mounted on the next system boot.
Note: As of Red Hat Ceph Storage 4.1, mount.ceph can read keyring files directly. As such, a secret file is no longer necessary. Just specify the client ID with name=CLIENT_ID, and mount.ceph will find the right keyring file.
Note: You can also replace the Monitor host names with the string :/ and mount.ceph will read the Ceph configuration file to determine which Monitors to connect to.
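To test the new entry without rebooting, you can mount the file system by its mount point; mount then reads the device and options from /etc/fstab:
[root@client01 ~]# mount /mnt/cephfs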
Additional Resources
- See the mount(8) manual page.
- See the Ceph user management chapter in the Red Hat Ceph Storage Administration Guide for more details on creating a Ceph user.
- See the Creating Ceph File Systems section of the Red Hat Ceph Storage File System Guide for details.
3.7. Mounting the Ceph File System as a FUSE client
You can mount the Ceph File System (CephFS) as a File System in User Space (FUSE) client, either manually or automatically on system boot.
Prerequisites
- Root-level access to a Linux-based client node.
- Root-level access to a Ceph Monitor node.
- An existing Ceph File System.
Procedure
Configure the client node to use the Ceph storage cluster.
Enable the Red Hat Ceph Storage Tools repository:
Red Hat Enterprise Linux 8
[root@client ~]# subscription-manager repos --enable=rhceph-5-tools-for-rhel-8-x86_64-rpms
Red Hat Enterprise Linux 9
[root@client ~]# subscription-manager repos --enable=rhceph-5-tools-for-rhel-9-x86_64-rpms
Install the ceph-fuse package:
[root@client01 ~]# dnf install ceph-fuse
Log into the Cephadm shell on the monitor node:
Example
[root@host01 ~]# cephadm shell
Copy the Ceph client keyring from the Ceph Monitor node to the client node:
Syntax
scp /ceph.client.ID.keyring root@CLIENT_NODE_NAME:/etc/ceph/ceph.client.ID.keyring
Replace CLIENT_NODE_NAME with the Ceph client host name or IP address.
Example
[ceph: root@host01 /]# scp /ceph.client.1.keyring root@client01:/etc/ceph/ceph.client.1.keyring
Copy the Ceph configuration file from a Ceph Monitor node to the client node:
Syntax
scp /etc/ceph/ceph.conf root@CLIENT_NODE_NAME:/etc/ceph/ceph.conf
Replace CLIENT_NODE_NAME with the Ceph client host name or IP address.
Example
[ceph: root@host01 /]# scp /etc/ceph/ceph.conf root@client01:/etc/ceph/ceph.conf
From the client node, set the appropriate permissions for the configuration file:
[root@client01 ~]# chmod 644 /etc/ceph/ceph.conf
- Choose either automatic or manual mounting.
Manually Mounting
On the client node, create a directory for the mount point:
Syntax
mkdir PATH_TO_MOUNT_POINT
Example
[root@client01 ~]# mkdir /mnt/mycephfs
Note: If you used the path option with MDS capabilities, then the mount point must be within what is specified by the path.
Use the ceph-fuse utility to mount the Ceph File System.
Syntax
ceph-fuse -n client.CLIENT_ID --client_fs FILE_SYSTEM_NAME MOUNT_POINT
Example
[root@client01 ~]# ceph-fuse -n client.1 --client_fs cephfs01 /mnt/mycephfs
Note: If you do not use the default name and location of the user keyring, that is /etc/ceph/ceph.client.CLIENT_ID.keyring, then use the --keyring option to specify the path to the user keyring, for example:
Example
[root@client01 ~]# ceph-fuse -n client.1 --keyring=/etc/ceph/client.1.keyring /mnt/mycephfs
Note: Use the -r option to instruct the client to treat that path as its root:
Syntax
ceph-fuse -n client.CLIENT_ID MOUNT_POINT -r PATH
Example
[root@client01 ~]# ceph-fuse -n client.1 /mnt/cephfs -r /home/cephfs
Note: If you want to automatically reconnect an evicted Ceph client, then add the --client_reconnect_stale=true option.
Example
[root@client01 ~]# ceph-fuse -n client.1 /mnt/cephfs --client_reconnect_stale=true
Verify that the file system is successfully mounted:
Syntax
stat -f MOUNT_POINT
Example
[root@client01 ~]# stat -f /mnt/cephfs
Automatically Mounting
On the client node, create a directory for the mount point:
Syntax
mkdir PATH_TO_MOUNT_POINT
Example
[root@client01 ~]# mkdir /mnt/mycephfs
Note: If you used the path option with MDS capabilities, then the mount point must be within what is specified by the path.
Edit the /etc/fstab file as follows:
Syntax
#DEVICE            PATH          TYPE       OPTIONS                                                                                 DUMP  FSCK
HOST_NAME:PORT,    MOUNT_POINT   fuse.ceph  ceph.id=CLIENT_ID,                                                                      0     0
HOST_NAME:PORT,                             ceph.client_mountpoint=/VOL/SUB_VOL_GROUP/SUB_VOL/UID_SUB_VOL,
HOST_NAME:PORT:/                            ceph.client_fs=FILE_SYSTEM_NAME,ceph.name=USERNAME,ceph.keyring=/etc/ceph/KEYRING_FILE,
                                            [ADDITIONAL_OPTIONS]
The second column sets the mount point
The third column sets the file system type, in this case,
fuse.ceph, for CephFS.The fourth column sets the various options, such as the user name and the keyring using the
ceph.nameandceph.keyringoptions. You can also set specific volumes, sub-volume groups, and sub-volumes using theceph.client_mountpointoption. To specify which Ceph File System to access, use theceph.client_fsoption. Set the_netdevoption to ensure that the file system is mounted after the networking subsystem starts to prevent hanging and networking issues. If you do not need access time information, then setting thenoatimeoption can increase performance. If you want to automatically reconnect after an eviction, then set theclient_reconnect_stale=trueoption.Set the fifth and sixth columns to zero.
Example
#DEVICE        PATH            TYPE       OPTIONS                                                                             DUMP  FSCK
mon1:6789,     /mnt/mycephfs   fuse.ceph  ceph.id=1,                                                                          0     0
mon2:6789,                                ceph.client_mountpoint=/my_vol/my_sub_vol_group/my_sub_vol/0,
mon3:6789:/                               ceph.client_fs=cephfs01,ceph.name=client.1,ceph.keyring=/etc/ceph/client1.keyring,
                                          _netdev,defaults
The Ceph File System will be mounted on the next system boot.
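As with the kernel client, you can test the entry before rebooting by mounting it by its mount point, which makes mount read the options from /etc/fstab, and then checking the mount with stat:
[root@client01 ~]# mount /mnt/mycephfs
[root@client01 ~]# stat -f /mnt/mycephfs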