Chapter 3. Deployment of the Ceph File System
As a storage administrator, you can deploy Ceph File Systems (CephFS) in a storage environment and have clients mount those Ceph File Systems to meet their storage needs.
The deployment workflow consists of three steps:
- Create Ceph File Systems on a Ceph Monitor node.
- Create a Ceph client user with the appropriate capabilities, and make the client key available on the node where the Ceph File System will be mounted.
- Mount CephFS on a dedicated node, using either a kernel client or a File System in User Space (FUSE) client.
Prerequisites
- A running and healthy Red Hat Ceph Storage cluster.
- Installation and configuration of the Ceph Metadata Server daemon (ceph-mds).
3.1. Layout, quota, snapshot, and network restrictions
These user capabilities can help you restrict access to a Ceph File System (CephFS) based on your requirements.
All user capability flags, except rw, must be specified in alphabetical order.
Layouts and Quotas
When using layouts or quotas, clients require the p flag, in addition to rw capabilities. The p flag restricts the setting of all attributes that are controlled by special extended attributes, that is, those with a ceph. prefix. It also restricts other means of setting these fields, such as openc operations with layouts.
Example
client.0
    key: AQAz7EVWygILFRAAdIcuJ10opU/JKyfFmxhuaw==
    caps: [mds] allow rwp
    caps: [mon] allow r
    caps: [osd] allow rw tag cephfs data=cephfs_a
client.1
    key: AQAz7EVWygILFRAAdIcuJ11opU/JKyfFmxhuaw==
    caps: [mds] allow rw
    caps: [mon] allow r
    caps: [osd] allow rw tag cephfs data=cephfs_a
In this example, client.0 can modify layouts and quotas on the file system cephfs_a, but client.1 cannot.
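A capability string like the one for client.0 above can typically be granted with ceph fs authorize, and a client holding the p flag can then manage quotas through the ceph.-prefixed extended attributes. The following is a sketch only; the client mount point, directory, and quota value are placeholders.
# Sketch: grant rw plus the p flag on the root of cephfs_a
ceph fs authorize cephfs_a client.0 / rwp
# On a client mount, set a 100 GiB quota through a ceph.-prefixed extended attribute
setfattr -n ceph.quota.max_bytes -v 107374182400 /mnt/cephfs/somedir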
Snapshots
When creating or deleting snapshots, clients require the s flag, in addition to rw capabilities. When the capability string also contains the p flag, the s flag must appear after it.
Example
client.0
    key: AQAz7EVWygILFRAAdIcuJ10opU/JKyfFmxhuaw==
    caps: [mds] allow rw, allow rws path=/temp
    caps: [mon] allow r
    caps: [osd] allow rw tag cephfs data=cephfs_a
In this example, client.0 can create or delete snapshots in the temp directory of file system cephfs_a.
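A capability string like the one above can typically be granted with ceph fs authorize, and a client holding the s flag can then create a snapshot by creating a directory under the special .snap directory. This is a sketch only; the client mount point and snapshot name are placeholders.
# Sketch: grant rw on the root plus rws on the /temp directory of cephfs_a
ceph fs authorize cephfs_a client.0 / rw /temp rws
# On a client mount, create a snapshot of /temp by making a directory under .snap
mkdir /mnt/cephfs/temp/.snap/my-snapshot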
Network
Clients can also be restricted to connecting only from a particular network.
Example
client.0
    key: AQAz7EVWygILFRAAdIcuJ10opU/JKyfFmxhuaw==
    caps: [mds] allow r network 10.0.0.0/8, allow rw path=/bar network 10.0.0.0/8
    caps: [mon] allow r network 10.0.0.0/8
    caps: [osd] allow rw tag cephfs data=cephfs_a network 10.0.0.0/8
The optional network and prefix length are specified in CIDR notation, for example, 10.3.0.0/16.
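Capabilities that include network clauses, such as the ones shown above, can typically be applied to an existing client with ceph auth caps, which replaces all of the client's capabilities at once. The following is a sketch for the client.0 example; the CIDR, path, and pool tag are taken from the example above.
# Sketch: apply network-restricted capabilities to an existing client
ceph auth caps client.0 \
  mds 'allow r network 10.0.0.0/8, allow rw path=/bar network 10.0.0.0/8' \
  mon 'allow r network 10.0.0.0/8' \
  osd 'allow rw tag cephfs data=cephfs_a network 10.0.0.0/8'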
Additional Resources
- See the Creating client users for a Ceph File System section in the Red Hat Ceph Storage File System Guide for details on setting the Ceph user capabilities.
3.2. Creating Ceph File Systems
You can create multiple Ceph File Systems (CephFS) on a Ceph Monitor node.
Prerequisites
- A running and healthy Red Hat Ceph Storage cluster.
- Installation and configuration of the Ceph Metadata Server daemon (ceph-mds).
- Root-level access to a Ceph Monitor node.
- Root-level access to a Ceph client node.
Procedure
Configure the client node to use the Ceph storage cluster.
Enable the Red Hat Ceph Storage Tools repository:
Red Hat Enterprise Linux 8
[root@client01 ~]# subscription-manager repos --enable=rhceph-6-tools-for-rhel-8-x86_64-rpms
Red Hat Enterprise Linux 9
[root@client01 ~]# subscription-manager repos --enable=rhceph-6-tools-for-rhel-9-x86_64-rpms
Install the ceph-fuse package:
[root@client ~]# dnf install ceph-fuse
Copy the Ceph client keyring from the Ceph Monitor node to the client node:
Syntax
scp root@MONITOR_NODE_NAME:/etc/ceph/KEYRING_FILE /etc/ceph/
Replace MONITOR_NODE_NAME with the Ceph Monitor host name or IP address.
Example
[root@client ~]# scp root@192.168.0.1:/etc/ceph/ceph.client.1.keyring /etc/ceph/
Copy the Ceph configuration file from a Ceph Monitor node to the client node:
Syntax
scp root@MONITOR_NODE_NAME:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
Replace MONITOR_NODE_NAME with the Ceph Monitor host name or IP address.
Example
[root@client ~]# scp root@192.168.0.1:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
Set the appropriate permissions for the configuration file:
[root@client ~]# chmod 644 /etc/ceph/ceph.conf
Create a Ceph File System:
Syntax
ceph fs volume create FILE_SYSTEM_NAME
Example
[root@mon ~]# ceph fs volume create cephfs01
Repeat this step to create additional file systems.
Note: By running this command, Ceph automatically creates the new pools, and deploys a new Ceph Metadata Server (MDS) daemon to support the new file system. This also configures the MDS affinity accordingly.
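To see what the command created, you can list the file systems with their pools and check the MDS status. This is an optional sketch using the file system created above.
# Sketch: list CephFS file systems with their data and metadata pools, then show MDS status
ceph fs ls
ceph fs status cephfs01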
Verify access to the new Ceph File System from a Ceph client.
Authorize a Ceph client to access the new file system:
Syntax
ceph fs authorize FILE_SYSTEM_NAME CLIENT_NAME DIRECTORY PERMISSIONS
Example
[root@mon ~]# ceph fs authorize cephfs01 client.1 / rw
[client.1]
    key = BQAmthpf81M+JhAAiHDYQkMiCq3x+J0n9e8REK==

[root@mon ~]# ceph auth get client.1
exported keyring for client.1
[client.1]
    key = BQAmthpf81M+JhAAiHDYQkMiCq3x+J0n9e8REK==
    caps mds = "allow rw fsname=cephfs01"
    caps mon = "allow r fsname=cephfs01"
    caps osd = "allow rw tag cephfs data=cephfs01"
Note: Optionally, you can add a safety measure by specifying the root_squash option. This prevents accidental deletion scenarios by disallowing clients with auid=0 or gid=0 from performing write operations, while still allowing read operations.
Example
[root@mon ~]# ceph fs authorize cephfs01 client.1 / rw root_squash /volumes rw
[client.1]
    key = BQAmthpf81M+JhAAiHDYQkMiCq3x+J0n9e8REK==

[root@mon ~]# ceph auth get client.1
[client.1]
    key = BQAmthpf81M+JhAAiHDYQkMiCq3x+J0n9e8REK==
    caps mds = "allow rw fsname=cephfs01 root_squash, allow rw fsname=cephfs01 path=/volumes"
    caps mon = "allow r fsname=cephfs01"
    caps osd = "allow rw tag cephfs data=cephfs01"
In this example, root_squash is enabled for the file system cephfs01, except within the /volumes directory tree.
Important: The Ceph client can only see the CephFS it is authorized for.
Copy the Ceph user’s keyring to the Ceph client node:
Syntax
ceph auth get CLIENT_NAME > OUTPUT_FILE_NAME
scp OUTPUT_FILE_NAME TARGET_NODE_NAME:/etc/ceph
Example
[root@mon ~]# ceph auth get client.1 > ceph.client.1.keyring
exported keyring for client.1
[root@mon ~]# scp ceph.client.1.keyring client:/etc/ceph
root@client's password:
ceph.client.1.keyring                      100%  178   333.0KB/s   00:00
On the Ceph client node, create a new directory:
Syntax
mkdir PATH_TO_NEW_DIRECTORY_NAME
Example
[root@client ~]# mkdir /mnt/mycephfs
On the Ceph client node, mount the new Ceph File System:
Syntax
ceph-fuse PATH_TO_NEW_DIRECTORY_NAME -n CEPH_USER_NAME --client-fs=FILE_SYSTEM_NAME
Example
[root@client ~]# ceph-fuse /mnt/mycephfs/ -n client.1 --client-fs=cephfs01
ceph-fuse[555001]: starting ceph client
2022-05-09T07:33:27.158+0000 7f11feb81200 -1 init, newargv = 0x55fc4269d5d0 newargc=15
ceph-fuse[555001]: starting fuse
- On the Ceph client node, list the directory contents of the new mount point, or create a file on the new mount point.
Additional Resources
- See the Creating client users for a Ceph File System section in the Red Hat Ceph Storage File System Guide for more details.
- See the Mounting the Ceph File System as a kernel client section in the Red Hat Ceph Storage File System Guide for more details.
- See the Mounting the Ceph File System as a FUSE client section in the Red Hat Ceph Storage File System Guide for more details.
- See Ceph File System limitations and the POSIX standards section in the Red Hat Ceph Storage File System Guide for more details.
- See the Pools chapter in the Red Hat Ceph Storage Storage Strategies Guide for more details.
3.3. Adding an erasure-coded pool to a Ceph File System
By default, Ceph uses replicated pools for data pools. You can add an additional erasure-coded data pool to the Ceph File System, if needed. Ceph File Systems (CephFS) backed by erasure-coded pools use less overall storage compared to Ceph File Systems backed by replicated pools, but erasure-coded pools also use more memory and processor resources than replicated pools.
CephFS EC pools are for archival purposes only.
For production environments, Red Hat recommends using the default replicated data pool for CephFS. The creation of inodes in CephFS creates at least one object in the default data pool. It is better to use a replicated pool for the default data pool to improve small-object write performance, and to improve read performance for updating backtraces.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- An existing Ceph File System.
- Pools using BlueStore OSDs.
- Root-level access to a Ceph Monitor node.
- Installation of the attr package.
Procedure
Create an erasure-coded data pool for CephFS:
Syntax
ceph osd pool create DATA_POOL_NAME erasure
Example
[root@mon ~]# ceph osd pool create cephfs-data-ec01 erasure
pool 'cephfs-data-ec01' created
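The example above uses the default erasure-code profile. If you need different data and coding chunk counts, or a different failure domain, you can create a custom profile first. This is a sketch only; the profile name, the k and m values, and the pool name cephfs-data-ec02 are placeholders for illustration.
# Sketch: create a custom erasure-code profile, then create a data pool that uses it
ceph osd erasure-code-profile set ec-profile-k4m2 k=4 m=2 crush-failure-domain=host
ceph osd pool create cephfs-data-ec02 erasure ec-profile-k4m2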
Verify the pool was added:
Example
[root@mon ~]# ceph osd lspools
Enable overwrites on the erasure-coded pool:
Syntax
ceph osd pool set DATA_POOL_NAME allow_ec_overwrites true
Example
[root@mon ~]# ceph osd pool set cephfs-data-ec01 allow_ec_overwrites true
set pool 15 allow_ec_overwrites to true
Verify the status of the Ceph File System:
Syntax
ceph fs status FILE_SYSTEM_NAME
Example
[root@mon ~]# ceph fs status cephfs-ec
cephfs-ec - 14 clients
=========
RANK  STATE   MDS                        ACTIVITY     DNS    INOS   DIRS   CAPS
 0    active  cephfs-ec.example.ooymyq   Reqs: 0 /s   8231   8233   891    921
        POOL           TYPE      USED   AVAIL
cephfs-metadata-ec   metadata    787M   8274G
cephfs-data-ec         data     2360G   12.1T
      STANDBY MDS
cephfs-ec.example.irsrql
cephfs-ec.example.cauuaj
Add the erasure-coded data pool to the existing CephFS:
Syntax
ceph fs add_data_pool FILE_SYSTEM_NAME DATA_POOL_NAME
Example
[root@mon ~]# ceph fs add_data_pool cephfs-ec cephfs-data-ec01
This example adds the new data pool, cephfs-data-ec01, to the existing erasure-coded file system, cephfs-ec.
Verify that the erasure-coded pool was added to the Ceph File System:
Syntax
ceph fs status FILE_SYSTEM_NAME
Example
[root@mon ~]# ceph fs status cephfs-ec
cephfs-ec - 14 clients
=========
RANK  STATE   MDS                        ACTIVITY     DNS    INOS   DIRS   CAPS
 0    active  cephfs-ec.example.ooymyq   Reqs: 0 /s   8231   8233   891    921
        POOL           TYPE      USED   AVAIL
cephfs-metadata-ec   metadata    787M   8274G
cephfs-data-ec         data     2360G   12.1T
cephfs-data-ec01       data        0    12.1T
      STANDBY MDS
cephfs-ec.example.irsrql
cephfs-ec.example.cauuaj
Set the file layout on a new directory:
Syntax
mkdir PATH_TO_DIRECTORY
setfattr -n ceph.dir.layout.pool -v DATA_POOL_NAME PATH_TO_DIRECTORY
Example
[root@mon ~]# mkdir /mnt/cephfs/newdir
[root@mon ~]# setfattr -n ceph.dir.layout.pool -v cephfs-data-ec01 /mnt/cephfs/newdir
In this example, all new files created in the /mnt/cephfs/newdir directory inherit the directory layout and place their data in the newly added erasure-coded pool.
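To confirm that the layout was applied, you can read the extended attributes back with getfattr from the attr package. This is a sketch; somefile is a placeholder for a file created after the layout was set.
# Sketch: confirm the directory layout points at the erasure-coded pool
getfattr -n ceph.dir.layout.pool /mnt/cephfs/newdir
# New files created in the directory report the pool in their own layout
getfattr -n ceph.file.layout.pool /mnt/cephfs/newdir/somefile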
Additional Resources
- See The Ceph File System Metadata Server chapter in the Red Hat Ceph Storage File System Guide for more information about CephFS MDS.
- See the Creating Ceph File Systems section in the Red Hat Ceph Storage File System Guide for more information.
- See the Erasure Code Pools chapter in the Red Hat Ceph Storage Storage Strategies Guide for more information.
- See the Erasure Coding with Overwrites section in the Red Hat Ceph Storage Storage Strategies Guide for more information.
3.4. Creating client users for a Ceph File System
Red Hat Ceph Storage uses cephx for authentication, which is enabled by default. To use cephx with the Ceph File System, create a user with the correct authorization capabilities on a Ceph Monitor node and make its key available on the node where the Ceph File System will be mounted.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Installation and configuration of the Ceph Metadata Server daemon (ceph-mds).
- Root-level access to a Ceph Monitor node.
- Root-level access to a Ceph client node.
Procedure
Log into the Cephadm shell on the monitor node:
Example
[root@host01 ~]# cephadm shell
On a Ceph Monitor node, create a client user:
Syntax
ceph fs authorize FILE_SYSTEM_NAME client.CLIENT_NAME /DIRECTORY CAPABILITY [/DIRECTORY CAPABILITY ...]
To restrict the client to only writing in the temp directory of the file system cephfs_a:
Example
[ceph: root@host01 /]# ceph fs authorize cephfs_a client.1 / r /temp rw
client.1
    key = AQBSdFhcGZFUDRAAcKhG9Cl2HPiDMMRv4DC43A==
To completely restrict the client to the temp directory, remove the root (/) directory:
Example
[ceph: root@host01 /]# ceph fs authorize cephfs_a client.1 /temp rw
Note: Supplying all or an asterisk as the file system name grants access to every file system; a sketch of such a grant follows the verification example below. Typically, it is necessary to quote the asterisk to protect it from the shell.
Verify the created key:
Syntax
ceph auth get client.ID
Example
[ceph: root@host01 /]# ceph auth get client.1
client.1
    key = AQBSdFhcGZFUDRAAcKhG9Cl2HPiDMMRv4DC43A==
    caps mds = "allow r, allow rw path=/temp"
    caps mon = "allow r"
    caps osd = "allow rw tag cephfs data=cephfs_a"
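As mentioned in the note above, using all or a quoted asterisk in place of the file system name grants access to every file system. The following is a sketch only; the client name client.all is a placeholder.
# Sketch: grant read-write access to every file system ('all' or a quoted '*' can be used)
ceph fs authorize all client.all / rw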
Copy the keyring to the client.
On the Ceph Monitor node, export the keyring to a file:
Syntax
ceph auth get client.ID -o ceph.client.ID.keyring
Example
[ceph: root@host01 /]# ceph auth get client.1 -o ceph.client.1.keyring
exported keyring for client.1
Copy the client keyring from the Ceph Monitor node to the /etc/ceph/ directory on the client node:
Syntax
scp /ceph.client.ID.keyring root@CLIENT_NODE_NAME:/etc/ceph/ceph.client.ID.keyring
Replace CLIENT_NODE_NAME with the Ceph client node name or IP.
Example
[ceph: root@host01 /]# scp /ceph.client.1.keyring root@client01:/etc/ceph/ceph.client.1.keyring
From the client node, set the appropriate permissions for the keyring file:
Syntax
chmod 644 ceph.client.ID.keyring
Example
[root@client01 ~]# chmod 644 /etc/ceph/ceph.client.1.keyring
Additional Resources
- See the Ceph user management chapter in the Red Hat Ceph Storage Administration Guide for more details.
3.5. Mounting the Ceph File System as a kernel client
You can mount the Ceph File System (CephFS) as a kernel client, either manually or automatically on system boot.
Clients running on other Linux distributions, aside from Red Hat Enterprise Linux, are permitted but not supported. If issues are found in the CephFS Metadata Server or other parts of the storage cluster when using these clients, Red Hat will address them. If the cause is found to be on the client side, then the issue will have to be addressed by the kernel vendor of the Linux distribution.
Prerequisites
- Root-level access to a Linux-based client node.
- Root-level access to a Ceph Monitor node.
- An existing Ceph File System.
Procedure
Configure the client node to use the Ceph storage cluster.
Enable the Red Hat Ceph Storage Tools repository:
Red Hat Enterprise Linux 9
[root@client01 ~]# subscription-manager repos --enable=rhceph-6-tools-for-rhel-9-x86_64-rpms
Install the ceph-common package:
[root@client01 ~]# dnf install ceph-common
Log into the Cephadm shell on the monitor node:
Example
[root@host01 ~]# cephadm shell
Copy the Ceph client keyring from the Ceph Monitor node to the client node:
Syntax
scp /ceph.client.ID.keyring root@CLIENT_NODE_NAME:/etc/ceph/ceph.client.ID.keyring
Replace CLIENT_NODE_NAME with the Ceph client host name or IP address.
Example
[ceph: root@host01 /]# scp /ceph.client.1.keyring root@client01:/etc/ceph/ceph.client.1.keyring
Copy the Ceph configuration file from a Ceph Monitor node to the client node:
Syntax
scp /etc/ceph/ceph.conf root@CLIENT_NODE_NAME:/etc/ceph/ceph.conf
Replace CLIENT_NODE_NAME with the Ceph client host name or IP address.
Example
[ceph: root@host01 /]# scp /etc/ceph/ceph.conf root@client01:/etc/ceph/ceph.conf
From the client node, set the appropriate permissions for the configuration file:
[root@client01 ~]# chmod 644 /etc/ceph/ceph.conf
- Choose either automatic or manual mounting.
Manually Mounting
Create a mount directory on the client node:
Syntax
mkdir -p MOUNT_POINT
Example
[root@client01 ~]# mkdir -p /mnt/cephfs
Mount the Ceph File System. To specify multiple Ceph Monitor addresses, separate them with commas in the mount command, specify the mount point, and set the client name:
Note: As of Red Hat Ceph Storage 4.1, mount.ceph can read keyring files directly. As such, a secret file is no longer necessary. Just specify the client ID with name=CLIENT_ID, and mount.ceph will find the right keyring file.
Syntax
mount -t ceph MONITOR-1_NAME:6789,MONITOR-2_NAME:6789,MONITOR-3_NAME:6789:/ MOUNT_POINT -o name=CLIENT_ID,fs=FILE_SYSTEM_NAME
Example
[root@client01 ~]# mount -t ceph mon1:6789,mon2:6789,mon3:6789:/ /mnt/cephfs -o name=1,fs=cephfs01
Note: You can configure a DNS server so that a single host name resolves to multiple IP addresses. Then you can use that single host name with the mount command, instead of supplying a comma-separated list.
Note: You can also replace the Monitor host names with the string :/ and mount.ceph will read the Ceph configuration file to determine which Monitors to connect to.
Note: You can set the nowsync option to asynchronously execute file creation and removal on the Red Hat Ceph Storage clusters. This improves the performance of some workloads by avoiding round-trip latency for these system calls without impacting consistency. The nowsync option requires kernel clients with Red Hat Enterprise Linux 9.0 or later.
Example
[root@client01 ~]# mount -t ceph mon1:6789,mon2:6789,mon3:6789:/ /mnt/cephfs -o nowsync,name=1,fs=cephfs01
Verify that the file system is successfully mounted:
Syntax
stat -f MOUNT_POINT
Example
[root@client01 ~]# stat -f /mnt/cephfs
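If you prefer, another quick check is to confirm that the mount is listed with the ceph file system type and reports the cluster's available space. This is a minimal sketch using the same mount point.
# Sketch: list the mounted file system with its type and usage
df -hT /mnt/cephfs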
Automatically Mounting
On the client host, create a new directory for mounting the Ceph File System.
Syntax
mkdir -p MOUNT_POINT
Example
[root@client01 ~]# mkdir -p /mnt/cephfs
Edit the /etc/fstab file as follows:
Syntax
#DEVICE                                         PATH          TYPE   OPTIONS
MON_0_HOST:PORT,                                MOUNT_POINT   ceph   name=CLIENT_ID,
MON_1_HOST:PORT,                                                     ceph.client_mountpoint=/VOL/SUB_VOL_GROUP/SUB_VOL/UID_SUB_VOL,
MON_2_HOST:PORT:/q[_VOL_]/SUB_VOL/UID_SUB_VOL,                       fs=FILE_SYSTEM_NAME,
                                                                     [ADDITIONAL_OPTIONS]
The first column sets the Ceph Monitor host names and the port number.
The second column sets the mount point.
The third column sets the file system type, in this case, ceph, for CephFS.
The fourth column sets the various options, such as the user name and the secret file using the name and secretfile options. You can also set specific volumes, sub-volume groups, and sub-volumes using the ceph.client_mountpoint option.
Set the _netdev option to ensure that the file system is mounted after the networking subsystem starts to prevent hanging and networking issues. If you do not need access time information, then setting the noatime option can increase performance.
Set the fifth and sixth columns to zero.
Example
#DEVICE        PATH          TYPE   OPTIONS                                                         DUMP   FSCK
mon1:6789,     /mnt/cephfs   ceph   name=1,                                                         0      0
mon2:6789,                          ceph.client_mountpoint=/my_vol/my_sub_vol_group/my_sub_vol/0,
mon3:6789:/                         fs=cephfs01,
                                    _netdev,noatime
The Ceph File System will be mounted on the next system boot.
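To test the new /etc/fstab entry without waiting for a reboot, you can mount by the mount point, which reads the entry directly. This is a sketch; the mount point matches the example above.
# Sketch: mount using the fstab entry, then confirm the mount point reports CephFS statistics
mount /mnt/cephfs
stat -f /mnt/cephfs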
Note: As of Red Hat Ceph Storage 4.1, mount.ceph can read keyring files directly. As such, a secret file is no longer necessary. Just specify the client ID with name=CLIENT_ID, and mount.ceph will find the right keyring file.
Note: You can also replace the Monitor host names with the string :/ and mount.ceph will read the Ceph configuration file to determine which Monitors to connect to.
Additional Resources
- See the mount(8) manual page.
- See the Ceph user management chapter in the Red Hat Ceph Storage Administration Guide for more details on creating a Ceph user.
- See the Creating Ceph File Systems section of the Red Hat Ceph Storage File System Guide for details.
3.6. Mounting the Ceph File System as a FUSE client
You can mount the Ceph File System (CephFS) as a File System in User Space (FUSE) client, either manually or automatically on system boot.
Prerequisites
- Root-level access to a Linux-based client node.
- Root-level access to a Ceph Monitor node.
- An existing Ceph File System.
Procedure
Configure the client node to use the Ceph storage cluster.
Enable the Red Hat Ceph Storage Tools repository:
Red Hat Enterprise Linux 8
[root@client01 ~]# subscription-manager repos --enable=rhceph-6-tools-for-rhel-8-x86_64-rpms
Red Hat Enterprise Linux 9
[root@client01 ~]# subscription-manager repos --enable=rhceph-6-tools-for-rhel-9-x86_64-rpms
Install the ceph-fuse package:
[root@client01 ~]# dnf install ceph-fuse
Log into the Cephadm shell on the monitor node:
Example
[root@host01 ~]# cephadm shell
Copy the Ceph client keyring from the Ceph Monitor node to the client node:
Syntax
scp /ceph.client.ID.keyring root@CLIENT_NODE_NAME:/etc/ceph/ceph.client.ID.keyring
Replace CLIENT_NODE_NAME with the Ceph client host name or IP address.
Example
[ceph: root@host01 /]# scp /ceph.client.1.keyring root@client01:/etc/ceph/ceph.client.1.keyring
Copy the Ceph configuration file from a Ceph Monitor node to the client node:
Syntax
scp /etc/ceph/ceph.conf root@CLIENT_NODE_NAME:/etc/ceph/ceph.conf
Replace CLIENT_NODE_NAME with the Ceph client host name or IP address.
Example
[ceph: root@host01 /]# scp /etc/ceph/ceph.conf root@client01:/etc/ceph/ceph.conf
From the client node, set the appropriate permissions for the configuration file:
[root@client01 ~]# chmod 644 /etc/ceph/ceph.conf
- Choose either automatic or manual mounting.
Manually Mounting
On the client node, create a directory for the mount point:
Syntax
mkdir PATH_TO_MOUNT_POINT
Example
[root@client01 ~]# mkdir /mnt/mycephfs
Note: If you used the path option with MDS capabilities, then the mount point must be within what is specified by the path.
Use the ceph-fuse utility to mount the Ceph File System.
Syntax
ceph-fuse -n client.CLIENT_ID --client_fs FILE_SYSTEM_NAME MOUNT_POINT
Example
[root@client01 ~]# ceph-fuse -n client.1 --client_fs cephfs01 /mnt/mycephfs
Note: If you do not use the default name and location of the user keyring, that is, /etc/ceph/ceph.client.CLIENT_ID.keyring, then use the --keyring option to specify the path to the user keyring, for example:
Example
[root@client01 ~]# ceph-fuse -n client.1 --keyring=/etc/ceph/client.1.keyring /mnt/mycephfs
Note: Use the -r option to instruct the client to treat that path as its root:
Syntax
ceph-fuse -n client.CLIENT_ID MOUNT_POINT -r PATH
Example
[root@client01 ~]# ceph-fuse -n client.1 /mnt/cephfs -r /home/cephfs
Note: If you want to automatically reconnect an evicted Ceph client, then add the --client_reconnect_stale=true option.
Example
[root@client01 ~]# ceph-fuse -n client.1 /mnt/cephfs --client_reconnect_stale=true
Verify that the file system is successfully mounted:
Syntax
stat -f MOUNT_POINT
Example
[root@client01 ~]# stat -f /mnt/cephfs
Automatically Mounting
On the client node, create a directory for the mount point:
Syntax
mkdir PATH_TO_MOUNT_POINT
Example
[root@client01 ~]# mkdir /mnt/mycephfs
Note: If you used the path option with MDS capabilities, then the mount point must be within what is specified by the path.
Edit the /etc/fstab file as follows:
Syntax
#DEVICE             PATH          TYPE        OPTIONS                                                                                 DUMP   FSCK
HOST_NAME:PORT,     MOUNT_POINT   fuse.ceph   ceph.id=CLIENT_ID,                                                                      0      0
HOST_NAME:PORT,                               ceph.client_mountpoint=/VOL/SUB_VOL_GROUP/SUB_VOL/UID_SUB_VOL,
HOST_NAME:PORT:/                              ceph.client_fs=FILE_SYSTEM_NAME,ceph.name=USERNAME,ceph.keyring=/etc/ceph/KEYRING_FILE,
                                              [ADDITIONAL_OPTIONS]
The first column sets the Ceph Monitor host names and the port number.
The second column sets the mount point.
The third column sets the file system type, in this case, fuse.ceph, for CephFS.
The fourth column sets the various options, such as the user name and the keyring using the ceph.name and ceph.keyring options. You can also set specific volumes, sub-volume groups, and sub-volumes using the ceph.client_mountpoint option. To specify which Ceph File System to access, use the ceph.client_fs option. Set the _netdev option to ensure that the file system is mounted after the networking subsystem starts to prevent hanging and networking issues. If you do not need access time information, then setting the noatime option can increase performance. If you want to automatically reconnect after an eviction, then set the client_reconnect_stale=true option.
Set the fifth and sixth columns to zero.
Example
#DEVICE        PATH            TYPE        OPTIONS                                                                          DUMP   FSCK
mon1:6789,     /mnt/mycephfs   fuse.ceph   ceph.id=1,                                                                       0      0
mon2:6789,                                 ceph.client_mountpoint=/my_vol/my_sub_vol_group/my_sub_vol/0,
mon3:6789:/                                ceph.client_fs=cephfs01,ceph.name=client.1,ceph.keyring=/etc/ceph/client1.keyring,
                                           _netdev,defaults
The Ceph File System will be mounted on the next system boot.
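As with the kernel client, you can exercise the new entry before the next boot. The following is a sketch assuming the example entry above.
# Sketch: mount any fstab entries that are not yet mounted, then confirm the FUSE mount
mount -a
findmnt /mnt/mycephfs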
Additional Resources
- The ceph-fuse(8) manual page.
- See the Ceph user management chapter in the Red Hat Ceph Storage Administration Guide for more details on creating a Ceph user.
- See the Creating Ceph File Systems section of the Red Hat Ceph Storage File System Guide for details.
Additional Resources
- See Section 2.5, “Management of MDS service using the Ceph Orchestrator” to install Ceph Metadata servers.
- See Section 3.2, “Creating Ceph File Systems” for details.
- See Section 3.4, “Creating client users for a Ceph File System” for details.
- See Section 3.5, “Mounting the Ceph File System as a kernel client” for details.
- See Section 3.6, “Mounting the Ceph File System as a FUSE client” for details.
- See Chapter 2, The Ceph File System Metadata Server for details on configuring the CephFS Metadata Server daemon.