Chapter 3. Deployment of the Ceph File System
As a storage administrator, you can deploy Ceph File Systems (CephFS) in a storage environment and have clients mount those Ceph File Systems to meet the storage needs.
The deployment workflow consists of three steps, summarized in the sketch after this list:
- Create a Ceph File System on a Ceph Monitor node.
- Create a Ceph client user with the appropriate capabilities, and make the client key available on the node where the Ceph File System will be mounted.
- Mount CephFS on a dedicated node, using either a kernel client or a File System in User Space (FUSE) client.
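The following is a minimal, condensed sketch of those three steps; the pool names, file system name (cephfs), client ID (1), and Monitor host name are illustrative, each step is detailed in the sections below, and the sketch assumes the pools already exist and that the client keyring has been copied to /etc/ceph/ on the client node:
[root@mon ~]# ceph fs new cephfs cephfs_metadata cephfs_data
[root@mon ~]# ceph fs authorize cephfs client.1 / rw
[root@client ~]# mount -t ceph mon1:6789:/ /mnt/cephfs -o name=1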
3.1. Prerequisites
- A running and healthy Red Hat Ceph Storage cluster.
- Installation and configuration of the Ceph Metadata Server daemon (ceph-mds).
3.2. Layout, quota, snapshot, and network restrictions
User capabilities can help you restrict access to a Ceph File System (CephFS) based on your requirements.
All user capability flags, except rw, must be specified in alphabetical order.
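For example, a capability string that combines the layout and quota flag (p) and the snapshot flag (s) with rw is written as rwps; the path shown here is hypothetical:
caps: [mds] allow rw, allow rwps path=/projects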
Layouts and Quotas
When using layouts or quotas, clients require the p flag, in addition to rw capabilities. The p flag restricts the setting of all attributes that are set by special extended attributes, those with a ceph. prefix. It also restricts other means of setting these fields, such as openc operations with layouts.
Example
In this example, client.0 can modify layouts and quotas on the file system cephfs_a, but client.1 cannot, as shown in the capability listing below.
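A capability listing that matches this description might look as follows; the keys shown are illustrative placeholders:
client.0
    key: AQAz7EVWygILFRAAdIcuJ10opU/JKyfFmxhuaw==
    caps: [mds] allow rwp
    caps: [mon] allow r
    caps: [osd] allow rw tag cephfs data=cephfs_a
client.1
    key: AQAz7EVWygILFRAAdIcuJ10opU/JKyfFmxhuaw==
    caps: [mds] allow rw
    caps: [mon] allow r
    caps: [osd] allow rw tag cephfs data=cephfs_a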
Snapshots
When creating or deleting snapshots, clients require the s flag, in addition to rw capabilities. When the capability string also contains the p flag, the s flag must appear after it.
Example
client.0
    key: AQAz7EVWygILFRAAdIcuJ10opU/JKyfFmxhuaw==
    caps: [mds] allow rw, allow rws path=/temp
    caps: [mon] allow r
    caps: [osd] allow rw tag cephfs data=cephfs_a
In this example, client.0 can create or delete snapshots in the temp directory of the file system cephfs_a.
Network
You can restrict clients so that they can connect only from a particular network.
Example
client.0
    key: AQAz7EVWygILFRAAdIcuJ10opU/JKyfFmxhuaw==
    caps: [mds] allow r network 10.0.0.0/8, allow rw path=/bar network 10.0.0.0/8
    caps: [mon] allow r network 10.0.0.0/8
    caps: [osd] allow rw tag cephfs data=cephfs_a network 10.0.0.0/8
The optional network and prefix length are specified in CIDR notation, for example, 10.3.0.0/16.
Additional Resources
- See the Creating client users for a Ceph File System section in the Red Hat Ceph Storage File System Guide for details on setting the Ceph user capabilities.
3.3. Creating a Ceph File System
You can create a Ceph File System (CephFS) on a Ceph Monitor node.
By default, you can create only one CephFS per Ceph Storage cluster.
Prerequisites
- A running and healthy Red Hat Ceph Storage cluster.
- Installation and configuration of the Ceph Metadata Server daemon (ceph-mds).
- Root-level access to a Ceph Monitor node.
Procedure
Create two pools, one for storing data and one for storing metadata:
Syntax
ceph osd pool create NAME PG_NUM
Example
[root@mon ~]# ceph osd pool create cephfs_data 64
[root@mon ~]# ceph osd pool create cephfs_metadata 64
Typically, the metadata pool can start with a conservative number of Placement Groups (PGs) because it generally has far fewer objects than the data pool. You can increase the number of PGs if needed. Recommended metadata pool sizes range from 64 PGs to 512 PGs. Size the data pool proportionally to the number and sizes of files you expect in the file system.
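If you need more PGs later, you can raise the pg_num value of a pool; a minimal sketch, assuming the metadata pool created above and a new target of 128 PGs:
[root@mon ~]# ceph osd pool set cephfs_metadata pg_num 128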
Important: For the metadata pool, consider using:
- A higher replication level because any data loss to this pool can make the whole file system inaccessible.
- Storage with lower latency such as Solid-State Drive (SSD) disks because this directly affects the observed latency of file system operations on clients.
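One way to place the metadata pool on lower-latency storage is a CRUSH rule that targets the ssd device class; a minimal sketch, where the rule name is illustrative and the OSDs are assumed to report the ssd device class:
[root@mon ~]# ceph osd crush rule create-replicated replicated-ssd default host ssd
[root@mon ~]# ceph osd pool set cephfs_metadata crush_rule replicated-ssd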
Create the CephFS:
Syntax
ceph fs new NAME METADATA_POOL DATA_POOL
Example
[root@mon ~]# ceph fs new cephfs cephfs_metadata cephfs_data
Verify that one or more MDS daemons enter the active state, based on your configuration:
Syntax
ceph fs status NAME
Example
[root@mon ~]# ceph fs status cephfs
Additional Resources
- See the Enabling the Red Hat Ceph Storage Repositories section in Red Hat Ceph Storage Installation Guide for more details.
- See the Pools chapter in the Red Hat Ceph Storage Storage Strategies Guide for more details.
- See the The Ceph File System section in the Red Hat Ceph Storage File System Guide for more details on the Ceph File System limitations.
- See the Red Hat Ceph Storage Installation Guide for details on installing Red Hat Ceph Storage.
- See the Installing Metadata Servers in the Red Hat Ceph Storage Installation Guide for details.
3.4. Creating Ceph File Systems with erasure coding (Technology Preview)
By default, Ceph uses replicated pools for data. If needed, you can also add an erasure-coded data pool. Ceph File Systems (CephFS) backed by erasure-coded pools use less overall storage than Ceph File Systems backed by replicated pools, but they also use more memory and processor resources.
The Ceph File System using erasure-coded pools is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See the support scope for Red Hat Technology Preview features for more details.
For production environments, Red Hat recommends using a replicated pool as the default data pool.
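If you want to control the erasure-coding parameters rather than rely on the default profile, you can create an erasure code profile first and reference it when creating the pool; a minimal sketch, where the profile name and the k and m values are illustrative:
[root@mon ~]# ceph osd erasure-code-profile set cephfs-ec-profile k=4 m=2 crush-failure-domain=host
[root@mon ~]# ceph osd pool create cephfs-data-ec 64 erasure cephfs-ec-profile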
Prerequisites
- A running Red Hat Ceph Storage cluster.
- A running CephFS environment.
- Pools using BlueStore OSDs.
- User-level access to a Ceph Monitor node.
Procedure
Create a replicated metadata pool for CephFS metadata:
Syntax
ceph osd pool create METADATA_POOL PG_NUM
Example
[root@mon ~]# ceph osd pool create cephfs-metadata 64
This example creates a pool named cephfs-metadata with 64 placement groups.
Create a default replicated data pool for CephFS:
Syntax
ceph osd pool create DATA_POOL PG_NUM
Example
[root@mon ~]# ceph osd pool create cephfs-data 64
This example creates a replicated pool named cephfs-data with 64 placement groups.
Create an erasure-coded data pool for CephFS:
Syntax
ceph osd pool create DATA_POOL PG_NUM erasure
Example
[root@mon ~]# ceph osd pool create cephfs-data-ec 64 erasure
This example creates an erasure-coded pool named cephfs-data-ec with 64 placement groups.
Enable overwrites on the erasure-coded pool:
Syntax
ceph osd pool set DATA_POOL allow_ec_overwrites true
Example
[root@mon ~]# ceph osd pool set cephfs-data-ec allow_ec_overwrites true
This example enables overwrites on an erasure-coded pool named cephfs-data-ec.
Add the erasure-coded data pool to the CephFS Metadata Server (MDS):
Syntax
ceph fs add_data_pool cephfs-ec DATA_POOL
Example
[root@mon ~]# ceph fs add_data_pool cephfs-ec cephfs-data-ec
Optionally, verify the data pool was added:
[root@mon ~]# ceph fs ls
Create the CephFS:
Syntax
ceph fs new cephfs METADATA_POOL DATA_POOL
Example
[root@mon ~]# ceph fs new cephfs cephfs-metadata cephfs-data
Important: Using an erasure-coded pool for the default data pool is not recommended.
Create the CephFS using erasure coding:
Syntax
ceph fs new cephfs-ec METADATA_POOL DATA_POOL
Example
[root@mon ~]# ceph fs new cephfs-ec cephfs-metadata cephfs-data-ec
Verify that one or more Ceph Metadata Servers (MDS) enter the active state:
Syntax
ceph fs status FS_EC
Example
[root@mon ~]# ceph fs status cephfs-ec
To add a new erasure-coded data pool to an existing file system:
Create an erasure-coded data pool for CephFS:
Syntax
ceph osd pool create DATA_POOL PG_NUM erasure
Example
[root@mon ~]# ceph osd pool create cephfs-data-ec1 64 erasure
Enable overwrites on the erasure-coded pool:
Syntax
ceph osd pool set DATA_POOL allow_ec_overwrites true
Example
[root@mon ~]# ceph osd pool set cephfs-data-ec1 allow_ec_overwrites true
Add the erasure-coded data pool to the CephFS Metadata Server (MDS):
Syntax
ceph fs add_data_pool cephfs-ec DATA_POOL
Example
[root@mon ~]# ceph fs add_data_pool cephfs-ec cephfs-data-ec1
Create the CephFS using erasure coding:
Syntax
ceph fs new cephfs-ec METADATA_POOL DATA_POOL
Example
[root@mon ~]# ceph fs new cephfs-ec cephfs-metadata cephfs-data-ec1
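After an erasure-coded data pool has been added to a file system, you can direct new files in a directory to that pool by setting a file layout; a minimal sketch, where the mount point and directory are illustrative and the client performing the operation holds the p capability flag described in Section 3.2:
[root@client ~]# setfattr -n ceph.dir.layout.pool -v cephfs-data-ec1 /mnt/cephfs-ec/ec_dir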
Additional Resources
- See the The Ceph File System Metadata Server chapter in the Red Hat Ceph Storage File System Guide for more information on the CephFS MDS.
- See the Installing Metadata Servers section of the Red Hat Ceph Storage Installation Guide for details on installing CephFS.
- See the Erasure-Coded Pools section in the Red Hat Ceph Storage Storage Strategies Guide for more information.
- See the Erasure Coding with Overwrites section in the Red Hat Ceph Storage Storage Strategies Guide for more information.
3.5. Creating client users for a Ceph File System
Red Hat Ceph Storage uses cephx for authentication, which is enabled by default. To use cephx with the Ceph File System, create a user with the correct authorization capabilities on a Ceph Monitor node and make its key available on the node where the Ceph File System will be mounted.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Installation and configuration of the Ceph Metadata Server daemon (ceph-mds).
- Root-level access to a Ceph monitor node.
- Root-level access to a Ceph client node.
Procedure
On a Ceph Monitor node, create a client user:
Syntax
ceph fs authorize FILE_SYSTEM_NAME client.CLIENT_NAME /DIRECTORY CAPABILITY [/DIRECTORY CAPABILITY] ...
To restrict the client to writing only in the temp directory of file system cephfs_a:
Example
[root@mon ~]# ceph fs authorize cephfs_a client.1 / r /temp rw
To completely restrict the client to the temp directory, remove the root (/) directory:
Example
[root@mon ~]# ceph fs authorize cephfs_a client.1 /temp rw
Note: Supplying all or an asterisk as the file system name grants access to every file system. Typically, it is necessary to quote the asterisk to protect it from the shell.
Verify the created key:
Syntax
ceph auth get client.ID
Example
[root@mon ~]# ceph auth get client.1
Copy the keyring to the client.
On the Ceph Monitor node, export the keyring to a file:
Syntax
ceph auth get client.ID -o ceph.client.ID.keyring
Example
[root@mon ~]# ceph auth get client.1 -o ceph.client.1.keyring
exported keyring for client.1
Copy the client keyring from the Ceph Monitor node to the /etc/ceph/ directory on the client node:
Syntax
scp root@MONITOR_NODE_NAME:/root/ceph.client.1.keyring /etc/ceph/
Replace MONITOR_NODE_NAME with the Ceph Monitor node name or IP address.
Example
[root@client ~]# scp root@mon:/root/ceph.client.1.keyring /etc/ceph/ceph.client.1.keyring
Set the appropriate permissions for the keyring file:
Syntax
chmod 644 KEYRING
Example
[root@client ~]# chmod 644 /etc/ceph/ceph.client.1.keyring
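Optionally, you can confirm that the key works by querying the cluster status with it from the client node; a minimal sketch, assuming client.1, the keyring copied above, and a /etc/ceph/ceph.conf already present on the client (the client's mon capability of allow r is sufficient for this command):
[root@client ~]# ceph -s --name client.1 --keyring /etc/ceph/ceph.client.1.keyring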
Additional Resources
- See the User Management chapter in the Red Hat Ceph Storage Administration Guide for more details.
3.6. Mounting the Ceph File System as a kernel client
You can mount the Ceph File System (CephFS) as a kernel client, either manually or automatically on system boot.
Clients running on other Linux distributions, aside from Red Hat Enterprise Linux, are permitted but not supported. If issues are found in the CephFS Metadata Server or other parts of the storage cluster when using these clients, Red Hat will address them. If the cause is found to be on the client side, then the issue will have to be addressed by the kernel vendor of the Linux distribution.
Prerequisites
- Root-level access to a Linux-based client node.
- User-level access to a Ceph Monitor node.
- An existing Ceph File System.
Procedure
Configure the client node to use the Ceph storage cluster.
Enable the Red Hat Ceph Storage 4 Tools repository:
Red Hat Enterprise Linux 7
[root@client ~]# subscription-manager repos --enable=rhel-7-server-rhceph-4-tools-rpms
Red Hat Enterprise Linux 8
[root@client ~]# subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms
Install the ceph-common package:
Red Hat Enterprise Linux 7
[root@client ~]# yum install ceph-common
Red Hat Enterprise Linux 8
[root@client ~]# dnf install ceph-common
Copy the Ceph client keyring from the Ceph Monitor node to the client node:
Syntax
scp root@MONITOR_NODE_NAME:/etc/ceph/KEYRING_FILE /etc/ceph/
Replace MONITOR_NODE_NAME with the Ceph Monitor host name or IP address.
Example
[root@client ~]# scp root@192.168.0.1:/etc/ceph/ceph.client.1.keyring /etc/ceph/
Copy the Ceph configuration file from a Ceph Monitor node to the client node:
Syntax
scp root@MONITOR_NODE_NAME:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
Replace MONITOR_NODE_NAME with the Ceph Monitor host name or IP address.
Example
[root@client ~]# scp root@192.168.0.1:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
Set the appropriate permissions for the configuration file:
[root@client ~]# chmod 644 /etc/ceph/ceph.conf
Create a mount directory on the client node:
Syntax
mkdir -p MOUNT_POINT
Example
[root@client]# mkdir -p /mnt/cephfs
Mount the Ceph File System. To specify multiple Ceph Monitor addresses, separate them with commas in the mount command, specify the mount point, and set the client name:
Note: As of Red Hat Ceph Storage 4.1, mount.ceph can read keyring files directly, so a secret file is no longer necessary. Specify the client ID with name=CLIENT_ID, and mount.ceph finds the correct keyring file.
Syntax
mount -t ceph MONITOR-1_NAME:6789,MONITOR-2_NAME:6789,MONITOR-3_NAME:6789:/ MOUNT_POINT -o name=CLIENT_ID
Example
[root@client ~]# mount -t ceph mon1:6789,mon2:6789,mon3:6789:/ /mnt/cephfs -o name=1
Note: You can configure a DNS server so that a single host name resolves to multiple IP addresses. Then you can use that single host name with the mount command, instead of supplying a comma-separated list.
Note: You can also replace the Monitor host names with the string :/ and mount.ceph will read the Ceph configuration file to determine which Monitors to connect to.
Verify that the file system is successfully mounted:
Syntax
stat -f MOUNT_POINT
Example
[root@client ~]# stat -f /mnt/cephfs
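To make the kernel client mount persistent across reboots, you can also add an entry to /etc/fstab; a minimal sketch, assuming the same Monitors, mount point, and client ID as in the example above:
#DEVICE                           PATH          TYPE  OPTIONS          DUMP  FSCK
mon1:6789,mon2:6789,mon3:6789:/   /mnt/cephfs   ceph  name=1,_netdev   0     0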
Additional Resources
- See the mount(8) manual page.
- See the Ceph user management chapter in the Red Hat Ceph Storage Administration Guide for more details on creating a Ceph user.
- See the Creating a Ceph File System section of the Red Hat Ceph Storage File System Guide for details.
3.7. Mounting the Ceph File System as a FUSE client
You can mount the Ceph File System (CephFS) as a File System in User Space (FUSE) client, either manually or automatically on system boot.
Prerequisites
- Root-level access to a Linux-based client node.
- User-level access to a Ceph Monitor node.
- An existing Ceph File System.
Procedure
Configure the client node to use the Ceph storage cluster.
Enable the Red Hat Ceph Storage 4 Tools repository:
Red Hat Enterprise Linux 7
[root@client ~]# subscription-manager repos --enable=rhel-7-server-rhceph-4-tools-rpms
Red Hat Enterprise Linux 8
[root@client ~]# subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms
Install the ceph-fuse package:
Red Hat Enterprise Linux 7
[root@client ~]# yum install ceph-fuse
Red Hat Enterprise Linux 8
[root@client ~]# dnf install ceph-fuse
Copy the Ceph client keyring from the Ceph Monitor node to the client node:
Syntax
scp root@MONITOR_NODE_NAME:/etc/ceph/KEYRING_FILE /etc/ceph/
Replace MONITOR_NODE_NAME with the Ceph Monitor host name or IP address.
Example
[root@client ~]# scp root@192.168.0.1:/etc/ceph/ceph.client.1.keyring /etc/ceph/
Copy the Ceph configuration file from a Ceph Monitor node to the client node:
Syntax
scp root@MONITOR_NODE_NAME:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
Replace MONITOR_NODE_NAME with the Ceph Monitor host name or IP address.
Example
[root@client ~]# scp root@192.168.0.1:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
Set the appropriate permissions for the configuration file:
[root@client ~]# chmod 644 /etc/ceph/ceph.conf
- Choose either automatic or manual mounting.
Manually Mounting
On the client node, create a directory for the mount point:
Syntax
mkdir PATH_TO_MOUNT_POINT
Example
[root@client ~]# mkdir /mnt/mycephfs
Note: If you used the path option with MDS capabilities, then the mount point must be within what is specified by path.
Use the ceph-fuse utility to mount the Ceph File System:
Syntax
ceph-fuse -n client.CLIENT_ID MOUNT_POINT
Example
[root@client ~]# ceph-fuse -n client.1 /mnt/mycephfs
Note: If you do not use the default name and location of the user keyring, that is /etc/ceph/ceph.client.CLIENT_ID.keyring, then use the --keyring option to specify the path to the user keyring, for example:
Example
[root@client ~]# ceph-fuse -n client.1 --keyring=/etc/ceph/client.1.keyring /mnt/mycephfs
Note: Use the -r option to instruct the client to treat that path as its root:
Syntax
ceph-fuse -n client.CLIENT_ID MOUNT_POINT -r PATH
Example
[root@client ~]# ceph-fuse -n client.1 /mnt/cephfs -r /home/cephfs
Verify that the file system is successfully mounted:
Syntax
stat -f MOUNT_POINT
Example
[user@client ~]$ stat -f /mnt/cephfs
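To unmount a manually mounted FUSE client, use either the fusermount -u command or umount on the mount point; a minimal sketch, assuming the mount point used above:
[root@client ~]# fusermount -u /mnt/mycephfs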
Automatically Mounting
On the client node, create a directory for the mount point:
Syntax
mkdir PATH_TO_MOUNT_POINT
Example
[root@client ~]# mkdir /mnt/mycephfs
Note: If you used the path option with MDS capabilities, then the mount point must be within what is specified by path.
Edit the /etc/fstab file as follows:
Syntax
#DEVICE             PATH           TYPE        OPTIONS                                                          DUMP  FSCK
HOST_NAME:PORT,     MOUNT_POINT    fuse.ceph   ceph.id=CLIENT_ID,                                               0     0
HOST_NAME:PORT,                                ceph.client_mountpoint=/VOL/SUB_VOL_GROUP/SUB_VOL/UID_SUB_VOL,
HOST_NAME:PORT:/                               [ADDITIONAL_OPTIONS]
The first column sets the Ceph Monitor host names and the port number.
The second column sets the mount point.
The third column sets the file system type, in this case, fuse.ceph, for CephFS.
The fourth column sets the various options, such as the user name and the secret file using the name and secretfile options, respectively. You can also set specific volumes, sub-volume groups, and sub-volumes using the ceph.client_mountpoint option. Set the _netdev option to ensure that the file system is mounted after the networking subsystem starts, to prevent hanging and networking issues. If you do not need access time information, setting the noatime option can increase performance.
Set the fifth and sixth columns to zero.
Example
#DEVICE         PATH           TYPE        OPTIONS                                                         DUMP  FSCK
mon1:6789,      /mnt/cephfs    fuse.ceph   ceph.id=1,                                                      0     0
mon2:6789,                                 ceph.client_mountpoint=/my_vol/my_sub_vol_group/my_sub_vol/0,
mon3:6789:/                                _netdev,defaults
The Ceph File System will be mounted on the next system boot.
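If you want to confirm the entry without rebooting, you can mount it directly from the /etc/fstab definition; a minimal sketch, assuming the example mount point above:
[root@client ~]# mount /mnt/cephfs
[root@client ~]# stat -f /mnt/cephfs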
Additional Resources
- The ceph-fuse(8) manual page.
- See the Ceph user management chapter in the Red Hat Ceph Storage Administration Guide for more details on creating a Ceph user.
- See the Creating a Ceph File System section of the Red Hat Ceph Storage File System Guide for details.
3.8. Additional Resources
- See the Section 3.3, “Creating a Ceph File System” for details.
- See the Section 3.5, “Creating client users for a Ceph File System” for details.
- See the Section 3.6, “Mounting the Ceph File System as a kernel client” for details.
- See the Section 3.7, “Mounting the Ceph File System as a FUSE client” for details.
- See the Red Hat Ceph Storage Installation Guide for details on installing the CephFS Metadata Server.
- See the Chapter 2, The Ceph File System Metadata Server for details on configuring the CephFS Metadata Server daemon.