Chapter 6. NFS cluster and export management
As a storage administrator, you can create an NFS cluster, customize it, and export Ceph File System namespaces over the NFS protocol.
Prerequisites
- A running and healthy Red Hat Ceph Storage cluster.
- Installation and configuration of the Ceph Metadata Server daemons (ceph-mds).
- Create and mount a Ceph File System.
6.1. Creating an NFS cluster
Create an NFS cluster with the nfs cluster create command. This command creates a common recovery pool for all NFS Ganesha daemons, a new user based on the cluster name, and a common NFS Ganesha configuration RADOS object.
Prerequisites
- A running and healthy Red Hat Ceph Storage cluster.
- An existing Ceph File System.
- Root-level access to Ceph Monitor.
- Installation of the nfs-ganesha, nfs-ganesha-ceph, nfs-ganesha-rados-grace, and nfs-ganesha-rados-urls packages on the Ceph Manager hosts.
- Root-level access to the client.
Procedure
Log into the Cephadm shell:
Example
[root@mds ~]# cephadm shell
Enable the Ceph Manager NFS module:
Example
[ceph: root@host01 /]# ceph mgr module enable nfs
Create an NFS Ganesha cluster:
Syntax
ceph nfs cluster create CLUSTER_NAME [PLACEMENT] [--ingress] [--virtual_ip IP_ADDRESS] [--ingress-mode {default|keepalive-only|haproxy-standard|haproxy-protocol}] [--port PORT]
Example
[ceph: root@host01 /]# ceph nfs cluster create nfs-cephfs "host01 host02"
NFS Cluster Created Successfully
In this example, the NFS Ganesha cluster name is nfs-cephfs and the daemon containers are deployed to host01 and host02.
Important: Red Hat supports only one NFS Ganesha daemon running per host.
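If you plan to mount the exports through a single virtual IP address, the cluster can also be deployed behind an ingress service by using the --ingress and --virtual_ip options shown in the syntax above. The following is a minimal sketch; the cluster name nfs-ingress and the virtual IP address 10.10.128.75 are assumed values for illustration, so adjust them for your environment:
Example
[ceph: root@host01 /]# ceph nfs cluster create nfs-ingress "host01 host02" --ingress --virtual_ip 10.10.128.75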
Verify the NFS Ganesha cluster information:
Syntax
ceph nfs cluster info [CLUSTER_NAME]
Example
[ceph: root@host01 /]# ceph nfs cluster info nfs-cephfs
{
  "nfs-cephfs": [
    {
      "hostname": "host01",
      "ip": "10.74.179.124",
      "port": 2049
    },
    {
      "hostname": "host02",
      "ip": "10.74.180.160",
      "port": 2049
    }
  ]
}
Note: Specifying the CLUSTER_NAME is optional.
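To list all NFS Ganesha clusters rather than query a single cluster, you can use the ceph nfs cluster ls command. The output below is illustrative:
Example
[ceph: root@host01 /]# ceph nfs cluster ls
[
  "nfs-cephfs"
]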
6.2. Customizing an NFS configuration
Customize an NFS cluster with a configuration file. The NFS cluster then uses the specified configuration, which takes precedence over the default configuration blocks.
Prerequisites
- A running and healthy Red Hat Ceph Storage cluster.
- Root-level access to a Ceph Metadata Server (MDS) node.
- An NFS cluster created using the ceph nfs cluster create command.
Procedure
Create a configuration file:
Example
[ceph: root@host01 /]# touch nfs-cephfs.conf
Enable logging in the configuration file with the following block:
Example
[ceph: root@host01 /]# vi nfs-cephfs.conf

LOG {
  COMPONENTS {
    ALL = FULL_DEBUG;
  }
}
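FULL_DEBUG produces very verbose logs and is usually only needed while troubleshooting. As a sketch of a quieter configuration, assuming your NFS Ganesha version supports the EVENT log level, you could set the following instead:
Example
LOG {
  COMPONENTS {
    ALL = EVENT;
  }
}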
Set the new configuration:
Syntax
ceph nfs cluster config set CLUSTER_NAME -i PATH_TO_CONFIG_FILE
Example
[ceph: root@host01 /]# ceph nfs cluster config set nfs-cephfs -i nfs-cephfs.conf
NFS-Ganesha Config Set Successfully
View the customized NFS Ganesha configuration:
Syntax
ceph nfs cluster config get CLUSTER_NAME
Example
[ceph: root@host01 /]# ceph nfs cluster config get nfs-cephfs

LOG {
  COMPONENTS {
    ALL = FULL_DEBUG;
  }
}
This command displays the user-defined configuration, if any.
Optional: If you want to remove the user-defined configuration, run the following command:
Syntax
ceph nfs cluster config reset CLUSTER_NAME
Example
[ceph: root@host01 /]# ceph nfs cluster config reset nfs-cephfs
NFS-Ganesha Config Reset Successfully
6.3. Exporting Ceph File System namespaces over the NFS protocol (Limited Availability)
Ceph File System (CephFS) namespaces can be exported over the NFS protocol using an NFS Ganesha file server. To export a CephFS namespace, you must first have a running NFS Ganesha cluster.
This technology is Limited Availability. See the Deprecated functionality chapter for additional information.
Red Hat supports only NFS version 4.0 or higher.
NFS clients are unable to create CephFS snapshots through their native NFS mount. They must use server-side operator tooling for their snapshot needs.
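For example, when the exported path is a CephFS subvolume, an administrator can take the snapshot on the Ceph side with the ceph fs subvolume snapshot create command. This is a minimal sketch; the volume name cephfs01, subvolume name sub0, and snapshot name snap01 are assumed values:
Example
[ceph: root@host01 /]# ceph fs subvolume snapshot create cephfs01 sub0 snap01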
Prerequisites
- A running and healthy Red Hat Ceph Storage cluster.
- An NFS cluster created using the ceph nfs cluster create command.
Procedure
Create a CephFS export:
Note: Do not use the cmount_path option while creating an NFS export. Due to a known issue, if cmount_path is set to any value other than '/', previously defined NFS exports become inaccessible.
Syntax
ceph nfs export create cephfs CLUSTER_NAME BINDING FILE_SYSTEM_NAME [--readonly] [--path=PATH_WITHIN_CEPHFS]
Example
[ceph: root@host01 /]# ceph nfs export create cephfs nfs-cephfs /ceph cephfs01 --path=/
{
  "bind": "/ceph",
  "cluster": "nfs-cephfs",
  "fs": "cephfs01",
  "mode": "RW",
  "path": "/"
}
In this example, the BINDING (/ceph) is the pseudo root path, which must be unique and an absolute path.
Note: The --readonly option exports a path with read-only permission; the default is read and write permissions.
Note: The PATH_WITHIN_CEPHFS can be a subvolume. You can get the absolute subvolume path by using the following command:
Syntax
ceph fs subvolume getpath VOLUME_NAME SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME]
Example
[ceph: root@host01 /]# ceph fs subvolume getpath cephfs sub0
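You can then pass the returned subvolume path to the export create command by using the syntax shown earlier, where SUBVOLUME_PATH is the path printed by the getpath command:
Syntax
ceph nfs export create cephfs CLUSTER_NAME BINDING FILE_SYSTEM_NAME --path=SUBVOLUME_PATH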
View the export block based on the pseudo root name:
Syntax
ceph nfs export get CLUSTER_NAME BINDING
Example
[ceph: root@host01 /]# ceph nfs export get nfs-cephfs /ceph
{
  "export_id": 1,
  "path": "/",
  "cluster_id": "nfs-cephfs",
  "pseudo": "/ceph",
  "access_type": "RW",
  "squash": "no_root_squash",
  "security_label": true,
  "protocols": [
    4
  ],
  "transports": [
    "TCP"
  ],
  "fsal": {
    "name": "CEPH",
    "user_id": "cephnfs11",
    "fs_name": "cephfs",
    "sec_label_xattr": ""
  },
  "clients": []
}
List the NFS exports:
Syntax
ceph nfs export ls CLUSTER_NAME [--detailed]
Example
[ceph: root@host01 /]# ceph nfs export ls nfs-cephfs
[
  "/ceph/"
]

[ceph: root@host01 /]# ceph nfs export ls nfs-cephfs --detailed
[
  {
    "export_id": 100,
    "path": "/",
    "cluster_id": "nfs-cephfs",
    "pseudo": "/ceph/",
    "access_type": "RW",
    "squash": "no_root_squash",
    "security_label": true,
    "protocols": [
      4
    ],
    "transports": [
      "TCP"
    ],
    "fsal": {
      "name": "CEPH",
      "user_id": "nfstest01",
      "fs_name": "cephfs",
      "sec_label_xattr": ""
    },
    "clients": []
  }
]
Get the information of the NFS export:
Syntax
ceph nfs export info CLUSTER_NAME [PSEUDO_PATH]
Example
[ceph: root@host01 /]# ceph nfs export info nfs-cephfs /ceph
{
  "export_id": 1,
  "path": "/",
  "cluster_id": "nfs-cephfs",
  "pseudo": "/ceph",
  "access_type": "RW",
  "squash": "none",
  "security_label": true,
  "protocols": [
    4
  ],
  "transports": [
    "TCP"
  ],
  "fsal": {
    "name": "CEPH",
    "user_id": "nfs.nfs-cephfs.1",
    "fs_name": "cephfs"
  },
  "clients": []
}
On a client host, mount the exported Ceph File System:
Syntax
mount -t nfs -o port=GANESHA_PORT HOST_NAME:BINDING LOCAL_MOUNT_POINT
Example
[root@client01 ~]# mount -t nfs -o port=2049 host01:/ceph/ /mnt/nfs-cephfs
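Optionally, verify that the export is mounted as an NFS version 4 file system. This is a minimal check; the exact output depends on your client:
Example
[root@client01 ~]# df -hT /mnt/nfs-cephfs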
To automatically mount on boot, open and edit the /etc/fstab file by adding a new line:
Syntax
HOST_NAME:BINDING LOCAL_MOUNT_POINT nfs4 defaults,seclabel,vers=4.2,proto=tcp,port=2049 0 0
Example
host01:/ceph/ /mnt/nfs-cephfs nfs4 defaults,seclabel,vers=4.2,proto=tcp,port=2049 0 0
On a client host, to mount an exported NFS Ceph File System created with an ingress service:
Syntax
mount -t nfs VIRTUAL_IP_ADDRESS:BINDING LOCAL_MOUNT_POINT
- Replace VIRTUAL_IP_ADDRESS with the --ingress --virtual-ip IP address used to create the NFS cluster.
- Replace BINDING with the pseudo root path.
- Replace LOCAL_MOUNT_POINT with the mount point to mount the export on.
Example
[root@client01 ~]# mount -t nfs 10.10.128.75:/nfs-cephfs /mnt
This example mounts the export nfs-cephfs that exists on an NFS cluster created with --ingress --virtual-ip 10.10.128.75 on the mount point /mnt.
6.4. Modifying the Ceph File System exports
You can modify the following parameters in an export with a configuration file:
- access_type - This can be RW, RO, or NONE.
- squash - This can be No_Root_Squash, None, or Root_Squash.
- security_label - This can be true or false.
Prerequisites
- A running and healthy Red Hat Ceph Storage cluster.
- An NFS export created.
Procedure
View the export block based on the pseudo root name:
Syntax
ceph nfs export get CLUSTER_NAME BINDING
Example
[ceph: root@host01 /]# ceph nfs export get nfs-cephfs /ceph
{
  "export_id": 1,
  "path": "/",
  "cluster_id": "nfs-cephfs",
  "pseudo": "/ceph",
  "access_type": "RO",
  "squash": "none",
  "security_label": true,
  "protocols": [
    4
  ],
  "transports": [
    "TCP"
  ],
  "fsal": {
    "name": "CEPH",
    "user_id": "cephnfs11",
    "fs_name": "cephfs",
    "sec_label_xattr": ""
  },
  "clients": []
}
Export the configuration file:
Example
[ceph: root@host01 /]# ceph nfs export get nfs-cephfs /ceph > export.conf
Edit the export information:
Syntax
{ "export_id": EXPORT_ID, "path": "/", "cluster_id": "CLUSTER_NAME", "pseudo": "CLUSTER_PSEUDO_PATH", "access_type": "RW/RO", "squash": "SQUASH", "security_label": SECURITY_LABEL, "protocols": [ PROTOCOL_ID_ ], "transports": [ "TCP" ], "fsal": { "name": "NAME", "user_id": "USER_ID", "fs_name": "FILE_SYSTEM_NAME", "sec_label_xattr": "" }, "clients": [] }
Example
[ceph: root@host01 /]# vi export.conf

{
  "export_id": 1,
  "path": "/",
  "cluster_id": "nfs-cephfs",
  "pseudo": "/ceph",
  "access_type": "RW",
  "squash": "none",
  "security_label": true,
  "protocols": [
    4
  ],
  "transports": [
    "TCP"
  ],
  "fsal": {
    "name": "CEPH",
    "user_id": "cephnfs11",
    "fs_name": "cephfs",
    "sec_label_xattr": ""
  },
  "clients": []
}
In the above example, the access_type is modified from RO to RW.
Apply the specification:
Syntax
ceph nfs export apply CLUSTER_NAME -i PATH_TO_EXPORT_FILE
Example
[ceph: root@host01 /]# ceph nfs export apply nfs-cephfs -i export.conf
Added export /ceph
Get the updated export information:
Syntax
ceph nfs export get CLUSTER_NAME BINDING
Example
[ceph: root@host01 /]# ceph nfs export get nfs-cephfs /ceph
{
  "export_id": 1,
  "path": "/",
  "cluster_id": "nfs-cephfs",
  "pseudo": "/ceph",
  "access_type": "RW",
  "squash": "none",
  "security_label": true,
  "protocols": [
    4
  ],
  "transports": [
    "TCP"
  ],
  "fsal": {
    "name": "CEPH",
    "user_id": "cephnfs11",
    "fs_name": "cephfs",
    "sec_label_xattr": ""
  },
  "clients": []
}
6.5. Creating custom Ceph File System exports
You can customize the Ceph File System (CephFS) exports and apply the configuration.
Prerequisites
- A running and healthy Red Hat Ceph Storage cluster.
- An NFS cluster created using the ceph nfs cluster create command.
- A CephFS created.
Procedure
Create a custom file:
Example
[ceph: root@host01 /]# touch export_new.conf
Add the export definition to the custom file:
Syntax
EXPORT {
  Export_Id = EXPORT_ID;
  Transports = TCP/UDP;
  Path = PATH;
  Pseudo = PSEUDO_PATH;
  Protocols = NFS_PROTOCOLS;
  Access_Type = ACCESS_TYPE;
  Attr_Expiration_Time = EXPIRATION_TIME;
  Squash = SQUASH;

  FSAL {
    Name = NAME;
    Filesystem = "CEPH_FILE_SYSTEM_NAME";
    User_Id = "USER_ID";
  }
}
Example
[ceph: root@host01 /]# cat export_new.conf

EXPORT {
  Export_Id = 2;
  Transports = TCP;
  Path = /;
  Pseudo = /ceph1/;
  Protocols = 4;
  Access_Type = RW;
  Attr_Expiration_Time = 0;
  Squash = None;

  FSAL {
    Name = CEPH;
    Filesystem = "cephfs";
    User_Id = "nfs.nfs-cephfs.2";
  }
}
Apply the specification:
Syntax
ceph nfs export apply CLUSTER_NAME -i PATH_TO_EXPORT_FILE
Example
[ceph: root@host01 /]# ceph nfs export apply nfs-cephfs -i export_new.conf
Added export /ceph1
Get the updated export information:
Syntax
ceph nfs export get CLUSTER_NAME BINDING
Example
[ceph: root@host01 /]# ceph nfs export get nfs-cephfs /ceph1
{
  "export_id": 2,
  "path": "/",
  "cluster_id": "nfs-cephfs",
  "pseudo": "/ceph1",
  "access_type": "RW",
  "squash": "None",
  "security_label": true,
  "protocols": [
    4
  ],
  "transports": [
    "TCP"
  ],
  "fsal": {
    "name": "CEPH",
    "user_id": "nfs.nfs-cephfs.2",
    "fs_name": "cephfs",
    "sec_label_xattr": ""
  },
  "clients": []
}
6.6. Deleting Ceph File System exports
You can delete Ceph File System (CephFS) NFS exports with the ceph nfs export rm command.
Prerequisites
- A running and healthy Red Hat Ceph Storage cluster.
- A CephFS created.
Procedure
Delete a CephFS export:
Syntax
ceph nfs export rm CLUSTER_NAME BINDING
Example
[ceph: root@host01 /]# ceph nfs export rm nfs-cephfs /ceph
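To confirm that the export is removed, list the remaining exports for the cluster; the deleted pseudo path should no longer appear. The empty list below is illustrative:
Example
[ceph: root@host01 /]# ceph nfs export ls nfs-cephfs
[]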
6.7. Deleting an NFS cluster
Delete an NFS cluster with the nfs cluster rm command. This deletes the deployed cluster. The removal of NFS daemons and the ingress service is asynchronous. Check the status of the removal with the ceph orch ls command.
Prerequisites
- A running and healthy Red Hat Ceph Storage cluster.
- Root-level access to a Ceph Metadata Server (MDS) node.
- NFS daemons deployed with the ceph nfs cluster create command.
Procedure
Log into the Cephadm shell:
Example
[root@mds ~]# cephadm shell
Remove an NFS Ganesha cluster:
Syntax
ceph nfs cluster rm CLUSTER_NAME
Example
[ceph: root@host01 /]# ceph nfs cluster rm nfs-cephfs
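Because the removal of the NFS daemons and the ingress service is asynchronous, you can confirm that the corresponding services are no longer listed by using the ceph orch ls command mentioned above. This is a minimal check; the services may take a short time to disappear from the output:
Example
[ceph: root@host01 /]# ceph orch ls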