Chapter 11. Management of NFS-Ganesha gateway using the Ceph Orchestrator (Limited Availability)
As a storage administrator, you can use the Orchestrator with Cephadm in the backend to deploy the NFS-Ganesha gateway. Cephadm deploys NFS-Ganesha using a predefined RADOS pool and an optional namespace.
This technology is Limited Availability. See the Deprecated functionality chapter for additional information.
Red Hat supports CephFS exports only over the NFS v4.0+ protocol.
This section covers the following administrative tasks:
- Creating the NFS-Ganesha cluster using the Ceph Orchestrator.
- Deploying the NFS-Ganesha gateway using the command line interface.
- Deploying the NFS-Ganesha gateway using the service specification.
- Implementing HA for CephFS/NFS service.
- Updating the NFS-Ganesha cluster using the Ceph Orchestrator.
- Viewing the NFS-Ganesha cluster information using the Ceph Orchestrator.
- Fetching the NFS-Ganesha cluster logs using the Ceph Orchestrator.
- Setting custom NFS-Ganesha configuration using the Ceph Orchestrator.
- Resetting custom NFS-Ganesha configuration using the Ceph Orchestrator.
- Deleting the NFS-Ganesha cluster using the Ceph Orchestrator.
- Removing the NFS-Ganesha gateway using the Ceph Orchestrator.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to all the nodes.
- Hosts are added to the cluster.
- All manager, monitor and OSD daemons are deployed.
11.1. Creating the NFS-Ganesha cluster using the Ceph Orchestrator
You can create an NFS-Ganesha cluster using the mgr/nfs module of the Ceph Orchestrator. This module deploys the NFS cluster using Cephadm in the backend.
Creating a cluster creates a common recovery pool for all NFS-Ganesha daemons, a new user based on the cluster ID, and a common NFS-Ganesha configuration RADOS object.
For each daemon, a new user and a common configuration are created in the pool. Although the clusters use different namespaces, derived from their cluster names, they share the same recovery pool.
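Because every cluster writes its objects to the same recovery pool under its own namespace, you can inspect the per-cluster objects with a plain RADOS listing. The following is a minimal sketch, assuming the default .nfs recovery pool used by the mgr/nfs module and a cluster named nfsganesha:
Example
[ceph: root@host01 /]# rados -p .nfs --all ls
[ceph: root@host01 /]# rados -p .nfs -N nfsganesha ls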
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Hosts are added to the cluster.
- All manager, monitor and OSD daemons are deployed.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
Enable the mgr/nfs module:
Example
[ceph: root@host01 /]# ceph mgr module enable nfs
Create the cluster:
Syntax
ceph nfs cluster create CLUSTER_NAME ["HOST_NAME_1 HOST_NAME_2 HOST_NAME_3"]
The CLUSTER_NAME is an arbitrary string, and HOST_NAME_1, HOST_NAME_2, and so on are optional strings specifying the hosts on which to deploy the NFS-Ganesha daemons.
Example
[ceph: root@host01 /]# ceph nfs cluster create nfsganesha "host01 host02"

NFS Cluster Created Successful
This creates an NFS-Ganesha cluster named nfsganesha with one daemon each on host01 and host02.
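The placement string can also begin with a daemon count. The following is a hedged variant, assuming the standard Cephadm placement syntax of a count followed by host names, which requests two NFS-Ganesha daemons spread across the two hosts:
Example
[ceph: root@host01 /]# ceph nfs cluster create nfsganesha "2 host01 host02"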
Verification
List the cluster details:
Example
[ceph: root@host01 /]# ceph nfs cluster ls
Show NFS-Ganesha cluster information:
Syntax
ceph nfs cluster info CLUSTER_NAME
Example
[ceph: root@host01 /]# ceph nfs cluster info nfsganesha
Additional Resources
- See Exporting Ceph File System namespaces over the NFS protocol section in the Red Hat Ceph Storage File System Guide for more information.
- See Deploying the Ceph daemons using the service specification section in the Red Hat Ceph Storage Operations Guide for more information.
11.2. Deploying the NFS-Ganesha gateway using the command line interface
You can use the Ceph Orchestrator with Cephadm in the backend to deploy the NFS-Ganesha gateway using the placement specification. In this case, you have to create a RADOS pool and create a namespace before deploying the gateway.
Red Hat supports CephFS exports only over the NFS v4.0+ protocol.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Hosts are added to the cluster.
- All manager, monitor and OSD daemons are deployed.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
Create the RADOS pool and enable the application on the pool. For RBD pools, also initialize RBD.
Syntax
ceph osd pool create POOL_NAME
ceph osd pool application enable POOL_NAME freeform/rgw/rbd/cephfs/nfs
rbd pool init -p POOL_NAME
Example
[ceph: root@host01 /]# ceph osd pool create nfs-ganesha
[ceph: root@host01 /]# ceph osd pool application enable nfs-ganesha nfs
[ceph: root@host01 /]# rbd pool init -p nfs-ganesha
Deploy the NFS-Ganesha gateway using a placement specification on the command-line interface:
Syntax
ceph orch apply nfs SERVICE_ID --placement="NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 HOST_NAME_3"
Example
[ceph: root@host01 /]# ceph orch apply nfs foo --placement="2 host01 host02"
This deploys an NFS-Ganesha cluster named foo with one daemon each on host01 and host02.
Verification
List the service:
Example
[ceph: root@host01 /]# ceph orch ls
List the hosts, daemons, and processes:
Syntax
ceph orch ps --daemon_type=DAEMON_NAME
Example
[ceph: root@host01 /]# ceph orch ps --daemon_type=nfs
Additional Resources
- See Deploying the Ceph daemons using the command line interface section in the Red Hat Ceph Storage Operations Guide for more information.
- See the Creating a block device pool section in the Red Hat Ceph Storage Block Device Guide for more information.
11.3. Deploying the NFS-Ganesha gateway using the service specification
You can use the Ceph Orchestrator with Cephadm in the backend to deploy the NFS-Ganesha gateway using the service specification. In this case, you have to create a RADOS pool and create a namespace before deploying the gateway.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Hosts are added to the cluster.
Procedure
Create the nfs.yaml file:
Example
[root@host01 ~]# touch nfs.yaml
Edit the nfs.yaml file to include the following details:
Syntax
service_type: nfs
service_id: SERVICE_ID
placement:
  hosts:
    - HOST_NAME_1
    - HOST_NAME_2
Example
service_type: nfs
service_id: foo
placement:
  hosts:
    - host01
    - host02
Mount the YAML file under a directory in the container:
Example
[root@host01 ~]# cephadm shell --mount nfs.yaml:/var/lib/ceph/nfs.yaml
Create the RADOS pool, enable the rbd application on it, and initialize the pool:
Syntax
ceph osd pool create POOL_NAME
ceph osd pool application enable POOL_NAME rbd
rbd pool init -p POOL_NAME
Example
[ceph: root@host01 /]# ceph osd pool create nfs-ganesha
[ceph: root@host01 /]# ceph osd pool application enable nfs-ganesha rbd
[ceph: root@host01 /]# rbd pool init -p nfs-ganesha
Navigate to the directory:
Example
[ceph: root@host01 /]# cd /var/lib/ceph/
Deploy the NFS-Ganesha gateway using the service specification:
Syntax
ceph orch apply -i FILE_NAME.yaml
Example
[ceph: root@host01 ceph]# ceph orch apply -i nfs.yaml
Verification
List the service:
Example
[ceph: root@host01 /]# ceph orch ls
List the hosts, daemons, and processes:
Syntax
ceph orch ps --daemon_type=DAEMON_NAME
Example
[ceph: root@host01 /]# ceph orch ps --daemon_type=nfs
Additional Resources
- See the Creating a block device pool section in the Red Hat Ceph Storage Block Device Guide for more information.
11.4. Implementing HA for CephFS/NFS service (Technology Preview)
You can deploy NFS with a high-availability (HA) front end, virtual IP, and load balancer by using the --ingress flag and specifying a virtual IP address. This deploys a combination of keepalived and haproxy and provides a highly available front end for the NFS service.
When a cluster is created with the --ingress flag, an ingress service is additionally deployed to provide load balancing and high availability for the NFS servers. A virtual IP provides a known, stable NFS endpoint that all NFS clients can use to mount. Ceph handles the details of redirecting NFS traffic on the virtual IP to the appropriate backend NFS servers and redeploys NFS servers when they fail.
Deploying an ingress service for an existing service provides:
- A stable, virtual IP that can be used to access the NFS server.
- Load distribution across multiple NFS gateways.
- Failover between hosts in the event of a host failure.
HA for CephFS/NFS is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See the support scope for Red Hat Technology Preview features for more details.
When an ingress service is deployed in front of the NFS cluster, the backend NFS-Ganesha servers see the haproxy IP address and not the client's IP address. As a result, if you restrict client access based on IP address, access restrictions for NFS exports do not work as expected.
If the active NFS server serving a client goes down, the client’s I/Os are interrupted until the replacement for the active NFS server is online and the NFS cluster is active again.
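For reference, NFS clients reach the service through the virtual IP rather than the individual NFS-Ganesha hosts. The following is a minimal client-side sketch, assuming the virtual IP 10.10.128.75 used in the examples below and a hypothetical export pseudo path /cephfs-export:
Example
[root@client ~]# mkdir -p /mnt/nfs
[root@client ~]# mount -t nfs -o nfsvers=4.1,proto=tcp 10.10.128.75:/cephfs-export /mnt/nfs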
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Hosts are added to the cluster.
- All the manager, monitor, and OSD daemons are deployed.
- Ensure the NFS module is enabled.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
Create the NFS cluster with the --ingress flag:
Syntax
ceph nfs cluster create CLUSTER_ID [PLACEMENT] [--port PORT_NUMBER] [--ingress --virtual-ip IP_ADDRESS/CIDR_PREFIX]
- Replace CLUSTER_ID with a unique string to name the NFS-Ganesha cluster.
- Replace PLACEMENT with the number of NFS servers to deploy and the host or hosts on which you want to deploy the NFS-Ganesha daemon containers.
- Use the --port PORT_NUMBER flag to deploy NFS on a port other than the default port of 2049.
- The --ingress flag, combined with the --virtual-ip flag, deploys NFS with a high-availability front end (virtual IP and load balancer).
- Replace IP_ADDRESS with an IP address to provide a known, stable NFS endpoint that all clients can use to mount NFS exports. The --virtual-ip value must include a CIDR prefix length. The virtual IP is normally configured on the first identified network interface that has an existing IP address in the same subnet.
Note
The number of hosts you allocate for the NFS service must be greater than the number of active NFS servers you request to deploy, which is specified by the placement: count parameter. In the example below, one active NFS server is requested and two hosts are allocated.
Example
[ceph: root@host01 /]# ceph nfs cluster create mycephnfs "1 host02 host03" --ingress --virtual-ip 10.10.128.75/22
Note
Deployment of NFS daemons and the ingress service is asynchronous and the command might return before the services have completely started.
Check that the services have successfully started:
Syntax
ceph orch ls --service_name=nfs.CLUSTER_ID
ceph orch ls --service_name=ingress.nfs.CLUSTER_ID
Example
[ceph: root@host01 /]# ceph orch ls --service_name=nfs.mycephnfs

NAME           PORTS    RUNNING  REFRESHED  AGE  PLACEMENT
nfs.mycephnfs  ?:12049      1/2  0s ago     20s  host02;host03

[ceph: root@host01 /]# ceph orch ls --service_name=ingress.nfs.mycephnfs

NAME                   PORTS                   RUNNING  REFRESHED  AGE  PLACEMENT
ingress.nfs.mycephnfs  10.10.128.75:2049,9049      4/4  46s ago    73s  count:2
Verification
View the IP endpoints, the IPs for the individual NFS daemons, and the virtual IP for the ingress service:
Syntax
ceph nfs cluster info CLUSTER_ID
Example
[ceph: root@host01 /]# ceph nfs cluster info mycephnfs

{
    "mycephnfs": {
        "virtual_ip": "10.10.128.75",
        "backend": [
            {
                "hostname": "host02",
                "ip": "10.10.128.69",
                "port": 12049
            },
            {
                "hostname": "host03",
                "ip": "10.10.128.70",
                "port": 12049
            }
        ],
        "port": 2049,
        "monitor_port": 9049
    }
}
List the hosts and processes:
Example
[ceph: root@host01 /]# ceph orch ps | grep nfs

haproxy.nfs.cephnfs.host01.rftylv     host01  *:2049,9000  running (11m)  10m ago  11m  23.2M  -  2.2.19-7ea3822  5e6a41d77b38  f8cc61dc827e
haproxy.nfs.cephnfs.host02.zhtded     host02  *:2049,9000  running (11m)  53s ago  11m  21.3M  -  2.2.19-7ea3822  5e6a41d77b38  4cad324e0e23
keepalived.nfs.cephnfs.host01.zktmsk  host01               running (11m)  10m ago  11m  2349k  -  2.1.5           18fa163ab18f  66bf39784993
keepalived.nfs.cephnfs.host02.vyycvp  host02               running (11m)  53s ago  11m  2349k  -  2.1.5           18fa163ab18f  1ecc95a568b4
nfs.cephnfs.0.0.host02.fescmw         host02  *:12049      running (14m)  3m ago   14m  76.9M  -  3.5             cef6e7959b0a  bb0e4ee9484e
nfs.cephnfs.1.0.host03.avaddf         host03  *:12049      running (14m)  3m ago   14m  74.3M  -  3.5             cef6e7959b0a  ea02c0c50749
Additional resources
- For information about mounting NFS exports on client hosts, see the Exporting Ceph File System namespaces over the NFS protocol section in the Red Hat Ceph Storage File System Guide.
11.5. Upgrading a standalone CephFS/NFS cluster for HA
As a storage administrator, you can upgrade a standalone CephFS/NFS cluster to a high-availability (HA) cluster by deploying the ingress service on the existing NFS service.
Prerequisites
- A running Red Hat Ceph Storage cluster with an existing NFS service.
- Hosts are added to the cluster.
- All the manager, monitor, and OSD daemons are deployed.
- Ensure the NFS module is enabled.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
List existing NFS clusters:
Example
[ceph: root@host01 /]# ceph nfs cluster ls

mynfs
Note
If a standalone NFS cluster was created on one node, you need to increase it to two or more nodes for HA. To increase the NFS service, edit the nfs.yaml file and increase the placements, keeping the same port number.
The number of hosts you allocate for the NFS service must be greater than the number of active NFS servers you request to deploy, which is specified by the placement: count parameter.
Syntax
service_type: nfs
service_id: SERVICE_ID
placement:
  hosts:
    - HOST_NAME_1
    - HOST_NAME_2
  count: COUNT
spec:
  port: PORT_NUMBER
Example
service_type: nfs
service_id: mynfs
placement:
  hosts:
    - host02
    - host03
  count: 1
spec:
  port: 12345
In this example, the existing NFS service is running on port 12345, and an additional node is added to the NFS cluster on the same port.
Apply the nfs.yaml service specification changes to upgrade to a two-node NFS service:
Example
[ceph: root@host01 ceph]# ceph orch apply -i nfs.yaml
Edit the ingress.yaml specification file with the existing NFS cluster ID:
Syntax
service_type: SERVICE_TYPE
service_id: SERVICE_ID
placement:
  count: PLACEMENT
spec:
  backend_service: SERVICE_ID_BACKEND
  frontend_port: FRONTEND_PORT
  monitor_port: MONITOR_PORT
  virtual_ip: VIRTUAL_IP_WITH_CIDR
Example
service_type: ingress
service_id: nfs.mynfs
placement:
  count: 2
spec:
  backend_service: nfs.mynfs
  frontend_port: 2049
  monitor_port: 9000
  virtual_ip: 10.10.128.75/22
Deploy the ingress service:
Example
[ceph: root@host01 /]# ceph orch apply -i ingress.yaml
Note
Deployment of NFS daemons and the ingress service is asynchronous and the command might return before the services have completely started.
Check that the ingress services have successfully started:
Syntax
ceph orch ls --service_name=ingress.nfs.CLUSTER_ID
Example
[ceph: root@host01 /]# ceph orch ls --service_name=ingress.nfs.mynfs

NAME               PORTS                   RUNNING  REFRESHED  AGE  PLACEMENT
ingress.nfs.mynfs  10.10.128.75:2049,9000      4/4  4m ago     22m  count:2
Verification
View the IP endpoints, the IPs for the individual NFS daemons, and the virtual IP for the ingress service:
Syntax
ceph nfs cluster info CLUSTER_ID
Example
[ceph: root@host01 /]# ceph nfs cluster info mynfs

{
    "mynfs": {
        "virtual_ip": "10.10.128.75",
        "backend": [
            {
                "hostname": "host02",
                "ip": "10.10.128.69",
                "port": 12049
            },
            {
                "hostname": "host03",
                "ip": "10.10.128.70",
                "port": 12049
            }
        ],
        "port": 2049,
        "monitor_port": 9049
    }
}
List the hosts and processes:
Example
[ceph: root@host01 /]# ceph orch ps | grep nfs

haproxy.nfs.mynfs.host01.ruyyhq     host01  *:2049,9000  running (27m)  6m ago  34m  9.85M  -  2.2.19-7ea3822  5e6a41d77b38  328d27b3f706
haproxy.nfs.mynfs.host02.ctrhha     host02  *:2049,9000  running (34m)  6m ago  34m  4944k  -  2.2.19-7ea3822  5e6a41d77b38  4f4440dbfde9
keepalived.nfs.mynfs.host01.fqgjxd  host01               running (27m)  6m ago  34m  31.2M  -  2.1.5           18fa163ab18f  0e22b2b101df
keepalived.nfs.mynfs.host02.fqzkxb  host02               running (34m)  6m ago  34m  17.5M  -  2.1.5           18fa163ab18f  c1e3cc074cf8
nfs.mynfs.0.0.host02.emoaut         host02  *:12345      running (37m)  6m ago  37m  82.7M  -  3.5             91322de4f795  2d00faaa2ae5
nfs.mynfs.1.0.host03.nsxcfd         host03  *:12345      running (37m)  6m ago  37m  81.1M  -  3.5             91322de4f795  d4bda4074f17
Additional resources
- For information about mounting NFS exports on client hosts, see the Exporting Ceph File System namespaces over the NFS protocol section in the Red Hat Ceph Storage File System Guide.
11.6. Deploying HA for CephFS/NFS using a specification file
You can deploy HA for CephFS/NFS using a specification file by first deploying an NFS service and then deploying ingress to the same NFS service.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Hosts are added to the cluster.
- All the manager, monitor, and OSD daemons are deployed.
- Ensure the NFS module is enabled.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
Ensure the NFS module is enabled:
Example
[ceph: root@host01 /]# ceph mgr module ls | more
Exit the Cephadm shell and create the nfs.yaml file:
Example
[root@host01 ~]# touch nfs.yaml
Edit the nfs.yaml file to include the following details:
Syntax
service_type: nfs
service_id: SERVICE_ID
placement:
  hosts:
    - HOST_NAME_1
    - HOST_NAME_2
  count: COUNT
spec:
  port: PORT_NUMBER
Note
The number of hosts you allocate for the NFS service must be greater than the number of active NFS servers you request to deploy, specified by the placement: count parameter.
Example
service_type: nfs
service_id: cephfsnfs
placement:
  hosts:
    - host02
    - host03
  count: 1
spec:
  port: 12345
In this example, the server runs on the non-default port of 12345, instead of the default port of 2049, on host02 and host03.
Mount the YAML file under a directory in the container:
Example
[root@host01 ~]# cephadm shell --mount nfs.yaml:/var/lib/ceph/nfs.yaml
Log into the Cephadm shell and navigate to the directory:
Example
[ceph: root@host01 /]# cd /var/lib/ceph/
Deploy the NFS service using the service specification:
Syntax
ceph orch apply -i FILE_NAME.yaml
Example
[ceph: root@host01 ceph]# ceph orch apply -i nfs.yaml
Note
Deployment of the NFS service is asynchronous and the command might return before the services have completely started.
Check that the NFS services have successfully started:
Syntax
ceph orch ls --service_name=nfs.CLUSTER_ID
Example
[ceph: root@host01 /]# ceph orch ls --service_name=nfs.cephfsnfs

NAME           PORTS    RUNNING  REFRESHED  AGE  PLACEMENT
nfs.cephfsnfs  ?:12345      2/2  3m ago     13m  host02;host03
Exit the Cephadm shell and create the ingress.yaml file:
Example
[root@host01 ~]# touch ingress.yaml
Edit the ingress.yaml file to include the following details:
Syntax
service_type: SERVICE_TYPE
service_id: SERVICE_ID
placement:
  count: PLACEMENT
spec:
  backend_service: SERVICE_ID_BACKEND
  frontend_port: FRONTEND_PORT
  monitor_port: MONITOR_PORT
  virtual_ip: VIRTUAL_IP_WITH_CIDR
Example
service_type: ingress
service_id: nfs.cephfsnfs
placement:
  count: 2
spec:
  backend_service: nfs.cephfsnfs
  frontend_port: 2049
  monitor_port: 9000
  virtual_ip: 10.10.128.75/22
Note
In this example, placement: count: 2 deploys the keepalived and haproxy services on random nodes. To specify the nodes on which to deploy keepalived and haproxy, use the placement: hosts option:
Example
service_type: ingress
service_id: nfs.cephfsnfs
placement:
  hosts:
    - host02
    - host03
Mount the YAML file under a directory in the container:
Example
[root@host01 ~]# cephadm shell --mount ingress.yaml:/var/lib/ceph/ingress.yaml
Log into the Cephadm shell and navigate to the directory:
Example
[ceph: root@host01 /]# cd /var/lib/ceph/
Deploy the ingress service using the service specification:
Syntax
ceph orch apply -i FILE_NAME.yaml
Example
[ceph: root@host01 ceph]# ceph orch apply -i ingress.yaml
Check that the ingress services have successfully started:
Syntax
ceph orch ls --service_name=ingress.nfs.CLUSTER_ID
Example
[ceph: root@host01 /]# ceph orch ls --service_name=ingress.nfs.cephfsnfs

NAME                   PORTS                   RUNNING  REFRESHED  AGE  PLACEMENT
ingress.nfs.cephfsnfs  10.10.128.75:2049,9000      4/4  4m ago     22m  count:2
Verification
View the IP endpoints, the IPs for the individual NFS daemons, and the virtual IP for the ingress service:
Syntax
ceph nfs cluster info CLUSTER_ID
Example
[ceph: root@host01 /]# ceph nfs cluster info cephfsnfs

{
    "cephfsnfs": {
        "virtual_ip": "10.10.128.75",
        "backend": [
            {
                "hostname": "host02",
                "ip": "10.10.128.69",
                "port": 12345
            },
            {
                "hostname": "host03",
                "ip": "10.10.128.70",
                "port": 12345
            }
        ],
        "port": 2049,
        "monitor_port": 9049
    }
}
List the hosts and processes:
Example
[ceph: root@host01 /]# ceph orch ps | grep nfs

haproxy.nfs.cephfsnfs.host01.ruyyhq     host01  *:2049,9000  running (27m)  6m ago  34m  9.85M  -  2.2.19-7ea3822  5e6a41d77b38  328d27b3f706
haproxy.nfs.cephfsnfs.host02.ctrhha     host02  *:2049,9000  running (34m)  6m ago  34m  4944k  -  2.2.19-7ea3822  5e6a41d77b38  4f4440dbfde9
keepalived.nfs.cephfsnfs.host01.fqgjxd  host01               running (27m)  6m ago  34m  31.2M  -  2.1.5           18fa163ab18f  0e22b2b101df
keepalived.nfs.cephfsnfs.host02.fqzkxb  host02               running (34m)  6m ago  34m  17.5M  -  2.1.5           18fa163ab18f  c1e3cc074cf8
nfs.cephfsnfs.0.0.host02.emoaut         host02  *:12345      running (37m)  6m ago  37m  82.7M  -  3.5             91322de4f795  2d00faaa2ae5
nfs.cephfsnfs.1.0.host03.nsxcfd         host03  *:12345      running (37m)  6m ago  37m  81.1M  -  3.5             91322de4f795  d4bda4074f17
Additional resources
- For information about mounting NFS exports on client hosts, see the Exporting Ceph File System namespaces over the NFS protocol section in the Red Hat Ceph Storage File System Guide.
11.7. Updating the NFS-Ganesha cluster using the Ceph Orchestrator
You can update the NFS-Ganesha cluster by changing the placement of the daemons on the hosts using the Ceph Orchestrator with Cephadm in the backend.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Hosts are added to the cluster.
- All manager, monitor and OSD daemons are deployed.
- NFS-Ganesha cluster created using the mgr/nfs module.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
Update the cluster:
Syntax
ceph orch apply nfs CLUSTER_NAME ["HOST_NAME_1,HOST_NAME_2,HOST_NAME_3"]
The CLUSTER_NAME is an arbitrary string, and HOST_NAME_1, HOST_NAME_2, and so on are optional strings specifying the hosts on which the deployed NFS-Ganesha daemons should run.
Example
[ceph: root@host01 /]# ceph orch apply nfs nfsganesha "host02"
This updates the nfsganesha cluster to run on host02.
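You can express the same update with the --placement flag and an explicit daemon count. The following is a hedged equivalent that uses the placement syntax shown earlier in this chapter; the count and host list are illustrative:
Example
[ceph: root@host01 /]# ceph orch apply nfs nfsganesha --placement="2 host02 host03"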
Verification
List the cluster details:
Example
[ceph: root@host01 /]# ceph nfs cluster ls
Show NFS-Ganesha cluster information:
Syntax
ceph nfs cluster info CLUSTER_NAME
Example
[ceph: root@host01 /]# ceph nfs cluster info nfsganesha
List the hosts, daemons, and processes:
Syntax
ceph orch ps --daemon_type=DAEMON_NAME
Example
[ceph: root@host01 /]# ceph orch ps --daemon_type=nfs
Additional Resources
- See Creating the NFS-Ganesha cluster using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide for more information.
11.8. Viewing the NFS-Ganesha cluster information using the Ceph Orchestrator
You can view information about the NFS-Ganesha cluster using the Ceph Orchestrator. You can get information about all NFS-Ganesha clusters or about a specific cluster, including its port, IP address, and the names of the hosts on which the cluster is deployed.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Hosts are added to the cluster.
- All manager, monitor and OSD daemons are deployed.
- NFS-Ganesha cluster created using the mgr/nfs module.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
View the NFS-Ganesha cluster information:
Syntax
ceph nfs cluster info CLUSTER_NAME
Example
[ceph: root@host01 /]# ceph nfs cluster info nfsganesha

{
    "nfsganesha": [
        {
            "hostname": "host02",
            "ip": [
                "10.10.128.70"
            ],
            "port": 2049
        }
    ]
}
Additional Resources
- See Creating the NFS-Ganesha cluster using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide for more information.
11.9. Fetching the NFS-Ganesha cluster logs using the Ceph Orchestrator
With the Ceph Orchestrator, you can fetch the NFS-Ganesha cluster logs. You need to be on the node where the service is deployed.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Cephadm installed on the nodes where NFS is deployed.
- Root-level access to all the nodes.
- Hosts are added to the cluster.
- NFS-Ganesha cluster created using the mgr/nfs module.
Procedure
As a root user, fetch the FSID of the storage cluster:
Example
[root@host03 ~]# cephadm ls
Copy the FSID and the name of the service.
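If you are not sure which host runs a particular NFS-Ganesha daemon, you can list the NFS daemons and their hosts from within the Cephadm shell by using the same query shown elsewhere in this chapter:
Example
[ceph: root@host01 /]# ceph orch ps --daemon_type=nfs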
Fetch the logs:
Syntax
cephadm logs --fsid FSID --name SERVICE_NAME
Example
[root@host03 ~]# cephadm logs --fsid 499829b4-832f-11eb-8d6d-001a4a000635 --name nfs.foo.host03
Additional Resources
- See Deploying the Ceph daemons using the placement specification section in the Red Hat Ceph Storage Operations Guide for more information.
11.10. Setting custom NFS-Ganesha configuration using the Ceph Orchestrator
The NFS-Ganesha cluster is defined by default configuration blocks. Using the Ceph Orchestrator, you can customize the configuration, and the custom configuration takes precedence over the default configuration blocks.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Hosts are added to the cluster.
- All manager, monitor and OSD daemons are deployed.
- NFS-Ganesha cluster created using the mgr/nfs module.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
The following is an example of the default configuration of an NFS-Ganesha cluster:
Example
# {{ cephadm_managed }}
NFS_CORE_PARAM {
        Enable_NLM = false;
        Enable_RQUOTA = false;
        Protocols = 4;
}

MDCACHE {
        Dir_Chunk = 0;
}

EXPORT_DEFAULTS {
        Attr_Expiration_Time = 0;
}

NFSv4 {
        Delegations = false;
        RecoveryBackend = 'rados_cluster';
        Minor_Versions = 1, 2;
}

RADOS_KV {
        UserId = "{{ user }}";
        nodeid = "{{ nodeid}}";
        pool = "{{ pool }}";
        namespace = "{{ namespace }}";
}

RADOS_URLS {
        UserId = "{{ user }}";
        watch_url = "{{ url }}";
}

RGW {
        cluster = "ceph";
        name = "client.{{ rgw_user }}";
}

%url    {{ url }}
Customize the NFS-Ganesha cluster configuration. The following are two examples for customizing the configuration:
Change the log level:
Example
LOG {
    COMPONENTS {
        ALL = FULL_DEBUG;
    }
}
Add a custom export block:
Create the user.
Note
The user specified in the FSAL blocks must have proper caps for the NFS-Ganesha daemons to access the Ceph cluster.
Syntax
ceph auth get-or-create client.USER_ID mon 'allow r' osd 'allow rw pool=.nfs namespace=NFS_CLUSTER_NAME, allow rw tag cephfs data=FS_NAME' mds 'allow rw path=EXPORT_PATH'
Example
[ceph: root@host01 /]# ceph auth get-or-create client.f64f341c-655d-11eb-8778-fa163e914bcc mon 'allow r' osd 'allow rw pool=.nfs namespace=nfs_cluster_name, allow rw tag cephfs data=fs_name' mds 'allow rw path=export_path'
Navigate to the following directory:
Syntax
cd /var/lib/ceph/DAEMON_PATH/
Example
[ceph: root@host01 /]# cd /var/lib/ceph/nfs/
If the nfs directory does not exist, create it in this path.
Create a new configuration file:
Syntax
touch PATH_TO_CONFIG_FILE
Example
[ceph: root@host01 nfs]# touch nfs-ganesha.conf
Edit the configuration file by adding the custom export block. This creates a single export that is managed by the Ceph NFS export interface.
Syntax
EXPORT {
  Export_Id = NUMERICAL_ID;
  Transports = TCP;
  Path = PATH_WITHIN_CEPHFS;
  Pseudo = BINDING;
  Protocols = 4;
  Access_Type = PERMISSIONS;
  Attr_Expiration_Time = 0;
  Squash = None;

  FSAL {
    Name = CEPH;
    Filesystem = "FILE_SYSTEM_NAME";
    User_Id = "USER_NAME";
    Secret_Access_Key = "USER_SECRET_KEY";
  }
}
Example
EXPORT {
  Export_Id = 100;
  Transports = TCP;
  Path = /;
  Pseudo = /ceph/;
  Protocols = 4;
  Access_Type = RW;
  Attr_Expiration_Time = 0;
  Squash = None;

  FSAL {
    Name = CEPH;
    Filesystem = "filesystem name";
    User_Id = "user id";
    Secret_Access_Key = "secret key";
  }
}
Apply the new configuration to the cluster:
Syntax
ceph nfs cluster config set CLUSTER_NAME -i PATH_TO_CONFIG_FILE
Example
[ceph: root@host01 nfs]# ceph nfs cluster config set nfs-ganesha -i /var/lib/ceph/nfs/nfs-ganesha.conf
This also restarts the service so that the custom configuration takes effect.
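As an additional check, recent Ceph releases can print the user-defined configuration through the nfs module. The following is a hedged sketch, assuming the ceph nfs cluster config get subcommand is available in your release:
Example
[ceph: root@host01 nfs]# ceph nfs cluster config get nfs-ganesha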
Verification
List the service:
Example
[ceph: root@host01 /]# ceph orch ls
List the hosts, daemons, and processes:
Syntax
ceph orch ps --daemon_type=DAEMON_NAME
Example
[ceph: root@host01 /]# ceph orch ps --daemon_type=nfs
Verify the custom configuration:
Syntax
/bin/rados -p POOL_NAME -N CLUSTER_NAME get userconf-nfs.CLUSTER_NAME -
Example
[ceph: root@host01 /]# /bin/rados -p nfs-ganesha -N nfsganesha get userconf-nfs.nfsganesha -
Additional Resources
- See the Resetting custom NFS-Ganesha configuration using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide for more information.
11.11. Resetting custom NFS-Ganesha configuration using the Ceph Orchestrator
Using the Ceph Orchestrator, you can reset the user-defined configuration to the default configuration.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Hosts are added to the cluster.
- All manager, monitor and OSD daemons are deployed.
- NFS-Ganesha deployed using the mgr/nfs module.
- Custom NFS-Ganesha configuration is set up.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
Reset the NFS-Ganesha configuration:
Syntax
ceph nfs cluster config reset CLUSTER_NAME
Example
[ceph: root@host01 /]# ceph nfs cluster config reset nfs-cephfs
Verification
List the service:
Example
[ceph: root@host01 /]# ceph orch ls
List the hosts, daemons, and processes:
Syntax
ceph orch ps --daemon_type=DAEMON_NAME
Example
[ceph: root@host01 /]# ceph orch ps --daemon_type=nfs
Verify the custom configuration is deleted:
Syntax
/bin/rados -p POOL_NAME -N CLUSTER_NAME get userconf-nfs.CLUSTER_NAME -
Example
[ceph: root@host01 /]# /bin/rados -p nfs-ganesha -N nfsganesha get userconf-nfs.nfsganesha -
Additional Resources
- See Creating the NFS-Ganesha cluster using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide for more information.
- See the Setting custom NFS-Ganesha configuration using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide for more information.
11.12. Deleting the NFS-Ganesha cluster using the Ceph Orchestrator
You can use the Ceph Orchestrator with Cephadm in the backend to delete the NFS-Ganesha cluster.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Hosts are added to the cluster.
- All manager, monitor and OSD daemons are deployed.
- NFS-Ganesha cluster created using the mgr/nfs module.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
Delete the cluster:
Syntax
ceph nfs cluster rm CLUSTER_NAME
The CLUSTER_NAME is an arbitrary string.
Example
[ceph: root@host01 /]# ceph nfs cluster rm nfsganesha

NFS Cluster Deleted Successfully
Note
The delete option is deprecated; use rm to delete an NFS cluster.
Verification
List the cluster details:
Example
[ceph: root@host01 /]# ceph nfs cluster ls
Additional Resources
- See Exporting Ceph File System namespaces over the NFS protocol section in the Red Hat Ceph Storage File System Guide for more information.
- See Creating the NFS-Ganesha cluster using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide for more information.
11.13. Removing the NFS-Ganesha gateway using the Ceph Orchestrator
You can remove the NFS-Ganesha gateway using the ceph orch rm command.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to all the nodes.
- Hosts are added to the cluster.
- At least one NFS-Ganesha gateway deployed on the hosts.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
List the service:
Example
[ceph: root@host01 /]# ceph orch ls
Remove the service:
Syntax
ceph orch rm SERVICE_NAME
Example
[ceph: root@host01 /]# ceph orch rm nfs.foo
Verification
List the hosts, daemons, and processes:
Syntax
ceph orch ps
Example
[ceph: root@host01 /]# ceph orch ps
Additional Resources
- See Deploying the Ceph daemons using the service specification section in the Red Hat Ceph Storage Operations Guide for more information.
- See Deploying the NFS-Ganesha gateway using the service specification section in the Red Hat Ceph Storage Operations Guide for more information.