Chapter 11. Management of NFS-Ganesha gateway using the Ceph Orchestrator (Limited Availability)
As a storage administrator, you can use the Orchestrator with Cephadm in the backend to deploy the NFS-Ganesha gateway. Cephadm deploys NFS-Ganesha using a predefined RADOS pool and an optional namespace.
This technology is Limited Availability. See the Deprecated functionality chapter for additional information.
Red Hat supports CephFS exports only over the NFS v4.0+ protocol.
This section covers the following administrative tasks:
- Creating the NFS-Ganesha cluster using the Ceph Orchestrator.
- Deploying the NFS-Ganesha gateway using the command line interface.
- Deploying the NFS-Ganesha gateway using the service specification.
- Implementing HA for CephFS/NFS service.
- Updating the NFS-Ganesha cluster using the Ceph Orchestrator.
- Viewing the NFS-Ganesha cluster information using the Ceph Orchestrator.
- Fetching the NFS-Ganesha cluster logs using the Ceph Orchestrator.
- Setting custom NFS-Ganesha configuration using the Ceph Orchestrator.
- Resetting custom NFS-Ganesha configuration using the Ceph Orchestrator.
- Deleting the NFS-Ganesha cluster using the Ceph Orchestrator.
- Removing the NFS-Ganesha gateway using the Ceph Orchestrator.
11.1. Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to all the nodes.
- Hosts are added to the cluster.
- All manager, monitor and OSD daemons are deployed.
11.2. Creating the NFS-Ganesha cluster using the Ceph Orchestrator
You can create an NFS-Ganesha cluster using the mgr/nfs module of the Ceph Orchestrator. This module deploys the NFS cluster using Cephadm in the backend.
Creating a cluster creates a common recovery pool for all NFS-Ganesha daemons, a new user based on the cluster ID, and a common NFS-Ganesha configuration RADOS object.
For each daemon, a new user and a common configuration are created in the pool. Although the clusters use different namespaces based on their cluster names, they all share the same recovery pool.
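For example, after a cluster is created, you can list the objects it stores in its own namespace of the shared pool. This is a minimal illustration; it assumes the default .nfs pool and a cluster named nfsganesha:
Example
[ceph: root@host01 /]# rados -p .nfs -N nfsganesha ls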
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Hosts are added to the cluster.
- All manager, monitor and OSD daemons are deployed.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
Enable the mgr/nfs module:
Example
[ceph: root@host01 /]# ceph mgr module enable nfs
Create the cluster:
Syntax
ceph nfs cluster create CLUSTER_NAME ["HOST_NAME_1 HOST_NAME_2 HOST_NAME_3"]
The CLUSTER_NAME is an arbitrary string and HOST_NAME_1 is an optional string that specifies the hosts on which to deploy the NFS-Ganesha daemons.
Example
[ceph: root@host01 /]# ceph nfs cluster create nfsganesha "host01 host02"
NFS Cluster Created Successful
This creates an NFS-Ganesha cluster nfsganesha with one daemon each on host01 and host02.
Verification
List the cluster details:
Example
[ceph: root@host01 /]# ceph nfs cluster ls
Show NFS-Ganesha cluster information:
Syntax
ceph nfs cluster info CLUSTER_NAME
Example
[ceph: root@host01 /]# ceph nfs cluster info nfsganesha
11.3. Deploying the NFS-Ganesha gateway using the command line interface
You can use the Ceph Orchestrator with Cephadm in the backend to deploy the NFS-Ganesha gateway using the placement specification. In this case, you have to create a RADOS pool and create a namespace before deploying the gateway.
Red Hat supports CephFS exports only over the NFS v4.0+ protocol.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Hosts are added to the cluster.
- All manager, monitor and OSD daemons are deployed.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
Create the RADOS pool, namespace, and enable the application. For RBD pools, enable RBD.
Syntax
ceph osd pool create POOL_NAME
ceph osd pool application enable POOL_NAME freeform/rgw/rbd/cephfs/nfs
rbd pool init -p POOL_NAME
Example
[ceph: root@host01 /]# ceph osd pool create nfs-ganesha
[ceph: root@host01 /]# ceph osd pool application enable nfs-ganesha nfs
[ceph: root@host01 /]# rbd pool init -p nfs-ganesha
Deploy the NFS-Ganesha gateway using a placement specification in the command line interface:
Syntax
ceph orch apply nfs SERVICE_ID --placement="NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 HOST_NAME_3"
Example
[ceph: root@host01 /]# ceph orch apply nfs foo --placement="2 host01 host02"
This deploys the NFS-Ganesha service foo with one daemon each on host01 and host02.
Verification
List the service:
Example
[ceph: root@host01 /]# ceph orch ls
List the hosts, daemons, and processes:
Syntax
ceph orch ps --daemon_type=DAEMON_NAME
Example
[ceph: root@host01 /]# ceph orch ps --daemon_type=nfs
11.4. Deploying the NFS-Ganesha gateway using the service specification
You can use the Ceph Orchestrator with Cephadm in the backend to deploy the NFS-Ganesha gateway using the service specification. In this case, you have to create a RADOS pool and create a namespace before deploying the gateway.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Hosts are added to the cluster.
Procedure
Create the nfs.yaml file:
Example
[root@host01 ~]# touch nfs.yaml
Edit the nfs.yaml file to include the following details:
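The following is a minimal sketch of the service specification; the service ID foo and the hosts host01 and host02 are illustrative values, not requirements:
Syntax
service_type: nfs
service_id: SERVICE_ID
placement:
  hosts:
    - HOST_NAME_1
    - HOST_NAME_2
Example
service_type: nfs
service_id: foo
placement:
  hosts:
    - host01
    - host02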
Mount the YAML file under a directory in the container:
Example
[root@host01 ~]# cephadm shell --mount nfs.yaml:/var/lib/ceph/nfs.yaml
Create the RADOS pool, namespace, and enable RBD:
Syntax
ceph osd pool create POOL_NAME
ceph osd pool application enable POOL_NAME rbd
rbd pool init -p POOL_NAME
Example
[ceph: root@host01 /]# ceph osd pool create nfs-ganesha
[ceph: root@host01 /]# ceph osd pool application enable nfs-ganesha rbd
[ceph: root@host01 /]# rbd pool init -p nfs-ganesha
Navigate to the directory:
Example
[ceph: root@host01 /]# cd /var/lib/ceph/
Deploy the NFS-Ganesha gateway using the service specification:
Syntax
ceph orch apply -i FILE_NAME.yaml
Example
[ceph: root@host01 ceph]# ceph orch apply -i nfs.yaml
Verification
List the service:
Example
[ceph: root@host01 /]# ceph orch ls
List the hosts, daemons, and processes:
Syntax
ceph orch ps --daemon_type=DAEMON_NAME
Example
[ceph: root@host01 /]# ceph orch ps --daemon_type=nfs
11.5. Implementing HA for CephFS/NFS service (Technology Preview)
You can deploy NFS with a high-availability (HA) front end, virtual IP, and load balancer by using the --ingress flag and by specifying a virtual IP address. This deploys a combination of keepalived and haproxy and provides a high-availability front end for the NFS service.
When a cluster is created with the --ingress flag, an ingress service is additionally deployed to provide load balancing and high availability for the NFS servers. A virtual IP provides a known, stable NFS endpoint that all NFS clients can use to mount. Ceph handles the details of redirecting NFS traffic on the virtual IP to the appropriate backend NFS servers and redeploys NFS servers when they fail.
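Clients then mount through the virtual IP rather than through any individual gateway. The following is a minimal sketch; the virtual IP 10.10.128.75, the export pseudo path /EXPORT_PSEUDO_PATH, and the mount point /mnt are illustrative values:
Example
[root@client01 ~]# mount -t nfs -o nfsvers=4.1,port=2049 10.10.128.75:/EXPORT_PSEUDO_PATH /mnt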
Deploying an ingress service for an existing service provides:
- A stable, virtual IP that can be used to access the NFS server.
- Load distribution across multiple NFS gateways.
- Failover between hosts in the event of a host failure.
HA for CephFS/NFS is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See the support scope for Red Hat Technology Preview features for more details.
When an ingress service is deployed in front of the NFS cluster, the backend NFS-Ganesha servers see the haproxy IP address and not the client's IP address. As a result, if you restrict client access based on IP address, access restrictions for NFS exports will not work as expected.
If the active NFS server serving a client goes down, the client’s I/Os are interrupted until the replacement for the active NFS server is online and the NFS cluster is active again.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Hosts are added to the cluster.
- All the manager, monitor, and OSD daemons are deployed.
- Ensure the NFS module is enabled.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
Create the NFS cluster with the --ingress flag:
Syntax
ceph nfs cluster create CLUSTER_ID [PLACEMENT] [--port PORT_NUMBER] [--ingress --virtual-ip IP_ADDRESS/CIDR_PREFIX]
- Replace CLUSTER_ID with a unique string to name the NFS-Ganesha cluster.
- Replace PLACEMENT with the number of NFS servers to deploy and the host or hosts that you want to deploy the NFS Ganesha daemon containers on.
- Use the --port PORT_NUMBER flag to deploy NFS on a port other than the default port of 2049.
- The --ingress flag, combined with the --virtual-ip flag, deploys NFS with a high-availability front end (virtual IP and load balancer). Replace --virtual-ip IP_ADDRESS with an IP address to provide a known, stable NFS endpoint that all clients can use to mount NFS exports. The --virtual-ip must include a CIDR prefix length. The virtual IP is normally configured on the first identified network interface that has an existing IP in the same subnet.
Note
The number of hosts you allocate for the NFS service must be greater than the number of active NFS servers you request to deploy, specified by the placement: count parameter. In the example below, one active NFS server is requested and two hosts are allocated.
Example
[ceph: root@host01 /]# ceph nfs cluster create mycephnfs "1 host02 host03" --ingress --virtual-ip 10.10.128.75/22
Note
Deployment of the NFS daemons and the ingress service is asynchronous and the command might return before the services have completely started.
Check that the services have successfully started:
Syntax
ceph orch ls --service_name=nfs.CLUSTER_ID
ceph orch ls --service_name=ingress.nfs.CLUSTER_ID
Verification
View the IP endpoints, the IPs for the individual NFS daemons, and the virtual IP for the ingress service:
Syntax
ceph nfs cluster info CLUSTER_ID
List the hosts and processes:
11.6. Upgrading a standalone CephFS/NFS cluster for HA
As a storage administrator, you can upgrade a standalone CephFS/NFS cluster to a high-availability (HA) cluster by deploying the ingress service on an existing NFS service.
Prerequisites
- A running Red Hat Ceph Storage cluster with an existing NFS service.
- Hosts are added to the cluster.
- All the manager, monitor, and OSD daemons are deployed.
- Ensure the NFS module is enabled.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
List existing NFS clusters:
Example
[ceph: root@host01 /]# ceph nfs cluster ls
mynfs
Note
If a standalone NFS cluster is created on one node, you need to increase it to two or more nodes for HA. To increase the NFS service, edit the nfs.yaml file and increase the placements with the same port number.
The number of hosts you allocate for the NFS service must be greater than the number of active NFS servers you request to deploy, specified by the placement: count parameter.
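The following is a sketch of the kind of specification this step expects; the cluster ID mynfs and port 12345 come from the surrounding example, while the host names are illustrative:
Syntax
service_type: nfs
service_id: CLUSTER_ID
placement:
  hosts:
    - HOST_NAME_1
    - HOST_NAME_2
spec:
  port: PORT_NUMBER
Example
service_type: nfs
service_id: mynfs
placement:
  hosts:
    - host02
    - host03
spec:
  port: 12345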
In this example, the existing NFS service is running on port 12345 and an additional node is added to the NFS cluster with the same port.
Apply the nfs.yaml service specification changes to upgrade to a two-node NFS service:
Example
[ceph: root@host01 ceph]# ceph orch apply -i nfs.yaml
Edit the ingress.yaml specification file with the existing NFS cluster ID:
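The following is a sketch of an ingress specification for the existing mynfs cluster; the virtual IP and the ports 2049 and 9000 match the verification output shown later in this procedure, and the remaining values are illustrative:
Syntax
service_type: ingress
service_id: nfs.CLUSTER_ID
placement:
  count: 2
spec:
  backend_service: nfs.CLUSTER_ID
  frontend_port: 2049
  monitor_port: 9000
  virtual_ip: IP_ADDRESS/CIDR_PREFIX
Example
service_type: ingress
service_id: nfs.mynfs
placement:
  count: 2
spec:
  backend_service: nfs.mynfs
  frontend_port: 2049
  monitor_port: 9000
  virtual_ip: 10.10.128.75/22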
Deploy the ingress service:
Example
[ceph: root@host01 /]# ceph orch apply -i ingress.yaml
Note
Deployment of the NFS daemons and the ingress service is asynchronous and the command might return before the services have completely started.
Check that the ingress services have successfully started:
Syntax
ceph orch ls --service_name=ingress.nfs.CLUSTER_ID
Example
[ceph: root@host01 /]# ceph orch ls --service_name=ingress.nfs.mynfs
NAME               PORTS                   RUNNING  REFRESHED  AGE  PLACEMENT
ingress.nfs.mynfs  10.10.128.75:2049,9000      4/4  4m ago     22m  count:2
Verification
View the IP endpoints, the IPs for the individual NFS daemons, and the virtual IP for the ingress service:
Syntax
ceph nfs cluster info CLUSTER_ID
List the hosts and processes:
11.7. Deploying HA for CephFS/NFS using a specification file
You can deploy HA for CephFS/NFS using a specification file by first deploying an NFS service and then deploying ingress to the same NFS service.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Hosts are added to the cluster.
- All the manager, monitor, and OSD daemons are deployed.
- Ensure the NFS module is enabled.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
Ensure the NFS module is enabled:
Example
[ceph: root@host01 /]# ceph mgr module ls | more
Exit out of the Cephadm shell and create the nfs.yaml file:
Example
[root@host01 ~]# touch nfs.yaml
Edit the nfs.yaml file to include the following details:
Note
The number of hosts you allocate for the NFS service must be greater than the number of active NFS servers you request to deploy, specified by the placement: count parameter.
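The following is a minimal sketch of the specification; the service ID cephfsnfs, the hosts host02 and host03, and port 12345 match the example described immediately after this listing:
Syntax
service_type: nfs
service_id: CLUSTER_ID
placement:
  hosts:
    - HOST_NAME_1
    - HOST_NAME_2
spec:
  port: PORT_NUMBER
Example
service_type: nfs
service_id: cephfsnfs
placement:
  hosts:
    - host02
    - host03
spec:
  port: 12345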
In this example, the server runs on the non-default port of 12345, instead of the default port of 2049, on host02 and host03.
Mount the YAML file under a directory in the container:
Example
[root@host01 ~]# cephadm shell --mount nfs.yaml:/var/lib/ceph/nfs.yaml
Log into the Cephadm shell and navigate to the directory:
Example
[ceph: root@host01 /]# cd /var/lib/ceph/
Deploy the NFS service using the service specification:
Syntax
ceph orch apply -i FILE_NAME.yaml
Example
[ceph: root@host01 ceph]# ceph orch apply -i nfs.yaml
Note
Deployment of the NFS service is asynchronous and the command might return before the services have completely started.
Check that the NFS services have successfully started:
Syntax
ceph orch ls --service_name=nfs.CLUSTER_ID
Example
[ceph: root@host01 /]# ceph orch ls --service_name=nfs.cephfsnfs
NAME           PORTS    RUNNING  REFRESHED  AGE  PLACEMENT
nfs.cephfsnfs  ?:12345      2/2  3m ago     13m  host02;host03
Exit out of the Cephadm shell and create the ingress.yaml file:
Example
[root@host01 ~]# touch ingress.yaml
Edit the ingress.yaml file to include the following details:
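The following is a sketch of the ingress specification; the service ID nfs.cephfsnfs, the virtual IP 10.10.128.75/22, and the ports 2049 and 9000 line up with the verification output later in this procedure, and the layout is illustrative:
Syntax
service_type: ingress
service_id: nfs.CLUSTER_ID
placement:
  count: 2
spec:
  backend_service: nfs.CLUSTER_ID
  frontend_port: 2049
  monitor_port: 9000
  virtual_ip: IP_ADDRESS/CIDR_PREFIX
Example
service_type: ingress
service_id: nfs.cephfsnfs
placement:
  count: 2
spec:
  backend_service: nfs.cephfsnfs
  frontend_port: 2049
  monitor_port: 9000
  virtual_ip: 10.10.128.75/22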
Note
In this example, placement: count: 2 deploys the keepalived and haproxy services on random nodes. To specify the nodes on which to deploy keepalived and haproxy, use the placement: hosts option:
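The following sketch shows the placement: hosts form; the hosts host02 and host03 are illustrative:
Example
service_type: ingress
service_id: nfs.cephfsnfs
placement:
  hosts:
    - host02
    - host03
spec:
  backend_service: nfs.cephfsnfs
  frontend_port: 2049
  monitor_port: 9000
  virtual_ip: 10.10.128.75/22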
Mount the YAML file under a directory in the container:
Example
[root@host01 ~]# cephadm shell --mount ingress.yaml:/var/lib/ceph/ingress.yaml
Log into the Cephadm shell and navigate to the directory:
Example
[ceph: root@host01 /]# cd /var/lib/ceph/
Deploy the ingress service using the service specification:
Syntax
ceph orch apply -i FILE_NAME.yaml
Example
[ceph: root@host01 ceph]# ceph orch apply -i ingress.yaml
Check that the ingress services have successfully started:
Syntax
ceph orch ls --service_name=ingress.nfs.CLUSTER_ID
Example
[ceph: root@host01 /]# ceph orch ls --service_name=ingress.nfs.cephfsnfs
NAME                   PORTS                   RUNNING  REFRESHED  AGE  PLACEMENT
ingress.nfs.cephfsnfs  10.10.128.75:2049,9000      4/4  4m ago     22m  count:2
Verification
View the IP endpoints, the IPs for the individual NFS daemons, and the virtual IP for the ingress service:
Syntax
ceph nfs cluster info CLUSTER_ID
List the hosts and processes:
11.8. Updating the NFS-Ganesha cluster using the Ceph Orchestrator
You can update the NFS-Ganesha cluster by changing the placement of the daemons on the hosts using the Ceph Orchestrator with Cephadm in the backend.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Hosts are added to the cluster.
- All manager, monitor and OSD daemons are deployed.
- NFS-Ganesha cluster created using the mgr/nfs module.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
Update the cluster:
Syntax
ceph orch apply nfs CLUSTER_NAME ["HOST_NAME_1,HOST_NAME_2,HOST_NAME_3"]
The CLUSTER_NAME is an arbitrary string, and HOST_NAME_1 is an optional string specifying the hosts on which to update the deployed NFS-Ganesha daemons.
Example
[ceph: root@host01 /]# ceph orch apply nfs nfsganesha "host02"
This updates the nfsganesha cluster on host02.
Verification
List the cluster details:
Example
[ceph: root@host01 /]# ceph nfs cluster ls
Show NFS-Ganesha cluster information:
Syntax
ceph nfs cluster info CLUSTER_NAME
Example
[ceph: root@host01 /]# ceph nfs cluster info nfsganesha
List the hosts, daemons, and processes:
Syntax
ceph orch ps --daemon_type=DAEMON_NAME
Example
[ceph: root@host01 /]# ceph orch ps --daemon_type=nfs
11.9. Viewing the NFS-Ganesha cluster information using the Ceph Orchestrator
You can view information about the NFS-Ganesha cluster using the Ceph Orchestrator. You can get information about all the NFS-Ganesha clusters or about specific clusters, including their port, IP address, and the names of the hosts on which the cluster is created.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Hosts are added to the cluster.
- All manager, monitor and OSD daemons are deployed.
- NFS-Ganesha cluster created using the mgr/nfs module.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
View the NFS-Ganesha cluster information:
Syntax
ceph nfs cluster info CLUSTER_NAME
Example
[ceph: root@host01 /]# ceph nfs cluster info nfsganesha
11.10. Fetching the NFS-Ganesha cluster logs using the Ceph Orchestrator
With the Ceph Orchestrator, you can fetch the NFS-Ganesha cluster logs. You need to be on the node where the service is deployed.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Cephadm installed on the nodes where NFS is deployed.
- Root-level access to all the nodes.
- Hosts are added to the cluster.
- NFS-Ganesha cluster created using the mgr/nfs module.
Procedure
As a root user, fetch the FSID of the storage cluster:
Example
[root@host03 ~]# cephadm ls
Copy the FSID and the name of the service.
Fetch the logs:
Syntax
cephadm logs --fsid FSID --name SERVICE_NAME
Example
[root@host03 ~]# cephadm logs --fsid 499829b4-832f-11eb-8d6d-001a4a000635 --name nfs.foo.host03
11.11. Setting custom NFS-Ganesha configuration using the Ceph Orchestrator
The NFS-Ganesha cluster is defined with default configuration blocks. Using the Ceph Orchestrator, you can customize the configuration, which takes precedence over the default configuration blocks.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Hosts are added to the cluster.
- All manager, monitor and OSD daemons are deployed.
- NFS-Ganesha cluster created using the mgr/nfs module.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
The following is an example of the default configuration of the NFS-Ganesha cluster:
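The exact blocks vary by release; the following is a representative sketch of the cephadm-generated configuration, with illustrative values for the user ID, pool, namespace, and watch URL:
Example
NFS_CORE_PARAM {
        Enable_NLM = false;
        Enable_RQUOTA = false;
        Protocols = 4;
}
MDCACHE {
        Dir_Chunk = 0;
}
EXPORT_DEFAULTS {
        Attr_Expiration_Time = 0;
}
NFSv4 {
        Delegations = false;
        RecoveryBackend = 'rados_cluster';
        Minor_Versions = 1, 2;
}
RADOS_KV {
        UserId = "nfs.nfsganesha.1";
        nodeid = "nfs.nfsganesha.1";
        pool = ".nfs";
        namespace = "nfsganesha";
}
RADOS_URLS {
        UserId = "nfs.nfsganesha.1";
        watch_url = "rados://.nfs/nfsganesha/conf-nfs.nfsganesha";
}
%url    rados://.nfs/nfsganesha/conf-nfs.nfsganesha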
Customize the NFS-Ganesha cluster configuration. The following are two examples of customizing the configuration:
Change the log level:
Example
LOG {
    COMPONENTS {
        ALL = FULL_DEBUG;
    }
}
Add a custom export block:
Create the user.
Note
The user specified in the FSAL block should have the proper caps for the NFS-Ganesha daemons to access the Ceph cluster.
Syntax
ceph auth get-or-create client.USER_ID mon 'allow r' osd 'allow rw pool=.nfs namespace=NFS_CLUSTER_NAME, allow rw tag cephfs data=FS_NAME' mds 'allow rw path=EXPORT_PATH'
Example
[ceph: root@host01 /]# ceph auth get-or-create client.f64f341c-655d-11eb-8778-fa163e914bcc mon 'allow r' osd 'allow rw pool=.nfs namespace=nfs_cluster_name, allow rw tag cephfs data=fs_name' mds 'allow rw path=export_path'
Navigate to the following directory:
Syntax
cd /var/lib/ceph/DAEMON_PATH/
Example
[ceph: root@host01 /]# cd /var/lib/ceph/nfs/
If the nfs directory does not exist, create a directory in the path.
Create a new configuration file:
Syntax
touch PATH_TO_CONFIG_FILE
Example
[ceph: root@host01 nfs]# touch nfs-ganesha.conf
Edit the configuration file to add the custom export block. This creates a single export that is managed by the Ceph NFS export interface.
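The following is a sketch of a custom export block, using the FSAL CEPH settings that NFS-Ganesha supports; the placeholder values are illustrative:
Syntax
EXPORT {
  Export_Id = NUMERICAL_ID;
  Transports = TCP;
  Path = PATH_WITHIN_CEPHFS;
  Pseudo = BINDING;
  Protocols = 4;
  Access_Type = PERMISSIONS;
  Attr_Expiration_Time = 0;
  Squash = None;
  FSAL {
    Name = CEPH;
    Filesystem = "FILE_SYSTEM_NAME";
    User_Id = "USER_ID";
    Secret_Access_Key = "SECRET_ACCESS_KEY";
  }
}
Example
EXPORT {
  Export_Id = 100;
  Transports = TCP;
  Path = /;
  Pseudo = /ceph/;
  Protocols = 4;
  Access_Type = RW;
  Attr_Expiration_Time = 0;
  Squash = None;
  FSAL {
    Name = CEPH;
    Filesystem = "fs_name";
    User_Id = "f64f341c-655d-11eb-8778-fa163e914bcc";
    Secret_Access_Key = "SECRET_ACCESS_KEY";
  }
}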
Apply the new configuration to the cluster:
Syntax
ceph nfs cluster config set CLUSTER_NAME -i PATH_TO_CONFIG_FILE
Example
[ceph: root@host01 nfs]# ceph nfs cluster config set nfs-ganesha -i /var/lib/ceph/nfs/nfs-ganesha.conf
This also restarts the service so that the custom configuration takes effect.
Verification
List the service:
Example
[ceph: root@host01 /]# ceph orch ls
List the hosts, daemons, and processes:
Syntax
ceph orch ps --daemon_type=DAEMON_NAME
Example
[ceph: root@host01 /]# ceph orch ps --daemon_type=nfs
Verify the custom configuration:
Syntax
/bin/rados -p POOL_NAME -N CLUSTER_NAME get userconf-nfs.CLUSTER_NAME -
Example
[ceph: root@host01 /]# /bin/rados -p nfs-ganesha -N nfsganesha get userconf-nfs.nfsganesha -
11.12. Resetting custom NFS-Ganesha configuration using the Ceph Orchestrator
Using the Ceph Orchestrator, you can reset the user-defined configuration to the default configuration.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Hosts are added to the cluster.
- All manager, monitor and OSD daemons are deployed.
- NFS-Ganesha deployed using the mgr/nfs module.
- Custom NFS cluster configuration is set up.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
Reset the NFS-Ganesha configuration:
Syntax
ceph nfs cluster config reset CLUSTER_NAME
Example
[ceph: root@host01 /]# ceph nfs cluster config reset nfs-cephfs
Verification
List the service:
Example
[ceph: root@host01 /]# ceph orch ls
List the hosts, daemons, and processes:
Syntax
ceph orch ps --daemon_type=DAEMON_NAME
Example
[ceph: root@host01 /]# ceph orch ps --daemon_type=nfs
Verify the custom configuration is deleted:
Syntax
/bin/rados -p POOL_NAME -N CLUSTER_NAME get userconf-nfs.CLUSTER_NAME -
Example
[ceph: root@host01 /]# /bin/rados -p nfs-ganesha -N nfsganesha get userconf-nfs.nfsganesha -
11.13. Deleting the NFS-Ganesha cluster using the Ceph Orchestrator
You can use the Ceph Orchestrator with Cephadm in the backend to delete the NFS-Ganesha cluster.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Hosts are added to the cluster.
- All manager, monitor and OSD daemons are deployed.
- NFS-Ganesha cluster created using the mgr/nfs module.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
Delete the cluster:
Syntax
ceph nfs cluster rm CLUSTER_NAME
The CLUSTER_NAME is an arbitrary string.
Example
[ceph: root@host01 /]# ceph nfs cluster rm nfsganesha
NFS Cluster Deleted Successfully
Verification
List the cluster details:
Example
[ceph: root@host01 /]# ceph nfs cluster ls
11.14. Removing the NFS-Ganesha gateway using the Ceph Orchestrator
You can remove the NFS-Ganesha gateway using the ceph orch rm command.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to all the nodes.
- Hosts are added to the cluster.
- At least one NFS-Ganesha gateway deployed on the hosts.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
List the service:
Example
[ceph: root@host01 /]# ceph orch ls
Remove the service:
Syntax
ceph orch rm SERVICE_NAME
Example
[ceph: root@host01 /]# ceph orch rm nfs.foo
Verification
List the hosts, daemons, and processes:
Syntax
ceph orch ps
Example
[ceph: root@host01 /]# ceph orch ps