Chapter 11. Management of NFS-Ganesha gateway using the Ceph Orchestrator (Limited Availability)
As a storage administrator, you can use the Orchestrator with Cephadm in the backend to deploy the NFS-Ganesha gateway. Cephadm deploys NFS Ganesha using a predefined RADOS pool and optional namespace.
This technology is Limited Availability. See the Deprecated functionality chapter for additional information.
Red Hat supports CephFS exports only over the NFS v4.0+ protocol.
This section covers the following administrative tasks:
- Creating the NFS-Ganesha cluster using the Ceph Orchestrator.
- Deploying the NFS-Ganesha gateway using the command line interface.
- Deploying the NFS-Ganesha gateway using the service specification.
- Implementing HA for CephFS/NFS service.
- Updating the NFS-Ganesha cluster using the Ceph Orchestrator.
- Viewing the NFS-Ganesha cluster information using the Ceph Orchestrator.
- Fetching the NFS-Ganesha cluster logs using the Ceph Orchestrator.
- Setting custom NFS-Ganesha configuration using the Ceph Orchestrator.
- Resetting custom NFS-Ganesha configuration using the Ceph Orchestrator.
- Deleting the NFS-Ganesha cluster using the Ceph Orchestrator.
- Removing the NFS Ganesha gateway using the Ceph Orchestrator.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to all the nodes.
- Hosts are added to the cluster.
- All manager, monitor and OSD daemons are deployed.
11.1. Creating the NFS-Ganesha cluster using the Ceph Orchestrator
You can create an NFS-Ganesha cluster using the mgr/nfs module of the Ceph Orchestrator. This module deploys the NFS cluster using Cephadm in the backend.
This creates a common recovery pool for all NFS-Ganesha daemons, a new user based on the cluster ID, and a common NFS-Ganesha configuration RADOS object.
For each daemon, a new user and a common configuration are created in the pool. Although all the clusters have different namespaces with respect to the cluster names, they use the same recovery pool.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Hosts are added to the cluster.
- All manager, monitor and OSD daemons are deployed.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
Enable the mgr/nfs module:
Example
[ceph: root@host01 /]# ceph mgr module enable nfs
Create the cluster:
Syntax
ceph nfs cluster create CLUSTER_NAME ["HOST_NAME_1 HOST_NAME_2 HOST_NAME_3"]
The CLUSTER_NAME is an arbitrary string, and HOST_NAME_1 is an optional string specifying the hosts on which to deploy the NFS-Ganesha daemons.
Example
[ceph: root@host01 /]# ceph nfs cluster create nfsganesha "host01 host02"
NFS Cluster Created Successful
This creates an NFS-Ganesha cluster nfsganesha with one daemon on host01 and host02.
Verification
List the cluster details:
Example
[ceph: root@host01 /]# ceph nfs cluster ls
Show NFS-Ganesha cluster information:
Syntax
ceph nfs cluster info CLUSTER_NAME
Example
[ceph: root@host01 /]# ceph nfs cluster info nfsganesha
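Optionally, you can inspect the shared recovery pool that backs the cluster. The following is a minimal sketch, assuming the default .nfs recovery pool and the nfsganesha cluster name from the example above; the pool name can differ between releases:
Example
[ceph: root@host01 /]# rados -p .nfs -N nfsganesha ls
This is expected to list the per-daemon grace and configuration objects created for the cluster.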
Additional Resources
- See Exporting Ceph File System namespaces over the NFS protocol section in the Red Hat Ceph Storage File System Guide for more information.
- See Deploying the Ceph daemons using the service specification section in the Red Hat Ceph Storage Operations Guide for more information.
11.2. Deploying the NFS-Ganesha gateway using the command line interface
You can use the Ceph Orchestrator with Cephadm in the backend to deploy the NFS-Ganesha gateway using the placement specification. In this case, you have to create a RADOS pool and create a namespace before deploying the gateway.
Red Hat supports CephFS exports only over the NFS v4.0+ protocol.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Hosts are added to the cluster.
- All manager, monitor and OSD daemons are deployed.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
Create the RADOS pool and namespace, and enable the application. For RBD pools, initialize the pool.
Syntax
ceph osd pool create POOL_NAME
ceph osd pool application enable POOL_NAME freeform/rgw/rbd/cephfs/nfs
rbd pool init -p POOL_NAME
Example
[ceph: root@host01 /]# ceph osd pool create nfs-ganesha
[ceph: root@host01 /]# ceph osd pool application enable nfs-ganesha nfs
[ceph: root@host01 /]# rbd pool init -p nfs-ganesha
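Optionally, verify that the application was enabled on the pool. This is an illustrative check that reuses the nfs-ganesha pool from the example above:
Example
[ceph: root@host01 /]# ceph osd pool application get nfs-ganesha
The command prints the applications enabled on the pool, for example {"nfs": {}}.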
Deploy the NFS-Ganesha gateway using the placement specification in the command-line interface:
Syntax
ceph orch apply nfs SERVICE_ID --placement="NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 HOST_NAME_3"
Example
[ceph: root@host01 /]# ceph orch apply nfs foo --placement="2 host01 host02"
This deploys an NFS-Ganesha service foo with one daemon on host01 and host02.
Verification
List the service:
Example
[ceph: root@host01 /]# ceph orch ls
List the hosts, daemons, and processes:
Syntax
ceph orch ps --daemon_type=DAEMON_NAME
Example
[ceph: root@host01 /]# ceph orch ps --daemon_type=nfs
Additional Resources
- See Deploying the Ceph daemons using the command line interface section in the Red Hat Ceph Storage Operations Guide for more information.
- See the Creating a block device pool section in the Red Hat Ceph Storage Block Device Guide for more information.
11.3. Deploying the NFS-Ganesha gateway using the service specification
You can use the Ceph Orchestrator with Cephadm in the backend to deploy the NFS-Ganesha gateway using the service specification. In this case, you have to create a RADOS pool and create a namespace before deploying the gateway.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Hosts are added to the cluster.
Procedure
Create the nfs.yaml file:
Example
[root@host01 ~]# touch nfs.yaml
Edit the nfs.yaml specification file.
Syntax
service_type: nfs
service_id: SERVICE_ID
placement:
  hosts:
    - HOST_NAME_1
    - HOST_NAME_2
Example
# cat nfs.yaml
service_type: nfs
service_id: foo
placement:
  hosts:
    - host01
    - host02
Optional: Enable NLM to support locking for NFS protocol v3 by adding enable_nlm: true to the ganesha.yaml specification file.
Syntax
service_type: nfs
service_id: SERVICE_ID
placement:
  hosts:
    - HOST_NAME_1
    - HOST_NAME_2
spec:
  enable_nlm: true
Example
# cat ganesha.yaml
service_type: nfs
service_id: foo
placement:
  hosts:
    - host01
    - host02
spec:
  enable_nlm: true
Mount the YAML file under a directory in the container:
Example
[root@host01 ~]# cephadm shell --mount nfs.yaml:/var/lib/ceph/nfs.yaml
Create the RADOS pool, namespace, and enable RBD:
Syntax
ceph osd pool create POOL_NAME
ceph osd pool application enable POOL_NAME rbd
rbd pool init -p POOL_NAME
Example
[ceph: root@host01 /]# ceph osd pool create nfs-ganesha
[ceph: root@host01 /]# ceph osd pool application enable nfs-ganesha rbd
[ceph: root@host01 /]# rbd pool init -p nfs-ganesha
Navigate to the directory:
Example
[ceph: root@host01 /]# cd /var/lib/ceph/
Deploy the NFS-Ganesha gateway using the service specification:
Syntax
ceph orch apply -i FILE_NAME.yaml
Example
[ceph: root@host01 ceph]# ceph orch apply -i nfs.yaml
Verification
List the service:
Example
[ceph: root@host01 /]# ceph orch ls
List the hosts, daemons, and processes:
Syntax
ceph orch ps --daemon_type=DAEMON_NAME
Example
[ceph: root@host01 /]# ceph orch ps --daemon_type=nfs
Additional Resources
- See the Creating a block device pool section in the Red Hat Ceph Storage Block Device Guide for more information.
11.4. Implementing HA for CephFS/NFS service (Technology Preview)
You can deploy NFS with a high-availability (HA) front end, virtual IP, and load balancer by using the --ingress flag and by specifying a virtual IP address. This deploys a combination of keepalived and haproxy and provides a highly available front end for the NFS service.
When a cluster is created with the --ingress flag, an ingress service is additionally deployed to provide load balancing and high availability for the NFS servers. A virtual IP is used to provide a known, stable NFS endpoint that all NFS clients can use to mount. Ceph handles the details of redirecting NFS traffic on the virtual IP to the appropriate backend NFS servers and redeploys NFS servers when they fail.
Deploying an ingress service for an existing service provides:
- A stable, virtual IP that can be used to access the NFS server.
- Load distribution across multiple NFS gateways.
- Failover between hosts in the event of a host failure.
HA for CephFS/NFS is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See the support scope for Red Hat Technology Preview features for more details.
When an ingress service is deployed in front of the NFS cluster, the backend NFS-Ganesha servers see the haproxy IP address and not the client's IP address. As a result, if you are restricting client access based on IP address, access restrictions for NFS exports will not work as expected.
If the active NFS server serving a client goes down, the client’s I/Os are interrupted until the replacement for the active NFS server is online and the NFS cluster is active again.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Hosts are added to the cluster.
- All the manager, monitor, and OSD daemons are deployed.
- Ensure the NFS module is enabled.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
Create the NFS cluster with the --ingress flag:
Syntax
ceph nfs cluster create CLUSTER_ID [PLACEMENT] [--port PORT_NUMBER] [--ingress --virtual-ip IP_ADDRESS/CIDR_PREFIX]
- Replace CLUSTER_ID with a unique string to name the NFS Ganesha cluster.
- Replace PLACEMENT with the number of NFS servers to deploy and the host or hosts that you want to deploy the NFS Ganesha daemon containers on.
Use the --port PORT_NUMBER flag to deploy NFS on a port other than the default port of 12049.
Note: With ingress mode, the high-availability proxy takes port 2049 and NFS services are deployed on port 12049.
- The --ingress flag, combined with the --virtual-ip flag, deploys NFS with a high-availability front end (virtual IP and load balancer).
- Replace --virtual-ip IP_ADDRESS with an IP address to provide a known, stable NFS endpoint that all clients can use to mount NFS exports. The --virtual-ip must include a CIDR prefix length. The virtual IP is normally configured on the first identified network interface that has an existing IP in the same subnet.
Note: The number of hosts you allocate for the NFS service must be greater than the number of active NFS servers you request to deploy, specified by the placement: count parameter. In the example below, one active NFS server is requested and two hosts are allocated.
Example
[ceph: root@host01 /]# ceph nfs cluster create mycephnfs "1 host02 host03" --ingress --virtual-ip 10.10.128.75/22
Note: Deployment of NFS daemons and the ingress service is asynchronous and the command might return before the services have completely started.
Check that the services have successfully started:
Syntax
ceph orch ls --service_name=nfs.CLUSTER_ID
ceph orch ls --service_name=ingress.nfs.CLUSTER_ID
Example
[ceph: root@host01 /]# ceph orch ls --service_name=nfs.mycephnfs
NAME           PORTS    RUNNING  REFRESHED  AGE  PLACEMENT
nfs.mycephnfs  ?:12049      2/2  0s ago     20s  host02;host03

[ceph: root@host01 /]# ceph orch ls --service_name=ingress.nfs.mycephnfs
NAME                   PORTS                   RUNNING  REFRESHED  AGE  PLACEMENT
ingress.nfs.mycephnfs  10.10.128.75:2049,9049      4/4  46s ago    73s  count:2
Verification
View the IP endpoints, IPs for the individual NFS daemons, and the virtual IP for the ingress service:
Syntax
ceph nfs cluster info CLUSTER_NAME
Example
[ceph: root@host01 /]# ceph nfs cluster info mycephnfs
{
    "mycephnfs": {
        "virtual_ip": "10.10.128.75",
        "backend": [
            {
                "hostname": "host02",
                "ip": "10.10.128.69",
                "port": 12049
            },
            {
                "hostname": "host03",
                "ip": "10.10.128.70",
                "port": 12049
            }
        ],
        "port": 2049,
        "monitor_port": 9049
    }
}
List the hosts and processes:
Example
[ceph: root@host01 /]# ceph orch ps | grep nfs
haproxy.nfs.cephnfs.host01.rftylv     host01  *:2049,9000  running (11m)  10m ago  11m  23.2M  -  2.2.19-7ea3822  5e6a41d77b38  f8cc61dc827e
haproxy.nfs.cephnfs.host02.zhtded     host02  *:2049,9000  running (11m)  53s ago  11m  21.3M  -  2.2.19-7ea3822  5e6a41d77b38  4cad324e0e23
keepalived.nfs.cephnfs.host01.zktmsk  host01               running (11m)  10m ago  11m  2349k  -  2.1.5           18fa163ab18f  66bf39784993
keepalived.nfs.cephnfs.host02.vyycvp  host02               running (11m)  53s ago  11m  2349k  -  2.1.5           18fa163ab18f  1ecc95a568b4
nfs.cephnfs.0.0.host02.fescmw         host02  *:12049      running (14m)  3m ago   14m  76.9M  -  3.5             cef6e7959b0a  bb0e4ee9484e
nfs.cephnfs.1.0.host03.avaddf         host03  *:12049      running (14m)  3m ago   14m  74.3M  -  3.5             cef6e7959b0a  ea02c0c50749
Additional resources
- For information about mounting NFS exports on client hosts, see the Exporting Ceph File System namespaces over the NFS protocol section in the Red Hat Ceph Storage File System Guide.
11.5. Upgrading a standalone CephFS/NFS cluster for HA
As a storage administrator, you can upgrade a standalone storage cluster to a high-availability (HA) cluster by deploying the ingress service on an existing NFS service.
Prerequisites
- A running Red Hat Ceph Storage cluster with an existing NFS service.
- Hosts are added to the cluster.
- All the manager, monitor, and OSD daemons are deployed.
- Ensure the NFS module is enabled.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
List existing NFS clusters:
Example
[ceph: root@host01 /]# ceph nfs cluster ls
mynfs
Note: If a standalone NFS cluster is created on one node, you need to increase it to two or more nodes for HA. To increase the NFS service, edit the nfs.yaml file and increase the placements with the same port number.
The number of hosts you allocate for the NFS service must be greater than the number of active NFS servers you request to deploy, specified by the placement: count parameter.
Syntax
service_type: nfs
service_id: SERVICE_ID
placement:
  hosts:
    - HOST_NAME_1
    - HOST_NAME_2
  count: COUNT
spec:
  port: PORT_NUMBER
Example
service_type: nfs
service_id: mynfs
placement:
  hosts:
    - host02
    - host03
  count: 1
spec:
  port: 12345
In this example, the existing NFS service is running on port 12345 and an additional node is added to the NFS cluster with the same port.
Apply the nfs.yaml service specification changes to upgrade to a two-node NFS service:
Example
[ceph: root@host01 ceph]# ceph orch apply -i nfs.yaml
Edit the ingress.yaml specification file with the existing NFS cluster ID:
Syntax
service_type: SERVICE_TYPE
service_id: SERVICE_ID
placement:
  count: PLACEMENT
spec:
  backend_service: SERVICE_ID_BACKEND
  frontend_port: FRONTEND_PORT
  monitor_port: MONITOR_PORT
  virtual_ip: VIRTUAL_IP_WITH_CIDR
Example
service_type: ingress
service_id: nfs.mynfs
placement:
  count: 2
spec:
  backend_service: nfs.mynfs
  frontend_port: 2049
  monitor_port: 9000
  virtual_ip: 10.10.128.75/22
Deploy the ingress service:
Example
[ceph: root@host01 /]# ceph orch apply -i ingress.yaml
Note: Deployment of NFS daemons and the ingress service is asynchronous and the command might return before the services have completely started.
Check that the ingress services have successfully started:
Syntax
ceph orch ls --service_name=ingress.nfs.CLUSTER_ID
Example
[ceph: root@host01 /]# ceph orch ls --service_name=ingress.nfs.mynfs
NAME               PORTS                   RUNNING  REFRESHED  AGE  PLACEMENT
ingress.nfs.mynfs  10.10.128.75:2049,9000      4/4  4m ago     22m  count:2
Verification
View the IP endpoints, IPs for the individual NFS daemons, and the virtual IP for the ingress service:
Syntax
ceph nfs cluster info CLUSTER_NAME
Example
[ceph: root@host01 /]# ceph nfs cluster info mynfs
{
    "mynfs": {
        "virtual_ip": "10.10.128.75",
        "backend": [
            {
                "hostname": "host02",
                "ip": "10.10.128.69",
                "port": 12049
            },
            {
                "hostname": "host03",
                "ip": "10.10.128.70",
                "port": 12049
            }
        ],
        "port": 2049,
        "monitor_port": 9049
    }
}
List the hosts and processes:
Example
[ceph: root@host01 /]# ceph orch ps | grep nfs
haproxy.nfs.mynfs.host01.ruyyhq     host01  *:2049,9000  running (27m)  6m ago  34m  9.85M  -  2.2.19-7ea3822  5e6a41d77b38  328d27b3f706
haproxy.nfs.mynfs.host02.ctrhha     host02  *:2049,9000  running (34m)  6m ago  34m  4944k  -  2.2.19-7ea3822  5e6a41d77b38  4f4440dbfde9
keepalived.nfs.mynfs.host01.fqgjxd  host01               running (27m)  6m ago  34m  31.2M  -  2.1.5           18fa163ab18f  0e22b2b101df
keepalived.nfs.mynfs.host02.fqzkxb  host02               running (34m)  6m ago  34m  17.5M  -  2.1.5           18fa163ab18f  c1e3cc074cf8
nfs.mynfs.0.0.host02.emoaut         host02  *:12345      running (37m)  6m ago  37m  82.7M  -  3.5             91322de4f795  2d00faaa2ae5
nfs.mynfs.1.0.host03.nsxcfd         host03  *:12345      running (37m)  6m ago  37m  81.1M  -  3.5             91322de4f795  d4bda4074f17
Additional resources
- For information about mounting NFS exports on client hosts, see the Exporting Ceph File System namespaces over the NFS protocol section in the Red Hat Ceph Storage File System Guide.
11.6. Deploying HA for CephFS/NFS using a specification file
You can deploy HA for CephFS/NFS using a specification file by first deploying an NFS service and then deploying ingress to the same NFS service.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Hosts are added to the cluster.
- All the manager, monitor, and OSD daemons are deployed.
- Ensure the NFS module is enabled.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
Ensure the NFS module is enabled:
Example
[ceph: root@host01 /]# ceph mgr module ls | more
Exit out of the Cephadm shell and create the nfs.yaml file:
Example
[root@host01 ~]# touch nfs.yaml
Edit the nfs.yaml file to include the following details:
Syntax
service_type: nfs
service_id: SERVICE_ID
placement:
  hosts:
    - HOST_NAME_1
    - HOST_NAME_2
  count: COUNT
spec:
  port: PORT_NUMBER
Note: The number of hosts you allocate for the NFS service must be greater than the number of active NFS servers you request to deploy, specified by the placement: count parameter.
Example
service_type: nfs
service_id: cephfsnfs
placement:
  hosts:
    - host02
    - host03
  count: 1
spec:
  port: 12345
In this example, the server is run on the non-default port of 12345, instead of the default port of 2049, on host02 and host03.
Mount the YAML file under a directory in the container:
Example
[root@host01 ~]# cephadm shell --mount nfs.yaml:/var/lib/ceph/nfs.yaml
Log into the Cephadm shell and navigate to the directory:
Example
[ceph: root@host01 /]# cd /var/lib/ceph/
Deploy the NFS service using service specification:
Syntax
ceph orch apply -i FILE_NAME.yaml
Example
[ceph: root@host01 ceph]# ceph orch apply -i nfs.yaml
Note: Deployment of the NFS service is asynchronous and the command might return before the services have completely started.
Check that the NFS services have successfully started:
Syntax
ceph orch ls --service_name=nfs.CLUSTER_ID
Example
[ceph: root@host01 /]# ceph orch ls --service_name=nfs.cephfsnfs
NAME           PORTS    RUNNING  REFRESHED  AGE  PLACEMENT
nfs.cephfsnfs  ?:12345      2/2  3m ago     13m  host02;host03
Exit out of the Cephadm shell and create the ingress.yaml file:
Example
[root@host01 ~]# touch ingress.yaml
Edit the ingress.yaml file to include the following details:
Syntax
service_type: SERVICE_TYPE
service_id: SERVICE_ID
placement:
  count: PLACEMENT
spec:
  backend_service: SERVICE_ID_BACKEND
  frontend_port: FRONTEND_PORT
  monitor_port: MONITOR_PORT
  virtual_ip: VIRTUAL_IP_WITH_CIDR
Example
service_type: ingress
service_id: nfs.cephfsnfs
placement:
  count: 2
spec:
  backend_service: nfs.cephfsnfs
  frontend_port: 2049
  monitor_port: 9000
  virtual_ip: 10.10.128.75/22
Note: In this example, placement: count: 2 deploys the keepalived and haproxy services on random nodes. To specify the nodes on which to deploy keepalived and haproxy, use the placement: hosts option:
Example
service_type: ingress
service_id: nfs.cephfsnfs
placement:
  hosts:
    - host02
    - host03
Mount the YAML file under a directory in the container:
Example
[root@host01 ~]# cephadm shell --mount ingress.yaml:/var/lib/ceph/ingress.yaml
Log into the Cephadm shell and navigate to the directory:
Example
[ceph: root@host01 /]# cd /var/lib/ceph/
Deploy the ingress service using the service specification:
Syntax
ceph orch apply -i FILE_NAME.yaml
Example
[ceph: root@host01 ceph]# ceph orch apply -i ingress.yaml
Check that the ingress services have successfully started:
Syntax
ceph orch ls --service_name=ingress.nfs.CLUSTER_ID
Example
[ceph: root@host01 /]# ceph orch ls --service_name=ingress.nfs.cephfsnfs
NAME                   PORTS                   RUNNING  REFRESHED  AGE  PLACEMENT
ingress.nfs.cephfsnfs  10.10.128.75:2049,9000      4/4  4m ago     22m  count:2
Verification
View the IP endpoints, IPs for the individual NFS daemons, and the virtual IP for the ingress service:
Syntax
ceph nfs cluster info CLUSTER_NAME
Example
[ceph: root@host01 /]# ceph nfs cluster info cephfsnfs
{
    "cephfsnfs": {
        "virtual_ip": "10.10.128.75",
        "backend": [
            {
                "hostname": "host02",
                "ip": "10.10.128.69",
                "port": 12345
            },
            {
                "hostname": "host03",
                "ip": "10.10.128.70",
                "port": 12345
            }
        ],
        "port": 2049,
        "monitor_port": 9049
    }
}
List the hosts and processes:
Example
[ceph: root@host01 /]# ceph orch ps | grep nfs
haproxy.nfs.cephfsnfs.host01.ruyyhq     host01  *:2049,9000  running (27m)  6m ago  34m  9.85M  -  2.2.19-7ea3822  5e6a41d77b38  328d27b3f706
haproxy.nfs.cephfsnfs.host02.ctrhha     host02  *:2049,9000  running (34m)  6m ago  34m  4944k  -  2.2.19-7ea3822  5e6a41d77b38  4f4440dbfde9
keepalived.nfs.cephfsnfs.host01.fqgjxd  host01               running (27m)  6m ago  34m  31.2M  -  2.1.5           18fa163ab18f  0e22b2b101df
keepalived.nfs.cephfsnfs.host02.fqzkxb  host02               running (34m)  6m ago  34m  17.5M  -  2.1.5           18fa163ab18f  c1e3cc074cf8
nfs.cephfsnfs.0.0.host02.emoaut         host02  *:12345      running (37m)  6m ago  37m  82.7M  -  3.5             91322de4f795  2d00faaa2ae5
nfs.cephfsnfs.1.0.host03.nsxcfd         host03  *:12345      running (37m)  6m ago  37m  81.1M  -  3.5             91322de4f795  d4bda4074f17
Additional resources
- For information about mounting NFS exports on client hosts, see the Exporting Ceph File System namespaces over the NFS protocol section in the Red Hat Ceph Storage File System Guide.
11.7. Updating the NFS-Ganesha cluster using the Ceph Orchestrator
You can update the NFS-Ganesha cluster by changing the placement of the daemons on the hosts using the Ceph Orchestrator with Cephadm in the backend.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Hosts are added to the cluster.
- All manager, monitor and OSD daemons are deployed.
- NFS-Ganesha cluster created using the mgr/nfs module.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
Update the cluster:
Syntax
ceph orch apply nfs CLUSTER_NAME ["HOST_NAME_1,HOST_NAME_2,HOST_NAME_3"]
The CLUSTER_NAME is an arbitrary string, and HOST_NAME_1 is an optional string specifying the hosts on which to update the deployed NFS-Ganesha daemons.
Example
[ceph: root@host01 /]# ceph orch apply nfs nfsganesha "host02"
This updates the nfsganesha cluster on host02.
Verification
List the cluster details:
Example
[ceph: root@host01 /]# ceph nfs cluster ls
Show NFS-Ganesha cluster information:
Syntax
ceph nfs cluster info CLUSTER_NAME
Example
[ceph: root@host01 /]# ceph nfs cluster info nfsganesha
List the hosts, daemons, and processes:
Syntax
ceph orch ps --daemon_type=DAEMON_NAME
Example
[ceph: root@host01 /]# ceph orch ps --daemon_type=nfs
Additional Resources
- See Creating the NFS-Ganesha cluster using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide for more information.
11.8. Viewing the NFS-Ganesha cluster information using the Ceph Orchestrator
You can view the information of the NFS-Ganesha cluster using the Ceph Orchestrator. You can get the information about all the NFS Ganesha clusters or specific clusters with their port, IP address and the name of the hosts on which the cluster is created.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Hosts are added to the cluster.
- All manager, monitor and OSD daemons are deployed.
- NFS-Ganesha cluster created using the mgr/nfs module.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
View the NFS-Ganesha cluster information:
Syntax
ceph nfs cluster info CLUSTER_NAME
Example
[ceph: root@host01 /]# ceph nfs cluster info nfsganesha
{
    "nfsganesha": [
        {
            "hostname": "host02",
            "ip": [
                "10.10.128.70"
            ],
            "port": 2049
        }
    ]
}
Additional Resources
- See Creating the NFS-Ganesha cluster using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide for more information.
11.9. Fetching the NFS-Ganesha cluster logs using the Ceph Orchestrator
With the Ceph Orchestrator, you can fetch the NFS-Ganesha cluster logs. You need to be on the node where the service is deployed.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Cephadm installed on the nodes where NFS is deployed.
- Root-level access to all the nodes.
- Hosts are added to the cluster.
- NFS-Ganesha cluster created using the mgr/nfs module.
Procedure
As a root user, fetch the FSID of the storage cluster:
Example
[root@host03 ~]# cephadm ls
Copy the FSID and the name of the service.
Fetch the logs:
Syntax
cephadm logs --fsid FSID --name SERVICE_NAME
Example
[root@host03 ~]# cephadm logs --fsid 499829b4-832f-11eb-8d6d-001a4a000635 --name nfs.foo.host03
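If you prefer to follow the logs through systemd, the same daemon logs are usually also available through journald on that node. The following is a sketch, assuming the cephadm-deployed unit is named ceph-FSID@SERVICE_NAME and reusing the FSID and service name from the example above:
Example
[root@host03 ~]# journalctl -u ceph-499829b4-832f-11eb-8d6d-001a4a000635@nfs.foo.host03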
Additional Resources
- See Deploying the Ceph daemons using the placement specification section in the Red Hat Ceph Storage Operations Guide for more information.
11.10. Setting custom NFS-Ganesha configuration using the Ceph Orchestrator
The NFS-Ganesha cluster is defined in default configuration blocks. Using the Ceph Orchestrator, you can customize the configuration, which takes precedence over the default configuration blocks.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Hosts are added to the cluster.
- All manager, monitor and OSD daemons are deployed.
- NFS-Ganesha cluster created using the mgr/nfs module.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
The following is an example of the default configuration of the NFS-Ganesha cluster:
Example
# {{ cephadm_managed }}
NFS_CORE_PARAM {
        Enable_NLM = false;
        Enable_RQUOTA = false;
        Protocols = 4;
}

MDCACHE {
        Dir_Chunk = 0;
}

EXPORT_DEFAULTS {
        Attr_Expiration_Time = 0;
}

NFSv4 {
        Delegations = false;
        RecoveryBackend = 'rados_cluster';
        Minor_Versions = 1, 2;
}

RADOS_KV {
        UserId = "{{ user }}";
        nodeid = "{{ nodeid}}";
        pool = "{{ pool }}";
        namespace = "{{ namespace }}";
}

RADOS_URLS {
        UserId = "{{ user }}";
        watch_url = "{{ url }}";
}

RGW {
        cluster = "ceph";
        name = "client.{{ rgw_user }}";
}

%url    {{ url }}
Customize the NFS-Ganesha cluster configuration. The following are two examples for customizing the configuration:
Change the log level:
Example
LOG {
    COMPONENTS {
        ALL = FULL_DEBUG;
    }
}
Add a custom export block:
Create the user.
Note: The user specified in the FSAL blocks should have proper caps for the NFS-Ganesha daemons to access the Ceph cluster.
Syntax
ceph auth get-or-create client.USER_ID mon 'allow r' osd 'allow rw pool=.nfs namespace=NFS_CLUSTER_NAME, allow rw tag cephfs data=FS_NAME' mds 'allow rw path=EXPORT_PATH'
Example
[ceph: root@host01 /]# ceph auth get-or-create client.f64f341c-655d-11eb-8778-fa163e914bcc mon 'allow r' osd 'allow rw pool=.nfs namespace=nfs_cluster_name, allow rw tag cephfs data=fs_name' mds 'allow rw path=export_path'
Navigate to the following directory:
Syntax
cd /var/lib/ceph/DAEMON_PATH/
Example
[ceph: root@host01 /]# cd /var/lib/ceph/nfs/
If the nfs directory does not exist, create a directory in the path.
Create a new configuration file:
Syntax
touch PATH_TO_CONFIG_FILE
Example
[ceph: root@host01 nfs]# touch nfs-ganesha.conf
Edit the configuration file by adding the custom export block. It creates a single export that is managed by the Ceph NFS export interface.
Syntax
EXPORT {
    Export_Id = NUMERICAL_ID;
    Transports = TCP;
    Path = PATH_WITHIN_CEPHFS;
    Pseudo = BINDING;
    Protocols = 4;
    Access_Type = PERMISSIONS;
    Attr_Expiration_Time = 0;
    Squash = None;

    FSAL {
        Name = CEPH;
        Filesystem = "FILE_SYSTEM_NAME";
        User_Id = "USER_NAME";
        Secret_Access_Key = "USER_SECRET_KEY";
    }
}
Example
EXPORT {
    Export_Id = 100;
    Transports = TCP;
    Path = /;
    Pseudo = /ceph/;
    Protocols = 4;
    Access_Type = RW;
    Attr_Expiration_Time = 0;
    Squash = None;

    FSAL {
        Name = CEPH;
        Filesystem = "filesystem name";
        User_Id = "user id";
        Secret_Access_Key = "secret key";
    }
}
Apply the new configuration to the cluster:
Syntax
ceph nfs cluster config set CLUSTER_NAME -i PATH_TO_CONFIG_FILE
Example
[ceph: root@host01 nfs]# ceph nfs cluster config set nfs-ganesha -i /var/lib/ceph/nfs/nfs-ganesha.conf
This also restarts the service for the custom configuration.
Verification
List the service:
Example
[ceph: root@host01 /]# ceph orch ls
List the hosts, daemons, and processes:
Syntax
ceph orch ps --daemon_type=DAEMON_NAME
Example
[ceph: root@host01 /]# ceph orch ps --daemon_type=nfs
Verify the custom configuration:
Syntax
/bin/rados -p POOL_NAME -N CLUSTER_NAME get userconf-nfs.CLUSTER_NAME -
Example
[ceph: root@host01 /]# /bin/rados -p nfs-ganesha -N nfsganesha get userconf-nfs.nfsganesha -
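You can also print the current user-defined configuration through the NFS module itself. The following is a minimal sketch, assuming your release provides the config get subcommand and reusing the nfs-ganesha cluster name from the earlier example:
Syntax
ceph nfs cluster config get CLUSTER_NAME
Example
[ceph: root@host01 /]# ceph nfs cluster config get nfs-ganesha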
Additional Resources
- See the Resetting custom NFS-Ganesha configuration using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide for more information.
11.11. Resetting custom NFS-Ganesha configuration using the Ceph Orchestrator
Using the Ceph Orchestrator, you can reset the user-defined configuration to the default configuration.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Hosts are added to the cluster.
- All manager, monitor and OSD daemons are deployed.
- NFS-Ganesha deployed using the mgr/nfs module.
- Custom NFS cluster configuration is set up.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
Reset the NFS-Ganesha configuration:
Syntax
ceph nfs cluster config reset CLUSTER_NAME
Example
[ceph: root@host01 /]# ceph nfs cluster config reset nfs-cephfs
Verification
List the service:
Example
[ceph: root@host01 /]# ceph orch ls
List the hosts, daemons, and processes:
Syntax
ceph orch ps --daemon_type=DAEMON_NAME
Example
[ceph: root@host01 /]# ceph orch ps --daemon_type=nfs
Verify the custom configuration is deleted:
Syntax
/bin/rados -p POOL_NAME -N CLUSTER_NAME get userconf-nfs.CLUSTER_NAME -
Example
[ceph: root@host01 /]# /bin/rados -p nfs-ganesha -N nfsganesha get userconf-nfs.nfsganesha -
Additional Resources
- See Creating the NFS-Ganesha cluster using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide for more information.
- See the Setting custom NFS-Ganesha configuration using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide for more information.
11.12. Deleting the NFS-Ganesha cluster using the Ceph Orchestrator
You can use the Ceph Orchestrator with Cephadm in the backend to delete the NFS-Ganesha cluster.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Hosts are added to the cluster.
- All manager, monitor and OSD daemons are deployed.
- NFS-Ganesha cluster created using the mgr/nfs module.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
Delete the cluster:
Syntax
ceph nfs cluster rm CLUSTER_NAME
The CLUSTER_NAME is an arbitrary string.
Example
[ceph: root@host01 /]# ceph nfs cluster rm nfsganesha
NFS Cluster Deleted Successfully
Note: The delete option is deprecated, and you need to use rm to delete an NFS cluster.
Verification
List the cluster details:
Example
[ceph: root@host01 /]# ceph nfs cluster ls
Additional Resources
- See Exporting Ceph File System namespaces over the NFS protocol section in the Red Hat Ceph Storage File System guide for more information.
- See Creating the NFS-Ganesha cluster using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide for more information.
11.13. Removing the NFS-Ganesha gateway using the Ceph Orchestrator
You can remove the NFS-Ganesha gateway using the ceph orch rm command.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to all the nodes.
- Hosts are added to the cluster.
- At least one NFS-Ganesha gateway deployed on the hosts.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
List the service:
Example
[ceph: root@host01 /]# ceph orch ls
Remove the service:
Syntax
ceph orch rm SERVICE_NAME
Example
[ceph: root@host01 /]# ceph orch rm nfs.foo
Verification
List the hosts, daemons, and processes:
Syntax
ceph orch ps
Example
[ceph: root@host01 /]# ceph orch ps
Additional Resources
- See Deploying the Ceph daemons using the service specification section in the Red Hat Ceph Storage Operations Guide for more information.
- See Deploying the NFS-Ganesha gateway using the service specification section in the Red Hat Ceph Storage Operations Guide for more information.
11.14. Kerberos integration
Kerberos is a computer network security protocol that provides a centralized authentication server, which authenticates users to servers and vice versa across an untrusted network. In Kerberos authentication, the server and database are used for client authentication.
11.14.1. Setting up the KDC (as per requirement)
Kerberos runs as a third-party trusted server known as the Key Distribution Center (KDC) in which each user and service on the network is a principal. The KDC holds information about all its clients (user principals, service principals); hence, it needs to be secure. In a Kerberos setup, as the KDC is a single point of failure, it is recommended to have one master KDC and multiple slave KDCs.
Prerequisites
Verify that the following changes are made in the /etc/hosts file. Add domain names if required.
[root@chost ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.208.97 ceph-node1-installer.ibm.com ceph-node1-installer
10.0.210.243 ceph-node2.ibm.com ceph-node2
10.0.208.63 ceph-node3.ibm.com ceph-node3
10.0.210.222 ceph-node4.ibm.com ceph-node4
10.0.210.235 ceph-node5.ibm.com ceph-node5
10.0.209.87 ceph-node6.ibm.com ceph-node6
10.0.208.89 ceph-node7.ibm.com ceph-node7
Ensure that the domain name is present for all the nodes involved in the setup (all nodes in the Ceph cluster and all NFS client nodes).
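As a quick check, you can confirm that each node resolves to its fully qualified domain name before continuing. This is an illustrative check that reuses one of the host names from the /etc/hosts example above:
[root@host ~]# getent hosts ceph-node1-installer.ibm.com
10.0.208.97     ceph-node1-installer.ibm.com ceph-node1-installer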
Procedure
Follow the steps below to install and configure the KDC. Skip this part if you already have a KDC installed and configured.
Check if the required RPMs are installed on the machine where you want to set up the KDC.
[root@host ~]# rpm -qa | grep krb5
krb5-libs-1.20.1-9.el9_2.x86_64
krb5-pkinit-1.20.1-9.el9_2.x86_64
krb5-server-1.20.1-9.el9_2.x86_64
krb5-server-ldap-1.20.1-9.el9_2.x86_64
krb5-devel-1.20.1-9.el9_2.x86_64
krb5-workstation-1.20.1-9.el9_2.x86_64
Note:
- It is better to have the domain name in accordance with the Kerberos realm name. For example, Realm - PUNE.IBM.COM, Admin principal - admin/admin.
- Edit the installed configuration files to reflect the new KDC. Note that the KDC can be provided as either an IP address or a DNS name.
Update the krb5.conf file:
Note: You need to update all the realms (default_realm and domain_realm) with the kdc and admin_server IP in the krb5.conf file.
[root@host ~]# cat /etc/krb5.conf
# To opt out of the system crypto-policies configuration of krb5, remove the
# symlink at /etc/krb5.conf.d/crypto-policies which will not be recreated.
includedir /etc/krb5.conf.d/

[logging]
    default = FILE:/var/log/krb5libs.log
    kdc = FILE:/var/log/krb5kdc.log
    admin_server = FILE:/var/log/kadmind.log

[libdefaults]
    dns_lookup_realm = false
    ticket_lifetime = 24h
    renew_lifetime = 7d
    forwardable = true
    rdns = false
    pkinit_anchors = FILE:/etc/pki/tls/certs/ca-bundle.crt
    spake_preauth_groups = edwards25519
    dns_canonicalize_hostname = fallback
    qualify_shortname = ""
    default_realm = PUNE.IBM.COM
    default_ccache_name = KEYRING:persistent:%{uid}

[realms]
PUNE.IBM.COM = {
    kdc = 10.0.210.222:88
    admin_server = 10.0.210.222:749
}

[domain_realm]
.redhat.com = PUNE.IBM.COM
redhat.com = PUNE.IBM.COM
Update the kdc.conf file:
Note: You need to update the realms in the kdc.conf file.
[root@host ~]# cat /var/kerberos/krb5kdc/kdc.conf
[kdcdefaults]
    kdc_ports = 88
    kdc_tcp_ports = 88
    spake_preauth_kdc_challenge = edwards25519

[realms]
PUNE.IBM.COM = {
    master_key_type = aes256-cts-hmac-sha384-192
    acl_file = /var/kerberos/krb5kdc/kadm5.acl
    dict_file = /usr/share/dict/words
    default_principal_flags = +preauth
    admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
    supported_enctypes = aes256-cts-hmac-sha384-192:normal aes128-cts-hmac-sha256-128:normal aes256-cts-hmac-sha1-96:normal aes128-cts-hmac-sha1-96:normal camellia256-cts-cmac:normal camellia128-cts-cmac:normal arcfour-hmac-md5:normal
    # Supported encryption types for FIPS mode:
    #supported_enctypes = aes256-cts-hmac-sha384-192:normal aes128-cts-hmac-sha256-128:normal
}
Create the KDC database using kdb5_util:
Note: Ensure that the host name can be resolved via either DNS or /etc/hosts.
[root@host ~]# kdb5_util create -s -r PUNE.IBM.COM
Initializing database '/var/kerberos/krb5kdc/principal' for realm 'PUNE.IBM.COM',
master key name 'K/M@PUNE.IBM.COM'
You will be prompted for the database Master Password.
It is important that you NOT FORGET this password.
Enter KDC database master key:
Re-enter KDC database master key to verify:
Add administrators to the ACL file:
[root@host ~]# cat /var/kerberos/krb5kdc/kadm5.acl
*/admin@PUNE.IBM.COM *
The output indicates that any principal in the PUNE.IBM.COM realm with an admin instance has all administrative privileges.
Add administrators to the Kerberos database:
[root@host ~]# kadmin.local
Authenticating as principal root/admin@PUNE.IBM.COM with password.
kadmin.local: addprinc admin/admin@PUNE.IBM.COM
No policy specified for admin/admin@PUNE.IBM.COM; defaulting to no policy
Enter password for principal "admin/admin@PUNE.IBM.COM":
Re-enter password for principal "admin/admin@PUNE.IBM.COM":
Principal "admin/admin@PUNE.IBM.COM" created.
kadmin.local:
Start kdc and kadmind:
# krb5kdc
# kadmind
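Optionally, to keep the KDC services running across reboots, you can manage them through systemd instead of starting the daemons directly. This is a sketch, assuming the krb5kdc and kadmin unit names shipped with the krb5-server package:
# systemctl enable --now krb5kdc kadmin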
Verification
Check if kdc and kadmind are running properly:
# ps -eaf | grep krb
root       27836       1  0 07:35 ?        00:00:00 krb5kdc
root       27846   13956  0 07:35 pts/8    00:00:00 grep --color=auto krb

# ps -eaf | grep kad
root       27841       1  0 07:35 ?        00:00:00 kadmind
root       27851   13956  0 07:36 pts/8    00:00:00 grep --color=auto kad
Check if the setup is correct:
[root@host ~]# kinit admin/admin
Password for admin/admin@PUNE.IBM.COM:

[root@ceph-mani-o7fdxp-node4 ~]# klist
Ticket cache: KCM:0
Default principal: admin/admin@PUNE.IBM.COM

Valid starting     Expires            Service principal
10/25/23 06:37:08  10/26/23 06:37:01  krbtgt/PUNE.IBM.COM@PUNE.IBM.COM
        renew until 10/25/23 06:37:08
11.14.2. Setting up the Kerberos client
The Kerberos client machine must be time-synchronized with the KDC. Ensure that the KDC and the clients are synchronized by using NTP. A time difference of five minutes or more leads to Kerberos authentication failure and throws a clock skew error. This step is a prerequisite on all the systems that participate in Kerberos authentication, such as the NFS clients and the hosts where the NFS Ganesha containers run.
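As a quick check before proceeding, you can confirm that the client clock is synchronized. This is an illustrative check, assuming chrony is the NTP implementation in use:
[root@host ~]# chronyc tracking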
Procedure
Check the required RPMs:
[root@host ~]# rpm -qa | grep krb5
krb5-libs-1.20.1-9.el9_2.x86_64
krb5-pkinit-1.20.1-9.el9_2.x86_64
krb5-workstation-1.20.1-9.el9_2.x86_64
Update the krb5.conf file similar to the one on the KDC server:
[root@host ~]# cat /etc/krb5.conf
# To opt out of the system crypto-policies configuration of krb5, remove the
# symlink at /etc/krb5.conf.d/crypto-policies which will not be recreated.
includedir /etc/krb5.conf.d/

[logging]
    default = FILE:/var/log/krb5libs.log
    kdc = FILE:/var/log/krb5kdc.log
    admin_server = FILE:/var/log/kadmind.log

[libdefaults]
    dns_lookup_realm = false
    ticket_lifetime = 24h
    renew_lifetime = 7d
    forwardable = true
    rdns = false
    pkinit_anchors = FILE:/etc/pki/tls/certs/ca-bundle.crt
    spake_preauth_groups = edwards25519
    dns_canonicalize_hostname = fallback
    qualify_shortname = ""
    default_realm = PUNE.IBM.COM
    default_ccache_name = KEYRING:persistent:%{uid}

[realms]
PUNE.IBM.COM = {
    kdc = 10.0.210.222:88
    admin_server = 10.0.210.222:749
}

[domain_realm]
.IBM.com = PUNE.IBM.COM
IBM.com = PUNE.IBM.COM
Verification
Validate the client settings:
[root@host ~]# kinit admin/admin
Password for admin/admin@PUNE.IBM.COM:

[root@ceph-mani-o7fdxp-node5 ~]# klist
Ticket cache: KCM:0
Default principal: admin/admin@PUNE.IBM.COM

Valid starting     Expires            Service principal
10/25/23 08:49:12  10/26/23 08:49:08  krbtgt/PUNE.IBM.COM@PUNE.IBM.COM
        renew until 10/25/23 08:49:12
11.14.3. NFS specific Kerberos setup
You need to create service principals for both the NFS server and the client. The respective keys are stored in the keytab files. These principals are required to set up the initial security context required by RPCSEC_GSS. The service principals have the format nfs/<hostname>@REALM. You can copy the /etc/krb5.conf file from the working system to the Ceph nodes.
Procedure
Create the service principal for the host:
[root@host ~]# kinit admin/admin
Password for admin/admin@PUNE.IBM.COM:

[root@host ~]# kadmin
Authenticating as principal admin/admin@PUNE.IBM.COM with password.
Password for admin/admin@PUNE.IBM.COM:
kadmin: addprinc -randkey nfs/<hostname>.ibm.com
No policy specified for nfs/<hostname>.ibm.com@PUNE.IBM.COM; defaulting to no policy
Principal "nfs/<hostname>.ibm.com@PUNE.IBM.COM" created.
Add the key to the keytab file:
Note: During this step, you are already on the NFS server and using the kadmin interface. Here, the keytab operations reflect on the keytab of the NFS server.
kadmin: ktadd nfs/<hostname>.ibm.com
Entry for principal nfs/<hostname>.ibm.com with kvno 2, encryption type aes256-cts-hmac-sha384-192 added to keytab FILE:/etc/krb5.keytab.
Entry for principal nfs/<hostname>.ibm.com with kvno 2, encryption type aes128-cts-hmac-sha256-128 added to keytab FILE:/etc/krb5.keytab.
Entry for principal nfs/<hostname>.ibm.com with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab FILE:/etc/krb5.keytab.
Entry for principal nfs/<hostname>.ibm.com with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab FILE:/etc/krb5.keytab.
Entry for principal nfs/<hostname>.ibm.com with kvno 2, encryption type camellia256-cts-cmac added to keytab FILE:/etc/krb5.keytab.
Entry for principal nfs/<hostname>.ibm.com with kvno 2, encryption type camellia128-cts-cmac added to keytab FILE:/etc/krb5.keytab.
Entry for principal nfs/<hostname>.ibm.com with kvno 2, encryption type arcfour-hmac added to keytab FILE:/etc/krb5.keytab.
kadmin:
- Run steps 1 and 2 on all the Ceph nodes where NFS Ganesha containers are running and all the NFS clients.
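After the keys are added on each node, you can optionally confirm that the nfs/ principal is present in the keytab. A minimal check using the standard klist utility:
[root@host ~]# klist -k /etc/krb5.keytab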
11.14.4. NFS Ganesha container settings
Follow the steps below to configure the NFS Ganesha settings in the Ceph environment.
Procedure
Retrieve the existing NFS Ganesha container configuration:
[ceph: root@host /]# ceph orch ls --service-type nfs --export
service_type: nfs
service_id: c_ganesha
service_name: nfs.c_ganesha
placement:
  hosts:
  - host1
  - host2
  - host3
spec:
  port: 2049
Modify the container configuration to pass the /etc/krb5.conf and /etc/krb5.keytab files to the container from the host. These files are referred to by NFS Ganesha at runtime to validate incoming service tickets and to secure the communication between Ganesha and the NFS client (krb5p).
[root@host ~]# cat nfs.yaml
service_type: nfs
service_id: c_ganesha
service_name: nfs.c_ganesha
placement:
  hosts:
  - host1
  - host2
  - host3
spec:
  port: 2049
extra_container_args:
  - "-v"
  - "/etc/krb5.keytab:/etc/krb5.keytab:ro"
  - "-v"
  - "/etc/krb5.conf:/etc/krb5.conf:ro"
Make the modified nfs.yaml file available inside the cephadm shell:
[root@host ~]# cephadm shell --mount nfs.yaml:/var/lib/ceph/nfs.yaml
Inferring fsid ff1c1498-73ec-11ee-af38-fa163e9a17fd
Inferring config /var/lib/ceph/ff1c1498-73ec-11ee-af38-fa163e9a17fd/mon.ceph-msaini-qp49z7-node1-installer/config
Using ceph image with id 'fada497f9c5f' and tag 'ceph-7.0-rhel-9-containers-candidate-73711-20231018030025' created on 2023-10-18 03:03:39 +0000 UTC
registry-proxy.engineering.ibm.com/rh-osbs/rhceph@sha256:e66e5dd79d021f3204a183f5dbe4537d0c0e4b466df3b2cc4d50cc79c0f34d75
Validate whether the file has the required changes:
[ceph: root@host /]# cat /var/lib/ceph/nfs.yaml
service_type: nfs
service_id: c_ganesha
service_name: nfs.c_ganesha
placement:
  hosts:
  - host1
  - host2
  - host3
spec:
  port: 2049
extra_container_args:
  - "-v"
  - "/etc/krb5.keytab:/etc/krb5.keytab:ro"
  - "-v"
  - "/etc/krb5.conf:/etc/krb5.conf:ro"
Apply the required changes to the NFS Ganesha container and redeploy the container:
[ceph: root@host /]# ceph orch apply -i /var/lib/ceph/nfs.yaml
Scheduled nfs.c_ganesha update...

[ceph: root@ceph-msaini-qp49z7-node1-installer /]# ceph orch redeploy nfs.c_ganesha
Scheduled to redeploy nfs.c_ganesha.1.0.ceph-msaini-qp49z7-node1-installer.sxzuts on host 'ceph-msaini-qp49z7-node1-installer'
Scheduled to redeploy nfs.c_ganesha.2.0.ceph-msaini-qp49z7-node2.psuvki on host 'ceph-msaini-qp49z7-node2'
Scheduled to redeploy nfs.c_ganesha.0.0.ceph-msaini-qp49z7-node3.qizzvk on host 'ceph-msaini-qp49z7-node3'
Validate whether the redeployed service has the required changes:
[ceph: root@host /]# ceph orch ls --service-type nfs --export
service_type: nfs
service_id: c_ganesha
service_name: nfs.c_ganesha
placement:
  hosts:
  - ceph-msaini-qp49z7-node1-installer
  - ceph-msaini-qp49z7-node2
  - ceph-msaini-qp49z7-node3
extra_container_args:
- -v
- /etc/krb5.keytab:/etc/krb5.keytab:ro
- -v
- /etc/krb5.conf:/etc/krb5.conf:ro
spec:
  port: 2049
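Optionally, you can confirm that the Kerberos files are visible inside a redeployed NFS Ganesha container. This is a sketch, assuming the daemon name reported by the redeploy output above:
[root@host ~]# cephadm enter --name nfs.c_ganesha.1.0.ceph-msaini-qp49z7-node1-installer.sxzuts
# ls -l /etc/krb5.conf /etc/krb5.keytab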
Modify the export definition to have the krb5* (krb5, krb5i, krb5p) security flavor:
Note: You can create such an export after completing the above setup.
[ceph: root@host /]# ceph nfs export info c_ganesha /exp1
{
  "access_type": "RW",
  "clients": [],
  "cluster_id": "c_ganesha",
  "export_id": 1,
  "fsal": {
    "fs_name": "fs1",
    "name": "CEPH",
    "user_id": "nfs.c_ganesha.1"
  },
  "path": "/volumes/_nogroup/exp1/81f9a67e-ddf1-4b5a-9fe0-d87effc7ca16",
  "protocols": [
    4
  ],
  "pseudo": "/exp1",
  "sectype": [
    "krb5"
  ],
  "security_label": true,
  "squash": "none",
  "transports": [
    "TCP"
  ]
}
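If you manage exports with specification files, the same security flavor can be requested when the export is created or updated. The following is a minimal sketch, assuming a hypothetical export.json file that reuses the c_ganesha cluster, the fs1 file system, and the /exp1 path from the output above; adjust the values for your environment:
[ceph: root@host /]# cat export.json
{
  "cluster_id": "c_ganesha",
  "path": "/volumes/_nogroup/exp1/81f9a67e-ddf1-4b5a-9fe0-d87effc7ca16",
  "pseudo": "/exp1",
  "access_type": "RW",
  "squash": "none",
  "protocols": [4],
  "fsal": {
    "name": "CEPH",
    "fs_name": "fs1"
  },
  "sectype": ["krb5", "krb5i", "krb5p"]
}
[ceph: root@host /]# ceph nfs export apply c_ganesha -i export.json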
11.14.5. NFS Client side actions
The following are some of the operations that NFS clients can perform.
Procedure
Create service principal:
kadmin: addprinc -randkey nfs/<hostname>.ibm.com@PUNE.IBM.COM
No policy specified for nfs/<hostname>.ibm.com@PUNE.IBM.COM; defaulting to no policy
Principal "nfs/<hostname>.ibm.com@PUNE.IBM.COM" created.

kadmin: ktadd nfs/<hostname>.ibm.com@PUNE.IBM.COM
Entry for principal nfs/<hostname>.ibm.com@PUNE.IBM.COM with kvno 2, encryption type aes256-cts-hmac-sha384-192 added to keytab FILE:/etc/krb5.keytab.
Entry for principal nfs/<hostname>.ibm.com@PUNE.IBM.COM with kvno 2, encryption type aes128-cts-hmac-sha256-128 added to keytab FILE:/etc/krb5.keytab.
Entry for principal nfs/<hostname>.ibm.com@PUNE.IBM.COM with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab FILE:/etc/krb5.keytab.
Entry for principal nfs/<hostname>.ibm.com@PUNE.IBM.COM with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab FILE:/etc/krb5.keytab.
Entry for principal nfs/<hostname>.ibm.com@PUNE.IBM.COM with kvno 2, encryption type camellia256-cts-cmac added to keytab FILE:/etc/krb5.keytab.
Entry for principal nfs/<hostname>.ibm.com@PUNE.IBM.COM with kvno 2, encryption type camellia128-cts-cmac added to keytab FILE:/etc/krb5.keytab.
Entry for principal nfs/<hostname>.ibm.com@PUNE.IBM.COM with kvno 2, encryption type arcfour-hmac added to keytab FILE:/etc/krb5.keytab.
Restart the rpc.gssd service for the modified or new keytab file to take effect:
# systemctl restart rpc-gssd
Mount the NFS export:
Syntax
mount -t nfs -o vers=4.1,port=2049 <IP>:/<export_name> <mount_point>
Example
mount -t nfs -o vers=4.1,port=2049 10.8.128.233:/ganesha /mnt/test/
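To mount with Kerberos security, add the sec option that matches the security flavor configured on the export. This is an illustrative example, assuming the /exp1 export with krb5p enabled as shown earlier:
Example
mount -t nfs -o vers=4.1,port=2049,sec=krb5p 10.8.128.233:/exp1 /mnt/test/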
- Create users. Once the NFS export is mounted, regular users are used to work with the mounted exports. These regular users (generally local users on the system or users from a centralized system like LDAP/AD) need to be part of the Kerberos setup. Based on the kind of setup, local users need to be created in the KDC as well.
11.14.6. Validating the setup
Follow the steps below to validate the setup.
Procedure
Access the export as a normal user, without Kerberos tickets:
[user@host ~]$ klist
klist: Credentials cache 'KCM:1001' not found

[user@host ~]$ cd /mnt
-bash: cd: /mnt: Permission denied
Access the export as a normal user, with Kerberos tickets:
[user@host ~]$ kinit sachin
Password for user@PUNE.IBM.COM:

[user@host ~]$ klist
Ticket cache: KCM:1001
Default principal: user@PUNE.IBM.COM

Valid starting     Expires            Service principal
10/27/23 12:57:21  10/28/23 12:57:17  krbtgt/PUNE.IBM.COM@PUNE.IBM.COM
        renew until 10/27/23 12:57:21

[user@host ~]$ cd /mnt
[user@host mnt]$ klist
Ticket cache: KCM:1001
Default principal: user@PUNE.IBM.COM

Valid starting     Expires            Service principal
10/27/23 12:57:21  10/28/23 12:57:17  krbtgt/PUNE.IBM.COM@PUNE.IBM.COM
        renew until 10/27/23 12:57:21
10/27/23 12:57:28  10/28/23 12:57:17  nfs/ceph-msaini-qp49z7-node1-installer.ibm.com@
        renew until 10/27/23 12:57:21
        Ticket server: nfs/ceph-msaini-qp49z7-node1-installer.ibm.com@PUNE.IBM.COM
Note: Tickets for the nfs/ service are observed on the client.