Chapter 13. CephFS NFS post-deployment configuration and verification
You must complete two post-deployment configuration tasks before you create NFS shares, grant user access, and mount NFS shares.
- Map the Networking service (neutron) StorageNFS network to the isolated data center StorageNFS network. You can omit this step if you do not want to isolate NFS traffic to a separate network.
- Create the default share type, as shown in the sketch below.
After you complete these steps, the tenant compute instances can create, allow access to, and mount NFS shares.
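The following is a minimal sketch of the second task, creating the default share type with the manila client. It assumes the type is named default and that driver_handles_share_servers is set to false, which is what the CephFS-NFS driver expects; confirm the type name your deployment uses:
$ manila type-create default false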
When you deploy CephFS-NFS as a back end of the Shared File Systems service (manila), you add the following new elements to the overcloud environment:
- StorageNFS network
- Ceph MDS service on the controllers
- NFS-Ganesha service on the controllers
As the cloud administrator, you must verify the stability of the CephFS-NFS environment before you make it available to service users.
13.1. Creating the storage provider network
You must map the new isolated StorageNFS network to a Networking (neutron) provider network. The Compute VMs attach to the network to access share export locations that are provided by the NFS-Ganesha gateway.
For information about network security with the Shared File Systems service (manila), see Hardening the Shared File Systems Service in Hardening Red Hat OpenStack Platform.
Procedure
The openstack network create command defines the configuration for the StorageNFS neutron network.
Source the overcloud credentials file:
$ source ~/<credentials_file>
- Replace <credentials_file> with the name of your credentials file, for example, overcloudrc.
On an undercloud node, create the StorageNFS network:
(overcloud) [stack@undercloud-0 ~]$ openstack network create StorageNFS --share --provider-network-type vlan --provider-physical-network datacentre --provider-segment 70
You can enter this command with the following options:
- For the --provider-physical-network option, use the default value datacentre, unless you set another tag for the br-isolated bridge through NeutronBridgeMappings in your tripleo-heat-templates.
- For the --provider-segment option, use the VLAN value set for the StorageNFS isolated network in the heat template, /usr/share/openstack-tripleo-heat-templates/network_data_ganesha.yaml. This value is 70, unless the deployer modified the isolated network definitions.
- For the --provider-network-type option, use the value vlan.
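Compute VMs can attach to this network only after it has a subnet. The following is a minimal sketch using openstack subnet create; the subnet name StorageNFSSubnet, the 172.17.0.0/20 range, and the allocation pool are example values only, so substitute the addressing defined for your StorageNFS network:
$ openstack subnet create StorageNFSSubnet --network StorageNFS --subnet-range 172.17.0.0/20 --allocation-pool start=172.17.0.10,end=172.17.0.250 --gateway none --dhcp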
13.4. Verifying creation of isolated StorageNFS network
The network_data_ganesha.yaml file used to deploy CephFS-NFS as a Shared File Systems service back end creates the StorageNFS VLAN. Complete the following steps to verify the existence of the isolated StorageNFS network.
Procedure
- Log in to one of the controllers in the overcloud.
Enter the following command to check the connected networks and verify the existence of the VLAN as set in network_data_ganesha.yaml:
$ ip a
15: vlan310: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 32:80:cf:0e:11:ca brd ff:ff:ff:ff:ff:ff
    inet 172.16.4.4/24 brd 172.16.4.255 scope global vlan310
       valid_lft forever preferred_lft forever
    inet 172.16.4.7/32 brd 172.16.4.255 scope global vlan310
       valid_lft forever preferred_lft forever
    inet6 fe80::3080:cfff:fe0e:11ca/64 scope link
       valid_lft forever preferred_lft forever
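If you want to inspect only the StorageNFS interface, you can query it by name. This is a minimal sketch that assumes the interface is named vlan310 as in the example output; use the VLAN interface name that matches your network_data_ganesha.yaml settings:
$ ip a show vlan310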
13.5. Verifying Ceph MDS service
Use the systemctl status command to verify the Ceph MDS service status.
Procedure
Enter the following command on all Controller nodes to check the status of the MDS container:
$ systemctl status ceph-mds@<CONTROLLER-HOST>
Example:
$ systemctl status ceph-mds@controller-0.service
ceph-mds@controller-0.service - Ceph MDS
   Loaded: loaded (/etc/systemd/system/ceph-mds@.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2018-09-18 20:11:53 UTC; 6 days ago
 Main PID: 65066 (conmon)
    Tasks: 16 (limit: 204320)
   Memory: 38.2M
   CGroup: /system.slice/system-ceph\x2dmds.slice/ceph-mds@controller-0.service
           └─60921 /usr/bin/podman run --rm --net=host --memory=32000m --cpus=4 -v /var/lib/ceph:/var/lib/ceph:z -v /etc/ceph:/etc/ceph:z -v /var/run/ceph:/var/run/ceph:z -v /etc/localtime:/etc/localtime:ro>
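Because the MDS service runs in a container, you can also confirm that the container itself is up. A minimal sketch, assuming a podman-managed containerized deployment as shown in the output above:
$ sudo podman ps --filter name=ceph-mds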
13.6. Verifying Ceph cluster status
Complete the following steps to verify Ceph cluster status.
Procedure
- Log in to the active Controller node.
Enter the following command:
$ sudo ceph -s
  cluster:
    id:     3369e280-7578-11e8-8ef3-801844eeec7c
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum overcloud-controller-1,overcloud-controller-2,overcloud-controller-0
    mgr: overcloud-controller-1(active), standbys: overcloud-controller-2, overcloud-controller-0
    mds: cephfs-1/1/1 up {0=overcloud-controller-0=up:active}, 2 up:standby
    osd: 6 osds: 6 up, 6 in
There is one active MDS and two MDSs on standby.
To check the status of the Ceph file system in more detail, enter the following command:
$ sudo ceph fs ls
name: cephfs, metadata pool: manila_metadata, data pools: [manila_data]
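For a per-rank view of the MDS daemons and the pools behind the file system, you can also use ceph fs status. A minimal sketch, assuming the file system is named cephfs as in the output above:
$ sudo ceph fs status cephfs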
13.8. Verifying that the manila-api service acknowledges scheduler and share services
Complete the following steps to confirm that the manila-api service acknowledges the scheduler and share services.
Procedure
- Log in to the undercloud.
Enter the following command:
$ source /home/stack/overcloudrc
Enter the following command to confirm that manila-scheduler and manila-share are enabled:
$ manila service-list
| Id | Binary           | Host             | Zone | Status  | State | Updated_at                 |
| 2  | manila-scheduler | hostgroup        | nova | enabled | up    | 2018-08-08T04:15:03.000000 |
| 5  | manila-share     | hostgroup@cephfs | nova | enabled | up    | 2018-08-08T04:15:03.000000 |
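To confirm that the manila-api endpoint itself responds, you can issue any read request against it. A minimal sketch using the credentials sourced above; on a new deployment, an empty share listing is the expected result:
$ manila list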