Chapter 13. CephFS NFS post-deployment configuration and verification

You must complete two post-deployment configuration tasks before you create NFS shares, grant user access, and mount NFS shares.

  • Map the Networking service (neutron) StorageNFS network to the isolated data center StorageNFS network. You can omit this task if you do not want to isolate NFS traffic to a separate network.
  • Create the default share type.

After you complete these steps, the tenant compute instances can create, allow access to, and mount NFS shares.

When you deploy CephFS-NFS as a back end of the Shared File Systems service (manila), you add the following new elements to the overcloud environment:

  • StorageNFS network
  • Ceph MDS service on the controllers
  • NFS-Ganesha service on the controllers

As the cloud administrator, you must verify the stability of the CephFS-NFS environment before you make it available to service users.

13.1. Creating the storage provider network

You must map the new isolated StorageNFS network to a Networking (neutron) provider network. The Compute VMs attach to the network to access share export locations that are provided by the NFS-Ganesha gateway.

For information about network security with the Shared File Systems service (manila), see Hardening the Shared File Systems Service in Hardening Red Hat OpenStack Platform.

Procedure

The openstack network create command defines the configuration for the StorageNFS neutron network.

  1. Source the overcloud credentials file:

    $ source ~/<credentials_file>
    • Replace <credentials_file> with the name of your credentials file, for example, overcloudrc.
  2. On an undercloud node, create the StorageNFS network:

    (overcloud) [stack@undercloud-0 ~]$ openstack network create StorageNFS --share --provider-network-type vlan --provider-physical-network datacentre --provider-segment 70

    You can enter this command with the following options:

    • For the --provider-physical-network option, use the default value datacentre, unless you set another tag for the br-isolated bridge through NeutronBridgeMappings in your tripleo-heat-templates.
    • For the --provider-segment option, use the VLAN value set for the StorageNFS isolated network in the heat template, /usr/share/openstack-tripleo-heat-templates/network_data_ganesha.yaml. This value is 70, unless the deployer modified the isolated network definitions.
    • For the --provider-network-type option, use the value vlan.
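
    After the network exists, you can optionally confirm its attributes, such as the segmentation ID and the physical network, with the standard openstack client. A minimal check, using the network name from the create command:

    (overcloud) [stack@undercloud-0 ~]$ openstack network show StorageNFS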

13.2. Configuring the shared provider StorageNFS network

Create a corresponding StorageNFSSubnet on the neutron-shared provider network. Ensure that the subnet matches the storage_nfs network definition in the network_data_ganesha.yaml file, and ensure that the allocation range for the StorageNFS subnet does not overlap with the allocation range of the corresponding undercloud subnet. No gateway is required because the StorageNFS subnet is dedicated to serving NFS shares.

Prerequisites

  • The start and end IP addresses for the allocation pool.
  • The subnet IP range.

13.2.1. Configuring the shared provider StorageNFS IPv4 network

Create a corresponding StorageNFSSubnet on the neutron-shared IPv4 provider network.

Procedure

  1. Log in to an overcloud node.
  2. Source your overcloud credentials.
  3. Use the following example command to provision the subnet and make the following updates:

    1. Replace the start=172.17.0.4,end=172.17.0.250 IP values with the IP values for your network.
    2. Replace the 172.17.0.0/20 subnet range with the subnet range for your network.

    [stack@undercloud-0 ~]$ openstack subnet create --allocation-pool start=172.17.0.4,end=172.17.0.250 \
    --dhcp --network StorageNFS --subnet-range 172.17.0.0/20 \
    --gateway none StorageNFSSubnet
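
    Optionally, verify the subnet attributes, such as the allocation pool and the absence of a gateway, with the standard openstack client. A minimal check, using the subnet name from the create command:

    [stack@undercloud-0 ~]$ openstack subnet show StorageNFSSubnet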

13.2.2. Configuring the shared provider StorageNFS IPv6 network

Create a corresponding StorageNFSSubnet on the neutron-shared IPv6 provider network.

Procedure

  1. Log in to an overcloud node.
  2. Source your overcloud credentials.
  3. Use the following example command to provision the subnet and make the following update:

    • Replace the fd00:fd00:fd00:7000::/64 subnet range with the subnet range for your network.

    [stack@undercloud-0 ~]$ openstack subnet create --ip-version 6 --dhcp \
    --network StorageNFS --subnet-range fd00:fd00:fd00:7000::/64 \
    --gateway none --ipv6-ra-mode dhcpv6-stateful \
    --ipv6-address-mode dhcpv6-stateful StorageNFSSubnet -f yaml
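
    Optionally, list the subnets attached to the StorageNFS network to confirm that the IPv6 subnet was created. A minimal check with the standard openstack client:

    [stack@undercloud-0 ~]$ openstack subnet list --network StorageNFS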

13.3. Configuring a default share type

You can use the Shared File Systems service (manila) to define share types for the creation of shares with specific settings. Share types work like Block Storage volume types. Each type has associated settings, for example, extra specifications. When you invoke the type during share creation, the settings apply to the shared file system.

With Red Hat OpenStack Platform (RHOSP) director, you must create a default share type before you open the cloud for users to access.

Procedure

  • Create a default share type for CephFS with NFS:

    $ manila type-create default false
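
    Optionally, confirm that the new share type exists. A minimal check with the manila client:

    $ manila type-list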

For more information about share types, see Creating share types in Configuring persistent storage.

13.4. Verifying creation of isolated StorageNFS network

The network_data_ganesha.yaml file used to deploy CephFS-NFS as a Shared File Systems service back end creates the StorageNFS VLAN. Complete the following steps to verify the existence of the isolated StorageNFS network.

Procedure

  1. Log in to one of the controllers in the overcloud.
  2. Enter the following command to check the connected networks and verify the existence of the VLAN as set in network_data_ganesha.yaml:

    $ ip a
    15: vlan310: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
        link/ether 32:80:cf:0e:11:ca brd ff:ff:ff:ff:ff:ff
        inet 172.16.4.4/24 brd 172.16.4.255 scope global vlan310
           valid_lft forever preferred_lft forever
        inet 172.16.4.7/32 brd 172.16.4.255 scope global vlan310
           valid_lft forever preferred_lft forever
        inet6 fe80::3080:cfff:fe0e:11ca/64 scope link
           valid_lft forever preferred_lft forever
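
    For more detail about the VLAN device, such as its VLAN ID and parent interface, you can query the link directly. This sketch assumes the device name vlan310 from the example output; substitute the device name from your own ip a output:

    $ ip -d link show vlan310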

13.5. Verifying Ceph MDS service

Use the systemctl status command to verify the Ceph MDS service status.

Procedure

  • Enter the following command on all Controller nodes to check the status of the MDS container:

    $ systemctl status ceph-mds@<CONTROLLER-HOST>

    Example:

$ systemctl status ceph-mds@controller-0.service

ceph-mds@controller-0.service - Ceph MDS
   Loaded: loaded (/etc/systemd/system/ceph-mds@.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2018-09-18 20:11:53 UTC; 6 days ago
 Main PID: 65066 (conmon)
   Tasks: 16 (limit: 204320)
   Memory: 38.2M
   CGroup: /system.slice/system-ceph\x2dmds.slice/ceph-mds@controller-0.service
         └─60921 /usr/bin/podman run --rm --net=host --memory=32000m --cpus=4 -v
/var/lib/ceph:/var/lib/ceph:z -v /etc/ceph:/etc/ceph:z -v
/var/run/ceph:/var/run/ceph:z -v /etc/localtime:/etc/localtime:ro>
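
    As an additional check, you can query the MDS state from the Ceph cluster itself. This assumes the Controller node has Ceph client access, as in this deployment; the output has the same form as the mds line in the ceph -s output shown in Section 13.6:

    $ sudo ceph mds stat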

13.6. Verifying Ceph cluster status

Complete the following steps to verify Ceph cluster status.

Procedure

  1. Log in to the active Controller node.
  2. Enter the following command:

    $ sudo ceph -s
    
      cluster:
        id:     3369e280-7578-11e8-8ef3-801844eeec7c
        health: HEALTH_OK

      services:
        mon: 3 daemons, quorum overcloud-controller-1,overcloud-controller-2,overcloud-controller-0
        mgr: overcloud-controller-1(active), standbys: overcloud-controller-2, overcloud-controller-0
        mds: cephfs-1/1/1 up  {0=overcloud-controller-0=up:active}, 2 up:standby
        osd: 6 osds: 6 up, 6 in

    There is one active MDS and two MDSs on standby.

  3. To confirm the Ceph file system and its metadata and data pools, enter the following command:

    $ sudo ceph fs ls
    
    name: cephfs, metadata pool: manila_metadata, data pools: [manila_data]
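
    For more detail, such as the state of each MDS rank and pool usage, you can use the standard ceph fs status command. Replace <cephfs> with the file system name that ceph fs ls reports, for example cephfs:

    $ sudo ceph fs status <cephfs>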

13.7. Verifying NFS-Ganesha and manila-share service status

Complete the following step to verify the status of the NFS-Ganesha and manila-share services.

Procedure

  1. Enter the following command from one of the Controller nodes to confirm that the ceph-nfs and openstack-manila-share services are started:

    $ pcs status
    
    ceph-nfs       (systemd:ceph-nfs@pacemaker):   Started overcloud-controller-1
    
    podman container: openstack-manila-share [192.168.24.1:8787/rhosp-rhel8/openstack-manila-share:pcmklatest]
       openstack-manila-share-podman-0      (ocf::heartbeat:podman):        Started overcloud-controller-1
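
    Because the full pcs status output is long, you can optionally filter it to the two resources of interest with standard shell tools:

    $ pcs status | grep -E 'ceph-nfs|manila-share'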

13.8. Verifying that the manila-api service acknowledges scheduler and share services

Complete the following steps to confirm that the manila-api service acknowledges the scheduler and share services.

Procedure

  1. Log in to the undercloud.
  2. Enter the following command:

    $ source /home/stack/overcloudrc
  3. Enter the following command to confirm that manila-scheduler and manila-share are enabled and that their state is up:

    $ manila service-list
    
    +----+------------------+------------------+------+---------+-------+----------------------------+
    | Id | Binary           | Host             | Zone | Status  | State | Updated_at                 |
    +----+------------------+------------------+------+---------+-------+----------------------------+
    | 2  | manila-scheduler | hostgroup        | nova | enabled | up    | 2018-08-08T04:15:03.000000 |
    | 5  | manila-share     | hostgroup@cephfs | nova | enabled | up    | 2018-08-08T04:15:03.000000 |
    +----+------------------+------------------+------+---------+-------+----------------------------+