CephFS via NFS Back End Guide for the Shared File System Service
Understanding, using, and managing the Shared File System Service with CephFS via NFS in OpenStack
Abstract
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Chapter 1. OpenStack Shared File Systems service with CephFS via NFS
Red Hat OpenStack Platform (RHOSP) provides the foundation to build a private or public Infrastructure-as-a-Service (IaaS) cloud on top of Red Hat Enterprise Linux. It is a scalable, fault-tolerant platform for the development of cloud-enabled workloads.
The Shared File Systems service (manila) with Ceph File System (CephFS) via NFS enables cloud administrators to use the same Ceph cluster used for block and object storage to provide file shares via the NFS protocol. See the Shared File System service chapter in the Storage Guide for additional information.
This guide discusses concepts and procedures related to installing, configuring, deploying, and testing CephFS via NFS.
CephFS via NFS provides a fault-tolerant NFS share service for the Red Hat OpenStack Platform.
1.1. Introduction to CephFS via NFS
CephFS is the highly scalable, open-source distributed file system component of Ceph, a unified distributed storage platform. Ceph implements object, block, and file storage using Reliable Autonomic Distributed Object Store (RADOS). CephFS, which is POSIX compatible, provides file access to a Ceph storage cluster.
The Shared File System service enables users to create shares in CephFS and access them using NFS 4.1 via NFS-Ganesha. NFS-Ganesha controls access to the shares and exports them to clients via the NFS 4.1 protocol. The Shared File System service manages the life cycle of these shares from within OpenStack. When cloud administrators set up the service to use CephFS via NFS, these file shares come from the CephFS cluster, but are created and accessed as familiar NFS shares.
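For example, once a cloud administrator has completed the deployment described later in this guide, an end user can request a share and look up its NFS export location with the manila client. The following is a minimal sketch only; the share name, size, and output remarks are illustrative rather than taken from a specific deployment:

$ manila create nfs 1 --name cephnfsshare1
$ manila share-export-location-list cephnfsshare1
# The export location has the form <NFS-Ganesha-VIP>:<path>; the actual values
# depend on the StorageNFS network and the CephFS volume layout in your cloud.

Because the share is exported by NFS-Ganesha, the client needs only a standard NFS 4.1 mount and never communicates with the Ceph cluster directly.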
1.2. Benefits of using Shared File System service with CephFS via NFS
The Shared File System service (manila) with CephFS via NFS enables cloud administrators to use the same Ceph cluster they use for block and object storage to provide file shares through the familiar NFS protocol, which is available by default on most operating systems. CephFS makes better use of Ceph clusters that are already deployed as storage back ends for other services in the OpenStack cloud, such as the Block Storage service (cinder) and the Object Storage service.
Adding CephFS to an externally deployed Ceph cluster that was not configured by Red Hat OpenStack director is not supported at this time. Currently, only one CephFS back end can be defined in director.
Red Hat OpenStack Platform 13 fully supports the CephFS NFS driver (NFS-Ganesha), unlike the CephFS native driver, which is a Technology Preview feature.
The CephFS native driver is available only as a Technology Preview, and therefore is not fully supported by Red Hat.
For more information about Technology Preview features, see Scope of Coverage Details.
In CephFS via NFS deployments, the Ceph storage back end is separated from the user’s network, which makes the underlying Ceph storage less vulnerable to malicious attacks and inadvertent mistakes.
Separating the networks used for data-plane traffic from the API networks used to communicate with control plane services, such as the Shared File System service, makes file storage more secure.
The Ceph client is under administrative control. The end user controls an NFS client (an isolated user VM, for example) that has no direct access to the Ceph cluster storage back end.
1.3. Ceph File System architecture
Ceph File System (CephFS) is a distributed file system that can be used either with NFS-Ganesha using the NFS v4 protocol (supported) or with the CephFS native driver (Technology Preview).
1.3.1. CephFS with native driver
The CephFS native driver combines the OpenStack Shared File System service (manila) and Red Hat Ceph Storage. When deployed through director, the controller nodes host the Ceph daemons, such as the manager, metadata servers (MDS), and monitors (MON), as well as the Shared File System services.
Compute nodes can host one or more tenants. Tenants contain user-managed VMs, which access the Ceph and manila daemons by connecting to them over the public Ceph storage network. This network also provides access to the data on the storage nodes served by the Ceph Object Storage Daemons (OSDs). Instances (VMs) hosted by a tenant boot with two NICs: one dedicated to the storage provider network and the other connected to tenant-owned routers to the external provider network.
The storage provider network connects the VMs running on the tenants to the public Ceph storage network. The Ceph public network provides back end access to the Ceph object storage nodes, metadata servers (MDS), and controller nodes. With the native driver, CephFS relies on cooperation with the clients and servers to enforce quotas, guarantee tenant isolation, and provide security. CephFS with the native driver works well in an environment with trusted end users on a private cloud. This configuration requires software that runs under user control to cooperate and work properly.
1.3.2. CephFS via NFS
The CephFS via NFS back end in the OpenStack Shared File Systems service (manila) is composed of Ceph metadata servers (MDS), the CephFS via NFS gateway (NFS-Ganesha), and the Ceph cluster service components. The Shared File System service’s CephFS NFS driver uses NFS-Ganesha gateway to provide NFSv4 protocol access to CephFS shares. The Ceph MDS service maps the directories and file names of the file system to objects stored in RADOS clusters. NFS gateways can serve NFS file shares with different storage back ends, such as Ceph. The NFS-Ganesha service runs on the controller nodes along with the Ceph services.
Instances are booted with at least two NICs: one connects to the tenant router, and the second NIC connects to the StorageNFS network, which connects directly to the NFS-Ganesha gateway. The instance mounts shares using the NFS protocol. CephFS shares hosted on Ceph OSD nodes are provided through the NFS gateway.
NFS-Ganesha improves security by preventing user instances from directly accessing the MDS and other Ceph services. Instances do not have direct access to the Ceph daemons.
1.3.2.1. Ceph services and client access
In addition to the monitor (MON), object storage daemon (OSD), and manager services deployed when Ceph provides object and/or block storage, a Ceph metadata service (MDS) is required for CephFS, and an NFS-Ganesha service is required as a gateway to native CephFS using the NFS protocol. (For user-facing object storage, a RADOS Gateway (RGW) service is also deployed.) The gateway runs the CephFS client to access the Ceph public network and is under administrative rather than end-user control.
NFS-Ganesha runs in its own docker container that interfaces both to the Ceph public network and to a new isolated network, StorageNFS. OpenStack director’s composable network feature is used to deploy this network and connect it to the controller nodes. The cloud administrator then configures the network as a neutron provider network.
NFS-Ganesha accesses CephFS over the Ceph public network and binds its NFS service using an address on the StorageNFS network.
To access NFS shares, user VMs (nova instances) are provisioned with an additional NIC that connects to the StorageNFS network. Export locations for CephFS shares appear as standard NFS IP:<path> tuples that use the NFS-Ganesha server's VIP on the StorageNFS network. Access control for user VMs is based on the user VM's IP address on that network.
Neutron security groups prevent the user VM belonging to tenant 1 from accessing a user VM belonging to tenant 2 over the StorageNFS network. Tenants share the same CephFS file system, but tenant data path separation is enforced because user VMs can only access files under the export trees: /path/to/share1/…, /path/to/share2/….
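Access rules are therefore written in terms of client IP addresses on the StorageNFS network. As a minimal sketch (the share name and IP address below are hypothetical), a tenant user grants a user VM access to a share by passing the VM's StorageNFS address to the Shared File System service and can then list the active rules:

$ manila access-allow cephnfsshare1 ip 172.16.4.111
$ manila access-list cephnfsshare1

Only clients whose StorageNFS IP addresses appear in active access rules can mount the share through the NFS-Ganesha gateway.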
1.3.2.2. Shared File System service with CephFS via NFS fault tolerance
When OpenStack director starts the Ceph service daemons, they manage their own high availability (HA) state and, in general, there are multiple instances of these daemons running. By contrast, in this release, only one instance of NFS-Ganesha can serve file shares at a time.
To avoid a single point of failure in the data path for CephFS via NFS shares, NFS-Ganesha runs on an OpenStack controller node in an active-passive configuration managed by a Pacemaker-Corosync cluster. NFS-Ganesha acts across the controller nodes as a virtual service with a virtual service IP address.
If a controller fails, or if the service on a particular controller node fails and cannot be recovered on that node, Pacemaker-Corosync starts a new NFS-Ganesha instance on a different controller using the same virtual IP. Existing client mounts are preserved because they use the virtual IP for the export location of shares.
With default NFS mount-option settings and NFS 4.1 or later, after a failure, TCP connections are reset and clients reconnect. I/O operations temporarily stop responding during failover, but they do not fail; application I/O resumes after failover completes.
New connections, new lock state, and so on are refused until after a grace period of up to 90 seconds, during which the server waits for clients to reclaim their locks. NFS-Ganesha keeps a list of the clients and exits the grace period earlier if it sees that all clients have reclaimed their locks.
The default value of the grace period is 90 seconds. This value is tunable through the NFSv4 Grace_Period configuration option.
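The grace period is an NFS-Ganesha server setting rather than a Shared File System service setting. As an illustration of the syntax only, the option lives in an NFSv4 block of the NFS-Ganesha configuration file (for example, /etc/ganesha/ganesha.conf inside the ceph-nfs container); in this deployment the file is generated by ceph-ansible, so treat the snippet as a sketch rather than a change procedure:

NFSv4
{
    # Seconds the server waits for clients to reclaim locks after a failover
    Grace_Period = 90;
}

You can also run sudo pcs status on a controller node to confirm that Pacemaker manages the NFS-Ganesha gateway and its virtual IP as active-passive resources; the exact resource names depend on your deployment.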
Chapter 2. CephFS via NFS Installation
2.1. CephFS with NFS-Ganesha deployment
A typical Ceph file system (CephFS) via NFS installation in an OpenStack environment includes:
- OpenStack controller nodes running containerized Ceph metadata server (MDS), Ceph monitor (MON), manila, and NFS-Ganesha services. Some of these services may coexist on the same node or may have one or more dedicated nodes.
- Ceph storage cluster with containerized object storage daemons (OSDs) running on Ceph storage nodes.
- An isolated StorageNFS network that provides access from tenants to the NFS-Ganesha services for NFS share provisioning.
The Shared File System (manila) service provides APIs that allow the tenants to request file system shares, which are fulfilled by driver modules. The driver for Red Hat CephFS (namely, manila.share.drivers.cephfs.driver.CephFSDriver) allows the Shared File System service to use CephFS as a back end. The Red Hat OpenStack Platform director configures the driver to deploy the NFS-Ganesha gateway so that the CephFS shares are presented via the NFS 4.1 protocol. In this document, this configuration is referred to as CephFS via NFS.
Using OpenStack director to deploy the Shared File System with a CephFS back end on the overcloud automatically creates the required storage network (defined in the heat template). For more information about network planning, refer to the Planning Networks section of the Director Installation and Usage Guide.
While you can manually configure the Shared File System service by editing the /etc/manila/manila.conf file on its node, the Red Hat OpenStack Platform director can overwrite any settings in future overcloud updates. The recommended method for configuring a Shared File System back end is through the director.
This section describes how to install CephFS via NFS in an integrated deployment managed by director.
Adding CephFS to an externally deployed Ceph cluster that was not configured by Red Hat OpenStack director is not supported at this time. Currently, only one CephFS back end can be defined in director at a time.
2.1.1. Requirements
To use CephFS via NFS, you need a Red Hat OpenStack Platform version 13 or newer environment, which can be an existing or new OpenStack environment. CephFS works with Red Hat Ceph Storage version 3. See the Deploying an Overcloud with Containerized Red Hat Ceph Guide for instructions on how to deploy such an environment.
This document assumes that:
- The Shared File System service will be installed on controller nodes, as is the default behavior.
- The NFS-Ganesha gateway service will be installed on the Pacemaker cluster of the controller nodes.
- Only a single instance of a CephFS back end will be used by the Shared File System service. Other non-CephFS back ends can be used alongside the single CephFS back end.
- An extra network (StorageNFS), created by OpenStack Platform director, will be used for the storage traffic.
- A new Red Hat Ceph Storage version 3 cluster will be configured at the same time as CephFS via NFS.
2.1.3. Isolated network used by CephFS via NFS
CephFS via NFS deployments use an extra isolated network, StorageNFS. This network is deployed so users can mount shares over NFS on that network without accessing the Storage or Storage Management networks, which are reserved for infrastructure traffic.
For more information about isolating networks, see https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/13/html/advanced_overcloud_customization/basic-network-isolation in the Director Installation and Usage guide.
2.2. Installing OpenStack with CephFS via NFS and a custom network_data file
Installing CephFS via NFS involves:
- Installing the ceph-ansible package.
- Preparing the overcloud container images with the openstack overcloud container image prepare command.
- Generating the custom roles file (roles_data.yaml) and the network_data.yaml file.
- Deploying Ceph, the Shared File System service (manila), and CephFS using the openstack overcloud deploy command with custom roles and environments.
- Configuring the isolated StorageNFS network and creating the default share type.
Examples use the standard stack user in the OpenStack environment.
Tasks should be performed in conjunction with an OpenStack installation or environment update.
2.2.1. Installing the ceph-ansible package
The OpenStack director requires the ceph-ansible package to be installed on an undercloud node to deploy containerized Ceph.
Procedure
- Log in to an undercloud node.
- Install the ceph-ansible package using yum install with elevated privileges:

  [stack@undercloud-0 ~]$ sudo yum install -y ceph-ansible
  [stack@undercloud-0 ~]$ sudo yum list ceph-ansible
  ...
  Installed Packages
  ceph-ansible.noarch    3.1.0-0.1.el7    rhelosp-13.
2.2.2. Preparing overcloud container images
Because all services are containerized in OpenStack, Docker images have to be prepared for the overcloud using the openstack overcloud container image prepare command. Running this command with the additional options adds the default images for the ceph and manila services to the docker registry. The Ceph MDS and NFS-Ganesha services use the same Ceph base container image.
For additional information on container images, refer to the Container Images for Additional Services section in the Director Installation and Usage Guide.
Procedure
From the undercloud, run the openstack overcloud container image prepare command with -e to include these environment files:

$ openstack overcloud container image prepare \
  ...
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/manila.yaml \
  ...
Use grep to verify the default images for the ceph and manila services are available in the containers-default-parameters.yaml file.

[stack@undercloud-0 ~]$ grep -E 'ceph|manila' composable_roles/docker-images.yaml
DockerCephDaemonImage: 192.168.24.1:8787/rhceph:3-12
DockerManilaApiImage: 192.168.24.1:8787/rhosp13/openstack-manila-api:2018-08-22.2
DockerManilaConfigImage: 192.168.24.1:8787/rhosp13/openstack-manila-api:2018-08-22.2
DockerManilaSchedulerImage: 192.168.24.1:8787/rhosp13/openstack-manila-scheduler:2018-08-22.2
DockerManilaShareImage: 192.168.24.1:8787/rhosp13/openstack-manila-share:2018-08-22.2
2.2.2.1. Generating the custom roles file
The ControllerStorageNFS custom role is used to set up the isolated StorageNFS network. This role is similar to the default Controller.yaml role file, with the addition of the StorageNFS network and the CephNfs service (indicated by OS::TripleO::Services::CephNfs).
[stack@undercloud ~]$ cd /usr/share/openstack-tripleo-heat-templates/roles
[stack@undercloud roles]$ diff Controller.yaml ControllerStorageNfs.yaml
16a17
> - StorageNFS
50a45
> - OS::TripleO::Services::CephNfs
For information about the openstack overcloud roles generate command, refer to the Roles section of the Advanced Overcloud Customization Guide.
Procedure
The openstack overcloud roles generate command creates a custom roles_data.yaml file including the services specified after -o. In the example below, the roles_data.yaml file created has the services for ControllerStorageNfs, Compute, and CephStorage.
If you have an existing roles_data.yaml file, modify it to add the ControllerStorageNfs, Compute, and CephStorage services to the configuration file. Refer to the Roles section of the Advanced Overcloud Customization Guide.
- Log in to an undercloud node.
Use the openstack overcloud roles generate command to create the roles_data.yaml file:

[stack@undercloud ~]$ openstack overcloud roles generate --roles-path /usr/share/openstack-tripleo-heat-templates/roles -o /home/stack/roles_data.yaml ControllerStorageNfs Compute CephStorage
2.2.3. Deploying the updated environment
When you are ready to deploy your environment, use the openstack overcloud deploy command with the custom environments and roles required to run CephFS with NFS-Ganesha. These environments and roles are explained below.
Your openstack overcloud deploy command must include the options below in addition to other required options.

| Action | Option | Additional Information |
|---|---|---|
| Add the updated default containers from the containers-default-parameters.yaml file | -e /home/stack/containers-default-parameters.yaml | |
| Add the extra StorageNFS network with network_data_ganesha.yaml | -n /usr/share/openstack-tripleo-heat-templates/network_data_ganesha.yaml | Section 2.2.3.1, "StorageNFS and network_data_ganesha.yaml file" |
| Add the custom roles defined in roles_data.yaml | -r /home/stack/roles_data.yaml | |
| Deploy the Ceph daemons with ceph-ansible.yaml | -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml | Initiating Overcloud Deployment in Deploying an Overcloud with Containerized Red Hat Ceph |
| Deploy the Ceph metadata server with ceph-mds.yaml | -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-mds.yaml | Initiating Overcloud Deployment in Deploying an Overcloud with Containerized Red Hat Ceph |
| Deploy the manila service with the CephFS via NFS back end. Configures NFS-Ganesha via director. | -e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml | |
The example below shows an openstack overcloud deploy command incorporating options to deploy CephFS via NFS-Ganesha, the Ceph cluster, the Ceph MDS, and the isolated StorageNFS network:

[stack@undercloud ~]$ openstack overcloud deploy \
  --templates /usr/share/openstack-tripleo-heat-templates \
  -n /usr/share/openstack-tripleo-heat-templates/network_data_ganesha.yaml \
  -r /home/stack/roles_data.yaml \
  -e /home/stack/containers-default-parameters.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
  -e /home/stack/network-environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-mds.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml
For additional information on the openstack overcloud deploy command, refer to Creating the Overcloud with the CLI Tools section in the Director Installation and Usage Guide.
2.2.3.1. StorageNFS and network_data_ganesha.yaml file
Composable networks let you define custom networks and assign them to any role. Instead of using the standard network_data.yaml file, the StorageNFS composable network is configured using the network_data_ganesha.yaml file. Both of these files are available in the /usr/share/openstack-tripleo-heat-templates directory.
The network_data_ganesha.yaml file contains an additional section that defines the isolated StorageNFS network. While the default settings will work for most installations, you will still need to edit the YAML file to add your network settings, including the VLAN ID, subnet, and so on.
- name: StorageNFS
  enabled: true
  vip: true
  name_lower: storage_nfs
  vlan: 70
  ip_subnet: '172.16.4.0/24'
  allocation_pools: [{'start': '172.16.4.4', 'end': '172.16.4.250'}]
  ipv6_subnet: 'fd00:fd00:fd00:7000::/64'
  ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:7000::10', 'end': 'fd00:fd00:fd00:7000:ffff:ffff:ffff:fffe'}]
For more information on composable networks, refer to the Using Composable Networks section in the Advanced Overcloud Customization Guide.
2.2.3.2. manila-cephfsganesha-config.yaml
The integrated environment file for defining a CephFS back end is located in the following path of an undercloud node:
/usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml
The manila-cephfsganesha-config.yaml environment file contains settings relevant to the deployment of the Shared File System service. The back end default settings should work for most environments. The example shows the default values used by the director when deploying the Shared File System service:
[stack@undercloud ~]$ cat /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml
# A Heat environment file which can be used to enable a
# a Manila CephFS-NFS driver backend.
resource_registry:
  OS::TripleO::Services::ManilaApi: ../docker/services/manila-api.yaml
  OS::TripleO::Services::ManilaScheduler: ../docker/services/manila-scheduler.yaml
  # Only manila-share is pacemaker managed:
  OS::TripleO::Services::ManilaShare: ../docker/services/pacemaker/manila-share.yaml
  OS::TripleO::Services::ManilaBackendCephFs: ../puppet/services/manila-backend-cephfs.yaml
  # ceph-nfs (ganesha) service is installed and configured by ceph-ansible
  # but it's still managed by pacemaker
  OS::TripleO::Services::CephNfs: ../docker/services/ceph-ansible/ceph-nfs.yaml

parameter_defaults:
  ManilaCephFSBackendName: cephfs 1
  ManilaCephFSDriverHandlesShareServers: false 2
  ManilaCephFSCephFSAuthId: 'manila' 3
  ManilaCephFSCephFSEnableSnapshots: false 4
  # manila cephfs driver supports either native cephfs backend - 'CEPHFS'
  # (users mount shares directly from ceph cluster), or nfs-ganesha backend -
  # 'NFS' (users mount shares through nfs-ganesha server)
  ManilaCephFSCephFSProtocolHelperType: 'NFS'
The parameter_defaults header signifies the start of the configuration. Specifically, settings under this header let you override default values set in resource_registry. This includes values set by OS::TripleO::Services::ManilaBackendCephFs, which sets defaults for a CephFS back end.
- 1 ManilaCephFSBackendName sets the name of the manila configuration of your CephFS back end. In this case, the default back end name is cephfs.
- 2 ManilaCephFSDriverHandlesShareServers controls the lifecycle of the share server. When set to false, the driver will not handle the lifecycle. This is the only supported option.
- 3 ManilaCephFSCephFSAuthId defines the Ceph auth ID that the director creates for the manila service to access the Ceph cluster.
- 4 ManilaCephFSCephFSEnableSnapshots controls snapshot activation. The false value indicates that snapshots are not enabled. This feature is currently not supported.
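If you need to change any of these defaults, do not edit the file under /usr/share/openstack-tripleo-heat-templates directly. Instead, place the overrides in a custom environment file and pass it with an additional -e option at the end of the openstack overcloud deploy command; later environment files take precedence over earlier ones. The file name and the overridden value below are hypothetical examples only:

[stack@undercloud ~]$ cat /home/stack/manila-cephfs-overrides.yaml
parameter_defaults:
  # Hypothetical override: use a different Ceph auth ID for the manila service
  ManilaCephFSCephFSAuthId: 'manila2'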
For more information about environment files, see Environment Files in the Advanced Overcloud Customization guide.
2.2.4. Completing post-deployment configuration
Two post-deployment configuration items need to be completed prior to allowing users access:
- The neutron StorageNFS network must be mapped to the isolated data center StorageNFS network, and
- The default share type must be created, as shown in the example below.
Once these steps are completed, the tenant compute instances can create, allow access to, and mount NFS shares.
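The default share type is created with the manila client after the overcloud is deployed. The following is a minimal sketch; it assumes that the configured default share type name is default and uses false for driver_handles_share_servers, matching the ManilaCephFSDriverHandlesShareServers setting described earlier:

[stack@undercloud-0 ~]$ source ~/overcloudrc
[stack@undercloud-0 ~]$ manila type-create default false
[stack@undercloud-0 ~]$ manila type-list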
2.2.4.1. Configuring the isolated network
The new isolated StorageNFS network must be mapped to a neutron-shared provider network. The Compute VMs will attach to this neutron network to access share export locations provided by the NFS-Ganesha gateway.
For general information about network security with the Shared File System service, refer to the section Hardening the Shared File System Service in the Security and Hardening Guide.
Procedure
The openstack network create command defines the configuration for the StorageNFS neutron network. Run this command with the following options:
- For --provider-physical-network, use the default value datacentre, unless you have set another tag for the br-isolated bridge via NeutronBridgeMappings in your tripleo-heat-templates.
- For the value of --provider-segment, use the vlan value set for the StorageNFS isolated network in the Heat template, /usr/share/openstack-tripleo-heat-templates/network_data_ganesha.yaml. This value is 70 unless the deployer has modified the isolated network definitions.
- For --provider-network-type, use the value vlan.
To use this command:
- On an undercloud node, source the overcloud credentials:

  [stack@undercloud ~]$ source ~/overcloudrc

- Run the openstack network create command to create the StorageNFS network:
[stack@undercloud-0 ~]$ openstack network create StorageNFS --share --provider-network-type vlan --provider-physical-network datacentre --provider-segment 70
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | UP                                   |
| availability_zone_hints   |                                      |
| availability_zones        |                                      |
| created_at                | 2018-09-17T21:12:49Z                 |
| description               |                                      |
| dns_domain                | None                                 |
| id                        | cd272981-0a5e-4f3d-83eb-d25473f5176e |
| ipv4_address_scope        | None                                 |
| ipv6_address_scope        | None                                 |
| is_default                | False                                |
| is_vlan_transparent       | None                                 |
| mtu                       | 1500                                 |
| name                      | StorageNFS                           |
| port_security_enabled     | True                                 |
| project_id                | 3ca3408d545143629cd0ec35d34aea9c     |
| provider-network-type     | vlan                                 |
| provider-physical-network | datacentre                           |
| provider-segment          | 70                                   |
| qos_policy_id             | None                                 |
| revision_number           | 3                                    |
| router:external           | Internal                             |
| segments                  | None                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tags                      |                                      |
| updated_at                | 2018-09-17T21:12:49Z                 |
+---------------------------+--------------------------------------+
2.2.4.2. Setting up the shared provider StorageNFS network
Create a corresponding StorageNFSSubnet on the neutron shared provider network. Ensure that the subnet is the same as for the storage_nfs_subnet in the undercloud but make sure that the allocation range for this subnet and that of the corresponding undercloud subnet do not overlap. No gateway is required since this subnet is dedicated to serving NFS shares.
Requirements
- The starting and ending IP range for the allocation pool
- The subnet IP range
Procedure
- Log in to an undercloud node.
Use the sample command to provision the network, updating values where needed:
- Replace the start=172.16.4.150,end=172.16.4.250 IP values with the ones for your network.
- Replace the 172.16.4.0/24 subnet range with the correct one for your network.

[stack@undercloud-0 ~]$ openstack subnet create --allocation-pool start=172.16.4.150,end=172.16.4.250 --dhcp --network StorageNFS --subnet-range 172.16.4.0/24 --gateway none StorageNFSSubnet
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| allocation_pools  | 172.16.4.150-172.16.4.250            |
| cidr              | 172.16.4.0/24                        |
| created_at        | 2018-09-17T21:22:14Z                 |
| description       |                                      |
| dns_nameservers   |                                      |
| enable_dhcp       | True                                 |
| gateway_ip        | None                                 |
| host_routes       |                                      |
| id                | 8c696d06-76b7-4d77-a375-fd2e71e3e480 |
| ip_version        | 4                                    |
| ipv6_address_mode | None                                 |
| ipv6_ra_mode      | None                                 |
| name              | StorageNFSSubnet                     |
| network_id        | cd272981-0a5e-4f3d-83eb-d25473f5176e |
| project_id        | 3ca3408d545143629cd0ec35d34aea9c     |
| revision_number   | 0                                    |
| segment_id        | None                                 |
| service_types     |                                      |
| subnetpool_id     | None                                 |
| tags              |                                      |
| updated_at        | 2018-09-17T21:22:14Z                 |
+-------------------+--------------------------------------+
Chapter 3. Verifying successful CephFS via NFS deployment
Deploying CephFS via NFS as a back end of the OpenStack Shared File System service (manila) adds new elements to the overcloud environment.
The new overcloud elements are:
- StorageNFS network
- Ceph MDS service on the controllers
- NFS-Ganesha service on the controllers
See the Shared File System service chapter in the Storage Guide for additional information about using the Shared File System service with CephFS via NFS.
The cloud administrator must verify the stability of the CephFS via NFS environment before making it available to service users.
Prerequisites
- Completing the steps in Chapter 2, CephFS via NFS Installation
3.1. Verifying creation of isolated StorageNFS network
The network_data_ganesha.yaml file used to deploy CephFS via NFS as a Shared File System service back end creates the StorageNFS VLAN:

- name: StorageNFS
  enabled: true
  vip: true
  name_lower: storage_nfs
  vlan: 310
  ip_subnet: '172.16.4.0/24'
  allocation_pools: [{'start': '172.16.4.4', 'end': '172.16.4.250'}]
  ipv6_subnet: 'fd00:fd00:fd00:7000::/64'
  ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:7000::10', 'end': 'fd00:fd00:fd00:7000:ffff:ffff:ffff:fffe'}]
Complete the following steps to verify the existence of the isolated StorageNFS network.
Procedure
- Log in to one of the controllers in the overcloud.
Run the following command to check the connected networks and verify the existence of the VLAN as set in network_data_ganesha.yaml:

$ ip a
15: vlan310: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 32:80:cf:0e:11:ca brd ff:ff:ff:ff:ff:ff
    inet 172.16.4.4/24 brd 172.16.4.255 scope global vlan310
       valid_lft forever preferred_lft forever
    inet 172.16.4.7/32 brd 172.16.4.255 scope global vlan310
       valid_lft forever preferred_lft forever
    inet6 fe80::3080:cfff:fe0e:11ca/64 scope link
       valid_lft forever preferred_lft forever
3.2. Verifying Ceph MDS service
Use the systemctl status command to verify the Ceph MDS service status.
Procedure
Run the following command on all controllers to check the status of the MDS docker container:
$ systemctl status ceph-mds@<CONTROLLER-HOST>
Example:
ceph-mds@controller-0.service - Ceph MDS
   Loaded: loaded (/etc/systemd/system/ceph-mds@.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2018-09-18 20:11:53 UTC; 6 days ago
 Main PID: 65066 (docker-current)
   CGroup: /system.slice/system-ceph\x2dmds.slice/ceph-mds@controller-0.service
           └─65066 /usr/bin/docker-current run --rm --net=host --memory=4g --cpu-quota=100000 -v /var/lib/ceph:/var/lib/ceph:z -v /etc/ceph:/etc/ceph:z -v /var/run/ceph:/var/run/ceph:z -v /etc/localtime:/etc/...
3.3. Verifying Ceph cluster status
Complete the following steps to verify Ceph cluster status.
Procedure
- Log in to the active controller.
Run the following command:
$ sudo ceph -s
  cluster:
    id:     3369e280-7578-11e8-8ef3-801844eeec7c
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum overcloud-controller-1,overcloud-controller-2,overcloud-controller-0
    mgr: overcloud-controller-1(active), standbys: overcloud-controller-2, overcloud-controller-0
    mds: cephfs-1/1/1 up {0=overcloud-controller-0=up:active}, 2 up:standby
    osd: 6 osds: 6 up, 6 in
Note: Notice that there is one active MDS and two MDSs on standby.
To check the status of the Ceph file system in more detail, run the following command. In the output, cephfs is the name of the Ceph file system:

$ sudo ceph fs ls
name: cephfs, metadata pool: manila_metadata, data pools: [manila_data]
3.5. Verifying that the manila-api service acknowledges the scheduler and share services
Complete the following steps to confirm that the manila-api service acknowledges the scheduler and share services.
Procedure
- Log in to the undercloud.
Run the following command:
$ source /home/stack/overcloudrc
Run the following command to confirm that manila-scheduler and manila-share are enabled:

$ manila service-list

| Id | Binary           | Host             | Zone | Status  | State | Updated_at                 |
| 2  | manila-scheduler | hostgroup        | nova | enabled | up    | 2018-08-08T04:15:03.000000 |
| 5  | manila-share     | hostgroup@cephfs | nova | enabled | up    | 2018-08-08T04:15:03.000000 |
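As an optional final smoke test, you can exercise the data path from a user VM that has a NIC on the StorageNFS network. After a share is created and the VM's StorageNFS IP address is granted access, look up the export location and mount it. The placeholders below stand in for values from your environment:

$ manila share-export-location-list <SHARE-NAME>

# On the user VM:
$ sudo mount -t nfs -o vers=4.1 <GANESHA-VIP>:<EXPORT-PATH> /mnt
$ df -h /mnt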