Chapter 7. Deploying the Shared File Systems service with CephFS-NFS
When you use the Shared File Systems service (manila) with Ceph File System (CephFS) through an NFS gateway (NFS-Ganesha), you can use the same Red Hat Ceph Storage cluster that you use for block and object storage to provide file shares through the NFS protocol.
CephFS-NFS has been fully supported since Red Hat OpenStack Platform (RHOSP) version 13. The RHOSP Shared File Systems service (manila) with CephFS-NFS for RHOSP 17.0 and later is supported for use with Red Hat Ceph Storage version 5.2 or later. For more information about how to determine the version of Ceph Storage installed on your system, see Red Hat Ceph Storage releases and corresponding Ceph package versions.
CephFS is the highly scalable, open-source distributed file system component of Red Hat Ceph Storage, a unified distributed storage platform. Ceph Storage implements object, block, and file storage using Reliable Autonomic Distributed Object Store (RADOS). CephFS, which is POSIX compatible, provides file access to a Ceph Storage cluster.
The Shared File Systems service enables users to create shares in CephFS and access them with NFS 4.1 through user-space NFS server software, NFS-Ganesha. NFS-Ganesha controls access to the shares and exports them to clients through the NFS 4.1 protocol. The Shared File Systems service manages the life cycle of these shares in RHOSP. When cloud administrators configure the service to use CephFS-NFS, these file shares come from the CephFS cluster, but they are created and accessed as familiar NFS shares.
For more information about the Shared File Systems service, see Configuring the Shared File Systems service (manila) in Configuring persistent storage.
7.1. Prerequisites
- You install the Shared File Systems service on Controller nodes, as is the default behavior.
- You must create a StorageNFS network for storage traffic through RHOSP director.
- You install the NFS-Ganesha gateway service on the Pacemaker cluster of the Controller nodes.
- You configure only a single instance of a CephFS back end to use the Shared File Systems service. You can use other non-CephFS back ends with the single CephFS back end.
7.2. CephFS-NFS driver
The CephFS-NFS back end in the Shared File Systems service (manila) is composed of Ceph metadata servers (MDS), the NFS gateway (NFS-Ganesha), and the Red Hat Ceph Storage cluster service components.
The Shared File Systems service CephFS-NFS driver uses NFS-Ganesha to provide NFSv4 protocol access to CephFS shares. The Ceph MDS service maps the directories and file names of the file system to objects that are stored in RADOS clusters. NFS gateways can serve NFS file shares with different storage back ends, such as Ceph. The NFS-Ganesha service runs on the Controller nodes with the Ceph services.
Deployment with an isolated network is optional but recommended. In this scenario, instances are booted with at least two NICs: one NIC connects to the project router and the second NIC connects to the StorageNFS network, which connects directly to NFS-Ganesha. The instance mounts shares by using the NFS protocol. CephFS shares that are hosted on Ceph Object Storage Daemon (OSD) nodes are provided through the NFS gateway.
NFS-Ganesha improves security because user instances do not have direct access to the MDS or to the other Ceph daemons.
7.3. Red Hat Ceph Storage services and client access
When you use Red Hat Ceph Storage to provide object and block storage, you require the following services for deployment:
- Ceph monitor (MON)
- Object Storage Daemon (OSD)
- RADOS Gateway (RGW)
- Manager
For native CephFS, you also require the Ceph Storage Metadata Service (MDS), and for CephFS-NFS, you require the NFS-Ganesha service as a gateway to native CephFS using the NFS protocol.
NFS-Ganesha runs in its own container that interfaces both to the Ceph public network and to a new isolated network, StorageNFS. If you use the composable network feature of Red Hat OpenStack Platform (RHOSP) director, you can deploy the isolated network and connect it to the Controller nodes. As the cloud administrator, you can configure the network as a Networking (neutron) provider network.
NFS-Ganesha accesses CephFS over the Ceph public network and binds its NFS service using an address on the StorageNFS network.
To access NFS shares, you provision Compute (nova) instances with an additional NIC that connects to the StorageNFS network. Export locations for CephFS shares appear as standard NFS IP:<path> tuples that use the NFS-Ganesha server VIP on the StorageNFS network. The network uses the IP address of the instance to perform access control on the NFS shares.
Networking (neutron) security groups prevent an instance that belongs to project 1 from accessing an instance that belongs to project 2 over the StorageNFS network. Projects share the same CephFS file system, but project data path separation is enforced because instances can access files only under their own export trees, such as /path/to/share1/… and /path/to/share2/….
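For example, after an administrator exposes the StorageNFS network to a project, a user can grant a specific instance access to an existing share and mount it over NFS 4.1. The following commands are a minimal sketch: the share name share-01, the client address 198.51.100.12, and the export location 192.0.2.240:/volumes/_nogroup/share-01 are placeholders, and older clients might require the equivalent manila CLI commands instead of the openstack share commands.

```bash
# Allow the instance that owns the StorageNFS port with IP 198.51.100.12
# (placeholder) to read and write the share named share-01 (placeholder).
openstack share access create share-01 ip 198.51.100.12

# Retrieve the export location, which contains the NFS-Ganesha VIP and path.
openstack share export location list share-01

# Inside the instance, mount the share over the StorageNFS NIC.
# The VIP and path are placeholders taken from the previous command.
sudo mkdir -p /mnt/share-01
sudo mount -t nfs -o vers=4.1 192.0.2.240:/volumes/_nogroup/share-01 /mnt/share-01
```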
7.4. Shared File Systems service with CephFS-NFS fault tolerance
When Red Hat OpenStack Platform (RHOSP) director starts the Red Hat Ceph Storage service daemons, they manage their own high availability (HA) state and, in general, there are multiple instances of these daemons running. By contrast, in this release, only one instance of NFS-Ganesha can serve file shares at a time.
To avoid a single point of failure in the data path for CephFS-NFS shares, NFS-Ganesha runs on a RHOSP Controller node in an active-passive configuration that is managed by a Pacemaker-Corosync cluster. NFS-Ganesha acts across the Controller nodes as a virtual service with a virtual service IP address.
If a Controller node fails or the service on a particular Controller node fails and cannot be recovered on that node, Pacemaker-Corosync starts a new NFS-Ganesha instance on a different Controller node using the same virtual IP address. Existing client mounts are preserved because they use the virtual IP address for the export location of shares.
Using default NFS mount-option settings and NFS 4.1 or later, after a failure, TCP connections are reset and clients reconnect. I/O operations temporarily stop responding during failover, but they do not fail; application I/O resumes after failover completes.
New connections, new lock states, and so on are refused until after a grace period of up to 90 seconds during which time the server waits for clients to reclaim their locks. NFS-Ganesha keeps a list of the clients and exits the grace period earlier if all clients reclaim their locks.
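To confirm which Controller node currently runs the active NFS-Ganesha instance and its virtual IP address, you can inspect the Pacemaker cluster. This is a hedged check rather than a prescribed procedure: run it from any Controller node, and note that the exact resource names, such as ceph-nfs, can differ between RHOSP and Ceph Storage versions.

```bash
# Show the overall Pacemaker cluster state, including which node hosts the
# NFS-Ganesha resource and the related virtual IP resources.
sudo pcs status

# Narrow the output to NFS-related resources; the name pattern is an
# assumption, so adjust it to match your deployment.
sudo pcs status | grep -i nfs
```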
7.5. CephFS-NFS installation
A typical CephFS-NFS installation in a Red Hat OpenStack Platform (RHOSP) environment includes the following configurations:
- OpenStack Controller nodes that are running the following:
  - Ceph monitor (MON)
  - Containerized Ceph metadata server (MDS)
  - Shared File Systems service (manila)
  - NFS-Ganesha
  Some of these services can coexist on the same node or can have one or more dedicated nodes.
- A Red Hat Ceph Storage cluster with containerized object storage daemons (OSDs) running on Ceph Storage nodes
- An isolated StorageNFS network that provides access from projects to the NFS-Ganesha service for NFS share provisioning
The Shared File Systems service with CephFS-NFS fully supports serving shares to Red Hat OpenShift Container Platform through Manila CSI. This solution is not intended for large scale deployments. For important recommendations, see https://access.redhat.com/articles/6667651.
The Shared File Systems service provides APIs that allow the projects to request file system shares, which are fulfilled by driver modules. If you use the driver for CephFS, manila.share.drivers.cephfs.driver.CephFSDriver, you can use the Shared File Systems service with a CephFS back end. RHOSP director configures the driver to deploy NFS-Ganesha so that the CephFS shares are presented through the NFS 4.1 protocol.
While preparing your CephFS-NFS deployment, you require the isolated StorageNFS network. You can use director to create this isolated StorageNFS network. For more information, see Configuring overcloud networking in Installing and managing Red Hat OpenStack Platform with director.
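As a hedged sketch of that preparation, assuming your network_data.yaml and vip_data.yaml definitions already contain the StorageNFS network and its VIP, you can provision them with director and produce the environment files that the deploy command references later. The file names and paths here are examples only; see the linked networking documentation for the authoritative procedure.

```bash
# Provision the composable networks, including StorageNFS, and write the
# deployed-networks environment file (output path is an example).
openstack overcloud network provision \
    --output /home/stack/templates/overcloud-networks-deployed.yaml \
    /home/stack/network_data.yaml

# Provision the virtual IPs, including the VIP on the StorageNFS network.
openstack overcloud network vip provision \
    --stack overcloud \
    --output /home/stack/templates/overcloud-vip-deployed.yaml \
    /home/stack/vip_data.yaml
```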
Manual configuration options for Shared File Systems service back ends
You can manually configure the Shared File Systems service by editing the node file /etc/manila/manila.conf. However, RHOSP director can override any settings in future overcloud updates.
You can add CephFS-NFS to an externally deployed Ceph Storage cluster, which was not configured by director. Currently, you can only define one CephFS back end in director. For more information, see Integrating an overcloud with Ceph Storage in Integrating the overcloud with an existing Red Hat Ceph Storage Cluster.
7.6. File shares
The Shared File Systems service (manila), Ceph File System (CephFS), and CephFS-NFS manage shares differently.
The Shared File Systems service provides shares, where a share is an individual file system namespace and a unit of storage with a defined size. Shared file system storage allows multiple clients to connect, read, and write data to any given share, but you must give each client access to the share through the Shared File Systems service access control APIs before they can connect.
CephFS manages a share like a directory with a defined quota and a layout that points to a particular storage pool or namespace. CephFS quotas limit the size of a directory to the size of the share that the Shared File Systems service creates.
You control access to CephFS-NFS shares by specifying the IP address of the client. With CephFS-NFS, file shares are provisioned and accessed through the NFS protocol. The NFS protocol also manages security.
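For example, when a user creates an NFS share of a given size, the Shared File Systems service creates a CephFS directory with a matching quota and exports it through NFS-Ganesha. The share name and size below are placeholders.

```bash
# Create a 10 GB share that uses the NFS protocol; the backing CephFS
# directory receives a 10 GB quota.
openstack share create NFS 10 --name share-01

# Verify that the share reaches the "available" status before granting access.
openstack share show share-01
```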
7.7. Network isolation for CephFS-NFS
For security, isolate NFS traffic to a separate network when using CephFS-NFS so that the NFS server is accessible only through the isolated network. Deployers can restrict the isolated network to a select group of projects in the cloud. Red Hat OpenStack Platform (RHOSP) director ships with support to deploy a dedicated StorageNFS network.
Before you deploy the overcloud to enable CephFS-NFS for use with the Shared File Systems service, you must create the following:
- An isolated network for NFS traffic, called StorageNFS
- A Virtual IP (VIP) on the isolated network
- A custom role for the Controller nodes that configures the nodes with the StorageNFS network, as shown in the sketch after this list
For more information about creating the isolated network, the VIP, and the custom role, see Configuring overcloud networking in Installing and managing Red Hat OpenStack Platform with director.
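The following is a sketch of the custom-role step, assuming that the ControllerStorageNfs role is shipped with your openstack-tripleo-heat-templates; list the available roles first and adjust the role names and output path to your environment.

```bash
# List the roles shipped with the installed templates to confirm that a
# StorageNFS-enabled Controller role is available (role name is an assumption).
openstack overcloud roles list

# Generate a roles file that replaces the default Controller role with
# ControllerStorageNfs so that Controller nodes get the StorageNFS network.
openstack overcloud roles generate \
    --roles-path /usr/share/openstack-tripleo-heat-templates/roles \
    -o /home/stack/roles_data.yaml \
    ControllerStorageNfs Compute CephStorage
```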
It is possible to omit the creation of an isolated network for NFS traffic. However, if you omit the StorageNFS network in a production deployment that has untrusted clients, director can connect the Ceph NFS server on any shared, non-isolated network, such as an external network. Shared networks are usually routable to all user private networks in the cloud. When the NFS server is accessed through a routed network in this manner, you cannot control access to Shared File Systems service shares by applying client IP access rules. Users must allow access to their shares by using the generic 0.0.0.0/0 IP address. Because of the generic IP, anyone who discovers the export path can mount the shares.
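In that case, the only access rule that works for all clients is the catch-all rule shown in this hedged example (the share name is a placeholder), which is why an isolated StorageNFS network is recommended.

```bash
# Without an isolated StorageNFS network, per-client IP rules are not
# meaningful, so users must grant access to any address that can reach
# the NFS server.
openstack share access create share-01 ip 0.0.0.0/0
```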
7.8. Deploying the CephFS-NFS environment
When you are ready to deploy your environment, use the openstack overcloud deploy command with the custom environments and roles required to run CephFS with NFS-Ganesha.
The overcloud deploy command has the following options in addition to other required options.
| Action | Option | Additional information |
|---|---|---|
| Reference the deployed networks, including the StorageNFS network | -e /home/stack/templates/overcloud-networks-deployed.yaml | Configuring overcloud networking in Installing and managing Red Hat OpenStack Platform with director. You can omit the StorageNFS network option if you do not want to isolate NFS traffic to a separate network. |
| Reference the Virtual IPs created on the deployed networks, including the VIP for the StorageNFS network | -e /home/stack/templates/overcloud-vip-deployed.yaml | Configuring overcloud networking in Installing and managing Red Hat OpenStack Platform with director. You can omit this option if you do not want to isolate NFS traffic to a separate network. |
| Add the custom roles defined in the roles_data.yaml file | -r /home/stack/roles_data.yaml | You can omit this option if you do not want to isolate NFS traffic to a separate network. |
| Deploy the Ceph daemons | -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml | Initiating overcloud deployment in Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director |
| Deploy the Ceph metadata server with ceph-mds.yaml | -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/ceph-mds.yaml | Initiating overcloud deployment in Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director |
| Deploy the Shared File Systems service (manila) with the CephFS-NFS back end. Configure NFS-Ganesha with director. | -e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml | |
The following example shows an openstack overcloud deploy command with options to deploy CephFS with NFS-Ganesha, a Ceph Storage cluster, and Ceph MDS:
[stack@undercloud ~]$ openstack overcloud deploy \
    --templates /usr/share/openstack-tripleo-heat-templates \
    -r /home/stack/roles_data.yaml \
    -e /home/stack/templates/overcloud-networks-deployed.yaml \
    -e /home/stack/templates/overcloud-vip-deployed.yaml \
    -e /home/stack/containers-default-parameters.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
    -e /home/stack/network-environment.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/ceph-mds.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml
For more information about the openstack overcloud deploy
command, see Provisioning and deploying your overcloud in Installing and managing Red Hat OpenStack Platform with director.
7.9. CephFS-NFS back-end environment file
The environment file for defining a CephFS-NFS back end, manila-cephfsganesha-config.yaml
, is located in the following path of an undercloud node: /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml
.
The manila-cephfsganesha-config.yaml
environment file contains settings relevant to the deployment of the Shared File Systems service (manila). The back-end default settings work for most environments. The following example shows the default values that director uses during deployment of the Shared File Systems service:
[stack@undercloud ~]$ cat /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml
# A Heat environment file which can be used to enable a
# a Manila CephFS-NFS driver backend.
resource_registry:
  OS::TripleO::Services::ManilaApi: ../deployment/manila/manila-api-container-puppet.yaml
  OS::TripleO::Services::ManilaScheduler: ../deployment/manila/manila-scheduler-container-puppet.yaml
  # Only manila-share is pacemaker managed:
  OS::TripleO::Services::ManilaShare: ../deployment/manila/manila-share-pacemaker-puppet.yaml
  OS::TripleO::Services::ManilaBackendCephFs: ../deployment/manila/manila-backend-cephfs.yaml
  # ceph-nfs (ganesha) service is installed and configured by Director
  # but it's still managed by pacemaker
  OS::TripleO::Services::CephNfs: ../deployment/cephadm/ceph-nfs.yaml

parameter_defaults:
  ManilaCephFSBackendName: cephfs 1
  ManilaCephFSDriverHandlesShareServers: false 2
  ManilaCephFSCephFSAuthId: 'manila' 3
  # manila cephfs driver supports either native cephfs backend - 'CEPHFS'
  # (users mount shares directly from ceph cluster), or nfs-ganesha backend -
  # 'NFS' (users mount shares through nfs-ganesha server)
  ManilaCephFSCephFSProtocolHelperType: 'NFS'
The parameter_defaults header signifies the start of the configuration. To override default values set in resource_registry, copy this manila-cephfsganesha-config.yaml environment file to your local environment file directory, /home/stack/templates/, and edit the parameter settings as required by your environment. This includes values set by OS::TripleO::Services::ManilaBackendCephFs, which sets defaults for a CephFS back end. A minimal override sketch follows the parameter descriptions below.
1. ManilaCephFSBackendName sets the name of the manila configuration of your CephFS back end. In this case, the default back-end name is cephfs.
2. ManilaCephFSDriverHandlesShareServers controls the lifecycle of the share server. When set to false, the driver does not handle the lifecycle. This is the only supported option.
3. ManilaCephFSCephFSAuthId defines the Ceph auth ID that director creates for the manila service to access the Ceph cluster.
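The following is a minimal sketch of a local override, assuming you keep the shipped manila-cephfsganesha-config.yaml in your deploy command and want to change only one parameter; the file name and the new value are examples, and any parameter that you do not list keeps the default that director sets.

```bash
# Write a small override file in the local templates directory. Include it
# on the openstack overcloud deploy command line after
# manila-cephfsganesha-config.yaml so that it takes precedence.
cat > /home/stack/templates/manila-cephfsnfs-overrides.yaml <<'EOF'
parameter_defaults:
  ManilaCephFSBackendName: cephfsnfs
EOF
```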
For more information about environment files, see Environment files in Installing and managing Red Hat OpenStack Platform with director.