Deploying the Shared File Systems service with CephFS through NFS
Understanding, using, and managing the Shared File Systems service with CephFS through NFS in Red Hat OpenStack Platform
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Providing feedback on Red Hat documentation
We appreciate your input on our documentation. Tell us how we can make it better.
Using the Direct Documentation Feedback (DDF) function
Use the Add Feedback DDF function for direct comments on specific sentences, paragraphs, or code blocks.
- View the documentation in the Multi-page HTML format.
- Ensure that you see the Feedback button in the upper right corner of the document.
- Highlight the part of text that you want to comment on.
- Click Add Feedback.
- Complete the Add Feedback field with your comments.
- Optional: Add your email address so that the documentation team can contact you for clarification on your issue.
- Click Submit.
Chapter 1. The Shared File Systems service with CephFS through NFS
With the Shared File Systems service (manila) with Ceph File System (CephFS) through NFS, you can use the same Ceph cluster that you use for block and object storage to provide file shares through the NFS protocol. For more information, see Shared File Systems service in the Storage Guide.
For the complete suite of documentation for Red Hat OpenStack Platform, see Red Hat OpenStack Platform Documentation.
The Red Hat OpenStack Platform (RHOSP) Shared File Systems service with CephFS through NFS for RHOSP 16.0 and later is supported for use with Red Hat Ceph Storage version 4.1 or later. For more information about how to determine the version of Ceph Storage installed on your system, see Red Hat Ceph Storage releases and corresponding Ceph package versions.
CephFS is the highly scalable, open-source distributed file system component of Ceph, a unified distributed storage platform. Ceph implements object, block, and file storage using Reliable Autonomic Distributed Object Store (RADOS). CephFS, which is POSIX compatible, provides file access to a Ceph storage cluster.
You can use the Shared File Systems service to create shares in CephFS and access them with NFS 4.1 through NFS-Ganesha. NFS-Ganesha controls access to the shares and exports them to clients through the NFS 4.1 protocol. The Shared File Systems service manages the life cycle of these shares from within Red Hat OpenStack Platform (RHOSP). When cloud administrators configure the service to use CephFS through NFS, these file shares come from the CephFS cluster, but are created and accessed as familiar NFS shares.
For more information, see Shared File Systems service in the Storage Guide.
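After deployment, cloud users create and access these shares through the standard Shared File Systems service workflow. The following commands are a minimal sketch; the share name and size are illustrative:

# Create a 10 GB share that is exported over the NFS protocol (name and size are illustrative)
$ manila create nfs 10 --name share-01

# After the share becomes available, list its NFS export location
$ manila share-export-location-list share-01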
1.1. Ceph File System architecture
Ceph File System (CephFS) is a distributed file system that you can use either with NFS-Ganesha through the NFS v4 protocol (supported) or with the CephFS native driver.
1.1.1. CephFS with native driver
The CephFS native driver combines the OpenStack Shared File Systems service (manila) and Red Hat Ceph Storage. When you use Red Hat OpenStack Platform (RHOSP) director, the Controller nodes host the Ceph daemons, such as the manager, metadata servers (MDS), and monitors (MON), as well as the Shared File Systems services.
Compute nodes can host one or more projects. Projects (formerly known as tenants), which are represented in the following graphic by the white boxes, contain user-managed VMs, which are represented by gray boxes with two NICs. To access the ceph and manila daemons, projects connect to the daemons over the public Ceph storage network. On this network, you can access data on the storage nodes provided by the Ceph Object Storage Daemons (OSDs). Instances (VMs) that are hosted on the project boot with two NICs: one dedicated to the storage provider network and the second dedicated to project-owned routers that connect to the external provider network.
The storage provider network connects the VMs that run on the projects to the public Ceph storage network. The Ceph public network provides back-end access to the Ceph object storage nodes, metadata servers (MDS), and Controller nodes.
Using the native driver, CephFS relies on cooperation between the clients and servers to enforce quotas, guarantee project isolation, and maintain security. CephFS with the native driver works well in an environment with trusted end users on a private cloud. This configuration requires software that is running under user control to cooperate and work correctly.

1.1.2. CephFS through NFS
The CephFS through NFS back end in the Shared File Systems service (manila) is composed of Ceph metadata servers (MDS), the CephFS through NFS gateway (NFS-Ganesha), and the Ceph cluster service components. The Shared File Systems service CephFS NFS driver uses the NFS-Ganesha gateway to provide NFSv4 protocol access to CephFS shares. The Ceph MDS service maps the directories and file names of the file system to objects that are stored in RADOS clusters. NFS gateways can serve NFS file shares with different storage back ends, such as Ceph. The NFS-Ganesha service runs on the Controller nodes with the Ceph services.
Instances are booted with at least two NICs: one NIC connects to the project router and the second NIC connects to the StorageNFS network, which connects directly to the NFS-Ganesha gateway. The instance mounts shares by using the NFS protocol. CephFS shares that are hosted on Ceph OSD nodes are provided through the NFS gateway.
NFS-Ganesha improves security by preventing user instances from directly accessing the MDS and other Ceph services. Instances do not have direct access to the Ceph daemons.

1.1.2.1. Ceph services and client access
In addition to the monitor, OSD, Rados Gateway (RGW), and manager services deployed when Ceph provides object and block storage, a Ceph metadata service (MDS) is required for CephFS and an NFS-Ganesha service is required as a gateway to native CephFS using the NFS protocol. For user-facing object storage, an RGW service is also deployed. The gateway runs the CephFS client to access the Ceph public network and is under administrative rather than end-user control.
NFS-Ganesha runs in its own container that interfaces both to the Ceph public network and to a new isolated network, StorageNFS. The composable network feature of Red Hat OpenStack Platform (RHOSP) director deploys this network and connects it to the Controller nodes. As the cloud administrator, you can configure the network as a Networking (neutron) provider network.
NFS-Ganesha accesses CephFS over the Ceph public network and binds its NFS service using an address on the StorageNFS network.
To access NFS shares, provision user VMs, which are Compute (nova) instances, with an additional NIC that connects to the StorageNFS network. Export locations for CephFS shares appear as standard NFS IP:<path> tuples that use the NFS-Ganesha server VIP on the StorageNFS network. The network uses the IP address of the user VM to perform access control on the NFS shares.
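For example, a cloud user can allow a user VM to access a share by the IP address that the VM uses on the StorageNFS network. This is a minimal sketch; the share name and address are illustrative:

# Grant read-write NFS access to the client at the given StorageNFS address
$ manila access-allow share-01 ip 172.17.0.25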
Networking (neutron) security groups prevent the user VM that belongs to project 1 from accessing a user VM that belongs to project 2 over the StorageNFS network. Projects share the same CephFS file system, but project data path separation is enforced because user VMs can access files only under the export trees: /path/to/share1/…, /path/to/share2/….
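From within a user VM, a share is then mounted like any other NFS 4.1 export. The following is a minimal sketch that assumes an illustrative NFS-Ganesha VIP and export path:

# Mount the share at the export location reported by the Shared File Systems service
$ sudo mkdir -p /mnt/share-01
$ sudo mount -t nfs -o vers=4.1 172.17.0.5:/volumes/_nogroup/share-01-export /mnt/share-01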
1.1.2.2. Shared File Systems service with CephFS through NFS fault tolerance
When Red Hat OpenStack Platform (RHOSP) director starts the Ceph service daemons, they manage their own high availability (HA) state and, in general, there are multiple instances of these daemons running. By contrast, in this release, only one instance of NFS-Ganesha can serve file shares at a time.
To avoid a single point of failure in the data path for CephFS through NFS shares, NFS-Ganesha runs on a RHOSP Controller node in an active-passive configuration managed by a Pacemaker-Corosync cluster. NFS-Ganesha acts across the Controller nodes as a virtual service with a virtual service IP address.
If a Controller node fails or the service on a particular Controller node fails and cannot be recovered on that node, Pacemaker-Corosync starts a new NFS-Ganesha instance on a different Controller node using the same virtual IP address. Existing client mounts are preserved because they use the virtual IP address for the export location of shares.
Using default NFS mount-option settings and NFS 4.1 or later, after a failure, TCP connections are reset and clients reconnect. I/O operations temporarily stop responding during failover, but they do not fail. Application I/O also stops responding but resumes after failover completes.
New connections, new lock-state, and so on are refused until after a grace period of up to 90 seconds during which time the server waits for clients to reclaim their locks. NFS-Ganesha keeps a list of the clients and exits the grace period earlier if all clients reclaim their locks.
The default value of the grace period is 90 seconds. To change this value, edit the NFSv4 Grace_Period configuration option.
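For reference, the grace period is defined in the NFSv4 block of the NFS-Ganesha configuration. The following excerpt is a sketch only; the exact configuration file location and how the setting is managed depend on how director and ceph-ansible deploy NFS-Ganesha in your environment:

# Sketch of an NFSv4 block in the NFS-Ganesha configuration
NFSv4
{
    # Reduce the lock-reclaim grace period from the default 90 seconds to 60
    Grace_Period = 60;
}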
Chapter 2. CephFS through NFS Installation
2.1. CephFS with NFS-Ganesha deployment
A typical Ceph file system (CephFS) through NFS installation in a Red Hat OpenStack Platform (RHOSP) environment includes the following configurations:
- OpenStack Controller nodes running containerized Ceph metadata server (MDS), Ceph monitor (MON), manila, and NFS-Ganesha services. Some of these services can coexist on the same node or can have one or more dedicated nodes.
- Ceph storage cluster with containerized object storage daemons (OSDs) running on Ceph storage nodes.
- An isolated StorageNFS network that provides access from projects to the NFS-Ganesha services for NFS share provisioning.
The Shared File Systems service (manila) with CephFS through NFS fully supports serving shares to Red Hat OpenShift Container Platform through Manila CSI. This solution is not intended for large scale deployments. For important recommendations, see https://access.redhat.com/articles/6667651.
The Shared File Systems service (manila) provides APIs that allow the projects to request file system shares, which are fulfilled by driver modules. The driver for Red Hat CephFS, manila.share.drivers.cephfs.driver.CephFSDriver, means that you can use the Shared File Systems service as a CephFS back end. RHOSP director configures the driver to deploy the NFS-Ganesha gateway so that the CephFS shares are presented through the NFS 4.1 protocol.
When you use RHOSP director to deploy the Shared File Systems service with a CephFS back end on the overcloud, director automatically creates the required storage network that is defined in the heat template. For more information about network planning, see Overcloud networks in the Director Installation and Usage guide.
Although you can manually configure the Shared File Systems service by editing the /etc/manila/manila.conf file on the node, RHOSP director can override any settings in future overcloud updates. The recommended method for configuring a Shared File Systems back end is through director.
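For reference, a director-generated CephFS through NFS back-end section in /etc/manila/manila.conf resembles the following sketch. The exact values depend on your deployment parameters, and you should not maintain them by hand:

# Sketch of a back-end stanza that director renders for CephFS through NFS
[cephfs]
share_backend_name = cephfs
share_driver = manila.share.drivers.cephfs.driver.CephFSDriver
driver_handles_share_servers = False
cephfs_auth_id = manila
cephfs_protocol_helper_type = NFS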
Adding CephFS through NFS to an externally deployed Ceph cluster, which was not configured by Red Hat OpenStack Platform (RHOSP) director, is supported. Currently, only one CephFS back end can be defined in director. For more information, see Integrate with an existing Ceph Storage cluster in the Integrating an Overcloud with an Existing Red Hat Ceph Cluster guide.
2.1.1. Requirements
CephFS through NFS has been fully supported since Red Hat OpenStack Platform (RHOSP) version 13. The RHOSP Shared File Systems service with CephFS through NFS for RHOSP 16.0 and later is supported for use with Red Hat Ceph Storage version 4.1 or later. For more information about how to determine the version of Ceph Storage installed on your system, see Red Hat Ceph Storage releases and corresponding Ceph package versions.
Prerequisites
- You install the Shared File Systems service on Controller nodes, as is the default behavior.
- You install the NFS-Ganesha gateway service on the Pacemaker cluster of the Controller nodes.
- You configure only a single instance of a CephFS back end to use the Shared File Systems service. You can use other non-CephFS back ends with the single CephFS back end.
- You use RHOSP director to create an extra network (StorageNFS) for the storage traffic.
2.1.3. Isolated network used by CephFS through NFS
CephFS through NFS deployments use an extra isolated network, StorageNFS. This network is deployed so that users can mount shares over NFS on that network without accessing the Storage or Storage Management networks, which are reserved for infrastructure traffic.
For more information about isolating networks, see Basic network isolation in the Director Installation and Usage guide.
2.2. Installing Red Hat OpenStack Platform (RHOSP) with CephFS through NFS and a custom network_data file
To install CephFS through NFS, complete the following procedures:
- Install the ceph-ansible package. See Section 2.2.1, “Installing the ceph-ansible package”.
- Generate the custom roles file, roles_data.yaml, and the network_data.yaml file. See Section 2.2.1.1, “Generating the custom roles file”.
- Deploy Ceph, the Shared File Systems service (manila), and CephFS by using the openstack overcloud deploy command with custom roles and environments. See Section 2.2.2, “Deploying the updated environment”.
- Configure the isolated StorageNFS network and create the default share type. See Section 2.2.3, “Completing post-deployment configuration”.

Examples use the standard stack user in the Red Hat OpenStack Platform (RHOSP) environment. Perform these tasks as part of a RHOSP installation or environment update.
2.2.1. Installing the ceph-ansible package
Install the ceph-ansible package on an undercloud node to deploy containerized Ceph.
Procedure
- Log in to an undercloud node as the stack user.
- Install the ceph-ansible package:

[stack@undercloud-0 ~]$ sudo dnf install -y ceph-ansible
[stack@undercloud-0 ~]$ sudo dnf list ceph-ansible
...
Installed Packages
ceph-ansible.noarch  4.0.23-1.el8cp  @rhelosp-ceph-4-tools
2.2.1.1. Generating the custom roles file
The ControllerStorageNFS custom role configures the isolated StorageNFS network. This role is similar to the default Controller.yaml role file with the addition of the StorageNFS network and the CephNfs service, indicated by the OS::TripleO::Services::CephNfs entry.
[stack@undercloud ~]$ cd /usr/share/openstack-tripleo-heat-templates/roles
[stack@undercloud roles]$ diff Controller.yaml ControllerStorageNfs.yaml
16a17
> - StorageNFS
50a45
> - OS::TripleO::Services::CephNfs
For more information about the openstack overcloud roles generate command, see Roles in the Advanced Overcloud Customization guide.

The openstack overcloud roles generate command creates a custom roles_data.yaml file that includes the services specified after -o. In the following example, the roles_data.yaml file that is created contains the services for ControllerStorageNfs, Compute, and CephStorage.
If you have an existing roles_data.yaml file, modify it to add the ControllerStorageNfs, Compute, and CephStorage services to the configuration file. For more information, see Roles in the Advanced Overcloud Customization guide.
Procedure
- Log in to an undercloud node as the stack user.
- Use the openstack overcloud roles generate command to create the roles_data.yaml file:

[stack@undercloud ~]$ openstack overcloud roles generate --roles-path /usr/share/openstack-tripleo-heat-templates/roles -o /home/stack/roles_data.yaml ControllerStorageNfs Compute CephStorage
2.2.2. Deploying the updated environment
When you are ready to deploy your environment, use the openstack overcloud deploy command with the custom environments and roles required to run CephFS with NFS-Ganesha.

The overcloud deploy command has the following options in addition to other required options.
Action | Option | Additional information |
---|---|---|
Add the extra StorageNFS network with network_data_ganesha.yaml. | -n /usr/share/openstack-tripleo-heat-templates/network_data_ganesha.yaml | Section 2.2.2.1, “StorageNFS and network_data_ganesha.yaml file” |
Add the custom roles defined in roles_data.yaml. | -r /home/stack/roles_data.yaml | |
Deploy the Ceph daemons with ceph-ansible.yaml. | -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml | Initiating Overcloud Deployment in the Deploying an Overcloud with Containerized Red Hat Ceph guide |
Deploy the Ceph metadata server with ceph-mds.yaml. | -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-mds.yaml | Initiating Overcloud Deployment in the Deploying an Overcloud with Containerized Red Hat Ceph guide |
Deploy the Shared File Systems (manila) service with the CephFS through NFS back end. Configure NFS-Ganesha with director. | -e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml | |
The following example shows an openstack overcloud deploy command with options to deploy CephFS through NFS-Ganesha, the Ceph cluster, the Ceph MDS, and the isolated StorageNFS network:

[stack@undercloud ~]$ openstack overcloud deploy \
    --templates /usr/share/openstack-tripleo-heat-templates \
    -n /usr/share/openstack-tripleo-heat-templates/network_data_ganesha.yaml \
    -r /home/stack/roles_data.yaml \
    -e /home/stack/containers-default-parameters.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
    -e /home/stack/network-environment.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-mds.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml
For more information about the openstack overcloud deploy command, see Deployment command in the Director Installation and Usage guide.
2.2.2.1. StorageNFS and network_data_ganesha.yaml file
Use composable networks to define custom networks and assign them to any role. Instead of using the standard network_data.yaml
file, you can configure the StorageNFS composable network with the network_data_ganesha.yaml
file. Both of these roles are available in the /usr/share/openstack-tripleo-heat-templates
directory.
The network_data_ganesha.yaml file contains an additional section that defines the isolated StorageNFS network. Although the default settings work for most installations, you must edit the YAML file to add your network settings, including the VLAN ID, subnet, and other settings.
- name: StorageNFS
  enabled: true
  vip: true
  name_lower: storage_nfs
  vlan: 70
  ip_subnet: '172.17.0.0/20'
  allocation_pools: [{'start': '172.17.0.4', 'end': '172.17.0.250'}]
  ipv6_subnet: 'fd00:fd00:fd00:7000::/64'
  ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:7000::4', 'end': 'fd00:fd00:fd00:7000::fffe'}]
For more information about composable networks, see Using Composable Networks in the Advanced Overcloud Customization guide.
2.2.2.2. The CephFS back-end environment file
The integrated environment file for defining a CephFS back end, manila-cephfsganesha-config.yaml
, is located in /usr/share/openstack-tripleo-heat-templates/environments/
.
The manila-cephfsganesha-config.yaml environment file contains settings relevant to the deployment of the Shared File Systems service (manila). The back-end default settings work for most environments. The following example shows the default values that director uses during deployment of the Shared File Systems service:
[stack@undercloud ~]$ cat /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml
# A Heat environment file which can be used to enable a
# a Manila CephFS-NFS driver backend.
resource_registry:
  OS::TripleO::Services::ManilaApi: ../deployment/manila/manila-api-container-puppet.yaml
  OS::TripleO::Services::ManilaScheduler: ../deployment/manila/manila-scheduler-container-puppet.yaml
  # Only manila-share is pacemaker managed:
  OS::TripleO::Services::ManilaShare: ../deployment/manila/manila-share-pacemaker-puppet.yaml
  OS::TripleO::Services::ManilaBackendCephFs: ../deployment/manila/manila-backend-cephfs.yaml
  # ceph-nfs (ganesha) service is installed and configured by ceph-ansible
  # but it's still managed by pacemaker
  OS::TripleO::Services::CephNfs: ../deployment/ceph-ansible/ceph-nfs.yaml

parameter_defaults:
  ManilaCephFSBackendName: cephfs 1
  ManilaCephFSDriverHandlesShareServers: false 2
  ManilaCephFSCephFSAuthId: 'manila' 3
  ManilaCephFSCephFSEnableSnapshots: false 4
  # manila cephfs driver supports either native cephfs backend - 'CEPHFS'
  # (users mount shares directly from ceph cluster), or nfs-ganesha backend -
  # 'NFS' (users mount shares through nfs-ganesha server)
  ManilaCephFSCephFSProtocolHelperType: 'NFS'
The parameter_defaults header signifies the start of the configuration. In this section, you can edit settings to override default values set in resource_registry. This includes values set by OS::TripleO::Services::ManilaBackendCephFs, which sets defaults for a CephFS back end.
1. ManilaCephFSBackendName sets the name of the manila configuration of your CephFS back end. In this case, the default back-end name is cephfs.
2. ManilaCephFSDriverHandlesShareServers controls the lifecycle of the share server. When set to false, the driver does not handle the lifecycle. This is the only supported option.
3. ManilaCephFSCephFSAuthId defines the Ceph auth ID that director creates for the manila service to access the Ceph cluster.
4. ManilaCephFSCephFSEnableSnapshots controls snapshot activation. The false value indicates that snapshots are not enabled. This feature is currently not supported.
For more information about environment files, see Environment Files in the Director Installation and Usage guide.
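To override any of these defaults, you can add your own parameter_defaults values in a separate environment file and pass it to the openstack overcloud deploy command with an additional -e option after manila-cephfsganesha-config.yaml, because later environment files take precedence. The following sketch uses an illustrative file name and back-end name:

# /home/stack/manila-cephfs-overrides.yaml (illustrative file name)
parameter_defaults:
  ManilaCephFSBackendName: cephfsnfs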
2.2.3. Completing post-deployment configuration
You must complete two post-deployment configuration tasks before you create NFS shares, grant user access, and mount NFS shares.
- Map the Networking service (neutron) StorageNFS network to the isolated data center StorageNFS network.
- Create the default share type.
After you complete these steps, the tenant compute instances can create, allow access to, and mount NFS shares.
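For the second task, you can create the default share type with the Shared File Systems service client after you source the overcloud credentials. The following is a minimal sketch, where false matches the driver_handles_share_servers=false setting of the CephFS through NFS back end:

# Create the default share type with driver_handles_share_servers set to false
$ manila type-create default false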
2.2.3.1. Creating the storage provider network
You must map the new isolated StorageNFS network to a Networking (neutron) provider network. The Compute VMs attach to the network to access share export locations that are provided by the NFS-Ganesha gateway.
For information about network security with the Shared File Systems service, see Hardening the Shared File Systems Service in the Security and Hardening Guide.
Procedure
The openstack network create command defines the configuration for the StorageNFS neutron network.

- From an undercloud node, enter the following command:

[stack@undercloud ~]$ source ~/overcloudrc

- On an undercloud node, create the StorageNFS network:

(overcloud) [stack@undercloud-0 ~]$ openstack network create StorageNFS --share --provider-network-type vlan --provider-physical-network datacentre --provider-segment 70

You can enter this command with the following options:
- For the --provider-physical-network option, use the default value datacentre, unless you set another tag for the br-isolated bridge through NeutronBridgeMappings in your tripleo-heat-templates.
- For the --provider-segment option, use the VLAN value set for the StorageNFS isolated network in the heat template, /usr/share/openstack-tripleo-heat-templates/network_data_ganesha.yaml. This value is 70, unless the deployer modified the isolated network definitions.
- For the --provider-network-type option, use the value vlan.
2.2.3.2. Configuring the shared provider StorageNFS network
Create a corresponding StorageNFSSubnet on the neutron-shared provider network. Ensure that the subnet is the same as the storage_nfs network definition in the network_data.yml file and ensure that the allocation range for the StorageNFS subnet and the corresponding undercloud subnet do not overlap. No gateway is required because the StorageNFS subnet is dedicated to serving NFS shares.
Prerequisites
- The start and end IP addresses for the allocation pool.
- The subnet IP range.
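The following is a sketch of the subnet creation command; the allocation pool and subnet range are illustrative and must match the StorageNFS definition in your network_data_ganesha.yaml file:

(overcloud) [stack@undercloud-0 ~]$ openstack subnet create --allocation-pool start=172.17.0.10,end=172.17.0.20 --dhcp --network StorageNFS --subnet-range 172.17.0.0/20 --gateway none StorageNFSSubnet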
Chapter 3. Verifying successful CephFS through NFS deployment
When you deploy CephFS through NFS as a back end of the Shared File Systems service (manila), you add the following new elements to the overcloud environment:
- StorageNFS network
- Ceph MDS service on the controllers
- NFS-Ganesha service on the controllers
For more information about using the Shared File Systems service with CephFS through NFS, see Shared File Systems service in the Storage Guide.
As the cloud administrator, you must verify the stability of the CephFS through NFS environment before you make it available to service users.
3.1. Verifying creation of isolated StorageNFS network
The network_data_ganesha.yaml file used to deploy CephFS through NFS as a Shared File Systems service back end creates the StorageNFS VLAN. Complete the following steps to verify the existence of the isolated StorageNFS network.
Prerequisites
- Complete the steps in Chapter 2, CephFS through NFS Installation
Procedure
- Log in to one of the controllers in the overcloud.
- Enter the following command to check the connected networks and verify the existence of the VLAN as set in network_data_ganesha.yaml:

$ ip a
15: vlan310: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 32:80:cf:0e:11:ca brd ff:ff:ff:ff:ff:ff
    inet 172.16.4.4/24 brd 172.16.4.255 scope global vlan310
       valid_lft forever preferred_lft forever
    inet 172.16.4.7/32 brd 172.16.4.255 scope global vlan310
       valid_lft forever preferred_lft forever
    inet6 fe80::3080:cfff:fe0e:11ca/64 scope link
       valid_lft forever preferred_lft forever
3.2. Verifying Ceph MDS service
Use the systemctl status command to verify the Ceph MDS service status.
Procedure
- Enter the following command on all Controller nodes to check the status of the MDS container:

$ systemctl status ceph-mds@<CONTROLLER-HOST>
Example:
$ systemctl status ceph-mds@controller-0.service

ceph-mds@controller-0.service - Ceph MDS
   Loaded: loaded (/etc/systemd/system/ceph-mds@.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2018-09-18 20:11:53 UTC; 6 days ago
 Main PID: 65066 (conmon)
    Tasks: 16 (limit: 204320)
   Memory: 38.2M
   CGroup: /system.slice/system-ceph\x2dmds.slice/ceph-mds@controller-0.service
           └─60921 /usr/bin/podman run --rm --net=host --memory=32000m --cpus=4 -v /var/lib/ceph:/var/lib/ceph:z -v /etc/ceph:/etc/ceph:z -v /var/run/ceph:/var/run/ceph:z -v /etc/localtime:/etc/localtime:ro>
3.3. Verifying Ceph cluster status
Complete the following steps to verify Ceph cluster status.
Procedure
- Log in to the active Controller node.
- Enter the following command:

$ sudo ceph -s

  cluster:
    id: 3369e280-7578-11e8-8ef3-801844eeec7c
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum overcloud-controller-1,overcloud-controller-2,overcloud-controller-0
    mgr: overcloud-controller-1(active), standbys: overcloud-controller-2, overcloud-controller-0
    mds: cephfs-1/1/1 up {0=overcloud-controller-0=up:active}, 2 up:standby
    osd: 6 osds: 6 up, 6 in
Result: There is one active MDS and two MDSs on standby.
- To list the Ceph file system with its metadata and data pools, enter the following command:

$ sudo ceph fs ls

name: cephfs, metadata pool: manila_metadata, data pools: [manila_data]
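To check the status of the Ceph file system in more detail, you can also enter the following command and replace <cephfs> with the file system name from the previous output:

$ sudo ceph fs status <cephfs>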
3.5. Verifying that the manila-api service acknowledges scheduler and share services
Complete the following steps to confirm that the manila-api service acknowledges the scheduler and share services.
Procedure
- Log in to the undercloud.
- Enter the following command:

$ source /home/stack/overcloudrc
- Enter the following command to confirm that manila-scheduler and manila-share are enabled:

$ manila service-list

| Id | Binary           | Host             | Zone | Status  | State | Updated_at                 |
| 2  | manila-scheduler | hostgroup        | nova | enabled | up    | 2018-08-08T04:15:03.000000 |
| 5  | manila-share     | hostgroup@cephfs | nova | enabled | up    | 2018-08-08T04:15:03.000000 |