Chapter 12. Native CephFS post-deployment configuration and verification
You must complete the following post-deployment configuration tasks before you create CephFS shares, grant user access, and mount CephFS shares.
- Map the Networking service (neutron) storage network to the isolated data center storage network.
- Make the storage provider network available only to trusted tenants through custom role-based access control (RBAC) rules. Do not share the storage provider network globally.
- Create a private share type.
- Grant access to specific trusted tenants. A minimal example of the last two tasks appears after this overview.
After you complete these steps, the tenant compute instances can create, allow access to, and mount native CephFS shares.
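The share type and tenant access tasks are typically performed with the manila client. The following is a minimal sketch, not the authoritative procedure: the share type name cephfstype is a placeholder, and option spellings can vary between client versions.
# cephfstype is a placeholder name; <project_id> is the ID of a trusted project.
(overcloud) [stack@undercloud-0 ~]$ manila type-create --is_public false cephfstype false
(overcloud) [stack@undercloud-0 ~]$ manila type-access-add cephfstype <project_id>
Replace <project_id> with the ID of each trusted project that is allowed to use the share type.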
Deploying native CephFS as a back end of the Shared File Systems service (manila) adds the following new elements to the overcloud environment:
- Storage provider network
- Ceph MDS service on the Controller nodes
The cloud administrator must verify the stability of the native CephFS environment before making it available to service users.
For more information about using the Shared File Systems service with native CephFS, see Configuring the Shared File Systems service (manila) in the Configuring persistent storage guide.
12.1. Creating the storage provider network
You must map the new isolated storage network to a Networking (neutron) provider network. The Compute VMs attach to the network to access native CephFS share export locations.
For information about network security with the Shared File Systems service (manila), see Hardening the Shared File Systems Service in Hardening Red Hat OpenStack Platform.
Procedure
The openstack network create command defines the configuration for the storage neutron network.
Source the overcloud credentials file:
$ source ~/<credentials_file>
Replace <credentials_file> with the name of your credentials file, for example, overcloudrc.
On an undercloud node, create the storage network:
(overcloud) [stack@undercloud-0 ~]$ openstack network create Storage --provider-network-type vlan --provider-physical-network datacentre --provider-segment 30
You can enter this command with the following options:
- For the --provider-physical-network option, use the default value datacentre, unless you set another tag for the br-isolated bridge through NeutronBridgeMappings in your tripleo-heat-templates.
- For the --provider-segment option, use the value set for the Storage isolated network in your network environment file. If this value was not customized, the default environment file is /usr/share/openstack-tripleo-heat-templates/network_data.yaml. The VLAN associated with the Storage network is 30 unless you modified the isolated network definitions.
- For the --provider-network-type option, use the value vlan.
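Optionally, you can verify that the network was created with the expected provider attributes. This check is an addition to the documented steps:
(overcloud) [stack@undercloud-0 ~]$ openstack network show Storage
In the output, check that the provider:network_type, provider:physical_network, and provider:segmentation_id fields match the values you passed to the create command.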
12.2. Configuring the storage provider network
Create a corresponding StorageSubnet on the neutron provider network. Ensure that the subnet range matches the storage_subnet range defined in the undercloud, and that the allocation range for the storage subnet and the corresponding undercloud subnet do not overlap.
Requirements
- The starting and ending IP addresses for the allocation pool
- The subnet IP range
Procedure
From an undercloud node, enter the following command:
[stack@undercloud ~]$ source ~/overcloudrc
Use the following sample command to provision the subnet. Update the values to suit your environment.
(overcloud) [stack@undercloud-0 ~]$ openstack subnet create \
  --allocation-pool start=172.17.3.10,end=172.17.3.149 \
  --dhcp \
  --network Storage \
  --subnet-range 172.17.3.0/24 \
  --gateway none StorageSubnet
- For the --allocation-pool option, replace the start=172.17.3.10,end=172.17.3.149 IP values with the IP values for your network.
- For the --subnet-range option, replace the 172.17.3.0/24 subnet range with the subnet range for your network.
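As an optional check that is not part of the original procedure, confirm the subnet range and allocation pool, and compare them with the storage_subnet definition in your undercloud configuration to verify that the ranges do not overlap:
(overcloud) [stack@undercloud-0 ~]$ openstack subnet show StorageSubnet
Review the allocation_pools and cidr fields in the output.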
12.3. Configuring role-based access control for the storage provider network
After you identify the trusted tenants or projects that can use the storage network, configure role-based access control (RBAC) rules for them through the Networking service (neutron).
Requirements
Names of the projects that need access to the storage network
Procedure
From an undercloud node, enter the following command:
[stack@undercloud ~]$ source ~/overcloudrc
Identify the projects that require access:
(overcloud) [stack@undercloud-0 ~]$ openstack project list
+----------------------------------+---------+
| ID                               | Name    |
+----------------------------------+---------+
| 06f1068f79d2400b88d1c2c33eacea87 | demo    |
| 5038dde12dfb44fdaa0b3ee4bfe487ce | service |
| 820e2d9c956644c2b1530b514127fd0d | admin   |
+----------------------------------+---------+
Create network RBAC rules with the desired projects:
(overcloud) [stack@undercloud-0 ~]$ openstack network rbac create \
  --action access_as_shared Storage \
  --type network \
  --target-project demo
Repeat this step for all of the projects that require access to the storage network.
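Optionally, list the RBAC rules to confirm that every trusted project has an access_as_shared rule for the Storage network. This verification is an addition to the documented procedure:
# <rbac_rule_id> is a placeholder for an ID taken from the list output.
(overcloud) [stack@undercloud-0 ~]$ openstack network rbac list
(overcloud) [stack@undercloud-0 ~]$ openstack network rbac show <rbac_rule_id>
The show output reports the target project and action of the selected rule.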
12.5. Verifying creation of isolated storage network
The network_data.yaml file used to deploy native CephFS as a Shared File Systems service back end creates the storage VLAN. Use this procedure to confirm that you successfully created the storage VLAN.
Procedure
- Log in to one of the Controller nodes in the overcloud.
Check the connected networks and verify the existence of the VLAN as set in the network_data.yaml file:
$ ip a
8: vlan30: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 52:9c:82:7a:d4:75 brd ff:ff:ff:ff:ff:ff
    inet 172.17.3.144/24 brd 172.17.3.255 scope global vlan30
       valid_lft forever preferred_lft forever
    inet6 fe80::509c:82ff:fe7a:d475/64 scope link
       valid_lft forever preferred_lft forever
12.6. Verifying Ceph MDS service
Use the systemctl status command to verify the Ceph MDS service status.
Procedure
Enter the following command on all Controller nodes to check the status of the MDS container:
$ systemctl status ceph-mds@<CONTROLLER-HOST>
Example:
$ systemctl status ceph-mds@controller-0.service

ceph-mds@controller-0.service - Ceph MDS
   Loaded: loaded (/etc/systemd/system/ceph-mds@.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2018-09-18 20:11:53 UTC; 6 days ago
 Main PID: 65066 (conmon)
    Tasks: 16 (limit: 204320)
   Memory: 38.2M
   CGroup: /system.slice/system-ceph\x2dmds.slice/ceph-mds@controller-0.service
           └─60921 /usr/bin/podman run --rm --net=host --memory=32000m --cpus=4 -v /var/lib/ceph:/var/lib/ceph:z -v /etc/ceph:/etc/ceph:z -v /var/run/ceph:/var/run/ceph:z -v /etc/localtime:/etc/localtime:ro>
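In addition to the systemd check, you can query the MDS state from the Ceph cluster itself. This is a supplementary check that assumes the monitor container name shown in the next section:
# Assumes the ceph-mon-controller-0 container name used in the cluster status check.
$ sudo podman exec ceph-mon-controller-0 ceph mds stat
The output reports the active MDS rank and the number of standby daemons.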
12.7. Verifying Ceph cluster status
Verify the Ceph cluster status to confirm that the cluster is active.
Procedure
- Log in to any Controller node.
From the Ceph monitor daemon, enter the following command:
$ sudo podman exec ceph-mon-controller-0 ceph -s

  cluster:
    id:     670dc288-cd36-4772-a4fc-47287f8e2ebf
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum controller-1,controller-2,controller-0 (age 14h)
    mgr: controller-1(active, since 8w), standbys: controller-0, controller-2
    mds: cephfs:1 {0=controller-2=up:active} 2 up:standby
    osd: 15 osds: 15 up (since 8w), 15 in (since 8w)

  task status:
    scrub status:
        mds.controller-2: idle

  data:
    pools:   6 pools, 192 pgs
    objects: 309 objects, 1.6 GiB
    usage:   21 GiB used, 144 GiB / 165 GiB avail
    pgs:     192 active+clean
Note: There is one active MDS and two MDSs on standby.
To see a detailed status of the Ceph File System, enter the following command:
$ sudo ceph fs ls

name: cephfs, metadata pool: manila_metadata, data pools: [manila_data]
Note: In this example output, cephfs is the name of the Ceph File System that director creates to host CephFS shares that users create through the Shared File Systems service.
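For a more detailed, per-rank and per-pool view of the file system, you can also run the following command. This is a supplementary check that assumes the same monitor container name as the cluster status check above:
# Assumes the ceph-mon-controller-0 container name from the previous step.
$ sudo podman exec ceph-mon-controller-0 ceph fs status cephfs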
12.9. Verifying that the manila-api service acknowledges scheduler and share services
Complete the following steps to confirm that the manila-api service acknowledges the scheduler and share services.
Procedure
- Log in to the undercloud.
Enter the following command:
$ source /home/stack/overcloudrc
Enter the following command to confirm that manila-scheduler and manila-share are enabled:
$ manila service-list

| Id | Binary           | Host             | Zone | Status  | State | Updated_at                 |
| 2  | manila-scheduler | hostgroup        | nova | enabled | up    | 2018-08-08T04:15:03.000000 |
| 5  | manila-share     | hostgroup@cephfs | nova | enabled | up    | 2018-08-08T04:15:03.000000 |
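As an optional follow-up that is not part of the original procedure, you can confirm that the CephFS back end reports a storage pool to the scheduler:
$ manila pool-list
The output should include a pool for the hostgroup@cephfs back end.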